Life inside an Aggregate Root, part 1

One of the most important concepts in Domain Driven Design is the Aggregate Root — a consistency boundary around a group of related objects that move together. To keep things as simple as possible, we apply the following rules to them:

  1. Objects outside the aggregate can only hold references to the aggregate root, not to the entities or value objects within it
  2. Access to any entity or value object is only allowed via the root
  3. The entire aggregate is locked, versioned and persisted together

It’s not too hard to implement these restrictions when you’re using a good object-relational mapper. But there are a couple of other rules that are worth mentioning because they’re easy to overlook.
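
For instance, rules 2 and 3 imply that repositories only ever deal in whole aggregates. A minimal sketch (a hypothetical interface, using the Training Programme aggregate from the example below — not code from the actual app) might look like this:

public interface ITrainingProgrammeRepository
{
    // Loads the root plus everything inside it (rule 2: access via the root only).
    TrainingProgramme Get(Guid id);

    // Persists the whole aggregate in one go (rule 3: locked, versioned and
    // persisted together).
    void Save(TrainingProgramme programme);
}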

Real-life example: training programme

Here’s a snippet from an app I am building at work (altered slightly to protect the innocent). Domain concepts are capitalised:

A Training Programme is comprised of Skills, arranged in Skill Groups. Skill Groups can contain Sub Groups, nested as many levels deep as you like. Skills can be used in multiple Training Programmes, but you can’t have the same Skill twice under the same Training Programme. When a Skill is removed from a Training Programme, Individuals should no longer have to practice it.

Here’s what it looks like, with our two aggregate roots, Training Programme and Skill:

Pretty simple, right? Let’s see how we can implement the two behaviours from the snippet using aggregate roots.

Rule #4: All objects have a reference back to the aggregate root

Let’s look at the first behaviour from the spec:

…you can’t have the same Skill twice under the same Training Programme.

Our first skill group implementation looked like this:

public class TrainingProgramme
{
    public IEnumerable<SkillGroup> SkillGroups { get; }

    ...
}

public class SkillGroup
{
    public SkillGroup(string name) { ... }

    public void Add(Skill skill)
    {
        // Error if the Skill is already added to this Skill Group.
        if (Contains(skill))
            throw new DomainException("Skill already added");

        skills.Add(skill);
    }

    public bool Contains(Skill skill)
    {
        return skills.Contains(skill);
    }

    ...

    private IList<Skill> skills;
}

What’s the problem here? Have a look at the SkillGroup’s Add() method. If you try to have the same Skill twice under a Skill Group, it will throw an exception. But the spec says you can’t have the same Skill twice anywhere in the same Training Programme.

The solution is to have a reference back from the Skill Group to its parent Training Programme, so you can check the whole aggregate instead of just the current entity.

public class TrainingProgramme
{
    public IEnumerable<SkillGroup> SkillGroups { get; }

    // Recursively search through all Skill Groups for this Skill.
    public bool Contains(Skill skill) { ... }

    ...
}

public class SkillGroup
{
    public SkillGroup(string name, TrainingProgramme programme)
    {
        ...
    }

    public void Add(Skill skill)
    {
        // Error if the Skill is already added under this Training Programme.
        if (programme.Contains(skill))
            throw new DomainException("Skill already added");

        skills.Add(skill);
    }

    ...

    private TrainingProgramme programme;
    private IList<Skill> skills;
}

Introducing circular coupling like this feels wrong at first, but it’s totally acceptable in DDD because the aggregate root restrictions make it work. Entities can be coupled tightly to aggregate roots because nothing else is allowed to use them!
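
In case it helps, here’s roughly what the recursive check might look like — a sketch only, assuming each Skill Group also keeps a subGroups collection (which isn’t shown above):

public class TrainingProgramme
{
    public bool Contains(Skill skill)
    {
        // Check every top-level Skill Group (each one searches its Sub Groups too).
        return SkillGroups.Any(group => group.ContainsDeep(skill));
    }

    ...
}

public class SkillGroup
{
    // True if this group, or any Sub Group beneath it, already has the Skill.
    public bool ContainsDeep(Skill skill)
    {
        return skills.Contains(skill)
            || subGroups.Any(group => group.ContainsDeep(skill));
    }

    ...

    private IList<SkillGroup> subGroups;
    private IList<Skill> skills;
}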

Does your Visual Studio run slow?

Recently I’ve been getting pretty annoyed by my Visual Studio 2008, which has been taking longer and longer to do my favorite menu item, Window > Close All Documents. Today was the last straw — I decided 20 seconds to close four C# editor windows really isn’t acceptable for a machine with four gigs of ram, and so I went to look for some fixes.

Here are some of the good ones I found that worked. Use at your own risk of course!

Disable the customer feedback component

In some scenarios Visual Studio may try to collect anonymous statistics about your code when closing a project, even if you opted out of the customer feedback program. To stop this time-consuming behaviour, find this registry key:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\VisualStudio\9.0\Packages\{2DC9DAA9-7F2D-11d2-9BFC-00C04F9901D1}

and rename it to something invalid:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\VisualStudio\9.0\Packages\Disabled-{2DC9DAA9-7F2D-11d2-9BFC-00C04F9901D1}

Clear Visual Studio temp files

Deleting the contents of the following temp directories can fix a lot of performance issues with Visual Studio and web projects:

C:\Users\richardd\AppData\Local\Microsoft\WebsiteCache
C:\Users\richardd\AppData\Local\Temp\Temporary ASP.NET Files\siteName

Clear out the project MRU list

Apparently Visual Studio sometimes accesses the files in your recent projects list at random times, e.g. when saving a file. I have no idea why it does this, but it can have a big performance hit, especially if some of them are on a network share that is no longer available.

To clear your recent project list out, delete any entries from the following path in the registry:

HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\9.0\ProjectMRUList

Disable AutoRecover

In nearly four years, I have never used Visual Studio’s AutoRecover feature to recover work. These days, source control and saving regularly almost entirely eliminate the need for it.

To disable it and gain some performance (particularly with large solutions), go to Tools > Options > Environment > AutoRecover and uncheck Save AutoRecovery information. (Cheers Jake for the tip)

ASP.NET MVC, TDD and Fluent Validation

Yesterday I wrote about ASP.NET MVC, TDD and AutoMapper, and how you can use them together in a DDD application. Today I thought I would follow up and explain how to apply these techniques to another important (but boring) part of any web application: user input validation.

To achieve this, we are using Fluent Validation, a validation framework that lets you easily set up validation rules using a fluent syntax:

public class UserRegistrationFormValidator : AbstractValidator<UserRegistrationForm>
{
    public UserRegistrationFormValidator()
    {
        RuleFor(f => f.Username).NotEmpty()
            .WithMessage("You must choose a username!");
        RuleFor(f => f.Email).EmailAddress()
            .When(f => !String.IsNullOrEmpty(f.Email))
            .WithMessage("This doesn't look like a valid e-mail address!");
        RuleFor(f => f.Url).MustSatisfy(new ValidWebsiteUrlSpecification())
            .When(f => !String.IsNullOrEmpty(f.Url))
            .WithMessage("This doesn't look like a valid URL!");
    }
}

If you think about it, validation and view model mapping have similar footprints in the application. They both:

  • Live in the application services layer
  • May invoke domain services
  • Use third-party libraries
  • Have standalone fluent configurations
  • Have standalone tests
  • Are injected into the application services
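
For example, the last point is just a couple of extra container registrations (shown here with Unity purely as an illustration — any container will do):

// Hypothetical Unity wiring — substitute your own container's syntax.
container.RegisterType<IValidator<UserRegistrationForm>, UserRegistrationFormValidator>();
container.RegisterType<IUserRegistrationService, UserRegistrationService>();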

Let’s see how it all fits together starting at the outermost layer, the controller.

public class AccountController : Controller
{
    readonly IUserRegistrationService registrationService;
    readonly IFormsAuthentication formsAuth;
    ...
    [AcceptVerbs(HttpVerbs.Post)]
    public ActionResult Register(UserRegistrationForm user)
    {
        if (user == null)
            throw new ArgumentNullException("user");
        try
        {
            this.registrationService.RegisterNewUser(user);
            this.formsAuth.SignIn(user.Username, false);
            return RedirectToAction("Index", "Home");
        }
        catch (ValidationException e)
        {
            e.Result.AddToModelState(this.ModelState, "user");
            return View("Register", user);
        }
    }
    ...
}

As usual, the controller is pretty thin, delegating all responsibility (including performing any required validation) to an application service that handles new user registration. If validation fails, all our controller has to do is catch the exception and copy the validation messages it contains into the model state, so the user can see what they got wrong.

The UserRegistrationForm validator is injected into the application service along with any others. Just like with AutoMapper, we can now test the controller, validator and application service separately.

public class UserRegistrationService : IUserRegistrationService
{
    readonly IUserRepository users;
    readonly IValidator<UserRegistrationForm> validator;
    ...
    public void RegisterNewUser(UserRegistrationForm form)
    {
        if (form == null)
            throw new ArgumentNullException("form");
        this.validator.ValidateAndThrow(form);
        User user = new UserBuilder()
            .WithUsername(form.Username)
            .WithAbout(form.About)
            .WithEmail(form.Email)
            .WithLocation(form.Location)
            .WithOpenId(form.OpenId)
            .WithUrl(form.Url);
        this.users.Save(user);
    }
}

Testing the user registration form validation rules

Fluent Validation has some nifty helper extensions that make unit testing a breeze:

[TestFixture]
public class When_validating_a_new_user_form
{
    IValidator<UserRegistrationForm> validator = new UserRegistrationFormValidator();
    [Test]
    public void The_username_cannot_be_empty()
    {
        validator.ShouldHaveValidationErrorFor(f => f.Username, "");
    }
    [Test]
    public void A_valid_email_address_must_be_provided()
    {
        validator.ShouldHaveValidationErrorFor(f => f.Email, "");
    }
    [Test]
    public void The_url_must_be_valid()
    {
        validator.ShouldNotHaveValidationErrorFor(f => f.Url, "http://foo.bar");
    }
}

You can even inject dependencies into the validator and mock them out for testing. For example, in this app the validator calls an IUsernameAvailabilityService to make sure the chosen username is still available.
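
Here’s a rough sketch of what that looks like (the IsAvailable method and the exact rule are illustrative, not the app’s real code):

public class UserRegistrationFormValidator : AbstractValidator<UserRegistrationForm>
{
    readonly IUsernameAvailabilityService usernameAvailability;

    public UserRegistrationFormValidator(IUsernameAvailabilityService usernameAvailability)
    {
        this.usernameAvailability = usernameAvailability;

        RuleFor(f => f.Username)
            .Must(name => this.usernameAvailability.IsAvailable(name))
            .WithMessage("Sorry, that username has already been taken!");
    }
}

In the validator’s unit tests, IUsernameAvailabilityService is then just another mock passed into the constructor.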

Testing the user registration service

This validation code is now completely isolated, and we can mock out the entire thing when testing the application service:

[TestFixture]
public class When_registering_a_new_user
{
    IUserRegistrationService registrationService;
    Mock<IUserRepository> repository;
    Mock<IValidator<UserRegistrationForm>> validator;
    [Test, ExpectedException(typeof(ValidationException))]
    public void Should_throw_a_validation_exception_if_the_form_is_invalid()
    {
        validator.Setup(v => v.Validate(It.IsAny<UserRegistrationForm>()))
            .Returns(ObjectMother.GetFailingValidationResult());
        registrationService.RegisterNewUser(ObjectMother.GetNewUserForm());
    }
    [Test]
    public void Should_add_the_new_user_to_the_repository()
    {
        var form = ObjectMother.GetNewUserForm();
        registrationService.RegisterNewUser(form);
        repository.Verify(
            r => r.Save(It.Is<User>(u => u.Username.Equals(form.Username))));
    }
}
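
(The fixture’s SetUp is omitted above; roughly, and assuming UserRegistrationService takes the repository and validator through its constructor, it looks something like this:)

[SetUp]
public void SetUp()
{
    repository = new Mock<IUserRepository>();
    validator = new Mock<IValidator<UserRegistrationForm>>();

    // By default the mocked validator reports success (an empty ValidationResult
    // is valid), so only the failure test needs to override it.
    validator.Setup(v => v.Validate(It.IsAny<UserRegistrationForm>()))
        .Returns(new ValidationResult());

    registrationService = new UserRegistrationService(repository.Object, validator.Object);
}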

Testing the accounts controller

With validation out of the way, all we have to test on the controller is whether or not it appends the validation errors to the model state. Here are the fixtures for the success/failure scenarios:

[TestFixture]
public class When_successfully_registering_a_new_user : AccountControllerTestContext
{
    [SetUp]
    public override void SetUp()
    {
        ...
        result = controller.Register(form);
    }
    [Test]
    public void Should_register_the_new_user()
    {
        registrationService.Verify(s => s.RegisterNewUser(form), Times.Exactly(1));
    }
    [Test]
    public void Should_sign_in()
    {
        formsAuth.Verify(a => a.SignIn(user.Username, false));
    }
}
[TestFixture]
public class When_registering_an_invalid_user : AccountControllerTestContext
{
    [SetUp]
    public override void SetUp()
    {
        ...
        registrationService.Setup(s => s.RegisterNewUser(form)).Throws(
            new ValidationException(
                ObjectMother.GetFailingValidationResult()));
        result = controller.Register(form);
    }
    [Test]
    public void Should_not_sign_in()
    {
        formsAuth.Verify(a => a.SignIn(It.IsAny<string>(),
            It.IsAny<bool>()), Times.Never());
    }
    [Test]
    public void Should_redirect_back_to_the_register_view_with_the_form_contents()
    {
        result.AssertViewRendered().ForView("Register")
            .WithViewData<UserRegistrationForm>().ShouldEqual(form);
    }
}
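
(For completeness, the shared AccountControllerTestContext isn’t shown; a rough sketch — the field names match the tests above, the rest is illustrative — might be:)

public abstract class AccountControllerTestContext
{
    protected AccountController controller;
    protected Mock<IUserRegistrationService> registrationService;
    protected Mock<IFormsAuthentication> formsAuth;
    protected UserRegistrationForm form;
    protected User user;
    protected ActionResult result;

    [SetUp]
    public virtual void SetUp()
    {
        registrationService = new Mock<IUserRegistrationService>();
        formsAuth = new Mock<IFormsAuthentication>();
        form = ObjectMother.GetNewUserForm();
        user = ObjectMother.GetUser(); // hypothetical helper; shares the form's username
        controller = new AccountController(registrationService.Object, formsAuth.Object);
    }
}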

This post has been a bit heavier on code than usual, but hopefully it is enough to get an idea of how easy it is to implement Fluent Validation in your ASP.NET MVC application.

A programmer’s secret weapon: the humble to-do list

A while ago I wrote a post about how to lose traction on a personal software project, based on mistakes I have made myself in the past that have slowed down or even completely stopped progress. Today I want to share a tip that has greatly improved my time management since I started doing it, and helped combat many of the “programmer’s block” moments I am notorious for.

Coming back to work

After you take a break from a coding project, getting back into the groove can be difficult. Stopping for a while can be a great way to gain perspective and maybe reevaluate what the application should and shouldn’t be, but it can also make it really hard to start again where you left off.

If you know you will be leaving for a while, and want to return later, you can try leaving a test failing, or some other obvious problem that needs fixing (like a syntax error), but these only keep you going in the short term. It can be difficult choosing the next big piece of functionality to work on.

Keeping an eye on the big picture

Have you ever been in the situation where, working on an application, you surround yourself with a forest of rich infrastructure code — configuration, data access, logging, UI widgets etc — and then stop and realise that’s all your application is: an empty shell that doesn’t do anything useful yet?

Or alternatively, do you ever find yourself getting lost in detail, giving unfair attention to making one little component perfect while neglecting the other 90% needed to be up and running for the alpha release?

All those little extra “clean up” tasks you want to do

While coding I’m always spotting things I want to clean up like refactorings, missing comments, source code formatting, etc. I want to finish my current task before starting something new (and can’t until I check in my current work anyway), but I do want to do these little clean ups at some stage.

All these problems really boil down to one simple question — what should I work on next?

The programmers’ to-do list

I have found from my own personal experience that sitting down to cut code without a well-thought-out plan of attack is asking for trouble. As well as the examples above, I generally skip from task to task, dabbling in things I find interesting, but not gaining much real traction towards a useful application.

Over the past few months, I have discovered that keeping a detailed to-do list is a great tool for combating these problems and staying on track. The premise is very simple: take a high-level block of work — e.g. implementing a single user story — and break it down into all the little programming steps you need to do to get there, no matter how insignificant or obvious.

Here’s a dumb example snippet of a to-do list for a web app I am working on in my spare time (the whole thing runs about four pages total). You can see the ‘high level’ tasks get vaguer towards the end, because I haven’t planned them out yet, and right at the bottom there are unimportant cleanup tasks.

The key here is to be fine-grained; you want to see all the little tasks that need to be done, and then tick them off to see your progress. Even the tasks that are always assumed — like validation — should go in there: a to-do list is like a calendar, only useful if you know all your appointments are in it. And it doesn’t really matter where your high-level tasks start, as long as together they add up to a program that will be useful to a user.

Defer little tasks

If you think of an easy little ‘clean up’ task you’d like to do, don’t do it now — just write it down instead. Why? Because:

  • If you do it now, it could sidetrack you from your current focus
  • It might be a low priority and a poor use of your time right now
  • Instead of doing it now, it could be an easy way to get back into the groove later

For example, if you’ve got half an hour free before going somewhere, and want to spend it on your application, you don’t want to be starting fresh on some giant new piece of work. Why not spend the minutes crossing off one or two of those little clean-up tasks you’ve been meaning to do?

Planning sessions

Alternatively, if you’re not in the mood to code, you could use the time as a planning session and try to think of the next big task you want to achieve, and all the small steps that make it up. This is just as helpful as cutting code, and makes you think about the big picture.

To-do list software

I’m a geek, so naturally, I want some flash tool for managing my to-do list. There are a lot of to-do list applications on the web, like Todoist and Remember the Milk. I tried a few, but eventually just went back to a bulleted, indented Word document because I found it by far the quickest to work with.

Using NUnit to check your IoC container is set up right

One small problem I encountered when getting into Dependency Injection (DI) and Inversion of Control (IoC) was that even though all my services were now beautifully SOLID and test-driven, quite often it was all wasted because I forgot to register them in my IoC container, causing massive errors that wouldn’t be detected until runtime (oops).

Luckily, it is very easy to write a test to check everything has been registered properly. All you need is a bit of reflection to find all the interfaces in an assembly, then try to resolve them all:

[Test]
public void Should_be_able_to_resolve_all_interfaces_in_domain()
{
    var domain = Assembly.GetAssembly(typeof(Book));
    var interfaces = domain.GetTypes().Where(t => t.IsInterface);
    foreach (Type @interface in interfaces)
    {
        Assert.IsNotNull(ServiceLocator.Current.GetInstance(@interface));
    }
}

We can make this even nicer with NUnit 2.5. Instead of a loop with multiple asserts inside one test case (hmm) that can only show one failure at a time (ugh), we can use NUnit parameterized tests to automagically generate a separate test case for each interface we need to resolve.

Each interface then shows up as its own named test case in the runner, which is heaps easier to read (otherwise Unity’s exceptions are very terse), and we can see multiple failures in one run. To make NUnit generate this without typing them all out by hand, all you need is the new ValueSource attribute, which lets you choose a method, field or property that returns an IEnumerable set of objects:

public IEnumerable<Type> GetDomainInterfaces()
{
    var domain = Assembly.GetAssembly(typeof(Book));
    return domain.GetTypes().Where(t => t.IsInterface);
}

[Test]
public void Should_be_able_to_resolve_domain_service(
    [ValueSource("GetDomainInterfaces")]Type @interface)
{
    Assert.IsNotNull(ServiceLocator.Current.GetInstance(@interface));
}
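
(For reference, the container itself is built once per fixture and exposed through the Common Service Locator — roughly like this, assuming Unity and its service locator adapter, with a hypothetical ContainerConfiguration bootstrap class:)

[TestFixtureSetUp]
public void FixtureSetUp()
{
    // Build the application's real container configuration once,
    // then point Common Service Locator at it for the tests above.
    var container = new UnityContainer();
    ContainerConfiguration.Configure(container); // hypothetical bootstrap method
    ServiceLocator.SetLocatorProvider(() => new UnityServiceLocator(container));
}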

Note I include this with my integration tests, because it can take a few secs to run (e.g. I keep my NHibernate ISession in the container, and building the session factory takes a long time).

Fluent Builder Pattern for classes with long-ish constructors

Last week I discovered a rather wonderful construct for objects with long constructors, e.g. immutable value types:

public class UserProfile
{
    public string City { get; protected set; }
    public string Country { get; protected set; }
    public Uri Url { get; protected set; }
    public string Email { get; protected set; }
    public string Tagline { get; protected set; }
    public UserProfile(string city, string country, Uri url, string email,
        string tagline)
    {
        ...
    }
}

This constructor has bad Connascence of Position (CoP); to construct a UserProfile instance, users have to know the position of each parameter. Otherwise they might mix up the city with the country for example:

// Spot the bug!
var profile = new UserProfile("NZ", "Wellington",
    new Uri("http://richarddingwall.name"), "rdingwall@gmail.com", ".NET guy");

This won’t be a problem with named parameters in C# 4.0, but until then, a nice alternative is a fluent builder class:

UserProfile profile = new UserProfileBuilder()
    .WithCity("Wellington")
    .WithCountry("NZ")
    .WithUrl(new Uri("http://richarddingwall.name"))
    .WithEmail("rdingwall@gmail.com")
    .WithTagline(".NET guy");

Builders are very easy to implement. Each With method records its value and returns the current builder instance. Then we provide an implicit cast operator that finally constructs a UserProfile with all the parameters in the right places.

public class UserProfileBuilder
{
    internal string City { get; set; }
    internal string Country { get; set; }
    // ... etc
    public UserProfileBuilder WithCity(string city)
    {
        this.City = city;
        return this;
    }
    // ... etc
    public static implicit operator UserProfile(UserProfileBuilder builder)
    {
        return new UserProfile(builder.City, builder.Country, builder.Url,
            builder.Email, builder.Tagline);
    }
}

I really like this!
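
(For comparison, here’s what the C# 4.0 named-arguments version will look like once it arrives — the same readability, no builder required:)

// C# 4.0 named arguments: positions no longer matter at the call site.
var profile = new UserProfile(
    city: "Wellington",
    country: "NZ",
    url: new Uri("http://richarddingwall.name"),
    email: "rdingwall@gmail.com",
    tagline: ".NET guy");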

WPF: How to Combine Multiple Assemblies into a Single exe

At work, I am currently working on a small WPF app that we want to keep as a simple standalone exe. However, it has dependencies on several third-party assemblies: SharpZipLib, Unity, ServiceLocator, etc.

Microsoft has a handy tool called ILMerge that merges multiple .NET assemblies into a single dll or exe. Unfortunately, it doesn’t support WPF applications, because of the way XAML is compiled.

Instead, you can use another approach — include all your referenced third-party assemblies as embedded resources of the exe:


public partial class App : Application
{
    private void OnStartup(object sender, StartupEventArgs e)
    {
        AppDomain.CurrentDomain.AssemblyResolve +=
            new ResolveEventHandler(ResolveAssembly);
        // proceed starting app...
    }
    static Assembly ResolveAssembly(object sender, ResolveEventArgs args)
    {
        Assembly parentAssembly = Assembly.GetExecutingAssembly();
        var name = args.Name.Substring(0, args.Name.IndexOf(',')) + ".dll";
        var resourceName = parentAssembly.GetManifestResourceNames()
            .First(s => s.EndsWith(name));
        using (Stream stream = parentAssembly.GetManifestResourceStream(resourceName))
        {
            byte[] block = new byte[stream.Length];
            stream.Read(block, 0, block.Length);
            return Assembly.Load(block);
        }
    }
}

Whenever .NET can’t find a referenced assembly, it will call our code and we can provide an Assembly instance ourselves. Note this code expects a sane DLL-naming convention 🙂

TDD: How to Supersede a Single System Library Call

This morning I read an article by Karl Seguin on allowing clients to replace system calls with delegates (function pointers) for testing purposes — making the untestable testable.

It is a pattern I have used myself recently, but under a different name with a more formalized syntax. Imagine, for example, you have created an interface to decouple e-mail sending dependencies:

public interface IMailSender
{
    void Send(MailMessage message);
}

This allows me to swap out my IMailSender (via Dependency Injection) with a fake implementation for testing purposes – e.g. FakeMailSender or a mock.
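
A hand-rolled fake can be as simple as recording every message it’s given, so tests can assert on what would have been sent — something like:

public class FakeMailSender : IMailSender
{
    // Messages "sent" so far — tests inspect this instead of a real SMTP server.
    public IList<MailMessage> SentMessages = new List<MailMessage>();

    public void Send(MailMessage message)
    {
        SentMessages.Add(message);
    }
}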

Otherwise, production code uses SmtpMailSender, a concrete class I wrote that implements IMailSender via a call to System.Net.Mail.SmtpClient.Send():

public class SmtpMailSender : IMailSender
{
    // The MailSent event is raised every time an email is sent.
    public event EventHandler<MailSentEventArgs> MailSent;
    public void Send(MailMessage message)
    {
        if (message == null)
            throw new ArgumentNullException("message");
        // Send the message using System.Net.Mail.SmtpClient()
        new SmtpClient().Send(message);
        // Notify observers that we just sent a msg.
        if (MailSent != null)
            MailSent(this, new MailSentEventArgs(message));
    }
}

Note this class is a little bit special — it also has an event, MailSent, that gets raised every time a message is sent. This lets me attach stuff like global e-mail logging very easily, but how can we write a unit test that asserts this event gets raised at the correct time? Because of the dependency on SmtpClient.Send(), my SmtpMailSender class is now facing the exact same problem IMailSender was designed to solve.

Injecting an Action in the constructor

System.Net.Mail.SmtpClient.Send() isn’t virtual, so we can’t mock it, and I don’t really want another interface just for this situation. One solution Karl suggests is injecting an Action that does the actual sending:

public class SmtpMailSender : IMailSender
{
    // The MailSent event is raised every time an email is sent.
    public event EventHandler<MailSentEventArgs> MailSent;
    // method that actually sends the message.
    readonly Action<MailMessage> send;
    public SmtpMailSender(Action<MailMessage> send)
    {
        this.send = send;
    }
    public void Send(MailMessage message)
    {
        if (message == null)
            throw new ArgumentNullException("message");
        // Send the message using System.Net.Mail.SmtpClient()
        send(message);
        // Notify observers that we just sent a msg.
        if (MailSent != null)
            MailSent(this, new MailSentEventArgs(message));
    }
}

This solves the dependency problem effectively and without creating new classes and interfaces. However, now users of the SmtpMailSender class are forced to provide the send action in the constructor.

1
2
var sender = new SmtpMailSender(new SmtpClient().Send);
sender.Send(...);

If you have a good IoC container it can take care of this, but other users may not be so lucky. There are a few other things that I didn’t like as well:

  • 99% of this class’s functionality lies in this single action. A class where 99% of its functionality is injected in a single parameter raises the question of why it really needs to exist at all.
  • The default implementation, SmtpClient.Send(), only needs to be overridden in a few test cases. Everyone else shouldn’t have to care about it.
  • Putting random delegates in a constructor makes me feel uncomfortable. Unless it is an intention-revealing named delegate, I don’t think this is a good pattern to be promoting.

Using Supersede Instance Variable instead

In Working Effectively with Legacy Code, Michael Feathers discusses a pattern called Supersede Instance Variable. Although it is used there for technical reasons (replacing global dependencies in non-virtual C++ classes, where methods cannot be overridden via subclassing), I believe the pattern fits this usage example well.

Really, the only difference here is that unless a user performs the optional step of overriding the action via a special SupersedeSendAction method, a default is used:

public class SmtpMailSender : IMailSender
{
    public event EventHandler<MailSentEventArgs> MailSent;
    Action<MailMessage> send;
    public Action<MailMessage> DefaultSend
    {
        get { return new SmtpClient().Send; }
    }
    public SmtpMailSender()
    {
        this.send = DefaultSend;
    }
    public void Send(MailMessage message)
    {
        if (message == null) throw new ArgumentNullException("message");
        this.send(message);
        if (MailSent != null)
            MailSent(this, new MailSentEventArgs(message));
    }
    /// <summary>
    /// Supersede the method that is used to send a MailMessage for testing
    /// purposes (supersede static variable to break dependency on non-
    /// mockable SmtpClient.Send() method).
    /// </summary>
    public void SupersedeSendAction(Action<MailMessage> newSend)
    {
        this.send = newSend;
    }
}
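
With the supersede hook in place, the unit test for the MailSent event is now easy to write — something along these lines:

[Test]
public void Should_raise_the_MailSent_event_after_sending()
{
    var sender = new SmtpMailSender();

    // Supersede the real SmtpClient.Send() call with a harmless no-op.
    sender.SupersedeSendAction(m => { });

    bool eventRaised = false;
    sender.MailSent += (s, e) => eventRaised = true;

    sender.Send(new MailMessage("from@example.com", "to@example.com"));

    Assert.IsTrue(eventRaised);
}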

It is only a small difference, but as Feathers states, using the supersede term indicates it is a rare deviation from default behaviour, and for special cases only:

One nice thing about using the word supersede as the method prefix is that it is kind of fancy and uncommon. If you ever get concerned about whether people are using the superceding methods in production code, you can do a quick search to make sure they aren’t.

Only supersede when you can’t inject

Remember, though, that Supersede Instance Variable should only be used for very special cases like individual system library calls in low-level wrapper classes. Dependency Injection and intention-revealing interfaces are still a much better approach, and should still be your primary tool in all other situations.

How to Lose Traction on a Personal Software Project

So you’ve started writing your first program — great stuff! Here are a few tips and traps to watch out for along the way.

Long periods of time when your program doesn’t compile and/or run at all

You enthusiastically start work on some big changes (e.g. re-architecting your application), but stop because you hit a brick wall, or don’t have the time to finish it. The source code is left in a crippled and unfinished state. You can’t do anything with any of your code until this is fixed, and the longer you leave it, the more you’ll forget and the harder it will be to get started again.

In Continuous Integration this is called a broken build, and is a big no-no because it impacts other people’s ability to work. Without the pressure of a team environment pushing you forward, having a roadblock like this in your path makes it very easy to lose faith and motivation.

Make massive changes on a whim without revision control or a backup

There’s nothing worse than changing your mind about the design of something after you’ve already thrown away the old one. When you do something that involves changing or deleting existing code, be sure to make some sort of backup in case things don’t work out.

If you haven’t taken the plunge with revision control yet, I highly recommend looking at some of the free SVN or Git hosting services out there — once you get into it, you’ll never look back.

Ignore the YAGNI principle and focus on the fun/easy bits

Focusing on things like validation, eye candy or even general purpose ‘utility’ functions is a great way to build up a large complex code base that doesn’t do anything useful yet. Focus on the core functionality of your software first — your main features should be complete before you start thinking about nice-to-haves.

Throw your code away and start from scratch

As Netscape famously discovered a few years ago, throwing away existing code to start afresh is almost never a good idea. Resist the urge and make a series of small, manageable code refactorings instead.

Start programming before you have a clear goal in mind

Instead of a command line tool, maybe my application would be better as a simple GUI application? Or, I was originally writing my homebrew game for my old Xbox360, but now that we’ve bought a Wii, I’ll port it to that instead!

What are you actually trying to achieve? Spend some time with a pen and some paper coming up with a really clear vision of what you’re trying to create — e.g. screen mock-ups. If you don’t know what you’re writing from the start, the goal posts will keep moving as you change your mind, and you’ll have no chance of finishing it.

Get carried away with project hype before you’ve actually got anything to show for yourself

Spending hours trying to think of the perfect name for your software, designing an icon, choosing the perfect open-source license and making a website won’t get you any closer to having a working application. Get something up and running first, and worry about telling people about it later.

Start a million new features and don’t finish any of them

Jumping from one idea to another without finishing anything is like spinning a car’s wheels and not going anywhere. Make a list of features your program should have, and put them in order of most-to-least important. Work on them, one-at-a-time, in that order.

Capture the Output from a Scheduled Task

Today I had to diagnose a problem with a Windows Scheduled Task that was sporadically failing with a non-zero return code. The exe in question was a .NET console application that was throwing an exception before Main() got called, so it was outside our try-catch block.

Anyway, if you ran the .exe from a command line yourself, you would see the error written to stderr. If you ran the Scheduled Task, the error was not logged anywhere.

To capture the output of the scheduled task, I redirected it to a text file with the following command:

before: NightlyBatchJob.exe
after: cmd /C NightlyBatchJob.exe >> NightlyBatchJob.output.txt 2>&1

The > symbol redirects the output to a file; >> makes it append instead of creating a new blank file each time it runs. 2>&1 makes it include the output from stderr with stdout — without it you won’t see any errors in your logs.

The whole command is run in a new cmd.exe instance, because just running an .exe directly from a scheduled task doesn’t seem to produce any console output at all.