A wee test helper for setting private ID fields

Every now and then I need to write tests that depend on the ID of a persistent object. IDs are usually private and read-only, assigned internally by your ORM, so I use a little fluent extension method to help set them via reflection.

public static class ObjectExtensions
{
    // Helper to set a private ID field via reflection
    public static T WithId<T>(this T obj, object id)
    {
        typeof(T)
            .GetField("id", BindingFlags.Instance | BindingFlags.NonPublic)
            .SetValue(obj, id);

        return obj;
    }
}

Tweak as desired to suit your naming conventions or base classes. Usage is as follows:

[TestFixture]
public class When_comparing_two_foos
{
    [Test]
    public void Should_not_be_the_same_if_they_have_different_ids()
    {
        var a = new Foo().WithId(4);
        var b = new Foo().WithId(42);

        a.Should().Not.Be.EqualTo(b);
    }
}

Memory leak with Enterprise Library 4 Data block and Execute Reader

Yesterday we had a very strange problem in production with an old sproc-based ASP.NET web application. We had just switched from using a handful of “SqlHelper” classes to the Microsoft Enterprise Library Data Access Application block (DAAB).

Strangely though, after releasing these changes to production, the application became plagued by connection and memory leaks, causing connection pool overflows (over 200 connections to the SQL box at one point) and the IIS application pool to fall over every few minutes.

Memory leaks are notoriously difficult to diagnose. We eventually were able to narrow it down to this offending piece of code, used for running FOR XML queries:

public XmlDocument ExecuteNativeXml(string sprocName, params object[] parameters)
{
    SqlDatabase db = (SqlDatabase)DatabaseFactory.CreateDatabase();
    using (DbCommand command = db.GetStoredProcCommand(sprocName, parameters))
    {
        XmlDocument document = new XmlDocument();

        using (XmlReader reader = db.ExecuteXmlReader(command))
            document.Load(reader);

        return document;
    }
}

Can you spot the memory leak? Everything that implements IDisposable is wrapped in a using block, and DAAB takes care of opening/closing SqlConnections — our code doesn’t have any contact with them at all. So what’s the problem then?

This article about connection pools in SQLMag from 2003 explains it:

…My test application shows that even when you use this [CommandBehavior.CloseConnection] option, if you don’t explicitly close the DataReader (or SqlConnection), the pool overflows. The application then throws an exception when the code requests more connections than the pool will hold.

Some developers insist that if you set the CommandBehavior.CloseConnection option, the DataReader and its associated connection close automatically when the DataReader finishes reading the data. Those developers are partially right—but the option works this way only when you’re using a complex bound control in an ASP.NET Web application. Looping through a DataReader result set to the end of its rowset (that is, until the DataReader’s Read method returns false) isn’t enough to trigger automatic connection closing. However, if you bind to a complex bound control such as the DataGrid, the control closes the DataReader and the connection—but only if you’ve set the CommandBehavior.CloseConnection option.

If you execute a query by using another Execute method (e.g., ExecuteScalar, ExecuteNonQuery, ExecuteXMLReader), you are responsible for opening the SqlConnection object and, more importantly, closing it when the query finishes. If you miss a close, orphaned connections quickly accumulate.

The fix is pretty ugly and unexpected — you have to reach inside the DbCommand and explicitly close its connection yourself:

public XmlDocument ExecuteNativeXml(string sprocName, params object[] parameters)
{
    SqlDatabase db = (SqlDatabase)DatabaseFactory.CreateDatabase();
    using (DbCommand command = db.GetStoredProcCommand(sprocName, parameters))
    {
        XmlDocument document = new XmlDocument();

        using (XmlReader reader = db.ExecuteXmlReader(command))
            document.Load(reader);

        // If you do not explicitly close the connection here, it will leak!
        if (command.Connection.State == ConnectionState.Open)
            command.Connection.Close();

        return document;
    }
}

So the bug was not the fault of DAAB, but it was caused by its use of CommandBehavior.CloseConnection — the recommended technique. A nice trap for young players!

Increase font size, get to the point quicker

A few months ago, I changed all my IDEs and text editors (Visual Studio, SQL Management Studio, Notepad2, Textmate) from their default font sizes up to a much larger 14-16 points.

A bigger font size is easier on the eyes, and encourages you to write shorter lines and smaller methods that fit on one screen so you don’t have to scroll to read them.

It’s not a massive change, but does provide two useful side-effects: it promotes highly cohesive code (lots of small units of code doing specific little tasks) and also reduces cognitive load (i.e. stress).

Recently I installed a nice new WordPress theme for my blog that looks great and has a much bigger font size than the previous one. And after two weeks I have found similar effects creeping across into my writing style.

As well as writing bite-sized units of code, I am writing bite-sized blog posts that get to the point a lot quicker. Good stuff.

SOLID ugly code

Today we are working on a system that, among other things, sends notification e-mails to employees when their attention is required. Getting an employee’s e-mail address is normally pretty simple, but this organisation has around 10,000 staff out in the field, many of whom don’t have access to a computer, let alone a work e-mail account.

To counter this problem we use some simple chain-of-command rules:

  1. If the Employee has an e-mail address, send it to that.
  2. If he doesn’t have one, send it to his immediate manager. If his manager doesn’t have an e-mail address, keep backtracking up the organisation until you find someone that does.
  3. If still no email address is found, send the message to a system administrator, and they can get the word out via other channels.
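In plain C#, the chain-of-command rules above boil down to a simple walk up the management hierarchy. This is just a sketch: the Employee shape, method names and fallback address are all assumptions for illustration, not the real domain model (in the actual system this logic lives in SQL, as explained below):

```csharp
using System;

// Hypothetical in-memory model: the real hierarchy lives in the legacy
// database, so this Employee shape is assumed purely for illustration.
public class Employee
{
    public string EmailAddress { get; set; }
    public Employee Manager { get; set; }
}

public static class ChainOfCommandEmailResolver
{
    // Assumed fallback address for rule 3.
    public const string AdminEmail = "sysadmin@example.com";

    public static string Resolve(Employee employee)
    {
        // Rules 1 and 2: use the employee's own address, or backtrack up
        // the organisation until somebody has one.
        for (var current = employee; current != null; current = current.Manager)
            if (!string.IsNullOrEmpty(current.EmailAddress))
                return current.EmailAddress;

        // Rule 3: nobody in the chain has an address.
        return AdminEmail;
    }
}
```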

The interface for this service is pretty simple. It takes an employee, and returns an email address:

/// <summary>
/// Service that can find an email address for an Employee... or the next best
/// alternative if they don't have one.
/// </summary>
public interface IEmailAddressResolver
{
    string GetEmailAddressFor(IEmployee employee);
}

So how am I going to implement it? With a T-SQL stored procedure, of course.

What? That may sound like a pretty bad idea — stored procedures are notorious for leaking application + domain logic into the persistence layer, and they are practically impossible to write tests for. But here is my justification:

  • This is a database-driven legacy app, and only one bounded context has been modeled using DDD so far. The organisational hierarchy is only accessible via SQL, and modeling and mapping the legacy schema with NHibernate would take a couple of weeks at least. Therefore the simplest way to query it is via stored procedure, or stored-procedure backed services.
  • I don’t want to add an e-mail property to Employee because that is an application concern, not part of the domain model. This needs to be done in a different layer, along with usernames, passwords and UI state, and we haven’t really thought about that yet.
  • We’re getting close to the final release date for this project and we have a massive backlog of work remaining. A stored procedure is about the quickest thing I can think of to implement, and everyone in the team is well-versed in SQL.

Putting it to practice, here’s the concrete implementation we wrote. It’s called via NHibernate so at least we get caching:

// Implements IEmailAddressResolver using a stored proc.
public class EmailAddressResolver : IEmailAddressResolver
{
    readonly ISession session;

    ...

    public string GetEmailAddressFor(IEmployee employee)
    {
        if (employee == null)
            throw new ArgumentNullException("employee");

        return this.session.GetNamedQuery("employeeEmailAddress")
            .SetParameter("employee", employee)
            .SetCacheable(true)
            .UniqueResult<string>();
    }
}

I’m not even going to show you the stored proc.

SOLID lets you write ugly code when you have to

The point of this story is that sometimes you have to write ugly code. But when you do, SOLID lets you do so in a neat decoupled manner. None of the callers of IEmailAddressResolver have any idea it’s actually just a dirty stored procedure because the implementation details are all hidden behind an intention-revealing interface. One day we can write a better implementation, swap them out in the IoC container, and no-one will be any wiser.

Using NUnit to check domain event handlers are registered

We’ve been using Udi Dahan’s excellent Domain Events pattern in a project at work. It’s best to keep them as coarse-grained as possible, but we have already identified a dozen or so events that need to be raised by the domain and processed by our services layer.

Naturally, however, I am forgetting to register some of the event handlers in our IoC container. So, as before with our domain services, I decided to write some integration tests to check everything is set up properly. This is very simple to achieve using NUnit’s trusty parameterized tests, ServiceLocator and a sprinkling of generics:

IEnumerable<Type> GetDomainEvents()
{
    var domain = Assembly.GetAssembly(typeof(Employee));
    return domain.GetTypes()
        .Where(typeof(IDomainEvent).IsAssignableFrom)
        .Where(t => !t.Equals(typeof(IDomainEvent)));
}

[Test]
public void Should_be_able_to_resolve_handlers_for_domain_event(
    [ValueSource("GetDomainEvents")]Type @event)
{
    var handler = typeof(IHandles<>).MakeGenericType(@event);
    ServiceLocator.Current.GetAllInstances(handler).Should().Not.Be.Empty();
}

This reveals a nice todo list of all the handlers we haven’t implemented yet. And the ones I forgot to register!

Law of Demeter is easy to spot when you need extra mocks

In code, the Law of Demeter (aka the one-dot rule) is a design principle that says an object should only talk to its immediate collaborators, rather than reaching through them to their internals:

void DoSomething(IFoo foo)
{
    foo.GetStatus(); // good
}

void DoSomething(IFoo foo)
{
    foo.Profile.GetStatus(); // bad
}

In this example, DoSomething() knows intimate details about what an IFoo’s insides look like. This is bad because it’s additional coupling that will bog us down later — if we ever want to change the internal composition of IFoo, we will also have to update all the prying methods like DoSomething() that depend on it.

In some situations, this rule really doesn’t matter, e.g. for trivial or built-in types (datasets come to mind). But other times we definitely want to avoid it. One trick I have discovered for identifying trouble spots is a code smell you might encounter when isolating something for unit testing:

var foo = new Mock<IFoo>();
var profile = new Mock<IProfile>();

profile.Setup(p => p.GetStatus()).Returns(/* the thing we are testing */);
foo.SetupGet(f => f.Profile).Returns(profile.Object);

DoSomething(foo.Object);

// assert etc

Does this code look familiar? In it we are setting up two mocks:

  • The Profile instance, which has a method we want to stub out and verify
  • The parent IFoo, which only exists to return the child Profile

The code smell is all the extra setup cruft required — two levels of nested mocks for just one parameter we are testing. If we do move method and provide a GetStatus() method on IFoo (that internally delegates to Profile), our test immediately becomes a lot clearer:

var foo = new Mock<IFoo>();
foo.Setup(f => f.GetStatus()).Returns(/* the thing we are testing */);

DoSomething(foo.Object);

// assert etc
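For completeness, the “move method” refactoring that enables this simpler test might look like the sketch below. The concrete classes and the "active" status value are assumed purely for illustration:

```csharp
// Hypothetical concrete types behind IFoo/IProfile, assumed for illustration.
public class Profile
{
    public string GetStatus() => "active";
}

public class Foo
{
    readonly Profile profile = new Profile();

    // Foo delegates internally, so callers need one dot instead of two
    // and never learn that a Profile exists at all.
    public string GetStatus() => profile.GetStatus();
}
```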

Does your Visual Studio run slow?

Recently I’ve been getting pretty annoyed by my Visual Studio 2008, which has been taking longer and longer to do my favorite menu item, Window > Close All Documents. Today was the last straw — I decided 20 seconds to close four C# editor windows really isn’t acceptable for a machine with four gigs of ram, and so I went to look for some fixes.

Here are some of the good ones I found that worked. Use at your own risk of course!

Disable the customer feedback component

In some scenarios Visual Studio may try to collect anonymous statistics about your code when closing a project, even if you opted out of the customer feedback program. To stop this time-consuming behaviour, find this registry key:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\VisualStudio\9.0\Packages\{2DC9DAA9-7F2D-11d2-9BFC-00C04F9901D1}

and rename it to something invalid:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\VisualStudio\9.0\Packages\Disabled-{2DC9DAA9-7F2D-11d2-9BFC-00C04F9901D1}

Clear Visual Studio temp files

Deleting the contents of the following temp directories can fix a lot of performance issues with Visual Studio and web projects:

C:\Users\richardd\AppData\Local\Microsoft\WebsiteCache
C:\Users\richardd\AppData\Local\Temp\Temporary ASP.NET Files\siteName

Clear out the project MRU list

Apparently Visual Studio sometimes accesses the files in your recent projects list at random times, e.g. when saving a file. I have no idea why it does this, but it can have a big performance hit, especially if some are on a network share that is no longer available.

To clear your recent project list out, delete any entries from the following path in the registry:

HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\8.0\ProjectMRUList

Disable AutoRecover

In nearly four years, I have never used Visual Studio’s AutoRecover feature to recover work. These days, source control and saving regularly almost entirely eliminate the need for it.

To disable it and gain some performance (particularly with large solutions), go to Tools > Options > Environment > AutoRecover and uncheck Save AutoRecovery information. (Cheers Jake for the tip)

Three reasons why a unit test might fail

Here is a little convention I have adopted for check-in comments when fixing red lights in NUnit:

  • A failing test is when the issue lies in production code. It fails because the code it is testing has bugs or isn’t fully implemented to spec yet.
  • A buggy test is when the test itself has an issue, and the production code is totally fine. For us these most commonly occur when we forget to set something up e.g. null test objects or unconfigured mocks.
  • A deprecated test fails because the behaviour of the code is now intentionally different than before, and the assertions it makes are no longer valid.

A fourth, more insidious case is the missing test, where possible paths through production code exist without any tests that specify what their behaviour should be. If you’re doing TDD this should never happen, but when it does it can be quite difficult to track down. A coverage tool like NCover can help, but only if you miss a whole block or method. Otherwise you need to use your programmer spidey sense and stay on the look-out for uncovered situations.

Hopefully most of your red lights are due to failing and deprecated tests. If you get a lot of buggy or missing ones, then it probably means you are skipping the red step of red-green-refactor.

ASP.NET MVC, TDD and Fluent Validation

Yesterday I wrote about ASP.NET MVC, TDD and AutoMapper, and how you can use them together in a DDD application. Today I thought I would follow up and explain how to apply these techniques to another important (but boring) part of any web application: user input validation.

To achieve this, we are using Fluent Validation, a validation framework that lets you easily set up validation rules using a fluent syntax:

public class UserRegistrationFormValidator : AbstractValidator<UserRegistrationForm>
{
    public UserRegistrationFormValidator()
    {
        RuleFor(f => f.Username).NotEmpty()
            .WithMessage("You must choose a username!");
        RuleFor(f => f.Email).EmailAddress()
            .When(f => !String.IsNullOrEmpty(f.Email))
            .WithMessage("This doesn't look like a valid e-mail address!");
        RuleFor(f => f.Url).MustSatisfy(new ValidWebsiteUrlSpecification())
            .When(f => !String.IsNullOrEmpty(f.Url))
            .WithMessage("This doesn't look like a valid URL!");
    }
}

If you think about it, validation and view model mapping have similar footprints in the application. They both:

  • Live in the application services layer
  • May invoke domain services
  • Use third-party libraries
  • Have standalone fluent configurations
  • Have standalone tests
  • Are injected into the application services

Let’s see how it all fits together starting at the outermost layer, the controller.

public class AccountController : Controller
{
    readonly IUserRegistrationService registrationService;
    readonly IFormsAuthentication formsAuth;
    ...
    [AcceptVerbs(HttpVerbs.Post)]
    public ActionResult Register(UserRegistrationForm user)
    {
        if (user == null)
            throw new ArgumentNullException("user");
        try
        {
            this.registrationService.RegisterNewUser(user);
            this.formsAuth.SignIn(user.Username, false);
            return RedirectToAction("Index", "Home");
        }
        catch (ValidationException e)
        {
            e.Result.AddToModelState(this.ModelState, "user");
            return View("Register", user);
        }
    }
    ...
}

As usual, the controller is pretty thin, delegating all responsibility (including performing any required validation) to an application service that handles new user registration. If validation fails, all our controller has to do is catch an exception and append the validation messages contained within to the model state to tell the user any mistakes they made.

The UserRegistrationForm validator is injected into the application service along with any others. Just like with AutoMapper, we can now test the controller, validator and application service separately.

public class UserRegistrationService : IUserRegistrationService
{
    readonly IUserRepository users;
    readonly IValidator<UserRegistrationForm> validator;
    ...
    public void RegisterNewUser(UserRegistrationForm form)
    {
        if (form == null)
            throw new ArgumentNullException("form");
        this.validator.ValidateAndThrow(form);
        User user = new UserBuilder()
            .WithUsername(form.Username)
            .WithAbout(form.About)
            .WithEmail(form.Email)
            .WithLocation(form.Location)
            .WithOpenId(form.OpenId)
            .WithUrl(form.Url);
        this.users.Save(user);
    }
}

Testing the user registration form validation rules

Fluent Validation has some nifty helper extensions that make unit testing a breeze:

[TestFixture]
public class When_validating_a_new_user_form
{
    IValidator<UserRegistrationForm> validator = new UserRegistrationFormValidator();
    [Test]
    public void The_username_cannot_be_empty()
    {
        validator.ShouldHaveValidationErrorFor(f => f.Username, "");
    }
    [Test]
    public void A_valid_email_address_must_be_provided()
    {
        validator.ShouldHaveValidationErrorFor(f => f.Email, "");
    }
    [Test]
    public void The_url_must_be_valid()
    {
        validator.ShouldNotHaveValidationErrorFor(f => f.Url, "http://foo.bar");
    }
}

You can even inject dependencies into the validator and mock them out for testing. For example, in this app the validator calls an IUsernameAvailabilityService to make sure the chosen username is still available.
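As a sketch of that idea, the dependency is simply taken through the validator’s constructor, which makes it trivial to substitute a Moq fake in tests. The IUsernameAvailabilityService interface, its IsAvailable method and the rule shown are assumptions for illustration, not the app’s real code:

```csharp
// Hypothetical sketch: a FluentValidation validator with an injected dependency.
public interface IUsernameAvailabilityService
{
    bool IsAvailable(string username);
}

public class UserRegistrationFormValidator : AbstractValidator<UserRegistrationForm>
{
    public UserRegistrationFormValidator(IUsernameAvailabilityService usernames)
    {
        // Must() accepts a predicate, so we can point it straight at the service.
        RuleFor(f => f.Username)
            .Must(usernames.IsAvailable)
            .WithMessage("That username is already taken!");
    }
}

// In a unit test, mock the service so no database is needed:
// var usernames = new Mock<IUsernameAvailabilityService>();
// usernames.Setup(s => s.IsAvailable(It.IsAny<string>())).Returns(false);
// var validator = new UserRegistrationFormValidator(usernames.Object);
```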

Testing the user registration service

This validation code is now completely isolated, and we can mock out the entire thing when testing the application service:

[TestFixture]
public class When_registering_a_new_user
{
    IUserRegistrationService registrationService;
    Mock<IUserRepository> repository;
    Mock<IValidator<UserRegistrationForm>> validator;
    [Test, ExpectedException(typeof(ValidationException))]
    public void Should_throw_a_validation_exception_if_the_form_is_invalid()
    {
        validator.Setup(v => v.Validate(It.IsAny<UserRegistrationForm>()))
            .Returns(ObjectMother.GetFailingValidationResult());
        registrationService.RegisterNewUser(ObjectMother.GetNewUserForm());
    }
    [Test]
    public void Should_add_the_new_user_to_the_repository()
    {
        var form = ObjectMother.GetNewUserForm();
        registrationService.RegisterNewUser(form);
        repository.Verify(
            r => r.Save(It.Is<User>(u => u.Username.Equals(form.Username))));
    }
}

Testing the accounts controller

With validation out of the way, all we have to test on the controller is whether or not it appends the validation errors to the model state. Here are the fixtures for the success/failure scenarios:

[TestFixture]
public class When_successfully_registering_a_new_user : AccountControllerTestContext
{
    [SetUp]
    public override void SetUp()
    {
        ...
        result = controller.Register(form);
    }
    [Test]
    public void Should_register_the_new_user()
    {
        registrationService.Verify(s => s.RegisterNewUser(form), Times.Exactly(1));
    }
    [Test]
    public void Should_sign_in()
    {
        formsAuth.Verify(a => a.SignIn(user.Username, false));
    }
}
[TestFixture]
public class When_registering_an_invalid_user :  AccountControllerTestContext
{
    [SetUp]
    public override void SetUp()
    {
        ...
        registrationService.Setup(s => s.RegisterNewUser(form)).Throws(
            new ValidationException(
                ObjectMother.GetFailingValidationResult()));
        result = controller.Register(form);
    }
    [Test]
    public void Should_not_sign_in()
    {
        formsAuth.Verify(a => a.SignIn(It.IsAny<string>(),
            It.IsAny<bool>()), Times.Never());
    }
    [Test]
    public void Should_redirect_back_to_the_register_view_with_the_form_contents()
    {
        result.AssertViewRendered().ForView("Register")
            .WithViewData<UserRegistrationForm>().ShouldEqual(form);
    }
}

This post has been a bit heavier on code than usual, but hopefully it is enough to get an idea of how easy it is to implement Fluent Validation in your ASP.NET MVC application.

ASP.NET MVC, TDD and AutoMapper

This post is in response to a question on a recent article I wrote about mapping domain entities to presentation models with AutoMapper, an object-object mapper for .NET. Today I will give a brief example of how we can tie it all together in an ASP.NET MVC application using dependency injection and application services.

First, let’s start with the controller and the application service it talks to:

public class TasksController : Controller
{
    readonly ITaskService tasks;

    public TasksController(ITaskService tasks)
    {
        if (tasks == null)
            throw new ArgumentNullException("tasks");

        this.tasks = tasks;
    }

    public ActionResult Index()
    {
        IEnumerable<TaskView> results = this.tasks.GetCurrentTasks();
        return View(results);
    }

    ...
}
public interface ITaskService
{
    IEnumerable<TaskView> GetCurrentTasks();
    TaskView AddTask(TaskForm task);
    TaskView SaveTask(TaskForm task);
    void DeleteTask(int id);
}

Note the service’s inputs and outputs are defined in terms of view models (TaskView) and edit models (TaskForm). Performing this mapping in the application services layer keeps our controllers nice and simple. Remember we want to keep them as thin as possible.
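The post doesn’t show the models themselves, but for orientation they might look something like this (the property names are assumed, not taken from the real app):

```csharp
// Hypothetical view model: a read-only projection shaped for display.
public class TaskView
{
    public int Id { get; set; }
    public string Name { get; set; }
    public bool IsComplete { get; set; }
}

// Hypothetical edit model: just the fields a user can post back.
public class TaskForm
{
    public string Name { get; set; }
}
```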

Inside the tasks service

public class TaskService : ITaskService
{
    readonly ITaskRepository taskRepository;
    readonly IMappingEngine mapper;

    public TaskService(ITaskRepository taskRepository, IMappingEngine mapper)
    {
        if (taskRepository == null)
            throw new ArgumentNullException("taskRepository");
        if (mapper == null)
            throw new ArgumentNullException("mapper");

        this.taskRepository = taskRepository;
        this.mapper = mapper;
    }

    public IEnumerable<TaskView> GetCurrentTasks()
    {
        IEnumerable<Task> tasks = this.taskRepository.GetAll();
        return tasks.Select(t => this.mapper.Map<Task, TaskView>(t));
    }

    ...
}

The tasks service has two dependencies: the tasks repository* and AutoMapper itself. Injecting a repository into a service is simple enough, but for AutoMapper we have to inject an IMappingEngine instance to break the static dependency on AutoMapper.Mapper as discussed in this post.

* Note this is a very simple example — in a bigger app we might use CQS instead of querying the repository directly.

Testing the service

We are using Moq to isolate the tasks service from its repository and AutoMapper dependencies, which always return a known result from the Object Mother. Here are our test cases for all the different things that should occur when retrieving the current tasks:

[TestFixture]
public class When_getting_all_current_tasks
{
    Mock<ITaskRepository> repository;
    Mock<IMappingEngine> mapper;
    ITaskService service;
    IEnumerable<Task> tasks;

    [SetUp]
    public void SetUp()
    {
        repository = new Mock<ITaskRepository>();
        mapper = new Mock<IMappingEngine>();
        service = new TaskService(repository.Object, mapper.Object);

        tasks = ObjectMother.GetListOfTasks();
        repository.Setup(r => r.GetAll()).Returns(tasks);
        mapper.Setup(m => m.Map<Task, TaskView>(It.IsAny<Task>()))
            .Returns(ObjectMother.GetTaskView());
    }

    [Test]
    public void Should_get_all_the_tasks_from_the_repository()
    {
        service.GetCurrentTasks();
        repository.Verify(r => r.GetAll());
    }

    [Test]
    public void Should_map_tasks_to_view_models()
    {
        service.GetCurrentTasks();
        foreach (Task task in tasks)
            mapper.Verify(m => m.Map<Task, TaskView>(task));
    }

    [Test]
    public void Should_return_mapped_tasks()
    {
        IEnumerable<TaskView> results = service.GetCurrentTasks();
        results.Should().Not.Be.Empty();
    }
}

Enter AutoMapper

As you can see, we have both the controller and service under test without needing to involve AutoMapper yet. Remember it is being tested separately as discussed in my previous post.

To wire up AutoMapper so it gets injected into the TaskService, all we have to do is register IMappingEngine in the IoC container:

container.RegisterInstance<IMappingEngine>(Mapper.Engine);

Putting the mapping step in the application service and then mocking out AutoMapper like this allows us to easily test everything in isolation, without having to set up the mapper first.

I hope this answers your question Paul!