Powershell to recursively strip C# regions from files

Here’s a super quick little powershell snippet to strip regions out of all C# files in a directory tree. Useful for legacy code where people hide long blocks in regions rather than encapsulate it into smaller methods/objects.

# Strip #region / #endregion lines from every C# file under $src
dir $src -recurse -filter *.cs | foreach {
    $file = $_.fullname
    echo $file
    (get-content $file) | where {$_ -notmatch "^.*\#(end)?region.*$" } | out-file $file
}

Run this in your solution folder and support the movement against C# regions!

Dogfooding: how to build a great API

It’s common these days for web applications to provide some sort of RESTful API for third party integration, but too many of them are built after the fact, as a lacklustre afterthought — with holes and limitations that make them useless for all but the most trivial applications. Here are some common pitfalls I’ve seen (and been partially responsible for!) arising from this style of development:

  • Only a narrow set of features supported: The application may have a hundred different tasks a user can do in the GUI, but only a handful are supported through the API.
  • Impractical for real-world use: APIs that fail to consider things like minimizing the number of requests through batching and bulk operations. Just think: how would you implement GMail’s “mark all as read” button if you could only update one conversation at a time? (See the sketch after this list.)
  • API features lag UI features: For example a new field might be added to the UI, but there is no way to access it in the API until someone requests it.
  • Weird/buggy APIs: Bad APIs don’t get as much developer attention if no one’s using them, making them a natural place for bugs and design quirks to harbour.
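
To make the batching point concrete, here’s a purely hypothetical sketch of a bulk endpoint that marks many conversations as read in a single request (the controller, service and route below are invented for illustration, not taken from any real API):

using System.Collections.Generic;
using System.Web.Mvc;

// Hypothetical application service; one call covers the whole batch.
public interface IConversationService
{
    void MarkAsRead(IEnumerable<int> conversationIds);
}

public class ConversationsController : Controller
{
    readonly IConversationService conversations;

    public ConversationsController(IConversationService conversations)
    {
        this.conversations = conversations;
    }

    // POST /conversations/markread with a body like ids=1&ids=2&ids=3,
    // instead of one HTTP request per conversation.
    [AcceptVerbs(HttpVerbs.Post)]
    public ActionResult MarkRead(int[] ids)
    {
        ids = ids ?? new int[0];
        conversations.MarkAsRead(ids);
        return Json(new { updated = ids.Length });
    }
}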

Instead of restricting developers to a limited, locked-down set of operations, the goal is to empower developers to be as creative and productive as possible with your API.

A better way to design an API

Don’t try to build the sort of API you think people will want. Flip the problem on its head and eat your own dog food: make a rule that, from now on, all new UI features can only be based on the public API. Abandon your traditional, internal back-end interfaces and start using the public API as an interface for all your front-end applications.

“All Google tools use the API. There is no backdoor. The web UI is built on Google App Engine, for example.”

This quote from High Scalability beautifully summarizes the design of their API. There is no backdoor, no privileged internal interface — Google developers have to use the exact same API as you do at home.

Twitter is another great example of an application that dogfoods its own API – click view source on twitter.com and you’ll see, apart from tracking and advertising, it’s mostly AJAX calls to their public developer API – the very same API all the thousands of third party apps use.

Running Mocha browser tests in TeamCity

Mocha is a great javascript testing framework that supports TeamCity out-of-the-box for testing node.js-based apps on your build server. Here’s a quick guide on how to get it running in TeamCity for browser-based apps as well.

Configuring Mocha’s TeamCity reporter

First we need to configure Mocha to emit specially formatted messages to console.log() that TeamCity can detect and parse. This is easy because Mocha supports pluggable reporters for displaying test progress, and it provides a TeamCity reporter out of the box. Set this up in your mocha.html page:

<script type="text/javascript">
	mocha.setup({
		ui: 'bdd',
		reporter: function(runner) {
			// Note I am registering both an HTML and a TeamCity reporter here
			// so I can use the same HTML page for local browser development and
			// TeamCity.
			new mocha.reporters.HTML(runner);
			new mocha.reporters.Teamcity(runner);
		},
		ignoreLeaks: true,
		timeout: 5000 // ms
	});

	$(function(){
		mocha.run();
	})
</script>

Executing Mocha tests from the command line

To run our tests we will use a special browser, PhantomJS, which is “headless” and has no GUI — you simply invoke it from the command line and interact with it via javascript. Here’s how you can load an HTML page containing Mocha tests — from either the local file system or a web server — passing the URL as a command-line argument:

(function () {
    "use strict";
    var system = require("system");
    var url = system.args[1];

    phantom.viewportSize = {width: 800, height: 600};

    console.log("Opening " + url);

    var page = new WebPage();

    // This is required because PhantomJS sandboxes the website, and does not
    // show console messages from that page by default.
    page.onConsoleMessage = function (msg) {
        console.log(msg);

        // Exit as soon as the last test finishes.
        if (msg && msg.indexOf("##teamcity[testSuiteFinished name='mocha.suite'") !== -1) {
            phantom.exit();
        }
    };

    page.open(url, function (status) {
        if (status !== 'success') {
            console.log('Unable to load the address!');
            phantom.exit(-1);
        } else {
            // Timeout - kill PhantomJS if still not done after 2 minutes.
            window.setTimeout(function () {
                phantom.exit();
            }, 120 * 1000);
        }
    });
}());

You can then invoke your tests from the command line, and you should see a bunch of TeamCity messages scroll past.

phantomjs.exe phantomjs-tests.js http://localhost:88/jstests/mocha.html

Note in my example I am running tests on a local web server on port 88, but PhantomJS also supports local file:// URLs.

Setting up a build step for PhantomJS

The last thing we need to do is set up a new Command Line build step in TeamCity to run PhantomJS.
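
The build step settings end up looking something like this (the field names are approximate and may vary between TeamCity versions):

Runner type:        Command Line
Command executable: phantomjs.exe
Command parameters: phantomjs-tests.js http://localhost:88/jstests/mocha.html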

And that’s it! You should now see your test passes and failures showing up in TeamCity.

Acknowledgements

Thanks to Dan Merino for his original article on running Jasmine tests under TeamCity, which formed the basis for most of this post.

One NHibernate session per WCF operation, the easy way

This week I’ve been working on a brownfield Castle-powered WCF service that was creating a separate NHibernate session on every call to a repository object.

Abusing NHibernate like this was playing all sorts of hell with our app (e.g. TransientObjectExceptions), and prevented us from using transactions that matched a logical unit of work, so I set about refactoring it.

Goals

  • One session per WCF operation
  • One transaction per WCF operation
  • Direct access to ISession in my services
  • Rely on Castle facilities as much as possible
  • No hand-rolled code to plug everything together

There are a plethora of blog posts out there to tackle this problem, but most of them require lots of hand-rolled code. Here are a couple of good ones — they both create a custom WCF context extension to hold the NHibernate session, and initialize/dispose it via WCF behaviours:

  • NHibernate Session Per Request Using Castles WcfFacility
  • NHibernate’s ISession, scoped for a single WCF-call

These work well, but actually, there is a much simpler way that only requires the NHibernate and WCF Integration facilities.

Option one: manual Session.BeginTransaction() / Commit()

The easiest way to do this is to register NHibernate’s ISession in the container, with a per WCF operation lifestyle:

windsor.AddFacility<WcfFacility>();
windsor.AddFacility("nhibernate"new NHibernateFacility(...));
windsor.Register(
    Component.For<ISession>().LifeStyle.PerWcfOperation()
        .UsingFactoryMethod(x => windsor.Resolve<ISessionManager>().OpenSession()),
    Component.For<MyWcfService>().LifeStyle.PerWcfOperation());

If you want a transaction, you have to manually open and commit it. (You don’t need to worry about anything else because NHibernate’s ITransaction rolls back automatically on dispose):

[ServiceBehavior]
public class MyWcfService : IMyWcfService
{
    readonly ISession session;
    public MyWcfService(ISession session)
    {
        this.session = session;
    }
    public void DoSomething()
    {
        using (var tx = session.BeginTransaction())
        {
            // do stuff
            session.Save(...);
            tx.Commit();
        }
    }
}

(Note of course we are using WindsorServiceHostFactory so Castle acts as a factory for our WCF services. And disclaimer: I am not advocating putting data access and persistence directly in your WCF services here; in reality ISession would more likely be injected into query objects and repositories each with a per WCF operation lifestyle (you can use this to check for lifestyle conflicts). It is just an example for this post.)

Anyway, that’s pretty good, and allows a great deal of control. But developers must remember to use a transaction, or remember to flush the session, or else changes won’t be saved to the database. How about some help from Castle here?

Option two: automatic [Transaction] wrapper

Castle’s Automatic Transaction Facility allows you to decorate methods as [Transaction] and it will automatically wrap a transaction around it. IoC registration becomes simpler:

windsor.AddFacility<WcfFacility>();
windsor.AddFacility("nhibernate"new NHibernateFacility(...));
windsor.AddFacility<TransactionFacility>();
windsor.Register(
    Component.For<MyWcfService>().LifeStyle.PerWcfOperation());

And using it:

[ServiceBehavior, Transactional]
public class MyWcfService : IMyWcfService
{
    readonly ISessionManager sessionManager;
    public MyWcfService(ISessionManager sessionManager)
    {
        this.sessionManager = sessionManager;
    }
    [Transaction]
    public virtual void DoSomething()
    {
        // do stuff
        sessionManager.OpenSession().Save(...);
    }
}

What are we doing here?

  • We decorate methods with [Transaction] (remember to make them virtual!) instead of manually opening/closing transactions. I put this attribute on the service method itself, but you could put it anywhere — for example on a CQRS command handler, or domain event handler etc. Of course this requires that the class with the [Transactional] attribute is instantiated via Windsor so it can proxy it.
  • Nothing in the NHibernateFacility needs to be registered per WCF operation lifestyle. I believe this is because NHibernateFacility uses the CallContextSessionStore by default, which in a WCF service happens to be scoped to the duration of a WCF operation.
  • Callers must not dispose the session — that will be done by Castle after the transaction is committed. To discourage this I am using it as a method chain — sessionManager.OpenSession().Save() etc.
  • Inject ISessionManager, not ISession. The reason for this is related to transactions: NHibernateFacility must construct the session after the transaction is opened, otherwise it won’t know to enlist it. (NHibernateFacility knows about ITransactionManager, but ITransactionManager doesn’t know about NHibernateFacility.) If your service depends on ISession, Castle will construct the session when MyWcfService and its dependencies are resolved (at object creation time), before the transaction has started (at method dispatch time). Using ISessionManager instead allows you to construct the session lazily, after the transaction has been opened.
  • In fact, for this reason, ISession is not registered in the container at all — it is only accessible via ISessionManager (which is automatically registered by the NHibernate Integration Facility).

This gives us an NHibernate session per WCF operation, with automatic transaction support, without the need for any additional code.

Update: there is one situation where this doesn’t work — if your WCF service returns a stream that invokes NHibernate, or otherwise causes NHibernate to load data after the method has returned. A workaround for these methods is simply to omit the [Transaction] attribute (hopefully you’re following CQS and not writing to the DB in your query!).

Correctness vs Robustness

In programming, correctness and robustness are two high-level principles from which a number of other practices derive:

Correctness:
  • Design by Contract
  • Assertions
  • Invariants
  • Fail Fast

Robustness:
  • Defensive programming
  • Populating missing parameters
  • Sensible defaults
  • Getting out of the user’s way
  • Anticorruption Layer
  • Backwards-compatible APIs

Robustness adds built-in tolerance for common and non-critical mistakes, while correctness throws an error when it encounters anything less than perfect input. Although they are contradictory, both principles play an important role in most software, so it’s important to know when each is appropriate, and why.

Robustness

“Robustness” is well known as one of the founding principles of the internet, and is probably one of the major contributing factors to its success. Postel’s Law summarizes it simply:

Be conservative in what you do; be liberal in what you accept from others.

Postel’s Law originally referred to other computers on a network, but these days, it can be applied to files, configuration, third party code, other developers, and even users themselves. For example:

  • Problem: A rogue web browser that adds trailing whitespace to HTTP headers.
    Robust approach: strip the whitespace, process the request as normal.
    Correct approach: return an HTTP 400 Bad Request error status to the client.
  • Problem: A video file with corrupt frames.
    Robust approach: skip over the corrupt area to the next playable section.
    Correct approach: stop playback, raise a “Corrupt video file” error.
  • Problem: A config file with lines commented out using the wrong character.
    Robust approach: internally recognize the most common comment prefixes, ignore those lines.
    Correct approach: terminate on startup with a “bad configuration” error.
  • Problem: A user who enters dates in a strange format.
    Robust approach: try parsing the string against a number of different date formats, then render the correct format back to the user.
    Correct approach: show an invalid date error.

(Important side note: Postel’s law doesn’t suggest skipping validation entirely, but that errors in non-essential items simply be logged and/or warned about instead of throwing fatal exceptions.)

In many cases, relentless pursuit of correctness results in a pretty bad user experience. One recent annoyance of mine is companies that can’t handle spaces in credit card numbers. Computers are pretty good at text processing, so wasting the user’s time by forcing them to retype their credit card number in strange formats is pure laziness on the part of the developer. Especially when you consider that the validation error probably took more code to write than simply stripping the spaces out would have.
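
A minimal sketch of the robust approach, assuming nothing beyond standard .NET (the class and method names are mine, and the simple length check stands in for real card number validation such as a Luhn check): normalize the harmless formatting first, then validate strictly what remains.

using System;
using System.Linq;

public static class CreditCardNumber
{
    // Robust: spaces and dashes carry no meaning, so strip them out
    // rather than bouncing the form back to the user.
    public static string Normalize(string input)
    {
        if (input == null)
            throw new ArgumentNullException("input");

        return new string(input.Where(c => c != ' ' && c != '-').ToArray());
    }

    // Correct: once the harmless formatting is gone, anything that still
    // isn't a plausible card number gets rejected.
    public static bool LooksValid(string input)
    {
        var digits = Normalize(input);
        return digits.Length >= 12 && digits.Length <= 19 && digits.All(char.IsDigit);
    }
}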

Compare this to Google Maps, where you can enter just about anything in the search box and it will figure out a street address. I know which I prefer using.

Robustness makes users’ lives easier, and by doing so promotes uptake. Not only amongst end users, but also developers — if you’re developing a standard/library/service/whatever, and can build in a bit of well thought-out flexibility, it’s going to grant a second chance for users with clients that aren’t quite compliant, instead of kicking them out cold.

Adam Bosworth noted a couple of examples of this in a recent post on what makes successful standards:

If there is something in HTTP that the receiver doesn’t understand it ignores it. It doesn’t break. If there is something in HTML that the browser doesn’t understand, it ignores it. It doesn’t break. See Postel’s law. Assume the unexpected. False precision is the graveyard of successful standards. XML Schema did very badly in this regard.

HTTP and HTML are displaying robustness here; if they see anything they don’t recognize they simply skip past it. XML Schema on the other hand fails validation if there are any extra elements/attributes present other than precisely those specified in the XSD.

Correctness

So robustness makes life easier for users and third-party developers. Correctness, on the other hand, makes life easier for your own developers: instead of getting bogged down checking and fixing parameters and working around strange edge cases, they can focus on a single model in which all assumptions are guaranteed to hold. Any state outside the main success path can simply be rejected (by failing noisily), producing code that is briefer, easier to understand, and easier to maintain.
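
As a small illustration of the correctness style (Reservation is an invented example, not taken from any application discussed here), an internal domain object can simply refuse to exist in an invalid state, so the rest of the code never has to re-check it:

using System;

public class Reservation
{
    public DateTime Start { get; private set; }
    public DateTime End { get; private set; }

    public Reservation(DateTime start, DateTime end)
    {
        // Fail fast: a Reservation that violates its invariant can never
        // be constructed, so code that receives one can trust it.
        if (end <= start)
            throw new ArgumentException("End must be after start.");

        Start = start;
        End = end;
    }
}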

For example, consider parsing XHTML vs regular HTML. If XHTML isn’t well formed, you can simply throw a parser error and give up. HTML on the other hand has thoroughly documented graceful error handling and recovery procedures that you need to implement — much harder than simply validating it as XML.

Which should you use then?

We have a conflict here between robustness or correctness as a guiding principle. I remember one paralyzing moment of indecision I had when I was a bit younger with this very question. The specific details aren’t important, but the 50/50 problem basically boiled down to this: If my code doesn’t technically require this value to be provided, should I check it anyway?

To answer this question, you need to be aware of where you are in the code base, and who it is serving.

  • External interfaces (UI, input files, configuration, API etc) exist primarily to serve users and third parties. Make them robust, and as accommodating as possible, with the expectation that people will input garbage.
  • An application’s internal model (i.e. domain model) should be as simple as possible, and always be in a 100% valid state. Use invariants and assertions to make safe assumptions, and just throw a big fat exception whenever you encounter anything that isn’t right.
  • Protect the internal model from external interfaces with an anti-corruption layer, which maps and corrects invalid input where possible before passing it to the internal model (a sketch follows below).
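
Putting the three together, here is a sketch of what that anti-corruption layer might look like, again with invented names and reusing the Reservation example above: the mapper is liberal about the formats it accepts from the outside, while the domain object it produces still enforces its own invariants.

using System;
using System.Globalization;

// Hypothetical anti-corruption layer: liberal in what it accepts from the
// external interface, strict about what it hands to the internal model.
public class ReservationRequestMapper
{
    static readonly string[] AcceptedDateFormats =
        { "yyyy-MM-dd", "dd/MM/yyyy", "d MMM yyyy" };

    public Reservation Map(string startText, string endText)
    {
        var start = ParseDate(startText);
        var end = ParseDate(endText);

        // The Reservation constructor still throws if the cleaned-up values
        // violate its invariant, keeping the internal model correct.
        return new Reservation(start, end);
    }

    static DateTime ParseDate(string text)
    {
        // Robust edge: tolerate several common date formats and stray whitespace.
        return DateTime.ParseExact((text ?? "").Trim(), AcceptedDateFormats,
            CultureInfo.InvariantCulture, DateTimeStyles.None);
    }
}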

Remember if you ignore users’ needs, no one will want to use your software. And if you ignore programmers’ needs, there won’t be any software. So make your external interfaces robust. Make your internal model correct.

Or in other words: internally, seek correctness; externally, seek robustness. A successful application needs both.

Domain-Driven Documentation

Here’s a couple of real-life documentation examples from a system I’ve been building for a client:

Monitored Individual is a role played by certain Employees. Each Monitored Individual is required to be proficient in a number of Competencies, according to [among other things] what District they’re stationed in.

Training Programme is comprised of Skills, arranged in Skill Groups. Skill Groups can contain Sub Groups, nested as many levels deep as you like. Skills can be used for multiple Training Programmes, but you can’t have the same Skill twice under the same Training Programme. When a Skill is removed from a Training Programme, Individuals should no longer have to practice it.

This is the same style Evans uses himself in the blue DDD book. A colleague jokingly called it Domain-Driven Documentation.

I adopted it after noticing a couple of problems with my documentation:

  • I was using synonyms — different words with the same meaning — interchangeably to refer to the same thing in different places.
  • Sentences talking about the code itself looked messy and inconsistent when mixing class names with higher-level concepts.

It’s a pretty simple system. There are only three rules to remember: when referring to domain concepts, use capital letters, write them in full, and write them in bold.

Highlighting the names of domain concepts like this is a fantastic way to hammer down the ubiquitous language — the vocabulary shared between business and developers.

Since adopting it, I’ve noticed improvements in both the quality of my documentation, and of the communication in our project meetings — non-technical business stakeholders are starting to stick to the ubiquitous language now, where in the past they would fall back to talking about purely UI artifacts. This is really encouraging to see — definitely a success for DDD.

Repositories Don’t Have Save Methods

Here’s a repository from an application I’ve been working on recently. It has a pretty significant leaky abstraction problem that I shall be fixing tomorrow:

public interface IEmployeeRepository
{
    void Add(Employee employee);
    void Remove(Employee employee);
    Employee GetById(int id);
    void Save(Employee employee);
}

What’s Wrong with this Picture?

Let me quote the DDD step by step wiki on what exactly a repository is:

Repositories behave like a collection of an Aggregate Root, and act as a facade between your Domain and your Persistence mechanism.

The Add and Remove methods are cool — they provide the collection semantics. GetById is cool too — it enables the lookup of an entity by a special handle that external parties can use to refer to it.

Save on the other hand signals that an object’s state has changed (dirty), and these changes need to be persisted.

What? Dirty tracking? That’s a persistence concern, nothing to do with the domain. Dirty tracking is the exclusive responsibility of a Unit of Work — an application-level concept that most good ORMs provide for free. Don’t let it leak into your domain model! Stay tuned for more on this topic.
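
For illustration, here’s roughly what the interface looks like with the leak removed; change tracking and flushing belong to the ORM’s unit of work (e.g. an NHibernate session), not to the repository:

public interface IEmployeeRepository
{
    void Add(Employee employee);
    void Remove(Employee employee);
    Employee GetById(int id);

    // No Save(): changes to an Employee that is already loaded are detected
    // and persisted when the unit of work is flushed/committed.
}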

Life inside an Aggregate Root, part 1

One of the most important concepts in Domain Driven Design is the Aggregate Root — a consistency boundary around a group of related objects that move together. To keep things as simple as possible, we apply the following rules to them:

  1. Objects outside the aggregate can only hold references to the aggregate root, not to the entities or value objects within it
  2. Access to any entity or value object is only allowed via the root
  3. The entire aggregate is locked, versioned and persisted together

It’s not too hard to implement these restrictions when you’re using a good object-relational mapper. But there are a couple of other rules that are worth mentioning because they’re easy to overlook.

Real-life example: training programme

Here’s a snippet from an app I am building at work (altered slightly to protect the innocent). Domain concepts are in bold:

Training Programme is comprised of Skills, arranged in Skill Groups. Skill Groups can contain Sub Groups, nested as many levels deep as you like. Skills can be used for multiple Training Programmes, but you can’t have the same Skill twice under the same Training Programme. When a Skill is removed from a Training Programme, Individuals should no longer have to practice it.

Here’s what it looks like, with our two aggregate roots, Training Programme and Skill:

Pretty simple right? Let’s see how we can implement the two behaviours from the snippet using aggregate roots.

Rule #4: All objects have a reference back to the aggregate root

Let’s look at the first behaviour from the spec:

…you can’t have the same Skill twice under the same Training Programme.

Our first skill group implementation looked like this:

public class TrainingProgramme
{
    public IEnumerable<SkillGroup> SkillGroups { get; }

    ...
}

public class SkillGroup
{
    public SkillGroup(string name) { ... }

    public void Add(Skill skill)
    {
        // Error if the Skill is already added to this Skill Group.
        if (Contains(skill))
            throw new DomainException("Skill already added");

        skills.Add(skill);
    }

    public bool Contains(Skill skill)
    {
        return skills.Contains(skill);
    }

    ...

    private IList<Skill> skills;
}

What’s the problem here? Have a look at the SkillGroup’s Add() method. If you try to have the same Skill twice under a Skill Group, it will throw an exception. But the spec says you can’t have the same Skill twice anywhere in the same Training Programme.

The solution is to have a reference back from the Skill Group to its parent Training Programme, so you can check the whole aggregate instead of just the current entity.

public class TrainingProgramme
{
    public IEnumerable<SkillGroup> SkillGroups { get; }

    // Recursively search through all Skill Groups for this Skill.
    public bool Contains(Skill skill) { ... }

    ...
}

public class SkillGroup
{
    public SkillGroup(string name, TrainingProgramme programme)
    {
        ...
    }

    public void Add(Skill skill)
    {
        // Error if the Skill is already added under this Training Programme.
        if (programme.Contains(skill))
            throw new DomainException("Skill already added");

        skills.Add(skill);
    }

    ...

    private TrainingProgramme programme;
    private IList<Skill> skills;
}

Introducing circular coupling like this feels wrong at first, but is totally acceptable in DDD because the AR restrictions make it work. Entities can be coupled tightly to aggregate roots because nothing else is allowed to use them!

Does your Visual Studio run slow?

Recently I’ve been getting pretty annoyed by my Visual Studio 2008, which has been taking longer and longer to run my favourite menu command, Window > Close All Documents. Today was the last straw — I decided 20 seconds to close four C# editor windows really isn’t acceptable on a machine with 4 GB of RAM, so I went looking for some fixes.

Here are some of the good ones I found that worked. Use at your own risk of course!

Disable the customer feedback component

In some scenarios Visual Studio may try to collect anonymous statistics about your code when closing a project, even if you opted out of the customer feedback program. To stop this time-consuming behaviour, find this registry key:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\VisualStudio\9.0\Packages\{2DC9DAA9-7F2D-11d2-9BFC-00C04F9901D1}

and rename it to something invalid:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\VisualStudio\9.0\Packages\Disabled-{2DC9DAA9-7F2D-11d2-9BFC-00C04F9901D1}

Clear Visual Studio temp files

Deleting the contents of the following temp directories can fix a lot of performance issues with Visual Studio and web projects:

C:\Users\richardd\AppData\Local\Microsoft\WebsiteCache
C:\Users\richardd\AppData\Local\Temp\Temporary ASP.NET Files\siteName

Clear out the project MRU list

Apparently Visual Studio sometimes accesses the files in your recent projects list at random times, e.g. when saving a file. I have no idea why it does this, but it can have a big performance hit, especially if some of the projects are on a network share that is no longer available.

To clear your recent project list out, delete any entries from the following path in the registry:

HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\8.0\ProjectMRUList

Disable AutoRecover

In nearly four years, I have never used Visual Studio’s AutoRecover feature to recover work. These days, source control and regular saving almost entirely eliminate the need for it.

To disable it and gain some performance (particularly with large solutions), go to Tools > Options > Environment > AutoRecover and uncheck Save AutoRecovery information. (Cheers Jake for the tip)

ASP.NET MVC, TDD and Fluent Validation

Yesterday I wrote about ASP.NET MVC, TDD and AutoMapper, and how you can use them together in a DDD application. Today I thought I would follow up and explain how to apply these techniques to another important (but boring) part of any web application: user input validation.

To achieve this, we are using Fluent Validation, a validation framework that lets you easily set up validation rules using a fluent syntax:

public class UserRegistrationFormValidator : AbstractValidator<UserRegistrationForm>
{
    public UserRegistrationFormValidator()
    {
        RuleFor(f => f.Username).NotEmpty()
            .WithMessage("You must choose a username!");
        RuleFor(f => f.Email).EmailAddress()
            .When(f => !String.IsNullOrEmpty(f.Email))
            .WithMessage("This doesn't look like a valid e-mail address!");
        RuleFor(f => f.Url).MustSatisfy(new ValidWebsiteUrlSpecification())
            .When(f => !String.IsNullOrEmpty(f.Url))
            .WithMessage("This doesn't look like a valid URL!");
    }
}

If you think about it, validation and view model mapping have similar footprints in the application. They both:

  • Live in the application services layer
  • May invoke domain services
  • Use third-party libraries
  • Have standalone fluent configurations
  • Have standalone tests
  • Are injected into the application services

Let’s see how it all fits together starting at the outermost layer, the controller.

public class AccountController : Controller
{
    readonly IUserRegistrationService registrationService;
    readonly IFormsAuthentication formsAuth;
    ...
    [AcceptVerbs(HttpVerbs.Post)]
    public ActionResult Register(UserRegistrationForm user)
    {
        if (user == null)
            throw new ArgumentNullException("user");
        try
        {
            this.registrationService.RegisterNewUser(user);
            this.formsAuth.SignIn(user.Username, false);
            return RedirectToAction("Index", "Home");
        }
        catch (ValidationException e)
        {
            e.Result.AddToModelState(this.ModelState, "user");
            return View("Register", user);
        }
    }
    ...
}

As usual, the controller is pretty thin, delegating all responsibility (including performing any required validation) to an application service that handles new user registration. If validation fails, all our controller has to do is catch an exception and append the validation messages it contains to the model state, telling the user what mistakes they made.

The UserRegistrationForm validator is injected into the application service along with any others. Just as with AutoMapper, we can now test the controller, validator and application service separately.

public class UserRegistrationService : IUserRegistrationService
{
    readonly IUserRepository users;
    readonly IValidator<UserRegistrationForm> validator;
    ...
    public void RegisterNewUser(UserRegistrationForm form)
    {
        if (form == null)
            throw new ArgumentNullException("form");
        this.validator.ValidateAndThrow(form);
        User user = new UserBuilder()
            .WithUsername(form.Username)
            .WithAbout(form.About)
            .WithEmail(form.Email)
            .WithLocation(form.Location)
            .WithOpenId(form.OpenId)
            .WithUrl(form.Url);
        this.users.Save(user);
    }
}

Testing the user registration form validation rules

Fluent Validation has some nifty helper extensions that make unit testing a breeze:

[TestFixture]
public class When_validating_a_new_user_form
{
    IValidator<UserRegistrationForm> validator = new UserRegistrationFormValidator();
    [Test]
    public void The_username_cannot_be_empty()
    {
        validator.ShouldHaveValidationErrorFor(f => f.Username, "");
    }
    [Test]
    public void A_valid_email_address_must_be_provided()
    {
        validator.ShouldHaveValidationErrorFor(f => f.Email, "not-a-valid-email-address");
    }
    [Test]
    public void The_url_must_be_valid()
    {
        validator.ShouldNotHaveValidationErrorFor(f => f.Url, "http://foo.bar");
    }
}

You can even inject dependencies into the validator and mock them out for testing. For example, in this app the validator calls an IUsernameAvailabilityService to make sure the chosen username is still available.
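
For example, an extended version of the validator above might take the service as a constructor dependency (the IsAvailable method name is assumed here for the sake of the sketch):

public class UserRegistrationFormValidator : AbstractValidator<UserRegistrationForm>
{
    readonly IUsernameAvailabilityService usernameAvailability;

    public UserRegistrationFormValidator(IUsernameAvailabilityService usernameAvailability)
    {
        this.usernameAvailability = usernameAvailability;

        RuleFor(f => f.Username).NotEmpty()
            .WithMessage("You must choose a username!");

        // Delegate the availability check to the injected service.
        RuleFor(f => f.Username)
            .Must(username => this.usernameAvailability.IsAvailable(username))
            .When(f => !String.IsNullOrEmpty(f.Username))
            .WithMessage("Sorry, that username is already taken!");
    }
}

In the unit tests, the validator is then constructed with a mocked IUsernameAvailabilityService, just like any other dependency.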

Testing the user registration service

This validation code is now completely isolated, and we can mock out the entire thing when testing the application service:

[TestFixture]
public class When_registering_a_new_user
{
    IUserRegistrationService registrationService;
    Mock<IUserRepository> repository;
    Mock<IValidator<UserRegistrationForm>> validator;
    [Test, ExpectedException(typeof(ValidationException))]
    public void Should_throw_a_validation_exception_if_the_form_is_invalid()
    {
        validator.Setup(v => v.Validate(It.IsAny<UserRegistrationForm>()))
            .Returns(ObjectMother.GetFailingValidationResult());
        registrationService.RegisterNewUser(ObjectMother.GetNewUserForm());
    }
    [Test]
    public void Should_add_the_new_user_to_the_repository()
    {
        var form = ObjectMother.GetNewUserForm();
        registrationService.RegisterNewUser(form);
        repository.Verify(
            r => r.Save(It.Is<User>(u => u.Username.Equals(form.Username))));
    }
}

Testing the accounts controller

With validation out of the way, all we have to test on the controller is whether or not it appends the validation errors to the model state. Here are the fixtures for the success/failure scenarios:

[TestFixture]
public class When_successfully_registering_a_new_user : AccountControllerTestContext
{
    [SetUp]
    public override void SetUp()
    {
        ...
        result = controller.Register(form);
    }
    [Test]
    public void Should_register_the_new_user()
    {
        registrationService.Verify(s => s.RegisterNewUser(form), Times.Exactly(1));
    }
    [Test]
    public void Should_sign_in()
    {
        formsAuth.Verify(a => a.SignIn(user.Username, false));
    }
}
[TestFixture]
public class When_registering_an_invalid_user :  AccountControllerTestContext
{
    [SetUp]
    public override void SetUp()
    {
        ...
        registrationService.Setup(s => s.RegisterNewUser(form)).Throws(
            new ValidationException(
                ObjectMother.GetFailingValidationResult()));
        result = controller.Register(form);
    }
    [Test]
    public void Should_not_sign_in()
    {
        formsAuth.Verify(a => a.SignIn(It.IsAny<string>(),
            It.IsAny<bool>()), Times.Never());
    }
    [Test]
    public void Should_redirect_back_to_the_register_view_with_the_form_contents()
    {
        result.AssertViewRendered().ForView("Register")
            .WithViewData<UserRegistrationForm>().ShouldEqual(form);
    }
}

This post has been a bit heavier on code than usual, but hopefully it is enough to get an idea of how easy it is to implement Fluent Validation in your ASP.NET MVC application.