Can’t decide whether to host your WCF service in IIS or a Windows Service? Consider the additional steps you’ll need to perform, explain, troubleshoot, and write documentation for if you follow the IIS route:

  • Ensure IIS is installed.
  • Run aspnet_regiis -i to install .NET ISAPI module.
  • Run ServiceModelReg -i to install handlers for *.svc file types.
  • Create and start a new App Pool running as a domain account.
  • Set your Application to use the new App Pool instead of DefaultAppPool.

Plus, EITHER: (IIS 6/Server 2003)

OR: (IIS7/Server 2008)

If you are working in an environment like ours with developers in London, servers in Germany, and the ops team in India, where getting server access is harder than getting an appointment with the pope, I’d recommend sticking with Windows Services unless you really need IIS.
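By contrast, self-hosting in a Windows Service needs nothing beyond the .NET Framework. A minimal sketch — the class and project names are hypothetical, and the endpoint addresses, bindings and contracts are assumed to live in app.config:

```csharp
using System.ServiceModel;
using System.ServiceProcess;

// Hosts the WCF service inside a Windows Service. No IIS, no ISAPI
// modules, no App Pools — just ServiceHost and the service contract.
public class MyWcfHost : ServiceBase
{
    ServiceHost host;

    protected override void OnStart(string[] args)
    {
        // Endpoints are assumed to be configured in app.config.
        host = new ServiceHost(typeof(MyWcfService));
        host.Open();
    }

    protected override void OnStop()
    {
        if (host != null)
            host.Close();
    }

    static void Main()
    {
        ServiceBase.Run(new MyWcfHost());
    }
}
```

Install it once with installutil (or sc create) and the ops checklist above disappears.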

December 7th, 2010 | No Comments Yet

This week I’ve been working on a brownfield Castle-powered WCF service that was creating a separate NHibernate session on every call to a repository object.

Abusing NHibernate like this played all sorts of hell with our app (e.g. TransientObjectExceptions), and prevented us from using transactions that matched a logical unit of work, so I set about refactoring it.

Goals

  • One session per WCF operation
  • One transaction per WCF operation
  • Direct access to ISession in my services
  • Rely on Castle facilities as much as possible
  • No hand-rolled code to plug everything together

There are a plethora of blog posts out there to tackle this problem, but most of them require lots of hand-rolled code. Here are a couple of good ones — they both create a custom WCF context extension to hold the NHibernate session, and initialize/dispose it via WCF behaviours:

These work well, but actually, there is a much simpler way that only requires the NHibernate and WCF Integration facilities.

Option one: manual Session.BeginTransaction() / Commit()

The easiest way to do this is to register NHibernate’s ISession in the container, with a per WCF operation lifestyle:

windsor.AddFacility<WcfFacility>();
windsor.AddFacility("nhibernate", new NHibernateFacility(...));

windsor.Register(
    Component.For<ISession>().LifeStyle.PerWcfOperation()
        .UsingFactoryMethod(x => windsor.Resolve<ISessionManager>().OpenSession()),
    Component.For<MyWcfService>().LifeStyle.PerWcfOperation());

If you want a transaction, you have to manually open and commit it. (You don’t need to worry about anything else because NHibernate’s ITransaction rolls back automatically on dispose):

[ServiceBehavior]
public class MyWcfService : IMyWcfService
{
    readonly ISession session;

    public MyWcfService(ISession session)
    {
        this.session = session;
    }

    public void DoSomething()
    {
        using (var tx = session.BeginTransaction())
        {
            // do stuff
            session.Save(...);

            tx.Commit();
        }
    }
}

(Note of course we are using WindsorServiceHostFactory so Castle acts as a factory for our WCF services. And disclaimer: I am not advocating putting data access and persistence directly in your WCF services here; in reality ISession would more likely be injected into query objects and repositories each with a per WCF operation lifestyle (you can use this to check for lifestyle conflicts). It is just an example for this post.)

Anyway, that’s pretty good, and allows a great deal of control. But developers must remember to use a transaction, or remember to flush the session, or else changes won’t be saved to the database. How about some help from Castle here?

Option two: automatic [Transaction] wrapper

Castle’s Automatic Transaction Facility allows you to decorate methods with [Transaction] and it will automatically wrap a transaction around them. IoC registration becomes simpler:

windsor.AddFacility<WcfFacility>();
windsor.AddFacility("nhibernate", new NHibernateFacility(...));
windsor.AddFacility<TransactionFacility>();

windsor.Register(
    Component.For<MyWcfService>().LifeStyle.PerWcfOperation());

And using it:

[ServiceBehavior, Transactional]
public class MyWcfService : IMyWcfService
{
    readonly ISessionManager sessionManager;

    public MyWcfService(ISessionManager sessionManager)
    {
        this.sessionManager = sessionManager;
    }

    [Transaction]
    public virtual void DoSomething()
    {
        // do stuff
        sessionManager.OpenSession().Save(...);
    }
}

What are we doing here?

  • We decorate methods with [Transaction] (remember to make them virtual!) instead of manually opening/closing transactions. I put this attribute on the service method itself, but you could put it anywhere — for example on a CQRS command handler, or domain event handler etc. Of course this requires that the class with the [Transactional] attribute is instantiated via Windsor so it can proxy it.
  • Nothing in the NHibernateFacility needs to be registered per WCF operation lifestyle. I believe this is because NHibernateFacility uses the CallContextSessionStore by default, which in a WCF service happens to be scoped to the duration of a WCF operation.
  • Callers must not dispose the session — that will be done by Castle after the transaction is committed. To discourage this I use it as a method chain — sessionManager.OpenSession().Save() etc.
  • Inject ISessionManager, not ISession. The reason for this is related to transactions: NHibernateFacility must construct the session after the transaction is opened, otherwise it won’t know to enlist it. (NHibernateFacility knows about ITransactionManager, but ITransactionManager doesn’t know about NHibernateFacility). If your service depends on ISession, Castle will construct the session when MyWcfService and its dependencies are resolved (time of object creation), before the transaction has started (time of method dispatch). Using ISessionManager allows you to lazily construct the session after the transaction is opened.
  • In fact, for this reason, ISession is not registered in the container at all — it is only accessible via ISessionManager (which is automatically registered by the NHibernate Integration Facility).

This gives us an NHibernate session per WCF operation, with automatic transaction support, without the need for any additional code.

August 17th, 2010 | 3 Comments

In the first two posts I talked about commands and command handlers. Now we need to wire them up to invoke them from your service endpoint.

Command Dispatcher

When a command arrives, you simply look up the corresponding handler from your IoC container and invoke it. This responsibility is delegated to a command dispatcher object:

public interface ICommandDispatcher
{
    void Dispatch<T>(T command) where T : ICommand;
}

public class CommandDispatcher : ICommandDispatcher
{
    private readonly IWindsorContainer container;

    public CommandDispatcher(IWindsorContainer container)
    {
        if (container == null) throw new ArgumentNullException("container");
        this.container = container;
    }

    public void Dispatch<T>(T command) where T : ICommand
    {
        if (command == null) throw new ArgumentNullException("command");

        var handler = container.Resolve<ICommandHandler<T>>();
        ErrorIfNoHandlerForCommandFound(handler);

        handler.Handle(command);
    }

    private static void ErrorIfNoHandlerForCommandFound<T>(
        ICommandHandler<T> handler) where T : ICommand
    {
        if (handler == null)
            throw new NoHandlerForCommandException(typeof(T));
    }
}

Then we simply inject the command dispatcher into the WCF service and invoke it whenever a command is received:

[ServiceBehavior]
public class BookingService : IBookingService
{
    private readonly ICommandDispatcher commands;

    public BookingService(ICommandDispatcher commands)
    {
        if (commands == null) throw new ArgumentNullException("commands");
        this.commands = commands;
    }

    [OperationContract]
    public void BookTable(BookTableCommand command)
    {
        if (command == null) throw new ArgumentNullException("command");
        commands.Dispatch(command);
    }
}

Many of you will note that this is very similar to Udi Dahan’s Domain Events aggregator — the only major difference is that CQRS commands are only ever handled by one handler, whereas domain events are broadcast to anyone who’s listening.

Scaling out

Note this is a synchronous command dispatcher — commands are handled as soon as they arrive. An asynchronous/high-volume system may simply put them in a queue to be executed later by some other component.
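An asynchronous variant could be sketched like this — ICommandQueue is a hypothetical abstraction; in practice something like MSMQ or NServiceBus would fill that role:

```csharp
// A hypothetical asynchronous dispatcher: instead of resolving a handler
// and executing the command immediately, it enqueues the command for a
// background worker (or another process) to pick up later.
public class QueuedCommandDispatcher : ICommandDispatcher
{
    private readonly ICommandQueue queue; // hypothetical queue abstraction

    public QueuedCommandDispatcher(ICommandQueue queue)
    {
        if (queue == null) throw new ArgumentNullException("queue");
        this.queue = queue;
    }

    public void Dispatch<T>(T command) where T : ICommand
    {
        if (command == null) throw new ArgumentNullException("command");

        // The command is serialized and handled later, out of process.
        queue.Enqueue(command);
    }
}
```

Because both dispatchers share ICommandDispatcher, the WCF service doesn’t care which one it gets — you can swap the synchronous one for the queued one in the container.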

Final thoughts

This really is a very introductory series to refactoring an existing application to move towards CQRS. We haven’t even touched on the main goal of CQRS yet — all we’ve done is put clear command/query contracts between our client and server.

It may not sound like much, but doing so allows us to mask non-CQRS components in our system — an anti-corruption layer of sorts — and allows us to proceed refactoring them internally to use different models and storage for commands and queries.

June 17th, 2010 | 10 Comments

In the previous two posts, I showed some simple patterns for commands and command handlers. Now let’s talk about the other half of the story: queries!

On our WCF service, each query method:

  • Returns one or more QueryResult objects — a DTO created exclusively for this query.
  • Takes a QueryParameter argument — if required, a DTO containing search criteria, paging options etc.

For example, to query for bookings:

[ServiceContract]
public interface IBookingService
{
    [OperationContract]
    IEnumerable<BookingQueryResult> SearchBookings(
        BookingSearchParameters parameters);
}

Query Parameters

Queries take simple DTO parameter objects just like commands. They carry both search criteria (what to look for) and things like paging options (how to return results). They can also define defaults. For example:

[DataContract]
public class BookingSearchParameters
{
    public BookingSearchParameters()
    {
        // default values
        NumberOfResultsPerPage = 10;
        PageNumber = 1;
    }

    [DataMember]
    public Tuple<DateTime, DateTime> Period { get; set; }

    [DataMember]
    public int NumberOfResultsPerPage { get; set; }

    [DataMember]
    public int PageNumber { get; set; }
}

Query Object

Queries are then executed by a query object — an application service that queries your ORM, reporting store, or domain + automapper (if you’re still using a single model internally for commands and queries).

public interface IQuery<TParameters, TResult>
{
    TResult Execute(TParameters parameters);
}

Query Results

Queries can return a single result (e.g. to look up the details of a specific item), or a sequence (searching):

public class BookingSearchQuery :
    IQuery<BookingSearchParameters, IEnumerable<BookingSearchResult>>
{
    public IEnumerable<BookingSearchResult> Execute(
        BookingSearchParameters parameters)
    {
        ...
    }
}

Query results are simple DTOs that provide all the information the client needs.

[DataContract]
public class BookingSearchResult
{
    [DataMember]
    public string PartyName { get; set; }

    [DataMember]
    public int PartySize { get; set; }

    [DataMember]
    public DateTime TimeAndDay { get; set; }

    [DataMember]
    public string SpecialRequests { get; set; }
}

Query Results should be able to be rendered directly on the UI, in one query. If they require further mapping, or multiple calls (e.g. to get different aspects of an object) before you can use them on a view model, then they are most likely:

  • Too granular — query results should be big flattened/denormalized objects which contain everything you need in one hit.
  • Based on the wrong model (the domain or persistence model) — they should be based on the UI’s needs, and present a screenful of information per call.

As with commands, having a one-to-one mapping between query objects and queries makes it easy to add/remove functionality to a system.

Can’t I use the same object for sending commands and returning results?

That BookingSearchResult looks more or less identical to BookTableCommand we sent before — it has all the same properties. That doesn’t seem very DRY! Can’t I just create a generic DTO and use that in both cases?

Using the same class for commands and queries is classic CRUD, and it leads to exactly the sort of situation we are trying to avoid — where commands need to use different representations of objects than queries, but can’t because they are tightly coupled to the same shared object. As I said in part 1, commands are driven by business transactions, but queries are driven by UX needs, and often include projections, flattening and aggregation — more than just 1:1 mapping.

Next: Part 4 – Command Dispatcher

June 16th, 2010 | 7 Comments

In my previous post, I described command DTOs and service methods for booking a table at a restaurant. Now, we just need something to interpret this command, and do something useful with it.

To do this, we create a corresponding command handler for each command:

public interface ICommandHandler<T> where T : ICommand
{
    void Handle(T command);
}

Command handlers are responsible for:

  • Performing any required validation on the command.
  • Invoking the domain — coordinating objects in the domain, and invoking the appropriate behaviour on them.

Command handlers are application services, and each execution represents a separate unit of work (e.g. a database transaction). There is only one command handler per command, because commands can only be handled once — they are not broadcast out to all interested listeners like event handlers.

Here’s an example for handling our BookTableCommand. A one-to-one handler/command mapping makes it easy to add/remove features from our service.

public class BookTableCommandHandler : ICommandHandler<BookTableCommand>
{
    readonly IDinnerServiceRepository nights;

    public BookTableCommandHandler(IDinnerServiceRepository nights)
    {
        this.nights = nights;
    }

    public void Handle(BookTableCommand command)
    {
        var dinnerService = nights[command.TimeAndDay];
        var party = new Party(command.PartySize, command.PartyName);
        dinnerService.TakeBooking(party);
    }
}

Note each command implements ICommand — a simple explicit role marker interface that also allows us to use constraints on generic types and automate IoC registration of command handlers.

public interface ICommand { }
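For example, the marker interface lets Windsor pick up every handler in one registration call. A sketch, assuming the handlers live in the same assembly (the exact fluent API varies between Windsor versions):

```csharp
// Registers every ICommandHandler<T> implementation in this assembly,
// exposed through its closed ICommandHandler<T> interface, so the
// dispatcher can resolve ICommandHandler<BookTableCommand> etc. by type.
windsor.Register(
    AllTypes.FromThisAssembly()
        .BasedOn(typeof(ICommandHandler<>))
        .WithService.AllInterfaces());
```

Adding a new feature is then just a matter of dropping in a new command class and its handler — no registration code to touch.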

Command validation and errors

Aside from transient technical faults, there are two business reasons a command might fail:

  • The command was not valid — e.g. you tried to book a table for zero people.
  • The command could not succeed — e.g. the restaurant is fully booked that night.

Ideally, the client will have pre-checked these to save time, but if the command handler detects a problem, how do we report it back to the user, given commands are not allowed to have return values? How do we even report success?

Actually, this is not a problem at all — commands have no return value, but they can throw a detailed validation/command failed exception back to the client. If they didn’t throw anything, it is assumed to have succeeded.
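In WCF that typically means a typed fault. A sketch, assuming a hypothetical CommandFailedFault DTO declared via [FaultContract(typeof(CommandFailedFault))] on the operation:

```csharp
// A hypothetical fault DTO carrying the business reason for the failure.
[DataContract]
public class CommandFailedFault
{
    [DataMember]
    public string Reason { get; set; }
}

// Inside the command handler (or a wrapping behaviour), a business
// failure is translated into a typed fault that WCF serializes back
// to the client instead of tearing down the channel:
if (restaurantIsFullyBooked)
    throw new FaultException<CommandFailedFault>(
        new CommandFailedFault { Reason = "The restaurant is fully booked that night." });
```

The client can then catch FaultException&lt;CommandFailedFault&gt; and show the reason to the user.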

What if you execute commands asynchronously — e.g. queued and executed at some later time? We can’t throw an exception back to the client in this case. But that’s fine — the client must always assume their asynchronous command will succeed. If it does fail, it will be reported back through some alternate channel (e.g. via e-mail or the query side). This is why it is important to pre-validate commands on the client as much as possible.

Next: Part 3 – Queries, Parameters and Results

June 16th, 2010 | 7 Comments