Deployment readiness

Have you ever seen this development cycle?

  1. Install new build
  2. App doesn’t start, and doesn’t log any error explaining why
  3. Go back to source code and add improved logging
  4. Install new build
  5. Read logs
  6. Fix the actual problem (usually some stupid configuration thing)
  7. Repeat steps 1-6 until all bugs are gone

Was it because of your code, or someone else’s? This sort of cycle is time-consuming, frustrating, stressful, and makes you look helpless (or like a cowboy) in front of project managers and support staff.

If your app can’t produce logs your infrastructure staff can understand, then your app is not even close to production-readiness. If it doesn’t log clear errors, how can anyone hope to deploy and support it?
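For example, here’s a minimal fail-fast sketch (ValidateConfiguration, StartApp and the log4net-style logger are illustrative names of my own, not from any particular app): check configuration up front and log one clear, actionable error before dying.

// Hypothetical fail-fast startup: ValidateConfiguration/StartApp are
// placeholders; the logger is log4net-style.
public static class Program
{
    static readonly log4net.ILog log = log4net.LogManager.GetLogger(typeof(Program));

    public static void Main()
    {
        try
        {
            ValidateConfiguration(); // throw if a connection string, endpoint etc. is missing
            StartApp();
        }
        catch (Exception ex)
        {
            // One clear message an infrastructure person can act on, then die loudly.
            log.Fatal("Application failed to start. Check configuration.", ex);
            throw;
        }
    }

    static void ValidateConfiguration() { /* ... */ }
    static void StartApp() { /* ... */ }
}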

One NHibernate session per WCF operation, the easy way

This week I’ve been working on a brownfield Castle-powered WCF service that was creating a separate NHibernate session on every call to a repository object.

Abusing NHibernate like this was causing all sorts of problems for our app (e.g. TransientObjectExceptions), and prevented us from using transactions that matched a logical unit of work, so I set about refactoring it.

Goals

  • One session per WCF operation
  • One transaction per WCF operation
  • Direct access to ISession in my services
  • Rely on Castle facilities as much as possible
  • No hand-rolled code to plug everything together

There are a plethora of blog posts out there to tackle this problem, but most of them require lots of hand-rolled code. Here are a couple of good ones — they both create a custom WCF context extension to hold the NHibernate session, and initialize/dispose it via WCF behaviours:

  • NHibernate Session Per Request Using Castles WcfFacility
  • NHibernate’s ISession, scoped for a single WCF-call

These work well, but actually, there is a much simpler way that only requires the NHibernate and WCF Integration facilities.

Option one: manual Session.BeginTransaction() / Commit()

The easiest way to do this is to register NHibernate’s ISession in the container, with a per WCF operation lifestyle:

windsor.AddFacility<WcfFacility>();
windsor.AddFacility("nhibernate"new NHibernateFacility(...));
windsor.Register(
    Component.For<ISession>().LifeStyle.PerWcfOperation()
        .UsingFactoryMethod(x => windsor.Resolve<ISessionManager>().OpenSession()),
    Component.For<MyWcfService>().LifeStyle.PerWcfOperation());

If you want a transaction, you have to manually open and commit it. (You don’t need to worry about anything else because NHibernate’s ITransaction rolls back automatically on dispose):

[ServiceBehavior]
public class MyWcfService : IMyWcfService
{
    readonly ISession session;
    public MyWcfService(ISession session)
    {
        this.session = session;
    }
    public void DoSomething()
    {
        using (var tx = session.BeginTransaction())
        {
            // do stuff
            session.Save(...);
            tx.Commit();
        }
    }
}

(Note of course we are using WindsorServiceHostFactory so Castle acts as a factory for our WCF services. And disclaimer: I am not advocating putting data access and persistence directly in your WCF services here; in reality ISession would more likely be injected into query objects and repositories each with a per WCF operation lifestyle (you can use this to check for lifestyle conflicts). It is just an example for this post.)
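For instance, a repository that takes ISession could be registered like this (just a sketch — IOrderRepository/OrderRepository are hypothetical names, not part of the real service):

// Hypothetical repository registration, sharing the per-WCF-operation ISession.
windsor.Register(
    Component.For<IOrderRepository>()
        .ImplementedBy<OrderRepository>()
        .LifeStyle.PerWcfOperation());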

Anyway, that’s pretty good, and allows a great deal of control. But developers must remember to use a transaction, or remember to flush the session, or else changes won’t be saved to the database. How about some help from Castle here?

Option two: automatic [Transaction] wrapper

Castle’s Automatic Transaction Facility allows you to decorate methods with [Transaction], and it will automatically wrap them in a transaction. IoC registration becomes simpler:

windsor.AddFacility<WcfFacility>();
windsor.AddFacility("nhibernate"new NHibernateFacility(...));
windsor.AddFacility<TransactionFacility>();
windsor.Register(
    Component.For<MyWcfService>().LifeStyle.PerWcfOperation());

And using it:

[ServiceBehavior, Transactional]
public class MyWcfService : IMyWcfService
{
    readonly ISessionManager sessionManager;
    public MyWcfService(ISessionManager sessionManager)
    {
        this.sessionManager = sessionManager;
    }
    [Transaction]
    public virtual void DoSomething()
    {
        // do stuff
        sessionManager.OpenSession().Save(...);
    }
}

What are we doing here?

  • We decorate methods with [Transaction] (remember to make them virtual!) instead of manually opening/closing transactions. I put this attribute on the service method itself, but you could put it anywhere — for example on a CQRS command handler, or domain event handler etc. Of course this requires that the class with the [Transactional] attribute is instantiated via Windsor so it can proxy it.
  • Nothing in the NHibernateFacility needs to be registered with a per WCF operation lifestyle. I believe this is because NHibernateFacility uses the CallContextSessionStore by default, which in a WCF service happens to be scoped to the duration of a WCF operation.
  • Callers must not dispose the session — that will be done by Castle after the transaction is committed. To discourage this I am using it as a method chain — sessionManager.OpenSession().Save() etc.
  • Inject ISessionManager, not ISession. The reason for this is related to transactions: NHibernateFacility must construct the session after the transaction is opened, otherwise it won’t know to enlist it. (NHibernateFacility knows about ITransactionManager, but ITransactionManager doesn’t know about NHibernateFacility). If your service depends on ISession, Castle will construct the session when MyWcfService and its dependencies are resolved (time of object creation), before the transaction has started (time of method dispatch). Using ISessionManager allows you to lazily construct the session after the transaction is opened.
  • In fact, for this reason, ISession is not registered in the container at all — it is only accessible via ISessionManager (which is automatically registered by the NHibernate Integration Facility).

This gives us an NHibernate session per WCF operation, with automatic transaction support, without the need for any additional code.

Update: there is one situation where this doesn’t work — if your WCF service returns a stream that invokes NHibernate, or otherwise causes NHibernate to load data after the method has returned. A workaround for these methods is simply to omit the [Transaction] attribute (hopefully you’re following CQS and not writing to the DB in your queries!).

Merge redundant assemblies

Lately I have become a big opponent of a popular anti-pattern: people insisting on splitting up their application tiers/layers into 5-10 separate Visual Studio projects and adding references between them. Double that number of projects if you want a corresponding unit test project for each layer.

In fact, removing them has become one of the first steps I take when inheriting a legacy code base. If I were writing a book on refactoring Visual Studio solutions, I would call it Merge redundant assemblies. Here’s a diagram:

You don’t need to split your code across 50 gazillion projects. Next time you think of creating a new project in your solution, please remember the following:

  • Visual Studio projects are for outputting assemblies. Namespaces are for organising code.
  • Assemblies only need to be split if your deployment scenario demands it. Putting a client API library into a separate assembly makes sense because the same API assembly may be used between many apps, or in different App Domains. Deploying your domain model or data layer into a separate assembly does not make sense, unless other apps need them too.
  • Each additional project slows down your build. A single project with hundreds of classes will compile faster than the same classes split across multiple projects.
  • Crossing assembly boundaries hurts runtime performance. Your app will start up slower, the ability to perform inlining and OS optimization is reduced, and additional security overhead is enforced between assemblies. Assemblies are supposed to be big and heavy; loading lots of little ones goes against the grain of the CLR.
  • You don’t need one test project for each assembly. One giant test project is normally fine. The only case I have seen where it made sense to have separate test projects was for a client API which duplicated many of the server internal class names, and we wanted to avoid overlap/namespace pollution.
  • Common sense should be used to enforce one-way dependencies. Not assembly references.

Patrick Smacchia has a good list of valid and invalid reasons for creating separate assemblies here.

PowerShell to recursively strip C# regions from files

Here’s a super quick little PowerShell snippet to strip regions out of all C# files in a directory tree. Useful for legacy code where people hide long blocks in regions rather than encapsulating them in smaller methods/objects.

dir -recurse -filter *.cs $src | foreach {
    $file = $_.fullname
    echo $file
    (get-content $file) | where { $_ -notmatch "^.*#(end)?region.*$" } | out-file $file
}

Run this in your solution folder and support the movement against C# regions!

Watch out for weaselly recruitment agents

wea·sel·ly (adj.)
Resembling or suggestive of a weasel.

Yesterday, Parcelforce called me at work to say they tried to deliver a package for me the day before, but couldn’t because I wasn’t around to sign for it.

I wasn’t expecting anything, but I like getting parcels, and thought maybe it was some long-forgotten internet purchase finally shipped. They wanted to arrange redelivery that afternoon, and asked for a couple of names of people who could sign for it just in case I was out (it was around 11am and I knew I would be stepping out briefly for lunch).

But twenty-four hours later nothing has arrived. I think it was actually a recruitment agent, and the colleagues I gave as co-signatories will be getting some annoying calls soon 🙁

Exception handling no-nos

Just spotted this in a project I’m working on:

public static string GetExceptionAsString(this Exception exception)
{
    return string.Format("Exception message: {0}. " + Environment.NewLine +
        "StackTrace: {1}. {2}", exception.Message, exception.StackTrace,
        GetAnyInnerExceptionsAsString(exception));
}

Writing code like this should be a shooting offense.

But wait, there’s more! Check out its usage:

try
{
    ...
}
catch (Exception ex)
{
    log.InfoFormat("Error occured:{0}.", ex.GetExceptionAsString());
}

Agh!

Code like this demonstrates a complete misunderstanding of CLR exception basics — inner exceptions and formatting — and also log4net. All that hand-rolled formatting could simply be replaced with:

  • GetExceptionAsString() -> Exception.ToString()
  • ILog.InfoFormat(format, args) -> ILog.Info(message, exception)
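For example, the catch block above shrinks to something like this (assuming the same log4net ILog named log):

try
{
    // ...
}
catch (Exception ex)
{
    // log4net's Info(object, Exception) overload logs the message, stack trace
    // and all inner exceptions for you -- no hand-rolled formatting required.
    log.Info("Error occurred while processing the request.", ex);
}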

Background event handling in Prism

Prism’s event aggregator is great for decoupling UI state changes when UI events occur, but sometimes you need to perform some larger, long-running task on a background thread — uploading a file, for example.

Here’s a quick example of an encapsulated event handler listening off the Prism event bus, and using Windsor’s IStartable facility to handle event subscription:

public class TradeCancelledEventHandler : ICompositePresentationEventHandler, IStartable
{
    private readonly IEventAggregator eventAggregator;

    public TradeCancelledEventHandler(IEventAggregator eventAggregator)
    {
        if (eventAggregator == null)
            throw new ArgumentNullException("eventAggregator");

        this.eventAggregator = eventAggregator;
    }

    public void Start()
    {
        // Register to receive events on the background thread
        eventAggregator
            .GetEvent<TradeCancelledEvent>()
            .Subscribe(Handle, ThreadOption.BackgroundThread);
    }

    public void Stop()
    {
        eventAggregator
            .GetEvent<TradeCancelledEvent>()
            .Unsubscribe(Handle);
    }

    void Handle(TradeCancelledEventArgs eventArgs)
    {
        // ... do stuff with the event
    }
}

Each event handler is effectively a little service running in the container. Note ICompositePresentationEventHandler is a simple role interface that allows us to register them all at once in the IoC container:

public interface ICompositePresentationEventHandler { }

...

container.AddFacility<StartableFacility>();

// Register event handlers in container
container.Register(
    AllTypes
        .Of<ICompositePresentationEventHandler>()
        .FromAssembly(Assembly.GetExecutingAssembly()));
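For completeness, publishing looks like this (a sketch — it assumes TradeCancelledEvent is a CompositePresentationEvent<TradeCancelledEventArgs> defined elsewhere, and the TradeCancelledEventArgs constructor shown is hypothetical):

// Assumed event definition -- not shown in the original code.
public class TradeCancelledEvent : CompositePresentationEvent<TradeCancelledEventArgs> { }

// Publisher side: the handler above receives this on a background thread.
eventAggregator
    .GetEvent<TradeCancelledEvent>()
    .Publish(new TradeCancelledEventArgs(tradeId));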

Brownfield CQRS part 4 – Command Dispatcher

In the first two posts I talked about commands and command handlers. Now we need to wire them up so they can be invoked from your service endpoint.

  • Brownfield CQRS part 1 – Commands
  • Brownfield CQRS part 2 – Command Handlers
  • Brownfield CQRS part 3 – Queries, Parameters and Results
  • Brownfield CQRS part 4 – Command Dispatcher

Command Dispatcher

When a command arrives, you simply look up the corresponding handler from your IoC container and invoke it. This responsibility is delegated to a command dispatcher object:

public interface ICommandDispatcher
{
    void Dispatch<T>(T command) where T : ICommand;
}

public class CommandDispatcher : ICommandDispatcher
{
    private readonly IWindsorContainer container;

    public CommandDispatcher(IWindsorContainer container)
    {
        if (container == null) throw new ArgumentNullException("container");
        this.container = container;
    }

    public void Dispatch<T>(T command) where T : ICommand
    {
        if (command == null) throw new ArgumentNullException("command");

        var handler = container.Resolve<ICommandHandler<T>>();
        ErrorIfNoHandlerForCommandFound(handler);
        handler.Handle(command);
    }

    private static void ErrorIfNoHandlerForCommandFound<T>(
        ICommandHandler<T> handler) where T : ICommand
    {
        if (handler == null)
            throw new NoHandlerForCommandException(typeof(T));
    }
}

Then we simply inject the command dispatcher into the WCF service and invoke it whenever a command is received:

[ServiceBehavior]
public class BookingService : IBookingService
{
    private readonly ICommandDispatcher commands;

    public BookingService(ICommandDispatcher commands)
    {
        if (commands == null) throw new ArgumentNullException("commands");
        this.commands = commands;
    }

    [OperationContract]
    public void BookTable(BookTableCommand command)
    {
        if (command == null) throw new ArgumentNullException("command");
        commands.Dispatch(command);
    }
}

Many of you will note that this is very similar to Udi Dahan’s Domain Events aggregator — the only major difference is that CQRS commands are only ever handled by one handler, whereas domain events are broadcast to anyone who’s listening.

Scaling out

Note this is a synchronous command dispatcher — commands are handled as soon as they arrive. An asynchronous/high-volume system may simply put them in a queue to be executed later by some other component.
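As a rough illustration only (an in-memory queue standing in for whatever messaging infrastructure you actually use — a real system would want durable queues such as MSMQ or a service bus):

// Hypothetical asynchronous dispatcher: enqueue now, let a background worker
// dequeue and dispatch later. Purely a sketch.
public class QueuedCommandDispatcher : ICommandDispatcher
{
    readonly ConcurrentQueue<ICommand> queue = new ConcurrentQueue<ICommand>();

    public void Dispatch<T>(T command) where T : ICommand
    {
        if (command == null) throw new ArgumentNullException("command");
        queue.Enqueue(command); // returns immediately; a worker handles it later
    }
}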

Final thoughts

This really is a very introductory series to refactoring an existing application to move towards CQRS. We haven’t even touched on the main goal of CQRS yet — all we’ve done is put clear command/query contracts between our client and server.

It may not sound like much, but doing so allows us to mask non-CQRS components in our system — an anti-corruption layer of sorts — and allows us to proceed refactoring them internally to use different models and storage for commands and queries.

Brownfield CQRS part 3 – Queries, Parameters and Results

In the previous two posts, I showed some simple patterns for commands and command handlers. Now let’s talk about the other half of the story: queries!

  • Brownfield CQRS part 1 – Commands
  • Brownfield CQRS part 2 – Command Handlers
  • Brownfield CQRS part 3 – Queries, Parameters and Results
  • Brownfield CQRS part 4 – Command Dispatcher

On our WCF service, each query method:

  • Returns one or more QueryResult objects — DTOs created exclusively for this query.
  • Takes a QueryParameter argument — if required, a DTO containing search criteria, paging options etc.

For example, to query for bookings:

[ServiceContract]
public interface IBookingService
{
    [OperationContract]
    IEnumerable<BookingSearchResult> SearchBookings(
        BookingSearchParameters parameters);
}

Query Parameters

Queries take simple DTO parameter objects just like commands. They carry both search criteria (what to look for) and things like paging options (how to return results). They can also define defaults. For example:

[DataContract]
public class BookingSearchParameters
{
    public BookingSearchParameters()
    {
        // default values
        NumberOfResultsPerPage = 10;
        PageNumber = 1;
    }

    [DataMember]
    public Tuple<DateTime, DateTime> Period { get; set; }

    [DataMember]
    public int NumberOfResultsPerPage { get; set; }

    [DataMember]
    public int PageNumber { get; set; }
}

Query Object

Queries are then executed by a query object — an application service that queries your ORM, reporting store, or domain + automapper (if you’re still using a single model internally for commands and queries).

public interface IQuery<TParameters, TResult>
{
    TResult Execute(TParameters parameters);
}

Query Results

Queries can return a single result (e.g. to look up the details of a specific item), or a sequence (searching):

public class BookingSearchQuery :
    IQuery<BookingSearchParameters, IEnumerable<BookingSearchResult>>
{
    public IEnumerable<BookingSearchResult> Execute(
        BookingSearchParameters parameters)
    {
        ...
    }
}

Query results are simple DTOs that provide all the information the client needs.

[DataContract]
public class BookingSearchResult
{
    [DataMember]
    public string PartyName { get; set; }

    [DataMember]
    public int PartySize { get; set; }

    [DataMember]
    public DateTime TimeAndDay { get; set; }

    [DataMember]
    public string SpecialRequests { get; set; }
}

Query results should be able to be rendered directly in the UI, from a single query. If they require further mapping, or multiple calls (e.g. to get different aspects of an object), before you can use them on a view model, then they are most likely:

  • Too granular — query results should be big flattened/denormalized objects which contain everything you need in one hit.
  • Based on the wrong model (the domain or persistence model) — they should be based on the UI’s needs, and present a screenful of information per call.

As with commands, having a one-to-one mapping between query objects and queries makes it easy to add/remove functionality to a system.
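To tie it together, the WCF service method simply delegates to the query object (a sketch; the query object is injected by Windsor just like the command dispatcher):

[ServiceBehavior]
public class BookingService : IBookingService
{
    readonly IQuery<BookingSearchParameters, IEnumerable<BookingSearchResult>> searchBookings;

    public BookingService(
        IQuery<BookingSearchParameters, IEnumerable<BookingSearchResult>> searchBookings)
    {
        this.searchBookings = searchBookings;
    }

    public IEnumerable<BookingSearchResult> SearchBookings(BookingSearchParameters parameters)
    {
        return searchBookings.Execute(parameters);
    }
}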

Can’t I use the same object for sending commands and returning results?

That BookingSearchResult looks more or less identical to the BookTableCommand we sent before — it has all the same properties. That doesn’t seem very DRY! Can’t I just create a generic DTO and use that in both cases?

Using the same class for commands and queries is called CRUD, and it leads to exactly the sort of situations we are trying to avoid — where commands need to use different representations of objects than queries, but can’t because they are tightly coupled to the same object. As I said in part 1, commands are driven by business transactions, but queries are driven by UX needs, and often involve projections, flattening and aggregation — more than just 1:1 mapping.

Brownfield CQRS part 2 – Command Handlers

In my previous post, I described command DTOs and service methods for booking a table at a restaurant. Now, we just need something to interpret this command, and do something useful with it.

  • Brownfield CQRS part 1 – Commands
  • Brownfield CQRS part 2 – Command Handlers
  • Brownfield CQRS part 3 – Queries, Parameters and Results
  • Brownfield CQRS part 4 – Command Dispatcher

To do this, we create a corresponding command handler for each command:

public interface ICommandHandler<T> where T : ICommand
{
    void Handle(T command);
}

Command handlers are responsible for:

  • Performing any required validation on the command.
  • Invoking the domain — coordinating objects in the domain, and invoking the appropriate behaviour on them.

Command handlers are application services, and each execution represents a separate unit of work (e.g. a database transaction). There is only one command handler per command, because commands can only be handled once — they are not broadcast out to all interested listeners like event handlers.

Here’s an example for handling our BookTableCommand. A one-to-one handler/command mapping makes it easy to add/remove features from our service.

public class BookTableCommandHandler : ICommandHandler<BookTableCommand>
{
    readonly IDinnerServiceRepository nights;

    public BookTableCommandHandler(IDinnerServiceRepository nights)
    {
        this.nights = nights;
    }

    public void Handle(BookTableCommand command)
    {
        var night = nights[command.TimeAndDay];
        var party = new Party(command.PartySize, command.PartyName);
        night.TakeBooking(party);
    }
}

Note each command implements ICommand — a simple explicit role marker interface that also allows us to use constraints on generic types and automate IoC registration of command handlers.

public interface ICommand { }
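For instance, with Windsor all the command handlers can be registered in one go (a sketch — the exact fluent registration API varies a little between Windsor versions):

// Register every ICommandHandler<T> implementation in the assembly.
// (Sketch only; adjust to your Windsor version's registration API.)
container.Register(
    AllTypes
        .FromAssembly(Assembly.GetExecutingAssembly())
        .BasedOn(typeof(ICommandHandler<>))
        .WithService.FirstInterface());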

Command validation and errors

Aside from transient technical faults, there are two business reasons a command might fail:

  • The command was not valid — e.g. you tried to book a table for zero people.
  • The command could not succeed — e.g. the restaurant is fully booked that night.

Ideally, the client will have pre-checked these to save time, but if the command handler detects a problem, how do we report it back to the user, given commands are not allowed to have return values? How would we even report success?

Actually, this is not a problem at all — commands have no return value, but they can throw a detailed validation/command-failed exception back to the client. If a command doesn’t throw anything, it is assumed to have succeeded.
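So a handler can simply guard its input and throw (a sketch — CommandValidationException is a hypothetical exception type you would surface to the client as a WCF fault):

// Hypothetical validation guard inside BookTableCommandHandler.Handle.
public void Handle(BookTableCommand command)
{
    if (command.PartySize < 1)
        throw new CommandValidationException("Party size must be at least one person.");

    // ... invoke the domain as before
}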

What if you execute commands asynchronously — e.g. queued and executed at some later time? We can’t throw an exception back to the client in this case. But that’s fine — the client must always assume their asynchronous command will succeed. If it does fail, it will be reported back through some alternate channel (e.g. via e-mail or the query side). This is why it is important to pre-validate commands on the client as much as possible.