Background event handling in Prism

Prism’s event aggregator is great for decoupling UI state changes when UI events occur, but sometimes you need to perform some larger, long-running task on a background thread — uploading a file, for example.

Here’s a quick example of an encapsulated event handler that listens on the Prism event bus, using Windsor’s IStartable facility to handle event subscription:

public class TradeCancelledEventHandler : ICompositePresentationEventHandler, IStartable
{
    private readonly IEventAggregator eventAggregator;

    public TradeCancelledEventHandler(IEventAggregator eventAggregator)
    {
        if (eventAggregator == null)
            throw new ArgumentNullException("eventAggregator");

        this.eventAggregator = eventAggregator;
    }

    public void Start()
    {
        // Register to receive events on the background thread
        eventAggregator
            .GetEvent<TradeCancelledEvent>()
            .Subscribe(Handle, ThreadOption.BackgroundThread);
    }

    public void Stop()
    {
        eventAggregator
            .GetEvent<TradeCancelledEvent>()
            .Unsubscribe(Handle);
    }

    void Handle(TradeCancelledEventArgs eventArgs)
    {
        // ... do stuff with the event
    }
}

Each event handler is effectively a little service running in the container. Note ICompositePresentationEventHandler is a simple role interface that allows us to register them all at once in the IoC container:

public interface ICompositePresentationEventHandler {}

...

container.AddFacility<StartableFacility>();

// Register event handlers in container
container.Register(
    AllTypes
        .Of<ICompositePresentationEventHandler>()
        .FromAssembly(Assembly.GetExecutingAssembly()));

Brownfield CQRS part 4 – Command Dispatcher

In the first two posts I talked about commands and command handlers. Now we need to wire them up so they can be invoked from your service endpoint.

  • Brownfield CQRS part 1 – Commands
  • Brownfield CQRS part 2 – Command Handlers
  • Brownfield CQRS part 3 – Queries, Parameters and Results
  • Brownfield CQRS part 4 – Command Dispatcher

Command Dispatcher

When a command arrives, you simply look up the corresponding handler from your IoC container and invoke it. This responsibility is delegated to a command dispatcher object:

public interface ICommandDispatcher
{
    void Dispatch<T>(T command) where T : ICommand;
}

public class CommandDispatcher : ICommandDispatcher
{
    private readonly IWindsorContainer container;

    public CommandDispatcher(IWindsorContainer container)
    {
        if (container == null) throw new ArgumentNullException("container");
        this.container = container;
    }

    public void Dispatch<T>(T command) where T : ICommand
    {
        if (command == null) throw new ArgumentNullException("command");

        var handler = container.Resolve<ICommandHandler<T>>();
        ErrorIfNoHandlerForCommandFound(handler);
        handler.Handle(command);
    }

    private static void ErrorIfNoHandlerForCommandFound<T>(
        ICommandHandler<T> handler) where T : ICommand
    {
        if (handler == null)
            throw new NoHandlerForCommandException(typeof(T));
    }
}

Then we simply inject the command dispatcher into the WCF service and invoke it whenever a command is received:

[ServiceBehavior]
public class BookingService : IBookingService
{
    private readonly ICommandDispatcher commands;

    public BookingService(ICommandDispatcher commands)
    {
        if (commands == null) throw new ArgumentNullException("commands");
        this.commands = commands;
    }

    [OperationContract]
    public void BookTable(BookTableCommand command)
    {
        if (command == null) throw new ArgumentNullException("command");
        commands.Dispatch(command);
    }
}

Many of you will note that this is very similar to Udi Dahan’s Domain Events aggregator — the only major difference is that CQRS commands are only ever handled by one handler, whereas domain events are broadcast to anyone who’s listening.

Scaling out

Note this is a synchronous command dispatcher — commands are handled as soon as they arrive. An asynchronous/high-volume system may simply put them in a queue to be executed later by some other component.
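For illustration, a queued variant of the dispatcher might look something like this (a sketch only; ICommandQueue is a hypothetical abstraction over whatever queueing infrastructure you use, such as MSMQ or a service bus, and is not part of the series):

```csharp
// Sketch: a dispatcher that enqueues commands for later execution,
// instead of handling them synchronously. ICommandQueue is a
// hypothetical abstraction over your queueing infrastructure.
public interface ICommandQueue
{
    void Enqueue(ICommand command);
}

public class QueuedCommandDispatcher : ICommandDispatcher
{
    private readonly ICommandQueue queue;

    public QueuedCommandDispatcher(ICommandQueue queue)
    {
        if (queue == null) throw new ArgumentNullException("queue");
        this.queue = queue;
    }

    public void Dispatch<T>(T command) where T : ICommand
    {
        if (command == null) throw new ArgumentNullException("command");

        // Hand the command off; a background worker dequeues it later
        // and invokes the synchronous CommandDispatcher.
        queue.Enqueue(command);
    }
}
```

The client-facing contract stays exactly the same; only the timing of execution changes.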

Final thoughts

This really is a very introductory series to refactoring an existing application to move towards CQRS. We haven’t even touched on the main goal of CQRS yet — all we’ve done is put clear command/query contracts between our client and server.

It may not sound like much, but doing so allows us to mask non-CQRS components in our system — an anti-corruption layer of sorts — and allows us to proceed refactoring them internally to use different models and storage for commands and queries.

Brownfield CQRS part 3 – Queries, Parameters and Results

In the previous two posts, I showed some simple patterns for commands and command handlers. Now let’s talk about the other half of the story: queries!

  • Brownfield CQRS part 1 – Commands
  • Brownfield CQRS part 2 – Command Handlers
  • Brownfield CQRS part 3 – Queries, Parameters and Results
  • Brownfield CQRS part 4 – Command Dispatcher

On our WCF service, each query method:

  • Returns one or more QueryResult objects — a DTO created exclusively for this query.
  • Takes a QueryParameter argument — if required, a DTO containing search criteria, paging options etc.

For example, to query for bookings:

[ServiceContract]
public interface IBookingService
{
    [OperationContract]
    IEnumerable<BookingSearchResult> SearchBookings(
        BookingSearchParameters parameters);
}

Query Parameters

Queries take simple DTO parameter objects just like commands. They carry both search criteria (what to look for) and things like paging options (how to return results). They can also define defaults. For example:

[DataContract]
public class BookingSearchParameters
{
    public BookingSearchParameters()
    {
        // default values
        NumberOfResultsPerPage = 10;
        PageNumber = 1;
    }

    [DataMember]
    public Tuple<DateTime, DateTime> Period { get; set; }

    [DataMember]
    public int NumberOfResultsPerPage { get; set; }

    [DataMember]
    public int PageNumber { get; set; }
}

Query Object

Queries are then executed by a query object — an application service that queries your ORM, reporting store, or domain + automapper (if you’re still using a single model internally for commands and queries).

public interface IQuery<TParameters, TResult>
{
    TResult Execute(TParameters parameters);
}

Query Results

Queries can return a single result (e.g. to look up the details of a specific item), or a sequence (searching):

public class BookingSearchQuery :
    IQuery<BookingSearchParameters, IEnumerable<BookingSearchResult>>
{
    public IEnumerable<BookingSearchResult> Execute(
        BookingSearchParameters parameters)
    {
        ...
    }
}

Query results are simple DTOs that provide all the information the client needs.

[DataContract]
public class BookingSearchResult
{
    [DataMember]
    public string PartyName { get; set; }

    [DataMember]
    public int PartySize { get; set; }

    [DataMember]
    public DateTime TimeAndDay { get; set; }

    [DataMember]
    public string SpecialRequests { get; set; }
}

Query results should be able to be rendered directly on the UI, in one query. If they require further mapping, or multiple calls (e.g. to get different aspects of an object) before you can use them on a view model, then they are most likely:

  • Too granular — query results should be big flattened/denormalized objects which contain everything you need in one hit.
  • Based on the wrong model (the domain or persistence model) — they should be based on the UI’s needs, and present a screenful of information per call.

As with commands, having a one-to-one mapping between query objects and queries makes it easy to add/remove functionality to a system.
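With a one-to-one mapping, query objects can be auto-registered in the container in the same style as the event handlers earlier (a sketch; the exact fluent registration API varies between Castle Windsor versions):

```csharp
// Sketch: register every IQuery<,> implementation in the executing
// assembly, exposed through its query interface. Exact API names
// differ between Windsor versions.
container.Register(
    AllTypes
        .FromAssembly(Assembly.GetExecutingAssembly())
        .BasedOn(typeof(IQuery<,>))
        .WithService.FirstInterface());
```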

Can’t I use the same object for sending commands and returning results?

That BookingSearchResult looks more or less identical to the BookTableCommand we sent before — it has all the same properties. That doesn’t seem very DRY! Can’t I just create a generic DTO and use it in both cases?

Using the same class for commands and queries is called CRUD, and it leads to exactly the sort of situation we are trying to avoid — where commands need to use different representations of objects than queries do, but can’t, because both are tightly coupled to the same shared object. As I said in part 1, commands are driven by business transactions, but queries are driven by UX needs, and often involve projections, flattening and aggregation — more than just 1:1 mapping.

Brownfield CQRS part 2 – Command Handlers

In my previous post, I described command DTOs and service methods for booking a table at a restaurant. Now, we just need something to interpret this command, and do something useful with it.

  • Brownfield CQRS part 1 – Commands
  • Brownfield CQRS part 2 – Command Handlers
  • Brownfield CQRS part 3 – Queries, Parameters and Results
  • Brownfield CQRS part 4 – Command Dispatcher

To do this, we create a corresponding command handler for each command:

public interface ICommandHandler<T> where T : ICommand
{
    void Handle(T command);
}

Command handlers are responsible for:

  • Performing any required validation on the command.
  • Invoking the domain — coordinating objects in the domain, and invoking the appropriate behaviour on them.

Command handlers are application services, and each execution represents a separate unit of work (e.g. a database transaction). There is only one command handler per command, because commands can only be handled once — they are not broadcast out to all interested listeners like event handlers.

Here’s an example for handling our BookTableCommand. A one-to-one handler/command mapping makes it easy to add/remove features from our service.

public class BookTableCommandHandler : ICommandHandler<BookTableCommand>
{
    private readonly IDinnerServiceRepository nights;

    public BookTableCommandHandler(IDinnerServiceRepository nights)
    {
        this.nights = nights;
    }

    public void Handle(BookTableCommand command)
    {
        var night = nights[command.TimeAndDay];
        var party = new Party(command.PartySize, command.PartyName);
        night.TakeBooking(party);
    }
}

Note each command implements ICommand — a simple explicit role marker interface that also allows us to use constraints on generic types and automate IoC registration of command handlers.

public interface ICommand { }
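Since each handler execution is a separate unit of work, cross-cutting concerns like transactions fit naturally in a decorator around ICommandHandler&lt;T&gt;. Here is one possible sketch using TransactionScope (the decorator and its container registration are illustrative, not part of the series):

```csharp
// Sketch: wrap any command handler in a database transaction.
// Requires System.Transactions; registered as a decorator around
// the real handler in the IoC container.
public class TransactionalCommandHandler<T> : ICommandHandler<T>
    where T : ICommand
{
    private readonly ICommandHandler<T> inner;

    public TransactionalCommandHandler(ICommandHandler<T> inner)
    {
        if (inner == null) throw new ArgumentNullException("inner");
        this.inner = inner;
    }

    public void Handle(T command)
    {
        using (var scope = new TransactionScope())
        {
            inner.Handle(command);
            scope.Complete(); // commit only if the handler succeeded
        }
    }
}
```

This keeps the handlers themselves free of infrastructure concerns.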

Command validation and errors

Aside from transient technical faults, there are two business reasons a command might fail:

  • The command was not valid — e.g. you tried to book a table for zero people.
  • The command could not succeed — e.g. the restaurant is fully booked that night.

Ideally, the client will have pre-checked these to save time, but if the command handler detects a problem, how do we report it back to the user, given commands are not allowed to have return values? And how would we even report success?

Actually, this is not a problem at all — commands have no return value, but they can throw a detailed validation/command-failed exception back to the client. If the handler doesn’t throw anything, the command is assumed to have succeeded.
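One possible shape for such an exception (a sketch; the class name and properties are illustrative, and over WCF you would typically surface it as a fault contract rather than a raw CLR exception):

```csharp
// Sketch: a command failure carrying enough detail for the client
// to show a meaningful message. Names here are illustrative.
public class CommandFailedException : Exception
{
    public CommandFailedException(string reason, Type commandType)
        : base(String.Format("{0} failed: {1}", commandType.Name, reason))
    {
        Reason = reason;
        CommandType = commandType;
    }

    public string Reason { get; private set; }
    public Type CommandType { get; private set; }
}
```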

What if you execute commands asynchronously — e.g. queued and executed at some later time? We can’t throw an exception back to the client in this case. But that’s fine — the client must always assume their asynchronous command will succeed. If it does fail, it will be reported back through some alternate channel (e.g. via e-mail or the query side). This is why it is important to pre-validate commands on the client as much as possible.

DDD eXchange 2010 highlights

On Friday I attended DDD eXchange 2010, a one-day software conference here in London on Domain Driven Design. This year’s conference focused on two themes — architectural innovation and process — and I saw talks by Eric Evans, Udi Dahan, Greg Young, Ian Cooper and Gojko Adzic covering various aspects of each. Here are some of my highlights.


The difference between a Domain and a Domain Model

Eric Evans led the keynote, focusing on definitions of many of the original terms from the blue DDD book that often get mixed up. In particular, he clarified the difference between the domain and the domain model:

  • Domain — the business. How people actually do things.
  • Domain Model — a useful abstraction of the business, modelled in code.

One is fixed (only the business can change it), the other is flexible. It sounds really obvious, but often the terms are used interchangeably and things get messy. Most notably:

  • DDD systems never have a domain layer — they have a domain model. Go and rename your Domain namespace to DomainModel right now.
  • Ubiquitous language is created to describe the domain model — it is not just blindly dictated from the existing domain. Mark Gibaud summarized this succinctly in a tweet.

Both of these are things I have been vocal about in the past.

The difference between a Sub-Domain and a Bounded Context

Following on from the previous point, another tricky one — a subdomain is a sub-part of the business (e.g. in a banking domain, a subdomain might be loans), where bounded contexts are more concerned with things like linguistic boundaries. They are easiest to spot when two business experts use the same words to refer to different concepts.

Likewise, a generic subdomain is not a reusable code library/service (like Google Maps) — it is an area of the business with no special differentiation. Compare this with the core domain, where the business derives its competitive advantage by differentiating itself from the market.

CQRS

Much of the morning was spent discussing Command Query Responsibility Segregation (CQRS), Event Sourcing, persistent view models, and all that entails. If you’ve kept up with Greg and Udi’s recent talks, you didn’t miss anything.


In a paper-based system…

Greg mentioned a useful tool he sometimes uses when identifying roles and events in a business system: imagine how the business would operate if it were completely free of computers.

How did the business stay afloat sixty years ago, when people went home sick, or records got lost or contained mistakes? Are there any old employees around you can talk to who still remember? The answers to these questions may provide insight into how the domain should be modelled.

This also led into an interesting discussion about how the rise of the multi-user RDBMS has led businesses to expect their software to be 100% consistent (even though their business processes historically never were, before they got computers), and how difficult it is nowadays to convince businesses to embrace scary prospects like eventual consistency and the possibility of stale data.

Aggregates synchronize through events, not method calls

Ian Cooper mentioned this briefly in his session on implementing DDD on a large insurance company project. Basically, an aggregate must never call a method on another aggregate — doing so would violate each of their consistency boundaries. Instead, an interaction like this should be modelled as a domain event, with a separate handler coordinating its own consistency boundary (e.g. a transaction) for the second aggregate.
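A minimal sketch of that shape, in the style of Udi Dahan's domain events (all names here are illustrative, not from Ian's talk):

```csharp
// Sketch: instead of one aggregate calling a method on another,
// it raises a domain event; a separate handler then updates the
// second aggregate inside its own consistency boundary.
public class OrderPlaced // the domain event
{
    public Guid CustomerId { get; set; }
    public decimal Total { get; set; }
}

public class UpdateCustomerSpendingHandler
{
    private readonly ICustomerRepository customers;

    public UpdateCustomerSpendingHandler(ICustomerRepository customers)
    {
        this.customers = customers;
    }

    public void Handle(OrderPlaced e)
    {
        // Runs in its own unit of work/transaction, separate from
        // the one that saved the Order aggregate.
        var customer = customers[e.CustomerId];
        customer.RecordSpending(e.Total);
    }
}
```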

Final thoughts

Overall I had a fantastic time, and I highly recommend attending next year. Props to the Skills Matter staff and Eric for running such a great event!

My new development machine – Windows 7 SSD MacBook Pro

Last week, I bought my first new development machine in three years, and it’s a beast.

Early on I decided I wanted a laptop over a big desktop PC. Although laptops are usually more expensive, slower, and have smaller screens and fiddly keyboards, mobility is really important for me — I am planning to work and travel around Europe for the foreseeable future, and lugging a huge desktop everywhere isn’t really feasible. I want to travel light. But it also needs to be powerful enough to run all the heavy developer tools (Visual Studio, VMWare, SQL Server, Photoshop etc) I use every day.

Laptop – i7 MacBook Pro

My last laptop was a Mac (a Powerbook G4), and I was very happy with it — it lasted for five years before I moved to London and saw me through university, two jobs, three flats, and is basically where I learnt to program. So it should not be a surprise that last week, after months of waiting, I finally spec’d out a brand new 15″ i7 MacBook Pro from the Apple online store.

Why a Mac, when I’m primarily a .NET developer? Really, it comes down to these simple reasons:

  1. In my opinion, MacBook Pros are, and always have been, the best-looking laptops on the market.
  2. They are fast (although still only dual-core).
  3. The build quality of Apple laptops is very high, and it will probably last a long time.
  4. It can happily run Windows.

Going with a MacBook Pro was pretty much a no brainer as far as I’m concerned. But that’s not the end of the story — there’s a lot more to see under the hood.

Solid State Disk (SSD) – 200GB OCZ Vertex LE

It’s long been known that SSDs are fantastic for developers — their ultra-low seek times and small read/write latency can make them several times faster at compiling code than their spinning counterparts. Not to mention it’ll make your PC boot in about ten seconds.

Now SSDs have been available with Macs for some time, but the Toshiba drives used aren’t known for their performance. For a long time I was planning to get an Intel X-25M instead — the long-standing value/performance king of the SSD market. That is, until I saw the absurdly-fast OCZ Vertex 2 Pro.

This thing eats all other SSDs alive. But sadly you will never be able to buy one, because a few months after this review, the entire Vertex 2 Pro product line was cancelled. It seems the cost of the super fast enterprise-grade Sandforce SF-1500 controller was not feasible for OCZ’s desired price range, so they canned it.

However, a small number (only 5,000 units) of SF-1500-based drives were made available to customers as the OCZ Vertex Limited Edition (LE), probably from some early shipment. And according to Anand’s benchmarks, they are actually even faster than the Vertex 2 Pro. So naturally, I had to get one.

The difference is astounding. Here’s a comparison of my XBench results before and after the upgrade (Seagate Momentus 7200RPM vs OCZ Vertex LE):

As you can see, random small (4K) reads and writes is where it really shines — no more waiting for spinning magnetic platters for me!

Memory – 8GB Apple

Historically, Apple’s RAM upgrades have always been notoriously expensive — it’s almost always cheaper to buy/install it yourself. Strangely at the moment, however, Apple’s prices aren’t too bad, so I decided to configure it with 8GB installed — partly for convenience, and partly to ensure my RAM sticks are matched (same specs/manufacturer).

Screen – 15″ Hi Res, Glossy LCD

I have to admit this decision was based entirely on aesthetics — the matte screen is easier to read, particularly in bright sunlight, but that silver bezel is so 2004 🙂

The hi res screen is glorious too, but you will want to increase your font size a bit when coding (I always use 14 pt).

Operating System – dual boot Mac OS X/Windows 7 Professional

Although I love Mac OS X, .NET is my game, so I’ll probably be spending most of my time in Windows. This is my first Intel Mac (hey, it’s been a while…) so I’m quite new to the whole Bootcamp/Parallels/VMWare thing, but at this stage VMWare Fusion looks pretty good – being able to run my Bootcamp partition as a VM under Mac OS X seems like a nice solution.

Moshi Palmguard

I got a Moshi Palmguard because of my last Mac – a G4 Powerbook which suffered from ugly aluminum corrosion and pitting on the palm rests. Apparently aluminium pitting is still a problem for Macs today, so I wanted something to help protect it.

My Moshi Palmguard looks fantastic, and was dead easy to stick on — I found it easiest to align it with the cut-out at the bottom of the trackpad (where you open the lid). Just make sure there isn’t any dust or crumbs on the palm rests before sticking it on — you can’t reapply it once it’s attached. Also, I didn’t bother with the trackpad guard — the trackpad on a MacBook Pro is plastic, not metal, so it won’t corrode, and my Powerbook’s trackpad still looked fine after five years, so I didn’t see much point.

So that’s it — my proud new developer machine!

Final specs:

  • 15″ MacBook Pro
  • MBP 15″ HR Glossy WS Display (upgraded from standard res)
  • Intel Core i7 M620 dual core 2.66GHz CPU
  • 8GB 1066MHz DDR3 RAM (upgraded from 4GB)
  • 200GB OCZ Vertex LE SSD SF-1500 (upgraded from 500GB 7200RPM Seagate Momentus)
  • SuperDrive 8X DVDRW/CDRW

Do you produce useful exception messages?

Here is a method for adding a new Employee to a database:

public void AddEmployee(AddEmployeeCommand command)
{
    var employee = factory.CreateEmployee(command);
    repository.Add(employee);
}

Here is the same method again, but this time we have vastly improved it, by adding some useful error messages:

public void AddEmployee(AddEmployeeCommand command)
{
    try
    {
        var employee = factory.CreateEmployee(command);
        repository.Add(employee);
    }
    catch (Exception e)
    {
        var message = String.Format("Error adding Employee '{0}'",
            command.EmployeeName);
        throw new Exception(message, e);
    }
}

Why is this such a vast improvement? Good error messages won’t help deliver features faster or make your tests green. But they will make your life a hell of a lot easier when things start to go wrong.

One recent example where I used this was in a system where we had to construct a big hierarchical object graph of the training statuses of all employees in a geographical area. When we got an error, it looked like this:

Error generating Training report for 'Canterbury' district.
--> Error generating report for 'Christchurch' team.
   --> Error generating report for Employee 'Richard Dingwall' (#3463)
      --> Error getting Skill 'First Aid' (#12)
         --> SQL error in 'SELECT * FROM ...'

Error messages like this make it much easier to pinpoint problems than a raw invalid-identifier ADO.NET exception would. It’s like wrapping an onion — each layer adds a bit more context that explains what is going on.

Now, you don’t need to add try/catch blocks to every method in your call stack — just important ones like the entry points of controllers and services, which mark the boundaries of key areas in your application.

Exception messages should tell us two things:

  1. What was the action that failed?
  2. Which objects were involved? Names, IDs (ideally both), filenames, etc

When adding exception messages, first target areas that deal with external interfaces, because they are the places most likely to cause headaches through bugs or misconfiguration: databases, files, config, third-party systems etc.

Providing good exception messages is essential for making your application easy to maintain — easy for developers to quickly debug, and easy for system administrators to resolve configuration issues.

Remember, there is nothing more infuriating than getting a NullReferenceException from a third-party library.

A dangerous DDD misconception: one-sided ubiquitous language

Lately, I’ve seen a disturbing misconception about DDD crop up a couple of times in online and offline discussions. Here it is:

Ubiquitous language is sourced exclusively from the business. The developer side has no input, and must adopt whatever vocabulary they are given.

This is only true in situations where your project team is a Conformist to some upstream model. For example, if you’re developing an XML library, it’s probably best to stick to standard terms like Element and Attribute rather than inventing your own names for things.

In all other situations, however, you have more say. For example, if you’re developing a standalone business app from scratch, and a domain expert suggests a name for something that you think doesn’t quite fit, suggest a better one. Refining the ubiquitous language into a suitable abstraction is a team effort involving both developers and domain experts, and more often than not it flows back into the business as staff start to use the system. So it had better make sense!

Try not to call your objects DTOs

Strictly speaking, the DTO (Data Transfer Object) pattern was originally created for serializing and transmitting objects. But since then, DTOs have proven useful for things like commands, parameter objects, events, and as intermediary objects when mapping between different contexts (e.g. importing rows from an Excel worksheet).

One consequence of this widespread use is that, nowadays, naming a class SomeSortOfDto doesn’t tell me much about what the object is for — only that it carries data and has no behaviour.

Here are a few suggestions for better names that might help indicate an object’s purpose:

  • SomeSortOfQueryResult
  • SomeSortOfQueryParameter
  • SomeSortOfCommand
  • SomeSortOfConfigItem
  • SomeSortOfSpecification
  • SomeSortOfRow
  • SomeSortOfItem (for a collection)
  • SomeSortOfEvent
  • SomeSortOfElement
  • SomeSortOfMessage

This is by no means a definitive list — it’s just a few examples I can remember using off the top of my head. But you get the general idea — rather than calling your objects just DTOs, give them names that describe their purpose too.

Guard Methods

In defensive programming, guard clauses are used to protect your methods from invalid parameters. In design by contract, guard clauses are known as preconditions, and in domain driven design, we use them to protect invariants — unbreakable rules that form assumptions about our model:

public class BankAccount
{
    private int balance;

    public void WithDraw(int amount)
    {
        if (amount < 0)
            throw new InvalidAmountException(
                "Amount to be withdrawn must be positive.");

        if ((balance - amount) < 0)
        {
            string message = String.Format(
                "Cannot withdraw ${0}, balance is only ${1}.",
                amount, balance);
            throw new InsufficientFundsException(message);
        }

        balance -= amount;
    }
}

Unfortunately, in examples like this, the true intention of the method – actually withdrawing money – is now lost in a forest of error-checking guard clauses and exception messages. In fact, the successful path — representing 99% of executions (when there is enough money) — only accounts for 1 line in this method. So let’s refactor:

public class BankAccount
{
    private int balance;

    public void WithDraw(int amount)
    {
        ErrorIfInvalidAmount(amount);
        ErrorIfInsufficientFunds(amount);

        balance -= amount;
    }

    ...
}

By extracting these guard clauses into separate guard methods, the intention of the method becomes much clearer, and the explicit method names give a clear indication of what is being checked inside (regardless of how those checks are implemented). And we can concentrate on the main success path again.
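For completeness, the extracted guard methods might look like this (a sketch of one possible implementation; the bodies simply contain the same checks as before):

```csharp
// Sketch: bodies of the extracted guard methods. Same checks and
// exceptions as the original method, now behind intention-revealing
// names.
private static void ErrorIfInvalidAmount(int amount)
{
    if (amount < 0)
        throw new InvalidAmountException(
            "Amount to be withdrawn must be positive.");
}

private void ErrorIfInsufficientFunds(int amount)
{
    if ((balance - amount) < 0)
    {
        string message = String.Format(
            "Cannot withdraw ${0}, balance is only ${1}.",
            amount, balance);
        throw new InsufficientFundsException(message);
    }
}
```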