Brownfield CQRS Part 1 – Commands

One question that came up several times at DDD eXchange last week was CQRS: now we understand all the benefits, how do we begin migrating our existing applications towards this sort of architecture?

It’s something we’ve been chipping away at recently at work, and over a short series of posts I’d like to share some of the conventions and patterns we’ve been using to migrate traditional WPF client/WCF server systems towards a CQRS-aware architecture.

Note that WCF isn’t strictly required here — these patterns are equally applicable to most other RPC-based services (e.g. old ASMX web services).

CQRS recap

Command-Query Responsibility Segregation (CQRS) is based around a few key assumptions:

  • All system behaviour is either a command that changes the state of the system, or a query that provides a view of that state (e.g. to display on screen).
  • In most systems, the number of reads (queries) is typically an order of magnitude higher than the number of writes (commands) — particularly on the web. It is therefore useful to be able to scale each side independently.
  • Commands and queries have fundamentally different needs — commands favour a domain model, consistency and normalization, while reads are fastest when highly denormalized (e.g. table-per-screen) with as little processing as possible in between. The two sides often also portray the same objects differently — commands are typically small, driven by the needs of business transactions, while queries are larger, driven by UX requirements and sometimes involving projection, flattening and aggregation.
  • Using the same underlying model and storage mechanism for reads and writes couples the two sides together, and ensures at least one will suffer as a result.

CQRS completely separates (segregates) commands and queries at an architectural level, so each side can be designed and scaled independently of the other.

CQRS and WCF

The best way to begin refactoring your architecture is to define clear interfaces — contracts — between components. Even if the components are secretly huge messes on the inside, getting their interfaces (commands and queries) nailed down first sets the tone of the system, and allows you to begin refactoring each component at your discretion, without affecting others.

Each method on our WCF service must be either a command or a query. Let’s start with commands.

Commands

Each command method on your service must:

  • Take a single command argument — a simple DTO/parameter object that encapsulates all the information required to execute the command.
  • Return void — commands do not have return values. Only fault contracts may be used to throw an exception when a command fails.

Here’s an example:

[DataContract]
public class BookTableCommand : ICommand
{
    [DataMember]
    public DateTime TimeAndDay { get; set; }

    [DataMember]
    public int PartySize { get; set; }

    [DataMember]
    public string PartyName { get; set; }

    [DataMember]
    public string ContactPhoneNumber { get; set; }

    [DataMember]
    public string SpecialRequests { get; set; }
}

Commands carry all the information they need for someone to execute them — e.g. a command for booking a restaurant would tell us who is coming, when the booking is for, contact details, and any special requests (e.g. someone’s birthday). Commands like these are a special case of the Parameter Object pattern.

Now we’ve got our command defined, here’s the corresponding method on the WCF service endpoint:

[ServiceContract]
public interface IBookingService
{
    [OperationContract]
    void BookTable(BookTableCommand command);
}

One class per command

Command DTO classes are never re-used outside their single use case. For example, when a customer wishes to change their booking (e.g. move it to a different day, or invite more friends), you would create a whole new ChangeBookingCommand (sketched below), even though it may have all the same properties as the original BookTableCommand.

Why bother? Why not just create a single, general-purpose booking DTO and use it everywhere? Because:

  1. Commands are more than just the data they carry. The type communicates intent, and its name describes the context under which the command would be sent. This information would be lost with a general-purpose object.
  2. Using the same command DTO for two use cases couples them together. You couldn’t add a parameter for one use case without adding it for the other, for example.
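
For example, here’s a sketch of what that second command might look like (the exact properties are illustrative). It is a standalone class, even though today it largely mirrors BookTableCommand:

[DataContract]
public class ChangeBookingCommand : ICommand
{
    [DataMember]
    public Guid BookingReference { get; set; }

    [DataMember]
    public DateTime TimeAndDay { get; set; }

    [DataMember]
    public int PartySize { get; set; }
}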

What if I only need one parameter? Do I need a whole command DTO for that?

Say you had a command that only carried a reference to an object — its ID:

[DataContract]
public class CancelBookingCommand : ICommand
{
    [DataMember]
    public Guid BookingReference { get; set; }
}

Is it still worth creating an entire command DTO here? Why not just pass the GUID directly?

Actually, it doesn’t matter how many parameters there are:

  • The intent of the command (in this case, cancelling a restaurant booking) is more important than the data it carries.
  • Having a named command object makes this intent explicit — something a bare GUID argument on your operation contract cannot.
  • Adding another parameter to the command (say, for example, an optional reason for cancelling) would require you to change the signature of the service contract. Adding another property to a command object would not.
  • Command objects are much easier to pass around than a bunch of random variables (as we will see in the next post). For example, you can queue commands on a message bus to be processed later, or dispatch them out to a cluster of machines.
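
To give a quick taste of that, here’s a minimal sketch of server-side dispatch. The ICommandHandler<T> interface is purely illustrative, not a prescribed API:

// Sketch only: one handler class per command type.
public interface ICommandHandler<TCommand> where TCommand : ICommand
{
    void Handle(TCommand command);
}

public class CancelBookingCommandHandler : ICommandHandler<CancelBookingCommand>
{
    public void Handle(CancelBookingCommand command)
    {
        // look up the booking by command.BookingReference and cancel it
    }
}

Because each command is a self-contained object, the same handler can be invoked synchronously by the WCF service, or later by a message bus consumer; the command doesn’t care how it arrived.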

Why not just one generic Execute() method?

Instead of having one operation contract per command, why don’t you just use a single generic method like this?

[ServiceContract]
public interface IBookingService
{
    [OperationContract]
    void Execute<T>(T command) where T : ICommand;
}

You can, but I wouldn’t recommend it. We’re still doing SOA here — a totally-generic contract like this makes it much harder for things like service discovery, and for other clients to see the endpoint’s capabilities.

DDD eXchange 2010 highlights

On Friday I attended DDD eXchange 2010, a one-day software conference here in London on Domain Driven Design. This year’s conference focused on two themes — architectural innovation and process — and I saw talks by Eric Evans, Udi Dahan, Greg Young, Ian Cooper and Gojko Adzic discussing various aspects of each. Here are some of my highlights.

The difference between a Domain and a Domain Model

Eric Evans led the keynote, focusing on the definitions of many of the original terms from the blue DDD book that often get mixed up. In particular, he clarified the difference between the domain and the domain model:

  • Domain — the business. How people actually do things.
  • Domain Model — a useful abstraction of the business, modelled in code.

One is fixed (only the business can change it), the other is flexible. It sounds really obvious, but often the terms are used interchangeably and things get messy. Most notably:

  • DDD systems never have a domain layer — they have a domain model. Go and rename your Domain namespace to DomainModel right now.
  • Ubiquitous language is created to describe the domain model — it is not just blindly dictated from the existing domain. Mark Gibaud summarized this succinctly in a tweet.

Both of these are things I have been vocal about in the past.

The difference between a Sub-Domain and a Bounded Context

Following on from the previous point, another tricky one — a subdomain is a sub-part of the business (e.g. in a banking domain, a subdomain might be loans), whereas bounded contexts are more concerned with linguistic boundaries. They are easiest to spot when two business experts use the same words to refer to different concepts.

Likewise, a generic subdomain is not a reusable code library/service (like Google Maps) — it is an area of the business with no special differentiation. Compare this with the core domain, where the business derives its competitive advantage by differentiating itself from the market.

CQRS

Much of the morning was spent discussing Command Query Responsibility Segregation (CQRS), Event Sourcing, persistent view models, and all that entails. If you’ve kept up with Greg and Udi’s recent talks, you didn’t miss anything.

In a paper-based system…

Greg mentioned a useful tool he sometimes uses when identifying roles and events in a business system: imagine how the business would operate if it were completely free of computers.

How did they stay afloat sixty years ago? What happened when people went home sick, or records got lost or contained mistakes? Are there any long-serving employees around you can talk to who still remember? The answers to these questions may provide insight into how the domain should be modelled.

This also led into an interesting discussion about how the rise of the multi-user RDBMS has led businesses to expect their software to be 100% consistent (even though their business processes never historically were, before they got computers), and how difficult it is nowadays to convince businesses to embrace scary prospects like eventual consistency and the possibility of stale data.

Aggregates synchronize through events, not method calls

Ian Cooper mentioned this briefly in his session on implementing DDD on a large insurance company project. Basically, an aggregate must never call a method on another aggregate — doing so would violate each of their consistency boundaries. Instead, an interaction like this should be modelled as a domain event, with a separate handler coordinating its own consistency boundary (e.g. a transaction) for the second aggregate.
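
As a rough sketch (the aggregates here are illustrative, and DomainEvents stands in for whatever event-publishing infrastructure you use — e.g. Udi Dahan’s Domain Events pattern):

// The event is a simple, immutable message describing what happened.
public class PolicyLapsed
{
    public PolicyLapsed(Guid policyId)
    {
        PolicyId = policyId;
    }

    public Guid PolicyId { get; private set; }
}

// Instead of Policy calling a method on Customer directly, it raises
// a domain event after updating its own state...
public class Policy
{
    public Guid Id { get; private set; }

    public void Lapse()
    {
        // ...change this aggregate's own state only...
        DomainEvents.Raise(new PolicyLapsed(Id));
    }
}

// ...and a separate handler coordinates its own consistency boundary
// (e.g. a new transaction) for the second aggregate.
public class PolicyLapsedHandler
{
    public void Handle(PolicyLapsed e)
    {
        using (var transaction = session.BeginTransaction())
        {
            var customer = customers.GetOwnerOf(e.PolicyId);
            customer.RecordLapsedPolicy(e.PolicyId);
            transaction.Commit();
        }
    }
}

(Here session and customers are assumed to be an NHibernate-style session and a repository, injected into the handler.)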

Final thoughts

Overall I had a fantastic time, and I highly recommend attending next year. Props to the Skills Matter staff and Eric for running such a great event!

My new development machine – Windows 7 SSD MacBook Pro

Last week, I bought my first new development machine in three years, and it’s a beast.

Early on I decided I wanted a laptop over a big desktop PC. Although laptops are usually more expensive, slower, and have smaller screens and fiddly keyboards, mobility is really important for me — I am planning to be working/travelling around Europe for the foreseeable future, and lugging a huge desktop everywhere isn’t really feasible. I want to travel light. But it also needs to be powerful enough to run all the heavy developer tools (Visual Studio, VMWare, SQL Server, Photoshop etc) I use every day.

Laptop – i7 MacBook Pro

My last laptop was a Mac (a Powerbook G4), and I was very happy with it — it lasted five years, saw me through university, two jobs, three flats and a move to London, and is basically where I learnt to program. So it should not be a surprise that last week, after months of waiting, I finally spec’d out a brand new 15″ i7 MacBook Pro from the Apple online store.

Why a Mac, when I’m primarily a .NET developer? Really, it comes down to these simple reasons:

  1. In my opinion, MacBook Pros are, and always have been, the best-looking laptops on the market.
  2. They are fast (although still only dual-core).
  3. The build quality of Apple laptops is very high, and it will probably last a long time.
  4. It can happily run Windows.

Going with a MacBook Pro was pretty much a no brainer as far as I’m concerned. But that’s not the end of the story — there’s a lot more to see under the hood.

Solid State Disk (SSD) – 200GB OCZ Vertex LE

It’s long been known that SSDs are fantastic for developers — their ultra-low seek times and small read/write latency can make them several times faster at compiling code than their spinning counterparts. Not to mention it’ll make your PC boot in about ten seconds.

SSDs have been available with Macs for some time now, but the Toshiba drives used aren’t known for their performance. For a long time I was planning to get an Intel X25-M instead — the long-standing value/performance king of the SSD market. That is, until I saw the absurdly-fast OCZ Vertex 2 Pro.

This thing eats all other SSDs alive. But sadly you will never be able to buy one, because a few months after this review, the entire Vertex 2 Pro product line was cancelled. It seems the cost of the super fast enterprise-grade Sandforce SF-1500 controller was not feasible for OCZ’s desired price range, so they canned it.

However, a small number (only 5,000 units) of SF-1500-based drives were made available to customers as the OCZ Vertex Limited Edition (LE), probably from some early shipment. And according to Anand’s benchmarks, they are actually even faster than the Vertex 2 Pro. So naturally, I had to get one.

The difference is astounding. Here’s a comparison of my XBench results before and after the upgrade (Seagate Momentus 7200RPM vs OCZ Vertex LE):

As you can see, random small (4K) reads and writes are where it really shines — no more waiting for spinning magnetic platters for me!

Memory – 8GB Apple

Apple’s RAM upgrades have always been notoriously expensive — it’s almost always cheaper to buy and install it yourself. Strangely, however, Apple’s prices aren’t too bad at the moment, so I decided to configure it with 8GB installed — partly for convenience, and partly to ensure my RAM sticks are matched (same specs/manufacturer).

Screen – 15″ Hi Res, Glossy LCD

I have to admit this decision was based entirely on aesthetics — the matte screen is easier to read, particularly in bright sunlight, but that silver bezel is so 2004 🙂

The hi res screen is glorious too, but you will want to increase your font size a bit when coding (I always use 14 pt).

Operating System – dual boot Mac OS X/Windows 7 Professional

Although I love Mac OS X, .NET is my game, so I’ll probably be spending most of my time in Windows. This is my first Intel Mac (hey, it’s been a while…) so I’m quite new to the whole Bootcamp/Parallels/VMWare thing, but at this stage VMWare Fusion looks pretty good – being able to run my Bootcamp partition as a VM under Mac OS X seems like a nice solution.

Moshi Palmguard

I got a Moshi Palmguard because of my last Mac – a G4 Powerbook which suffered from ugly aluminium corrosion and pitting on the palm rests. Apparently aluminium pitting is still a problem for Macs today, so I wanted something to help protect it.

My Moshi Palmguard looks fantastic, and was dead easy to stick on — I found it easiest to align it with the cut-out at the bottom of the trackpad (where you open the lid). Just make sure there isn’t any dust or crumbs on the palm rests before sticking it on — you can’t reapply it once it’s attached. Also, I didn’t bother with the trackpad guard — the trackpad on a MacBook Pro is plastic, not metal, so it won’t corrode, and my Powerbook’s still looked fine after five years, so I didn’t see much point.

So that’s it — my proud new developer machine!

Final specs:

  • 15″ MacBook Pro
  • MBP 15″ HR Glossy WS Display (upgraded from standard res)
  • Intel Core i7 M620 dual core 2.66GHz CPU
  • 8GB 1066MHz DDR3 RAM (upgraded from 4GB)
  • 200GB OCZ Vertex LE SSD SF-1500 (upgraded from 500GB 7200RPM Seagate Momentus)
  • SuperDrive 8X DVDRW/CDRW

TestFixture attribute – just because R# doesn’t care, doesn’t mean you shouldn’t

Recently I noticed ReSharper’s built-in test runner doesn’t require you to decorate your fixtures with the TestFixture attribute — it’s smart enough to locate Test methods regardless. My team noticed this too, and as a result we’ve started to omit TestFixture entirely.

Turns out this is a big mistake. NUnit itself requires the attribute — in our case, we discovered our build server (which calls NUnit directly) was ignoring a large number of tests (including some that were failing), and was reporting much lower code coverage results than expected.

So, somehow we need to enforce that the TestFixture attribute is applied everywhere it should be, so us lazy ReSharper developers don’t miss anything. Not to worry… we can write a test for that!

[TestFixture]
public class MissingTestFixtureAttributeTests
{
    [Test]
    public void All_test_fixtures_must_have_test_fixture_attribute()
    {
        IEnumerable<Type> badFixtures =
            from t in Assembly.GetExecutingAssembly().GetTypes()
            where HasTests(t)
            where IsMissingTestFixtureAttribute(t)
            select t;

        ErrorIfAny(badFixtures);
    }

    static bool HasTests(Type type)
    {
        var testMethods =
            from m in type.GetMethods()
            from a in m.GetCustomAttributes(typeof(TestAttribute), true)
            select m;

        return testMethods.Any();
    }

    static bool IsMissingTestFixtureAttribute(Type type)
    {
        var attributes =
            from a in type.GetCustomAttributes(typeof(TestFixtureAttribute), true)
            select a;

        return !attributes.Any();
    }

    private static void ErrorIfAny(IEnumerable<Type> fixtures)
    {
        if (!fixtures.Any())
            return;

        var sb = new StringBuilder("The following types are missing [TestFixture] attributes:");
        sb.AppendLine();

        foreach (Type type in fixtures)
        {
            sb.AppendLine();
            sb.Append(type);
        }

        throw new Exception(sb.ToString());
    }
}

libcheck – quick and dirty assembly compatibility debugging

Today I wrote a little command-line tool for helping track down .NET assembly compatibility issues:

System.IO.FileLoadException: Could not load file or assembly {0} or one of its dependencies. The located assembly's manifest definition does not match the assembly reference.

My team encounters these problems very regularly — we have a build/dependency tree that easily rivals the Castle Project in complexity, and it gets out of sync easily while waiting for downstream projects to build. Normally, your only option in this situation is to fire up Reflector and poke through all your references manually — a very tedious process.

So, to use this tool, you point it at a directory (e.g. your lib folder) and provide a list of patterns for assemblies you want to check:

libcheck.exe C:\dev\yourapp\lib *NCover*

Then it’ll tell you which of them are broken.
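
For the curious, the core of the check is only a few lines. Here’s a minimal sketch of the idea, not the actual libcheck source. It compares the references recorded in each assembly’s manifest against the assemblies actually present in the directory:

// Minimal sketch of the idea behind libcheck (not the actual source).
using System;
using System.IO;
using System.Reflection;

class LibCheck
{
    static void Main(string[] args)
    {
        string dir = args[0];
        string pattern = args[1]; // e.g. *NCover*

        foreach (string file in Directory.GetFiles(dir, pattern + ".dll"))
        {
            // Read the manifest without executing any code.
            Assembly assembly = Assembly.ReflectionOnlyLoadFrom(file);

            foreach (AssemblyName reference in assembly.GetReferencedAssemblies())
            {
                string path = Path.Combine(dir, reference.Name + ".dll");
                if (!File.Exists(path))
                    continue; // resolved elsewhere, e.g. the GAC

                AssemblyName actual = AssemblyName.GetAssemblyName(path);
                if (actual.Version != reference.Version)
                    Console.WriteLine("{0} expects {1} {2}, but found {3}",
                        assembly.GetName().Name, reference.Name,
                        reference.Version, actual.Version);
            }
        }
    }
}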

Mandatory overloads

Yesterday I read a pretty sensible tweet by Jeremy D. Miller:

When you have a Method<T>(args), you almost always need a Method(Type, args) too.

In spirit, I would like to propose another:

When you have a method like Method(IEnumerable<T> items), you should always provide a Method(params T[] items) too.
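
For example (PrintNames is just an illustrative method), the params overload can simply delegate to the IEnumerable<T> one:

public void PrintNames(IEnumerable<string> names)
{
    foreach (string name in names)
        Console.WriteLine(name);
}

// Convenience overload: callers can write PrintNames("Alice", "Bob")
// without building a collection first.
public void PrintNames(params string[] names)
{
    PrintNames((IEnumerable<string>)names);
}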

Do you produce useful exception messages?

Here is a method for adding a new Employee to a database:

public void AddEmployee(AddEmployeeCommand command)
{
    var employee = factory.CreateEmployee(command);
    repository.Add(employee);
}

Here is the same method again, but this time we have vastly improved it, by adding some useful error messages:

public void AddEmployee(AddEmployeeCommand command)
{
    try
    {
        var employee = factory.CreateEmployee(command);
        repository.Add(employee);
    }
    catch (Exception e)
    {
        var message = String.Format("Error adding Employee '{0}'",
            command.EmployeeName);
        throw new Exception(message, e);
    }
}

Why is this such a vast improvement? Good error messages won’t help deliver features faster or make your tests green. But they will make your life a hell of a lot easier when things start to go wrong.

One recent example where I used this was in a system where we had to construct a big hierarchical object graph of all the training statuses of employees in a geographical area. When we got an error, it looked like this:

Error generating Training report for 'Canterbury' district.
--> Error generating report for 'Christchurch' team.
   --> Error generating report for Employee 'Richard Dingwall' (#3463)
      --> Error getting Skill 'First Aid' (#12)
         --> SQL error in 'SELECT * FROM ...'

Error messages like this make it much easier to pinpoint problems than a raw invalid-identifier ADO.NET exception would. It’s like wrapping an onion — each layer adds a bit more context that explains what is going on.

Now, you don’t need to add try/catch blocks to every method in your call stack, just important ones like entry points for controllers and services, which mark the boundary of key areas in your application.

Exception messages should tell us two things:

  1. What was the action that failed?
  2. Which objects were involved? Names, IDs (ideally both), filenames, etc.

When adding exception messages, first target areas that deal with external interfaces, because they are the places most likely to cause headaches through bugs or misconfiguration: databases, files, config, third-party systems etc.

Providing good exception messages is essential for making your application easy to maintain — easy for developers to quickly debug, and easy for system administrators to resolve configuration issues.

Remember, there is nothing more infuriating than getting a NullReferenceException from a third-party library.

A dangerous DDD misconception: one-sided ubiquitous language

Lately, I’ve seen a disturbing misconception about DDD crop up a couple of times in online and offline discussions. Here it is:

Ubiquitous language is sourced exclusively from the business. The developer side has no input, and must adopt whatever vocabulary they are given.

This is only true in situations where your project team is a Conformist to some upstream model. For example, if you’re developing an XML library, it’s probably best to stick to standard terms like Element and Attribute, rather than inventing your own names for things.

In all other situations, however, you have more say. For example, if you’re developing a standalone business app from scratch, and a domain expert suggests a name for something that you think doesn’t quite fit, suggest a better one. Refining the ubiquitous language into a suitable abstraction is a team effort involving both developers and domain experts, and more often than not it flows back into the business as staff start to use the system. So it had better make sense!

Try not to call your objects DTOs

Strictly speaking, the DTO (Data Transfer Object) pattern was originally created for serializing and transmitting objects. But since then, DTOs have proven useful for things like commands, parameter objects, events, and as intermediary objects when mapping between different contexts (e.g. importing rows from an Excel worksheet).

One consequence of this widespread use is that, nowadays, naming a class SomeSortOfDto doesn’t tell me much about what the object is for — only that it carries data and has no behaviour.

Here are a few suggestions for better names that might help indicate the object’s purpose:

  • SomeSortOfQueryResult
  • SomeSortOfQueryParameter
  • SomeSortOfCommand
  • SomeSortOfConfigItem
  • SomeSortOfSpecification
  • SomeSortOfRow
  • SomeSortOfItem (for a collection)
  • SomeSortOfEvent
  • SomeSortOfElement
  • SomeSortOfMessage

By no means is this a definitive list — it’s just a few examples I can remember using off the top of my head. But you get the general idea — try not to name your objects just DTOs; give them names that describe their purpose too.

Guard Methods

In defensive programming, guard clauses are used to protect your methods from invalid parameters. In design by contract, guard clauses are known as preconditions, and in domain driven design, we use them to protect invariants — unbreakable rules that form assumptions about our model:

public class BankAccount
{
    private int balance;

    public void WithDraw(int amount)
    {
        if (amount < 0)
            throw new InvalidAmountException(
                "Amount to be withdrawn must be positive.");

        if ((balance - amount) < 0)
        {
            string message = String.Format(
                "Cannot withdraw ${0}, balance is only ${1}.",
                amount, balance);
            throw new InsufficientFundsException(message);
        }

        balance -= amount;
    }
}

Unfortunately, in examples like this, the true intention of the method – actually withdrawing money – is now lost in a forest of error-checking guard clauses and exception messages. In fact, the successful path — representing 99% of executions (when there is enough money) — only accounts for 1 line in this method. So let’s refactor:

public class BankAccount
{
    private int balance;

    public void WithDraw(int amount)
    {
        ErrorIfInvalidAmount(amount);
        ErrorIfInsufficientFunds(amount);

        balance -= amount;
    }

    ...
}

By extracting these guard clauses into separate guard methods, the intention of the method becomes much clearer, and the explicit method names give a clear indication of what is being checked inside (regardless of how those checks are implemented). And we can concentrate on the main success path again.
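
For completeness, the extracted guard methods would look something like this, lifted straight from the original clauses:

private void ErrorIfInvalidAmount(int amount)
{
    if (amount < 0)
        throw new InvalidAmountException(
            "Amount to be withdrawn must be positive.");
}

private void ErrorIfInsufficientFunds(int amount)
{
    if ((balance - amount) < 0)
    {
        string message = String.Format(
            "Cannot withdraw ${0}, balance is only ${1}.",
            amount, balance);
        throw new InsufficientFundsException(message);
    }
}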