Brownfield CQRS part 1 – Commands

One question that came up several times at DDD eXchange last week was CQRS: now we understand all the benefits, how do we begin migrating our existing applications towards this sort of architecture?

It’s something we’ve been chipping away at recently at work, and over a short series of posts I’d like to share some of the conventions and patterns we’ve been using to migrate traditional WPF client/WCF server systems towards a CQRS-aware architecture.

Note that WCF isn’t strictly required here — these patterns are equally applicable to most other RPC-based services (e.g. old ASMX web services).

CQRS recap

Command-Query Responsibility Segregation (CQRS) is based around a few key assumptions:

  • All system behaviour is either a command that changes the state of the system, or a query that provides a view of that state (e.g. to display on screen).
  • In most systems, the number of reads (queries) is typically an order of magnitude higher than the number of writes (commands) — particularly on the web. It is therefore useful to be able to scale each side independently.
  • Commands and queries have fundamentally different needs — commands favour a domain model, consistency and normalization, whereas reads are faster when highly denormalized, e.g. table-per-screen with as little processing as possible in between. The two sides often also portray the same objects differently — commands are typically small, driven by the needs of business transactions, whereas queries are larger, driven by UX requirements and sometimes including projections, flattening and aggregation.
  • Using the same underlying model and storage mechanism for reads and writes couples the two sides together, and ensures at least one will suffer as a result.

CQRS completely separates (segregates) commands and queries at an architectural level, so each side can be designed and scaled independently of the other.

CQRS and WCF

The best way to begin refactoring your architecture is to define clear interfaces — contracts — between components. Even if, secretly, the components are huge messes on the inside, getting their interfaces (commands and queries) nailed down first sets the tone of the system, and allows you to begin refactoring each component at your discretion, without affecting the others.

Each method on our WCF service must be either a command or a query. Let’s start with commands.

Commands

Each command method on your service must:

  • Take a single command argument — a simple DTO/parameter object that encapsulates all the information required to execute the command.
  • Return void — commands do not have return values. Only fault contracts may be used to throw an exception when a command fails.

Here’s an example:

[DataContract]
public class BookTableCommand : ICommand
{
    [DataMember]
    public DateTime TimeAndDay { get; set; }

    [DataMember]
    public int PartySize { get; set; }

    [DataMember]
    public string PartyName { get; set; }

    [DataMember]
    public string ContactPhoneNumber { get; set; }

    [DataMember]
    public string SpecialRequests { get; set; }
}

Commands carry all the information they need for someone to execute them — e.g. a command for booking a restaurant would tell us who is coming, when the booking is for, contact details, and any special requests (e.g. someone’s birthday). Commands like these are a special case of the Parameter Object pattern.

Now we’ve got our command defined, here’s the corresponding method on the WCF service endpoint:

[ServiceContract]
public interface IBookingService
{
    [OperationContract]
    void BookTable(BookTableCommand command);
}

One class per command

Command DTO classes are never re-used outside their single use case. For example, if a customer wishes to change their booking (e.g. move it to a different day, or invite more friends), you would create a whole new ChangeBookingCommand, even though it may have all the same properties as the original BookTableCommand.

Why bother? Why not just create a single, general-purpose booking DTO and use it everywhere? Because:

  1. Commands are more than just the data they carry. The type communicates intent, and its name describes the context under which the command would be sent. This information would be lost with a general-purpose object.
  2. Using the same command DTO for two use cases couples them together. You couldn’t add a parameter for one use case without adding it for the other, for example.
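To make that concrete, here’s what the change-booking command might look like — a hypothetical sketch, with the class and property names assumed rather than taken from a real system:

[DataContract]
public class ChangeBookingCommand : ICommand
{
    // Identifies the existing booking being changed.
    [DataMember]
    public Guid BookingReference { get; set; }

    [DataMember]
    public DateTime TimeAndDay { get; set; }

    [DataMember]
    public int PartySize { get; set; }

    [DataMember]
    public string SpecialRequests { get; set; }
}

Even though it largely mirrors BookTableCommand, it’s a separate class: adding, say, a ChangeReason property here wouldn’t disturb BookTableCommand or any of its callers.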

What if I only need one parameter? Do I need a whole command DTO for that?

Say you had a command that only carried a reference to an object — its ID:

[DataContract]
public class CancelBookingCommand : ICommand
{
    [DataMember]
    public Guid BookingReference { get; set; }
}

Is it still worth creating an entire command DTO here? Why not just pass the GUID directly?

Actually, it doesn’t matter how many parameters there are:

  • The intent of the command (in this case, cancelling a restaurant booking) is more important than the data it carries.
  • Having a named command object makes this intent explicit — something that isn’t possible with just a GUID argument on your operation contract.
  • Adding another parameter to the command (say, an optional reason for cancelling) would require you to change the signature of the service contract. Adding another property to a command object would not.
  • Command objects are much easier to pass around than a bunch of random variables (as we will see in the next post). For example, you can queue commands on a message bus to be processed later, or dispatch them out to a cluster of machines.
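As a rough illustration of that last point — ICommandHandler<T> and the dispatcher below are names I’m assuming for the sketch, not WCF types — a self-contained command object can be routed purely by its type:

using System;
using System.Collections.Generic;

public interface ICommandHandler<in TCommand> where TCommand : ICommand
{
    void Handle(TCommand command);
}

public class CommandDispatcher
{
    // Maps each command type to a handler invocation.
    private readonly Dictionary<Type, Action<ICommand>> handlers =
        new Dictionary<Type, Action<ICommand>>();

    public void Register<TCommand>(ICommandHandler<TCommand> handler)
        where TCommand : ICommand
    {
        handlers[typeof(TCommand)] = c => handler.Handle((TCommand)c);
    }

    // Because a command is one object carrying everything it needs, the
    // same code path can execute it locally, queue it, or forward it on.
    public void Dispatch(ICommand command)
    {
        handlers[command.GetType()](command);
    }
}

A loose bag of parameters can’t be queued or forwarded like this without first being bundled into an object anyway.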

Why not just one overloaded Execute() method?

Instead of having one operation contract per command, why don’t you just use a single overloaded method like this?

[ServiceContract]
public interface IBookingService
{
    [OperationContract]
    void Execute<T>(T command) where T : ICommand;
}

You can, but I wouldn’t recommend it. We’re still doing SOA here — a totally-generic contract like this makes it much harder for things like service discovery, and for other clients to see the endpoint’s capabilities.

Next: Part 2 – Command Handlers

June 15, 2010


seagile on June 16, 2010 at 9:16 am.

On the issue of “Why not just one overloaded Execute() method?” I’d say it depends. If your service layer is only there to allow your WPF client to send up commands, then why bother creating explicit service contracts? The commands already carry all the intent — why have yet another method, probably with the same name as your command type? Using a universal contract, or universal command as I like to call it, will remove the burden of maintaining service contracts. In addition I’d put all my commands and their basic (as in property/field level) validation in a separate assembly that I can share between my WPF and WCF side of things (mind you that the commands no longer need to be datacontracts in this case). Pastie from a tweet I did ages ago, to give you an idea: http://pastie.org/919519
If you also want to expose your API to integrate with others then you could do it using an RPC style service contract, but I don’t think that’s what SOA really is about … you need to listen/read more of Udi Dahan to understand that bit …

Jake Scott on June 16, 2010 at 9:20 am.

Awesome series to blog about, keep em coming :) Are you persisting your commands in an event stream? I’m kinda guessing that you are not going use event sourcing on a brownfield project? Overarching question is: Who do you keep a single source of truth in the read and the write models?

Jake Scott on June 16, 2010 at 9:22 am.

Oops I meant: How do you keep the write and the read sides in sync? Would you write a compensating command on the write side?

Richard on June 16, 2010 at 10:32 am.

Seagile: that sounds like a good idea – I decided to go with method-per-command at the time, but I hadn’t considered the cost of adding these methods to a WCF service. I think a universal contract as you say would be a nicer alternative long-term.

Jake: we’re focusing on getting the interface between components right first. Event sourcing (the single source of truth) would come later, and synchronize to the read side via domain events.

Matt on June 17, 2010 at 7:23 am.

Read this with the same reaction as seagile – why not a universal contract? Take a look at the Agatha project for a fully baked implementation of this that you could adapt to handle commands / events. One downside: with separate service contracts / endpoints you have much more granular control over things like security, bindings, etc. per endpoint, rather than a generic one that winds up being a lowest-common-denominator scenario. You could also take an endpoint and host it separately from the others in its own farm, to scale out that endpoint independently because it’s a hot spot in the application. The universal endpoint makes a lot of things easier but it comes at a cost. Whether that cost is detrimental to your needs is your call.

seagile on June 17, 2010 at 10:15 am.

@Matt: I can still cherry pick which commands are behind a universal command service, and thus have as many endpoints/bindings as I like (for as many purposes as I like) – it’s just not obvious to the consumer which commands can be sent to which endpoint (I could provide a catalogue method – smells like an OData service). I tend to think of this approach as being a light-weight service bus emulator.

Matt on June 18, 2010 at 12:10 am.

@seagile: true – that is possible. you are correct there would be a smell, albeit not a good one. :)

David on June 18, 2010 at 6:57 pm.

Excellent article, I look forward to the rest of them. However, in regard to the third reason to have a command DTO as opposed to a single parameter:

“Adding another parameter to the command (say, for example, an optional reason for cancelling) would require you to change the signature of the service contract. Adding another property to a command object would not.”

In either case, if you add a parameter to the service contract or add a property to the data contract of that service, the signature of the service has in effect changed (logically, not binary). In both instances you would have to take into account versioning of service contracts and/or data contracts, or just make sure all clients have the updated data contract (DTO commands) used by the service or the service contract (single-parameter commands) itself.

Richard on June 22, 2010 at 3:15 pm.

@Matt: I haven’t had a look at Agatha. I know WCF doesn’t support fully-generic parameters due to SOAP limitations.

Tomas on July 19, 2010 at 10:07 pm.

Let’s say you’re a payment service provider (PSP). When creating/executing a transaction, you would have a command like CreateTransaction or similar. How should you return the transaction id from such a command? Or should you first make a query asking for a transaction id and then use that id when creating/executing your transaction?

seagile on July 19, 2010 at 11:13 pm.

@Tomas You could specify a correlationid and wait for response messages with that correlationid (one of those response messages could carry the transaction-id). I’d do that in an async fashion, which would make it a basic saga I guess.
The other question you could ask yourself is whether the transaction-id could be determined on the calling side. Now, I never worked with PSPs before, so I may be out of my depth here ;-)

Andy on July 20, 2010 at 9:57 am.

How would you implement (using CQRS) a standard website login (username/password) activity?

A login is something that has to be performed without delay so as not to annoy users.

Would you send a login command and then poll the query side for “login ok or failed”? Won’t that be pretty inefficient and make logging in slow?

or would you store a “username/pwd” hash on the query side that the client queries?

But when you do login you want to do some logic as well…..

any thoughts?

Richard on July 20, 2010 at 10:02 am.

@Andy: you could ask for a login token (query) then supply that token with each command sent to the server, and finally send a command to invalidate the token on logout.

Andy on July 20, 2010 at 9:54 pm.

But is it not wrong to ask the query side for a token, because then you change state (allocate a new token)? Feels like you want to have a “LoginUserCommand” but it feels weird…

Tomas on July 23, 2010 at 10:21 pm.

@seagile: do you mean that the client should provide the correlation id and then wait for an event from the server? Isn’t that just a workaround for request/response?
@andy, @richard: seems like you’re having the same discussion but with a different example. I think there will be situations where you want to provide metadata, like a transaction id or login token. Do you really want to use CQRS on those kinds of commands/operations? I really think it is an interesting pattern, but I see some situations where it seems to make things more complex than they are.

Anonymous on February 11, 2012 at 6:18 pm.

First query the read model to authenticate. If authentication is successful, create a new UserSession aggregate, which will generate a session id as a new GUID or however you choose to implement it. Set the cookie on the user’s browser and execute LoginUserCommand to store it on the server.

Ross Miller on February 5, 2014 at 8:43 am.

Hi, sorry to resurrect this post, but I found this very useful for implementing my own CQRS pattern. However, for the Windsor IoC container, do you have code that shows your Installer? At the moment I am having to explicitly match up the command with the handler via Windsor’s Container.Register call, as follows:

public class CommandHandlersInstaller : IWindsorInstaller
{
    public void Install(IWindsorContainer container, IConfigurationStore store)
    {
        // One explicit registration per command/handler pair, e.g.:
        container.Register(Component
            .For<ICommandHandler<BookTableCommand>>()
            .ImplementedBy<BookTableCommandHandler>());
    }
}

Do you have a more generic way of doing this?
