Semantic CSS grid layout with LESS

I have a confession to make. Eight years after we first met, I still don’t quite get CSS.

Sure, I can do typography, sprite backgrounds and understand the basics of the box model. I even know about browser-specific CSS extensions for CSS3 effects. But when it comes to clearing floats, vertically-aligning form elements or figuring out inline parents with block children, I go to a very dark place and start cursing.

I freely admit my CSS layout and positioning skills are lacking, and I probably shouldn’t even be blogging about it. But when I discovered CSS grid frameworks, I was naturally interested — anything that helps me get up and running quicker in a foreign language is a win.

Grid frameworks: CSS for dummies

For those that don’t know, grid frameworks like 960gs, YUI grids and Blueprint CSS provide a simple 12, 16 or 24 column grid that abstracts away complicated CSS layout and positioning rules.

For example, using the Blueprint CSS framework, this markup produces the following layout, neatly centered in the middle of the browser window:

<html>
<head>
    <link rel="stylesheet" type="text/css" href="blueprint/screen.css" />
</head>
<body>
    <div class="container">
        <div class="push-4 span-15">header</div>
        <div class="push-4 span-4">menu</div>
        <div class="span-7">main</div>
        <div class="span-4">ad space?</div>
        <div class="push-4 span-15">footer</div>
    </div>
</body>
</html>

How easy was that? As a CSS newbie, not having to worry about floats, clears, negative margins and other positioning tricks is a very attractive proposition.

So CSS grids are very powerful for quickly laying out elements. But there is a trade-off you should consider — HTML marked up with grids is not at all semantic. Peppering your HTML with classes like .span-12 or .yui-t3 for layout is no better than specifying table widths and heights everywhere.

Wouldn’t it be great if you could keep using these grid classes, but somehow mask them behind semantic class names?

LESS: CSS preprocessor

About the same time I discovered grids, I stumbled upon LESS: a ‘leaner’ CSS command-line preprocessor that extends CSS with its own syntax and features. The .NET port, .Less, has a smaller feature set than the Rails version, but it lets you do stuff like this:

.rounded_corners {
    @radius: 8px; /* variables */
    -moz-border-radius: @radius;
    -webkit-border-radius: @radius;
    border-radius: @radius;
}

#header {
    .rounded_corners; /* mix-ins */

    img.logo { /* nested styles */
        margin: (@radius * 2) + 1px; /* expressions */
    }
}

#footer {
    .rounded_corners;
}

I have .Less set up as an HttpModule in my web.config. It intercepts any requests to *.less files, translates them to real CSS, and optionally minifies (compresses) them. So you can simply reference the .less file directly in your markup, with no extra compilation step required:

<head>
    <link rel="stylesheet" type="text/css" href="site.less" />
</head>

Grid CSS frameworks + LESS = semantic grids

I’ve been using LESS for a few weeks now, and to be honest, I never want to go back to writing ‘raw’ CSS again. So what happens when you combine the CSS grid framework with LESS? Here’s the new stylesheet:

@import "blueprint/screen.css";

div.container {
    div.header {
        .push-4;
        .span-15;
    }
    div.menu {
        .push-4;
        .span-4;
    }
    div.main {
        .span-7;
    }
    div.ads {
        .span-4;
    }
    div.footer {
        .push-4;
        .span-15;
    }
}

All that’s left is semantic class names, using the grid styles as mix-ins. Now the markup is looking acceptable again:

<html>
<head>
    <link rel="stylesheet" type="text/css" href="site.less" />
</head>
<body>
    <div class="container">
        <div class="header">header</div>
        <div class="menu">menu</div>
        <div class="main">main</div>
        <div class="ads">ad space?</div>
        <div class="footer">footer</div>
    </div>
</body>
</html>

Using grid classes as mix-ins gives us the best of both worlds — you get the power of the grid CSS framework, but without introducing layout concerns to your markup.

Domain-Driven Documentation

Here’s a couple of real-life documentation examples from a system I’ve been building for a client:

Monitored Individual is a role played by certain Employees. Each Monitored Individual is required to be proficient in a number of Competencies, according to [among other things] what District they’re stationed in.

Training Programme is comprised of Skills, arranged in Skill Groups. Skill Groups can contain Sub Groups, nested as many levels deep as you like. Skills can be used for multiple Training Programmes, but you can’t have the same Skill twice under the same Training Programme. When a Skill is removed from a Training Programme, Monitored Individuals should no longer have to practice it.

This is the same style Evans uses himself in the blue DDD book. A colleague jokingly called it Domain-Driven Documentation.

I adopted it after noticing a couple of problems with my documentation:

  • I was using synonyms — different words with the same meaning — interchangeably to refer to the same thing in different places.
  • Sentences talking about the code itself looked messy and inconsistent when mixing class names with higher-level concepts.

It’s a pretty simple system. There are only three rules to remember: when referring to domain concepts, use capital letters, write them in full, and write them in bold.

Highlighting the names of domain concepts like this is a fantastic way to hammer down the ubiquitous language — the vocabulary shared between business and developers.

Since adopting it, I’ve noticed improvements in both the quality of my documentation, and of the communication in our project meetings — non-technical business stakeholders are starting to stick to the ubiquitous language now, where in the past they would fall back to talking about purely UI artifacts. This is really encouraging to see — definitely a success for DDD.

jQuery: checkboxes that remember their original state

Last week I had to write a little javascript for a form that involved a long list of checkboxes. To save time, the application only processes rows where the checkbox value actually changed from its original state.

For example, if a checkbox was checked to begin with, then the user unchecks it and re-checks it, we don’t have to do anything: although it did change, it ended up back at its original state.

This is pretty easy to achieve by remembering the original state of the checkbox using jQuery data — library methods for storing arbitrary javascript objects in DOM elements — then comparing against the original value when toggled.

<script type="text/javascript">
    $(document).ready(function() {
        // Set aside the original state of each checkbox.
        $("input.toggle").each(function() {
            $(this).data("originallyChecked", $(this).is(":checked"));
        });

        // Check whether it really changed on click.
        $("input.toggle").change(function() {
            var action = $(this).siblings("span");
            if ($(this).data("originallyChecked") == $(this).is(":checked"))
                action.text("(no change)");
            else
                action.text($(this).is(":checked") ? "added" : "removed");
        });
    });
</script>

<ol>
    <li>Apple <input type="checkbox" class="toggle" /> <span/></li>
    <li>Banana <input type="checkbox" class="toggle" checked /> <span/></li>
    <li>Carrot <input type="checkbox" class="toggle" /> <span/></li>
    <li>Zucchini <input type="checkbox" class="toggle" checked /> <span/></li>
</ol>

The next requirement was adding a select/deselect all button. This proved a little bit more difficult, because of the way jQuery deals with events and the checked attribute. To cut a long story short, I ended up with an event handler that manually sets the checked attribute and then fires the change event on all the checkboxes (I tried the click event first, but it didn’t seem to pick up the new state).

$(document).ready(function() {
    $("input#select-all").click(function() {
        $("input.toggle").attr('checked', this.checked);
        $("input.toggle").change();
    });
});

Altogether, it works very nicely in only a few lines of javascript.

Stick to the paradigm (even if it sucks)

Today I had the pleasure of fixing a bug in an unashamedly procedural ASP.NET application, comprised almost entirely of static methods with anywhere from 5 to 20 parameters each (yuck yuck yuck).

After locating the bug and devising a fix, I hit a wall. I needed some additional information that wasn’t in scope. There were three places I could get it from:

  • The database
  • Form variables
  • The query string

The problem was knowing which to choose, because it depended on the lifecycle of the page — is it the first hit, a reopened item, or a post back? Of course that information wasn’t in scope either.

As I contemplated how to figure out the state of the page using HttpContext.Current, my spidey sense told me to stop and reconsider.

Let’s go right back to basics. How did we use to manage this problem in procedural languages like C? There is a simple rule — always pass everything you need into the function. Global variables and hidden state may be convenient in the short term, but only serve to confuse later on.

To fix the problem, I had to forget about “this page” as an object instance. I had to forget about private class state. I had to forget that C# was an object-oriented language. Those concepts were totally incompatible with this procedural code base, and any implementation would likely result in a big mess.

In fact, to avoid both a DRY violation (working out the page state all over again) and hidden dependencies on HttpContext, it turned out the most elegant solution was simply to add an extra parameter to every method in the stack. So wrong from an OO/.NET standpoint, but so right for the procedural paradigm in place.
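A rough sketch of the idea, with hypothetical names:

// The page lifecycle passed explicitly down the stack, in keeping with the
// procedural style of the code base (all names here are hypothetical).
public enum PageLifecycle { FirstHit, Reopened, PostBack }

public static class OrderPage
{
    public static void Render(int orderId, PageLifecycle lifecycle /* , ... */)
    {
        decimal total = CalculateTotal(orderId, lifecycle);
        // ...
    }

    private static decimal CalculateTotal(int orderId, PageLifecycle lifecycle)
    {
        // The lifecycle parameter tells us whether to read from the database,
        // form variables or the query string - no hidden HttpContext lookups.
        return 0m; // ...
    }
}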

DDD: making the Time Period concept explicit

One of the applications I work on is a planning system, used for managing the operations of the business over the next week, month and financial year.

Almost every entity in this application has a fixed ‘applicable period’ — a lifetime that begins and ends at certain dates. For example:

  • An employee’s applicable period lasts as long as they are employed
  • A business unit’s lifetime lasts from the day it’s formed, to the day it disbands
  • A policy lasts from the day it comes into effect to the day it ends
  • A shift starts at 8am and finishes at 6pm

Previous incarnations of the application simply added StartDate and EndDate properties to every object, and evaluated them ad hoc as required. This resulted in a lot of code duplication — date and time logic for overlaps, contiguous blocks and so on was repeated all over the place.

As we’ve been carving off bounded contexts and reimplementing them using DDD, I’m proud to say this concept has been identified and separated out into an explicit value type with encapsulated behaviour. We call it a Time Period:

It’s sort of like a .NET TimeSpan but represents a specific period of time, e.g. seven days starting from yesterday morning — not seven days in general.

Here’s the behaviour we’ve implemented so far, taking care of things like comparisons and overlapping periods:

/// <summary>
/// A value type to represent a period of time with known end points (as
/// opposed to just a period like a TimeSpan that could happen anytime).
/// The end point of a TimePeriod can be infinity.
/// </summary>
public class TimePeriod : IEquatable<TimePeriod>
{
    public DateTime Start { get; }
    public DateTime? End { get; }
    public bool IsInfinite { get; }
    public TimeSpan Duration { get; }

    public bool Includes(DateTime date);
    public bool StartsBefore(TimePeriod other);
    public bool StartsAfter(TimePeriod other);
    public bool EndsBefore(TimePeriod other);
    public bool EndsAfter(TimePeriod other);
    public bool ImmediatelyPrecedes(TimePeriod other);
    public bool ImmediatelyFollows(TimePeriod other);
    public bool Overlaps(TimePeriod other);
    public TimePeriod GetRemainingSlice();
    public TimePeriod GetRemainingSliceAsAt(DateTime when);
    public bool HasPassed();
    public bool HasPassedAsAt(DateTime when);
    public float GetPercentageElapsed();
    public float GetPercentageElapsedAsAt(DateTime when);
}
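For illustration, here’s how a couple of these methods might be implemented, treating a null End as an infinite period:

public bool Includes(DateTime date)
{
    // An infinite period never ends, so only the start bound applies.
    return date >= Start && (End == null || date < End.Value);
}

public bool Overlaps(TimePeriod other)
{
    // Two periods overlap when each one starts before the other ends.
    return (other.End == null || Start < other.End.Value)
        && (End == null || other.Start < End.Value);
}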

Encapsulating logic all in one place means we can get rid of all that ugly duplication (DRY), and it still maps cleanly to StartDate/EndDate columns in the database as an NHibernate component or IValueType.
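Day-to-day use then looks something like this (assuming a simple (start, end) constructor):

// Seven days starting from yesterday morning.
var period = new TimePeriod(DateTime.Today.AddDays(-1), DateTime.Today.AddDays(6));

bool current = period.Includes(DateTime.Now);   // are we inside the period?
float elapsed = period.GetPercentageElapsed();  // how far through it are we?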

You can grab our initial implementation here:

  • TimePeriod.cs
  • TimePeriodTests.cs

The Trouble with Soft Delete

Soft delete is a commonly-used pattern amongst database-driven business applications. In my experience, however, it usually ends up causing more harm than good. Here’s a few reasons why it can fail in bigger applications, and some less-painful alternatives to consider.

Tomato, Tomato

I’ve seen a few different implementations of this pattern in action. First is the standard deleted flag to indicate an item should be ignored:

SELECT * FROM Product WHERE IsDeleted = 0

Another style uses meaningful status codes:

SELECT * FROM Task WHERE Status = 'Pending'

You can even give an item a fixed lifetime that starts and ends at a specific time (it might not have started yet):

SELECT * FROM Policy WHERE GETDATE() BETWEEN StartDate AND EndDate

All of these styles are flavors of the same concept: instead of pulling dead or infrequently-used items out of the active set, you simply mark them and change queries to step over the corpses at runtime.

This is a trade-off: soft delete columns are easy to implement, but incur a cost to query complexity and database performance later down the track.

Complexity

To prevent mixing active and inactive data in results, all queries must be made aware of the soft delete columns so they can explicitly exclude inactive items. It’s like a tax: a mandatory WHERE clause to ensure you don’t return any deleted rows.

This extra WHERE clause is similar to checking return codes in programming languages that don’t throw exceptions (like C). It’s very simple to do, but if you forget it in even one place, bugs can creep in very fast. And it’s background noise that detracts from the real intention of the query.

Performance

At first glance you might think evaluating soft delete columns in every query would have a noticeable impact on performance.

However, I’ve found that most RDBMSs are actually pretty good at recognizing soft delete columns (probably because they are so commonly used) and do a good job of optimizing queries that use them. In practice, filtering out inactive rows doesn’t cost too much in itself.

Instead, the performance hit comes simply from the volume of data that builds up when you don’t bother clearing out old rows. For example, we have a table in a system at work that records an organisation’s day-to-day tasks: pending, planned, and completed. It has around five million rows in total, but of those, only a very small percentage (2%) are still active and interesting to the application. The rest are all historical: rarely used, and kept only to maintain foreign key integrity and for reporting purposes.

Interestingly, the biggest problem we have with this table is not slow reads but slow writes. Due to its heavy use, we index the table heavily to improve query performance. But with the number of rows in the table, it takes so long to update these indexes that the application frequently times out waiting for DML commands to finish.

This table is becoming an increasing concern for us — it represents a major portion of the application, and with around a million new rows being added each year, the performance issues are only going to get worse.

Back to the original problem

The trouble with implementing soft delete via a column is that it simply doesn’t scale well for queries targeting multiple tables — we need a different strategy for larger data models.

Let’s take a step back and examine the reasons why you might want to implement soft deletes in a database. If you think about it, there really are only four categories:

  1. To provide an ‘undelete’ feature.
  2. Auditing.
  3. For soft create.
  4. To keep historical items.

Let’s look at each of these and explore what other options are available.

Soft delete to enable undo

Human error is inevitable, so it’s common practice to give users the ability to bring something back if they delete it by accident. But this functionality can be tricky to implement in an RDBMS, so first you need to ask an important question — do you really need it?

There are two styles of undelete I have encountered, both achieved via soft delete:

  1. There is an undelete feature available somewhere in the UI, or
  2. Undelete requires running commands directly against the database.

If there is an undo delete button available somewhere for users, then it is an important use case that needs to be factored into your code.

But if you’re just putting soft delete columns on out of habit, and undelete still requires a developer or DBA to run a command against the database to toggle the flags back, then this is a maintenance scenario, not a use case. Implementing it will take time and add significant complexity to your data model, while providing very little benefit for end users, so why bother? It’s a pretty clear YAGNI violation — in the rare case you really do need to restore deleted rows, you can just get them from the previous night’s backup.

Otherwise, if there really is a requirement in your application for users to be able to undo deletes, there is already a well-known pattern specifically designed to take care of all your undo-related scenarios.

The memento pattern

Soft delete only supports undoing deletes, but the memento pattern provides a standard means of handling all undo scenarios your application might require.

It works by taking a snapshot of an item just before a change is made, and putting it aside in a separate store, in case a user wants to restore or rollback later. For example, in a job board application, you might have two tables: one transactional for live jobs, and an undo log that stores snapshots of jobs at previous points in time:
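Here’s a minimal sketch of how such an undo log might work, assuming jobs are serialized as XML snapshots (the names are hypothetical, and a real implementation would use a separate table rather than an in-memory list):

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Xml.Serialization;

public class Job
{
    public int Id { get; set; }
    public string Title { get; set; }
}

public class JobUndoLog
{
    private readonly XmlSerializer serializer = new XmlSerializer(typeof(Job));
    private readonly List<Tuple<int, DateTime, string>> snapshots =
        new List<Tuple<int, DateTime, string>>();

    // Take a snapshot just before a change or delete is applied.
    public void TakeSnapshot(Job job)
    {
        using (var writer = new StringWriter())
        {
            serializer.Serialize(writer, job);
            snapshots.Add(Tuple.Create(job.Id, DateTime.Now, writer.ToString()));
        }
    }

    // Undelete: deserialize the most recent snapshot so it can be
    // re-inserted into the live table.
    public Job RestoreLatest(int jobId)
    {
        var latest = snapshots
            .Where(s => s.Item1 == jobId)
            .OrderByDescending(s => s.Item2)
            .First();

        using (var reader = new StringReader(latest.Item3))
        {
            return (Job)serializer.Deserialize(reader);
        }
    }
}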

If you want to restore one, you simply deserialize it and insert it back in. This is much cleaner than messing up your transactional tables with ghost items, and lets you handle all undo operations together using the same pattern.

Soft delete for auditing

Another common practice I have seen is using soft delete as a means of auditing: keeping a record of when an item was deleted, and who deleted it. Usually additional columns are added to store this information, e.g. a DeletedOn date and a DeletedBy username.

As with undo, you should ask a couple of questions before you implement soft delete for auditing reasons:

  • Is there a requirement to log when an item is deleted?
  • Is there a requirement to log any other significant application events?

In the past I have seen developers (myself included) automatically add delete auditing columns like this as a convention, without questioning why they’re needed (aka cargo cult programming).

Deleting an object is only one event we might be interested in logging. If a rogue user performs some malicious acts in your application, updates could be just as destructive so we should know about those too.

One possible conclusion from this thought is that you simply log all DML operations on the table. You can do this pretty easily with triggers:

-- Log all DELETE operations on the Product table
CREATE TRIGGER tg_Product_Delete ON Product AFTER DELETE
AS
    INSERT INTO [Log]
    (
        [Timestamp],
        [Table],
        Command,
        ID
    )
    SELECT
        GETDATE(),
        'Product',
        'DELETE',
        ProductID
    FROM
        Deleted
GO

CREATE TRIGGER tg_Product_Update ON Product AFTER UPDATE
AS
-- ...etc

Contextual logging

If something goes wrong in the application and we want to retrace the series of steps that led to it, CREATES, UPDATES and DELETES by themselves don’t really explain much of what the user was trying to achieve. Getting useful audit logs from DML statements alone is like trying to figure out what you did last night from just your credit card bill.

It would be more useful if the logs were expressed in the context of the use case, not just the database commands that resulted from it. For example, if you were tracking down a bug, would you rather read this:

[09:30:24] DELETE ProductCategory 142 13 dingwallr

… or this?

[09:30:24] Product 'iPhone 3GS' (#142) was removed from the catalog by user dingwallr. Categories: 'Smart Phones' (#13), 'Apple' (#15).

Logging at the row level simply cannot provide enough context to give a true picture of what the user is doing. Instead it should be done at a higher level where you know the full use-case (e.g. application services), and the logs should be kept out of the transactional database so they can be managed separately (e.g. rolling files for each month).
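As a sketch, use-case-level logging from an application service might look something like this (the service, repository and log interfaces here are all hypothetical):

public class Product
{
    public int Id;
    public string Name;
}

public interface IProductRepository
{
    Product GetById(int id);
    void Remove(Product product);
}

public interface ILog
{
    void Info(string message);
}

// Hypothetical sketch: the application service knows the whole use case, so
// it can write one meaningful entry instead of a row-level trigger log.
public class CatalogService
{
    private readonly IProductRepository products;
    private readonly ILog log; // e.g. a rolling log file, not a database table

    public CatalogService(IProductRepository products, ILog log)
    {
        this.products = products;
        this.log = log;
    }

    public void RemoveProductFromCatalog(int productId, string user)
    {
        var product = products.GetById(productId);
        products.Remove(product);

        // One entry tells the full story, not just the DML that resulted.
        log.Info(string.Format(
            "Product '{0}' (#{1}) was removed from the catalog by user {2}.",
            product.Name, product.Id, user));
    }
}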

Soft create

The soft delete pattern can also be extended for item activation — instead of simply creating an item as part of the active set, you create it in an inactive state and flick a switch or set a date for when it should become active.

For example:

-- Get employees who haven't started work yet
SELECT * FROM Employee WHERE GETDATE() < StartDate

This pattern is most commonly seen in publishing systems like blogs and CMSs, but I’ve also seen it used as an important part of an ERP system for scheduling changes to policy before it comes into effect.
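The same idea can be made explicit on the entity itself; here’s a hypothetical sketch for a blog post that hasn’t gone live yet:

using System;

// Soft create as a scheduled activation date (hypothetical model).
public class BlogPost
{
    public string Title { get; set; }

    // Null means still a draft; a future date means scheduled to go live.
    public DateTime? PublishOn { get; set; }

    public bool IsLive(DateTime now)
    {
        return PublishOn.HasValue && PublishOn.Value <= now;
    }
}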

Soft delete to retain historical items

Many database-backed business applications are required to keep track of old items for historical purposes — so users can go back and see what the state of the business was six months ago, for example.

(Alternatively, historical data is kept because the developer can’t figure out how to delete something without breaking foreign key constraints, but this really amounts to the same thing.)

We need to keep this data somewhere, but it’s no longer immediately interesting to the application because either:

  • The item explicitly entered a dormant state – e.g. an expired eBay listing or deactivated Windows account, or
  • The item was implicitly dropped from the active set – e.g. my Google Calendar appointments from last week that I will probably never look at again

I haven’t included deleted items here because there are very few cases I can think of in business applications where data is simply deleted (unless it was entered in error) — usually it is just transformed from one state to another. Udi Dahan explains this well in his article Don’t Delete — Just Don’t:

Orders aren’t deleted – they’re cancelled. There may also be fees incurred if the order is cancelled too late.

Employees aren’t deleted – they’re fired (or possibly retired). A compensation package often needs to be handled.

Jobs aren’t deleted – they’re filled (or their requisition is revoked).

Unless your application is pure CRUD (e.g. data grids), these states most likely represent totally different use cases. For example, in a timesheet application, complete tasks may be used for invoicing purposes, while incomplete tasks comprise your todo list.

Different use cases, different query needs

Each use case has different query requirements, as the information the application is interested in depends on the context. To achieve optimal performance, the database should reflect this — instead of lumping differently-used sets of items together in one huge table with a flag or status code as a discriminator, consider splitting them up into separate tables.

For example, in our job board application, we might store open, expired and filled listings using a table-per-class strategy:

Physically separating job listings by state allows us to optimize them for different use cases — focusing on write performance for active items, and read performance for past ones, with different columns, indexes and levels of (de)normalization for each.

Isn’t this all overkill?

Probably. If you’re already using soft delete and haven’t had any problems then you don’t need to worry — soft delete was a sensible trade-off for your application that hasn’t caused any serious issues so far.

But if you’re anticipating growth, or already encountering scalability problems as dead bodies pile up in your database, you might like to look at alternatives that better satisfy your application’s requirements.

The truth is soft delete is a poor solution for most of the problems it promises to solve. Instead, focus on what you’re actually trying to achieve. Keep everything simple and follow these guidelines:

  • Primary transactional tables should only contain data that is valid and active right now.
  • Do you really need to be able to undo deletes? If so, there are dedicated patterns to handle this.
  • Audit logging at the row level sucks. Do it higher up where you know the full story.
  • If a row doesn’t apply yet, put it in a queue until it does.
  • Physically separate items in different states based on their query usage.

Above all, make sure you’re not gold plating tables with soft delete simply out of habit!

Powershell script to find orphan stored procedures

When you’re working on a legacy application that uses over 2,200 stored procedures, it can be hard to keep track of which ones are still active and which can be deleted.

Here’s a quick PowerShell script I wrote that locates stored procedures in a database that aren’t referenced by code or other database objects (assuming you have them scripted in source control).

# find un-used stored procedures
# ---------------------------------------------------------

# C# files
$src = "C:\yourproject\src"

# db objects (e.g. DDL for views, sprocs, triggers, functions)
$sqlsrc = "C:\yourproject\sqlscripts"

# connection string
$db = "Data Source=localhost;Initial Catalog=..."

# ---------------------------------------------------------

echo "Looking for stored procedures..."

$cn = new-object system.data.SqlClient.SqlConnection($db)
$q = "SELECT
        name
FROM
        sys.objects
WHERE
        type in ('P', 'PC')
        AND is_ms_shipped = 0
        AND name NOT IN
        (
                'sp_alterdiagram', -- sql server stuff
                'sp_creatediagram',
                'sp_dropdiagram',
                'sp_helpdiagramdefinition',
                'sp_helpdiagrams',
                'sp_renamediagram',
                'sp_upgraddiagrams'
        )
ORDER BY
        name ASC"

$da = new-object "System.Data.SqlClient.SqlDataAdapter" ($q, $cn)
$ds = new-object "System.Data.DataSet" "dsStoredProcs"
$da.Fill($ds) | out-null

# chuck stored proc names in an array
$sprocs = New-Object System.Collections.Specialized.StringCollection
$ds.Tables[0] | foreach-object {
        $sprocs.Add($_.name) | out-null
}

$count = $sprocs.Count
echo "  found $count stored procedures"

# search in C# files
echo "Searching source code..."
dir -recurse -filter *.cs $src | foreach-object {
        $file = $_.fullname
        echo "searching $file"
        for ($i = 0; $i -lt $sprocs.Count; $i++) {
                $sproc = $sprocs[$i]
                if (select-string -path $file -pattern $sproc) {
                        $sprocs.Remove($sproc)
                        $i-- # removing an item shifts the rest left
                        echo "  found $sproc"
                }
        }
}

# search in NHibernate *.hbm.xml mapping files
echo "Searching hibernate mappings..."
dir -recurse -filter *hbm.xml $src | foreach-object {
        $file = $_.fullname
        echo "searching $file"
        for ($i = 0; $i -lt $sprocs.Count; $i++) {
                $sproc = $sprocs[$i]
                if (select-string -path $file -pattern $sproc) {
                        $sprocs.Remove($sproc)
                        $i-- # removing an item shifts the rest left
                        echo "  found $sproc"
                }
        }
}

# search through other database objects
dir -recurse -filter *.sql $sqlsrc | foreach-object {
        $file = $_.fullname
        echo "searching $file"
        for ($i = 0; $i -lt $sprocs.Count; $i++) {
                $sproc = $sprocs[$i]
                if ($file -notmatch $sproc) { # skip the sproc's own DDL file
                        if (select-string -path $file -pattern $sproc) {
                                $sprocs.Remove($sproc)
                                $i-- # removing an item shifts the rest left
                                echo "  found $sproc"
                        }
                }
        }
}

# list any that are still here (i.e. weren't found anywhere)
$count = $sprocs.Count
echo "Found $count un-used stored procedures."
for ($i = 0; $i -lt $count; $i++) {
        $x = $sprocs[$i]
        echo "  $i. $x"
}

It ain’t too pretty, but it does the job.

Unit tests for private methods are a code smell

This week I attended a talk where some people were discussing techniques for unit testing private methods — they were going on about problems they had getting something called Private Accessors to work with ReSharper.

The tools they mentioned were completely foreign to me, and I wondered why I’d never heard of them. I think the reason is because I never bothered trying to test a private method before.

I take the approach that if you have a private method worthy of having its own tests, then it is worthy of being extracted to a public method on a new class. Tests for private methods are a big smell that you have an SRP violation somewhere. And they will be brittle anyway, so just don’t do it!
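For example (with hypothetical names), instead of reaching into a private helper with accessor tools, promote it to its own class and test it through its public interface:

// Before: a private helper buried inside a class, unreachable by tests.
public class InvoiceImporter
{
    public void Import(string line)
    {
        decimal amount = ParseAmount(line);
        // ...
    }

    private decimal ParseAmount(string line)
    {
        return decimal.Parse(line.Split(',')[2]);
    }
}

// After: the helper extracted to a class with a single responsibility,
// testable directly - and InvoiceImporter just delegates to it.
public class AmountParser
{
    public decimal Parse(string line)
    {
        return decimal.Parse(line.Split(',')[2]);
    }
}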

Domain model refactoring: replace query with composition

Here’s a snippet of ubiquitous language (altered slightly to protect the innocent) from a system I’ve been working on over the past few months:

An Officer is a role played by certain Employees. Each Officer is required to be proficient in a number of Competencies, according to [among other things] what District they’re stationed in.

We originally implemented the Officer-District association as a query, because it comes from a different bounded context (Rostering) and changes frequently as Employees move around.

When creating an Officer role for an Employee, we simply queried to find out what District they were working in, and what Competencies were required to be practiced there. It looked something like this:

public class OfficerRoleFactory : IRoleFactory<Officer>
{
    ...

    public Officer CreateRoleFor(Employee employee)
    {
        District district = districtResolver.GetCurrentLocationOf(employee);
        var requiredCompetencies = competencyRepository
            .GetCompetenciesRequiredToBePracticedIn(district);

        return new Officer(employee, requiredCompetencies);
    }
}

That was great when someone first became an Officer, but it presented a big problem when they wanted to move to a different District. To update their required Competencies, we had to:

  1. Find what Competencies were required because of their old District
  2. Find what Competencies are required in their new District
  3. Add new required Competencies to the Officer and remove any that no longer apply

Our model did not easily permit this because it failed to encapsulate what District an Officer was working in when their required Competencies were assigned (we simply queried for their current location whenever it was needed). Our code got stuck:

public class Officer : IRole
{
    ...

    /// <summary>
    /// Change the District the Officer is working in. Removes any
    /// Competencies no longer required to be practiced and adds
    /// new ones.
    /// </summary>
    public void ChangeDistrict(District newDistrict)
    {
        var oldCompetencies = competencyRepository
            .GetCompetenciesRequiredToBePracticedIn(/* what goes here? */);
        var newCompetencies = competencyRepository
            .GetCompetenciesRequiredToBePracticedIn(newDistrict);

        this.requiredCompetencies.Remove(oldCompetencies);
        this.requiredCompetencies.Add(newCompetencies);
    }
}

An Officer’s old District simply wasn’t defined anywhere.

Make everything explicit

Even without updating Competencies to reflect staff movements, we foresaw a lot of confusion for users between where Training thinks you are and where Rostering says you actually are.

We decided the best way to resolve these issues was to make the Officer-District association a direct property of the Officer that gets persisted within the Training BC.

It seems like a really simple conclusion now, but it took us a while to arrive at, because our heads were stuck in the rest of the system, where pretty much everything (legacy dataset-driven code) queries back to the live Roster tables. Instead we should have been focusing on domain-driven design’s goal of eliminating confusion like this by making implicit concepts explicit:

public class Officer : IRole
{
    /// <summary>
    /// The District the Officer is currently stationed in. He/she must
    /// be proficient in Competencies required there.
    /// </summary>
    public District District { get; set; }

    ...

    /// <summary>
    /// Change the District the Officer is working in. Removes any
    /// Competencies no longer required to be practiced and adds
    /// new ones.
    /// </summary>
    public void ChangeDistrict(District newDistrict)
    {
        var oldCompetencies = competencyRepository
            .GetCompetenciesRequiredToBePracticedIn(this.District);
        var newCompetencies = competencyRepository
            .GetCompetenciesRequiredToBePracticedIn(newDistrict);

        this.requiredCompetencies.Remove(oldCompetencies);
        this.requiredCompetencies.Add(newCompetencies);

        this.District = newDistrict;
    }
}

Benefits

Refactoring away from the query to simple object composition made our domain model a lot easier to understand, and also improved some SOA concerns and the separation between BCs:

  • ‘Where the Training BC thinks you are’ is now an explicit and observable concept. This clears up a lot of confusion both for users wondering why certain Competencies are assigned to them, and developers trying to debug it.
  • It breaks an ugly runtime dependency between the Training and Rostering BCs. Previously, if the DistrictResolver failed for some reason, it would block the Training BC from succeeding because it was called in-line. Now we can take that whole Rostering BC offline and Training won’t notice because it knows for itself where each Officer is stationed.
  • It allows us to deal with staff movements in a much more event-driven manner. Instead of the DistrictResolver returning up-to-the-second results each time, the District is now an explicit property of the Officer aggregate root that only changes when we want it to — e.g. in response to a StaffChangedDistrict domain event. We can now queue these events and achieve better performance via eventual-consistency.
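To illustrate that last point, here’s a sketch of how a StaffChangedDistrict event might be consumed on the Training side (the handler and repository here are hypothetical; Officer and District come from the model above):

public class StaffChangedDistrict
{
    public int EmployeeId { get; set; }
    public District NewDistrict { get; set; }
}

public interface IOfficerRepository
{
    Officer GetByEmployeeId(int employeeId);
}

// Hypothetical sketch: the handler can run asynchronously off a queue, and
// Training catches up eventually (eventual consistency).
public class StaffChangedDistrictHandler
{
    private readonly IOfficerRepository officers;

    public StaffChangedDistrictHandler(IOfficerRepository officers)
    {
        this.officers = officers;
    }

    public void Handle(StaffChangedDistrict e)
    {
        var officer = officers.GetByEmployeeId(e.EmployeeId);
        officer.ChangeDistrict(e.NewDistrict);
    }
}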

Overall I am very happy with this.

Repositories Don’t Have Save Methods

Here’s a repository from an application I’ve been working on recently. It has a pretty significant leaky abstraction problem that I shall be fixing tomorrow:

public interface IEmployeeRepository
{
    void Add(Employee employee);
    void Remove(Employee employee);
    Employee GetById(int id);
    void Save(Employee employee);
}

What’s Wrong with this Picture?

Let me quote the DDD step by step wiki on what exactly a repository is:

Repositories behave like a collection of an Aggregate Root, and act as a facade between your Domain and your Persistence mechanism.

The Add and Remove methods are cool — they provide the collection semantics. GetById is cool too — it enables the lookup of an entity by a special handle that external parties can use to refer to it.

Save, on the other hand, signals that an object’s state has changed (it is dirty), and these changes need to be persisted.

What? Dirty tracking? That’s a persistence concern, nothing to do with the domain. Dirty tracking is the exclusive responsibility of a Unit of Work — an application-level concept that most good ORMs provide for free. Don’t let it leak into your domain model! Stay tuned for more on this topic here!
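Tomorrow’s fix, then, is to drop Save and leave dirty tracking to the unit of work. A sketch (the IUnitOfWork wrapper here is hypothetical; with NHibernate, for example, it would delegate to an ISession and transaction):

using System;

// The repository keeps only collection semantics; no persistence leaks.
public interface IEmployeeRepository
{
    void Add(Employee employee);
    void Remove(Employee employee);
    Employee GetById(int id);
}

public class Employee
{
    public decimal Salary { get; private set; }
    public void IncreaseSalary(decimal amount) { Salary += amount; }
}

public interface IUnitOfWork : IDisposable { void Commit(); }
public interface IUnitOfWorkFactory { IUnitOfWork Begin(); }

// Dirty tracking stays in the unit of work: changes to loaded entities are
// flushed when it commits, with no Save() call on the repository.
public class PayRiseService
{
    private readonly IEmployeeRepository employees;
    private readonly IUnitOfWorkFactory unitOfWorkFactory;

    public PayRiseService(IEmployeeRepository employees,
        IUnitOfWorkFactory unitOfWorkFactory)
    {
        this.employees = employees;
        this.unitOfWorkFactory = unitOfWorkFactory;
    }

    public void GivePayRise(int employeeId, decimal amount)
    {
        using (var unitOfWork = unitOfWorkFactory.Begin())
        {
            var employee = employees.GetById(employeeId);
            employee.IncreaseSalary(amount); // no Save() call needed
            unitOfWork.Commit();             // the unit of work flushes changes
        }
    }
}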