In defensive programming, guard clauses are used to protect your methods from invalid parameters. In design by contract, guard clauses are known as preconditions, and in domain-driven design, we use them to protect invariants — unbreakable rules that our model assumes always hold:

public class BankAccount
{
    private int balance;

    public void Withdraw(int amount)
    {
        // Guard clause: the amount must be a positive number.
        if (amount <= 0)
            throw new InvalidAmountException(
                "Amount to be withdrawn must be positive.");

        // Guard clause: protect the invariant that the balance
        // can never go negative.
        if ((balance - amount) < 0)
        {
            string message = String.Format(
                "Cannot withdraw ${0}, balance is only ${1}.",
                amount, balance);

            throw new InsufficientFundsException(message);
        }

        balance -= amount;
    }
}

Unfortunately, in examples like this, the true intention of the method — actually withdrawing money — is now lost in a forest of error-checking guard clauses and exception messages. In fact, the successful path — representing 99% of executions (when there is enough money) — accounts for only one line in this method. So let’s refactor:

public class BankAccount
{
    private int balance;

    public void Withdraw(int amount)
    {
        ErrorIfInvalidAmount(amount);
        ErrorIfInsufficientFunds(amount);

        balance -= amount;
    }

    ...
}

By extracting these guard clauses into separate guard methods, the intention of Withdraw becomes much clearer: the explicit method names indicate exactly what is being checked inside (regardless of how those checks are implemented), and we can concentrate on the main success path again.
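For reference, the extracted guard methods might look something like this (a sketch only; as in the original example, the InvalidAmountException and InsufficientFundsException types are assumed to be defined elsewhere):

private void ErrorIfInvalidAmount(int amount)
{
    if (amount <= 0)
        throw new InvalidAmountException(
            "Amount to be withdrawn must be positive.");
}

private void ErrorIfInsufficientFunds(int amount)
{
    if ((balance - amount) < 0)
    {
        string message = String.Format(
            "Cannot withdraw ${0}, balance is only ${1}.",
            amount, balance);

        throw new InsufficientFundsException(message);
    }
}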

April 8th, 2010 | 4 Comments

As you may know, I’ve recently started at a new position here in London. Last week I was pair programming all day, every day, with a couple of the other developers, learning the ins and outs of the system. It reminded me of an important tip for beginners: good pair programming depends on keeping both driver and observer focused, which means minimising the time between making a decision and implementing that change.

Renaming a class or a method can be done throughout your code in a fraction of a second with built-in IDE shortcuts. But something like promoting a static method to an injected service (still a pretty common refactoring) can easily take a couple of minutes to actually do. In this time, there’s not much going on for the observer, except listening to the sound of typing. And waiting. And trying not to get bored.

(If you’re unlucky enough to be paired with me, I’ll probably be completely distracted and playing with all the things on your desk by this point.)

Now, I am no stranger to pair programming, and have used it many times before for small tasks. But this is the first time I’ve ever done it solidly for several days at a time. This past week has been a real eye-opener for me in terms of quickly making code changes; although I have been using ReSharper for a while now, I have a newfound appreciation for its keyboard shortcuts, and for people who can use them effectively. Which sadly doesn’t include me yet.

Using your mouse to navigate code and poke through menus is just too slow, and it really shows when you are refactoring code in front of someone else. A good driver minimises the lag between agreeing on a change and effecting it with lightning-fast typing skills. So unplug your mouse, and go learn your tools’ keyboard shortcuts!

March 4th, 2010 | 5 Comments

In programming, correctness and robustness are two high-level principles to which a number of other principles can be traced back.

Correctness:

  • Design by Contract
  • Defensive programming
  • Assertions
  • Invariants
  • Fail Fast

Robustness:

  • Populating missing parameters
  • Sensible defaults
  • Getting out of the user’s way
  • Anticorruption Layer
  • Backwards-compatible APIs

Robustness adds built-in tolerance for common and non-critical mistakes, while correctness throws an error when it encounters anything less than perfect input. Although they are contradictory, both principles play an important role in most software, so it’s important to know when each is appropriate, and why.

Robustness

“Robustness” is well known as one of the founding principles of the internet, and is probably one of the major contributing factors to its success. Postel’s Law summarizes it simply:

Be conservative in what you do; be liberal in what you accept from others.

Postel’s Law originally referred to other computers on a network, but these days, it can be applied to files, configuration, third party code, other developers, and even users themselves. For example:

  • Problem: a rogue web browser that adds trailing whitespace to HTTP headers.
    Robust approach: strip the whitespace and process the request as normal.
    Correct approach: return an HTTP 400 Bad Request error status to the client.

  • Problem: a video file with corrupt frames.
    Robust approach: skip over the corrupt area to the next playable section.
    Correct approach: stop playback and raise a “Corrupt video file” error.

  • Problem: a config file with lines commented out using the wrong character.
    Robust approach: internally recognize the most common comment prefixes and ignore those lines.
    Correct approach: terminate on startup with a “bad configuration” error.

  • Problem: a user who enters dates in a strange format.
    Robust approach: try parsing the string against a number of different date formats, then render the correct format back to the user.
    Correct approach: show an invalid date error.

(Important side note: Postel’s law doesn’t suggest skipping validation entirely, but that errors in non-essential items simply be logged and/or warned about instead of throwing fatal exceptions.)

In many cases, the relentless pursuit of correctness results in a pretty bad user experience. One recent annoyance of mine is companies that can’t handle spaces in credit card numbers. Computers are pretty good at text processing, so wasting the user’s time by forcing them to retype their credit card number in some exact format is pure laziness on the part of the developer, especially when you consider that the validation error probably took more code than simply stripping the spaces out.
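For a sense of how little code the robust option takes, here is a minimal sketch (the class and method names are invented for illustration):

// Accept "4111 1111 1111 1111" and "4111-1111-1111-1111" alike,
// instead of rejecting anything that isn't packed digits.
public static class CardNumberInput
{
    public static string Normalize(string input)
    {
        return input.Replace(" ", "").Replace("-", "");
    }
}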

Compare this to Google Maps, where you can enter just about anything in the search box and it will figure out a street address. I know which I prefer using.

Robustness makes users’ lives easier, and by doing so promotes uptake. Not only amongst end users, but also developers — if you’re developing a standard/library/service/whatever, and can build in a bit of well thought-out flexibility, it’s going to grant a second chance for users with clients that aren’t quite compliant, instead of kicking them out cold.

Adam Bosworth noted a couple of examples of this in a recent post on what makes successful standards:

If there is something in HTTP that the receiver doesn’t understand it ignores it. It doesn’t break. If there is something in HTML that the browser doesn’t understand, it ignores it. It doesn’t break. See Postel’s law. Assume the unexpected. False precision is the graveyard of successful standards. XML Schema did very badly in this regard.

HTTP and HTML display robustness here: if they see anything they don’t recognize, they simply skip past it. XML Schema, on the other hand, fails validation if any extra elements or attributes are present other than precisely those specified in the XSD.

Correctness

So robustness makes life easier for users and third-party developers. Correctness, on the other hand, makes life easier for your own developers — instead of getting bogged down checking and fixing parameters and working around strange edge cases, they can focus on one single model in which all assumptions are guaranteed. Any state outside the main success path can thus be ignored (by failing noisily), producing code that is briefer, easier to understand, and easier to maintain.

For example, consider parsing XHTML vs regular HTML. If XHTML isn’t well formed, you can simply throw a parser error and give up. HTML on the other hand has thoroughly documented graceful error handling and recovery procedures that you need to implement — much harder than simply validating it as XML.
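This is the appeal of correctness in code: a strict parser can just throw and be done with it. A minimal C# sketch using System.Xml (the error handling shown is just one option):

using System;
using System.Xml;

class StrictParsing
{
    static void Main()
    {
        var doc = new XmlDocument();
        try
        {
            // The closing tag doesn't match, so this isn't well-formed XML.
            // A strict parser rejects it outright; there are no recovery
            // procedures to implement, and everything downstream can
            // assume a valid document.
            doc.LoadXml("<page><title>Hello</page>");
        }
        catch (XmlException e)
        {
            Console.WriteLine("Parse error: " + e.Message);
        }
    }
}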

Which should you use then?

We have a conflict here between robustness and correctness as guiding principles. I remember, from when I was a bit younger, one paralyzing moment of indecision over this very question. The specific details aren’t important, but the 50/50 problem basically boiled down to this: if my code doesn’t technically require this value to be provided, should I check it anyway?

To answer this question, you need to be aware of where you are in the code base, and who it is serving.

  • External interfaces (UI, input files, configuration, APIs etc.) exist primarily to serve users and third parties. Make them robust, and as accommodating as possible, with the expectation that people will input garbage.
  • An application’s internal model (i.e. the domain model) should be as simple as possible, and always be in a 100% valid state. Use invariants and assertions to make safe assumptions, and just throw a big fat exception whenever you encounter anything that isn’t right.
  • Protect the internal model from external interfaces with an anti-corruption layer, which maps and corrects invalid input where possible before passing it on to the internal model (a rough sketch follows below).
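Here is what a tiny anti-corruption layer for the date-format example above might look like. This is a sketch under assumptions: the accepted formats, and the DateInput and Policy names, are invented for illustration.

using System;
using System.Globalization;

// Boundary code (robust): try to repair common variations before giving up.
public static class DateInput
{
    private static readonly string[] AcceptedFormats =
        { "yyyy-MM-dd", "dd/MM/yyyy", "d MMM yyyy", "MMMM d, yyyy" };

    public static DateTime? TryParse(string input)
    {
        DateTime result;
        if (input != null && DateTime.TryParseExact(input.Trim(),
            AcceptedFormats, CultureInfo.InvariantCulture,
            DateTimeStyles.None, out result))
        {
            return result;
        }

        return null; // couldn't make sense of it; ask the user again
    }
}

// Internal model (correct): assumes it is always given a valid date.
public class Policy
{
    public DateTime EffectiveDate { get; private set; }

    public Policy(DateTime effectiveDate)
    {
        if (effectiveDate == DateTime.MinValue)
            throw new ArgumentException("Effective date is required.");

        EffectiveDate = effectiveDate;
    }
}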

Remember if you ignore users’ needs, no one will want to use your software. And if you ignore programmers’ needs, there won’t be any software. So make your external interfaces robust. Make your internal model correct.

Or in other words: internally, seek correctness; externally, seek robustness. A successful application needs both.

February 10th, 2010 | 5 Comments

Today I had the pleasure of fixing a bug in an unashamedly procedural ASP.NET application, made up almost entirely of static methods with anywhere from 5 to 20 parameters each (yuck yuck yuck).

After locating the bug and devising a fix, I hit a wall. I needed some additional information that wasn’t in scope. There were three places I could get it from:

  • The database
  • Form variables
  • The query string

The problem was knowing which to choose, because it depended on the lifecycle of the page — is it the first hit, a reopened item, or a postback? Of course, that information wasn’t in scope either.

As I contemplated how to figure out the state of the page using HttpContext.Current, my spidey sense told me to stop and reconsider.

Let’s go right back to basics. How did we manage this problem back in procedural languages like C? There is a simple rule: always pass everything you need into the function. Global variables and hidden state may be convenient in the short term, but they only serve to confuse later on.

To fix the problem, I had to forget about “this page” as an object instance. I had to forget about private class state. I had to forget that C# was an object-oriented language. Those concepts were totally incompatible with this procedural code base, and any implementation would likely result in a big mess.

In fact, it turned out that to avoid a DRY violation (working out the page state all over again) and hidden dependencies on HttpContext, the most elegant solution was simply to add an extra parameter to every method in the stack. So wrong from an OO/.NET standpoint, but so right for the procedural paradigm in place.
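A rough sketch of what this looks like in practice (the PageLifecycle and OrderPage names are hypothetical, invented for illustration):

// The page lifecycle state is worked out once, at the top of the stack
// where it is known, then passed explicitly to everything below.
public enum PageLifecycle { FirstHit, Reopened, PostBack }

public static class OrderPage
{
    public static void HandleRequest(PageLifecycle lifecycle /*, ... */)
    {
        LoadItem(lifecycle);
    }

    private static void LoadItem(PageLifecycle lifecycle)
    {
        // No reaching into HttpContext.Current; the caller tells us
        // everything we need to know.
        switch (lifecycle)
        {
            case PageLifecycle.FirstHit: /* read the query string */ break;
            case PageLifecycle.Reopened: /* read the database */ break;
            case PageLifecycle.PostBack: /* read form variables */ break;
        }
    }
}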

November 26th, 2009 | No Comments Yet

One of the applications I work on is a planning system, used for managing the operations of the business over the next week, month and financial year.

Almost every entity in this application has a fixed ‘applicable period’ — a lifetime that begins and ends at certain dates. For example:

  • An employee’s applicable period lasts as long as they are employed
  • A business unit’s lifetime lasts from the day it’s formed, to the day it disbands
  • A policy lasts from the day it comes into effect to the day it ends
  • A shift starts at 8am and finishes at 6pm

Previous incarnations of the application simply added StartDate and EndDate properties to every object, and evaluated them ad hoc as required. This resulted in a lot of code duplication — date and time logic for overlaps, contiguous blocks etc. was repeated all over the place.

As we’ve been carving off bounded contexts and reimplementing them using DDD, I’m proud to say this concept has been identified and separated out into an explicit value type with encapsulated behaviour. We call it a Time Period.

It’s sort of like a .NET TimeSpan but represents a specific period of time, e.g. seven days starting from yesterday morning — not seven days in general.

Here’s the behaviour we’ve implemented so far, taking care of things like comparisons and overlapping periods:

/// <summary>
/// A value type to represent a period of time with known end points (as
/// opposed to just a duration like a TimeSpan that could happen anytime).
/// The end point of a TimePeriod can be infinity.
/// </summary>
public class TimePeriod : IEquatable<TimePeriod>
{
    public DateTime Start { get; }
    public DateTime? End { get; }
    public bool IsInfinite { get; }
    public TimeSpan Duration { get; }
    public bool Includes(DateTime date);
    public bool StartsBefore(TimePeriod other);
    public bool StartsAfter(TimePeriod other);
    public bool EndsBefore(TimePeriod other);
    public bool EndsAfter(TimePeriod other);
    public bool ImmediatelyPrecedes(TimePeriod other);
    public bool ImmediatelyFollows(TimePeriod other);
    public bool Overlaps(TimePeriod other);
    public TimePeriod GetRemainingSlice();
    public TimePeriod GetRemainingSliceAsAt(DateTime when);
    public bool HasPassed();
    public bool HasPassedAsAt(DateTime when);
    public float GetPercentageElapsed();
    public float GetPercentageElapsedAsAt(DateTime when);
}

Encapsulating all this logic in one place means we can get rid of all that ugly duplication (DRY), and it still maps cleanly to StartDate/EndDate columns in the database as an NHibernate component or IUserType.
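To give a feel for the semantics, here is a minimal sketch of how a couple of these members might be implemented, assuming (as the summary comment suggests) that a null End represents a period that never ends; the real implementation may differ:

using System;

public class TimePeriod : IEquatable<TimePeriod>
{
    public DateTime Start { get; private set; }
    public DateTime? End { get; private set; }   // null = infinite

    public TimePeriod(DateTime start, DateTime? end)
    {
        if (end.HasValue && end.Value < start)
            throw new ArgumentException("End must not precede Start.");

        Start = start;
        End = end;
    }

    public bool IsInfinite { get { return End == null; } }

    public bool Includes(DateTime date)
    {
        // Treat periods as half-open: [Start, End).
        return date >= Start && (End == null || date < End.Value);
    }

    public bool Overlaps(TimePeriod other)
    {
        // Two periods overlap when each starts before the other ends.
        return (End == null || other.Start < End.Value)
            && (other.End == null || Start < other.End.Value);
    }

    public bool Equals(TimePeriod other)
    {
        return other != null && Start == other.Start && End == other.End;
    }
}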

You can grab our initial implementation here:

November 25th, 2009 | 7 Comments