Duct-tape programmers ship… once

I read an interesting article today full of praise for duct-tape programmers, a breed of developers who are smart enough to get away without unit tests and ship code early using duct-tape-style programming techniques.

Duct-tape programmers are considered great because they help companies take first-mover advantage, while others waste time developing abstract frameworks and trying to over-engineer everything. Customers and project managers love duct-tape programmers because they get the job done fast.

I see duct-tape programmers in a different way.

Of course you need to ship products fast to beat your competition. But duct tape isn’t a sustainable way to build software, unless you have the luxury of never having to touch the code again (one-off console games come to mind here). Otherwise, the maintenance cost of brittle code will soon come back to bite you, hurting your ability to ship version 2.0 on time.

Our challenge is to find a balance: aim high, but keep an architecture flexible enough to allow duct-tape-style compromises in a clean, isolated fashion when required.

Increase font size, get to the point quicker

A few months ago, I changed all my IDEs and text editors (Visual Studio, SQL Management Studio, Notepad2, Textmate) from their default font sizes up to a much larger 14-16 points.

A bigger font size is easier on the eyes, and encourages you to write shorter lines and smaller methods that fit on one screen so you don’t have to scroll to read them.

It’s not a massive change, but it does provide two useful side-effects: it promotes highly cohesive code (lots of small units of code doing specific little tasks) and it reduces cognitive load (i.e. stress).

Recently I installed a nice new WordPress theme for my blog that looks great and has a much bigger font size than the previous one. After two weeks, I have noticed similar effects creeping into my writing style.

As well as writing bite-sized units of code, I am writing bite-sized blog posts that get to the point a lot quicker. Good stuff.

Make NHibernate and Enterprise Library play nice together

Recently we have been introducing NHibernate and a domain layer to an older system that relies primarily on the Enterprise Library Data Access application block and stored procedures for database access.

This has mostly gone pretty smoothly, except in situations where we mix the two strategies inside a transaction. Enterprise Library and NHibernate each manage their own connections, so if we try to wrap them both in a TransactionScope, it gets promoted to a distributed transaction, causing all sorts of headaches. Wouldn’t it be great if NHibernate and Enterprise Library could just share a single SqlConnection instead?
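To make the problem concrete, here’s roughly the pattern that triggers the promotion. This is a sketch, not code from our system; the stored procedure, sessionFactory and newEmployee are hypothetical stand-ins, and it assumes the usual System.Transactions, Enterprise Library Data and NHibernate namespaces.

using (var scope = new TransactionScope())
{
    // Enterprise Library opens its own connection to run the proc...
    Database db = DatabaseFactory.CreateDatabase();
    db.ExecuteNonQuery("dbo.UpdateLegacyTables");

    // ...then NHibernate opens a second connection, and System.Transactions
    // escalates the whole scope to a distributed transaction (MSDTC).
    using (ISession session = sessionFactory.OpenSession())
    {
        session.Save(newEmployee);
        session.Flush();
    }

    scope.Complete();
}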

Luckily NHibernate makes this sort of thing really easy to achieve. All you need to do is write a custom IConnectionProvider that wraps Enterprise Library’s Database.CreateConnection(). Then just drop it into your hibernate.cfg.xml:

<hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
  <session-factory>
    <property name="connection.provider">Xyz.EntLibConnectionProvider, Xyz</property>
    <property name="dialect">NHibernate.Dialect.MsSql2005Dialect</property>
    <property name="connection.driver_class">NHibernate.Driver.SqlClientDriver</property>
    <property name="proxyfactory.factory_class">NHibernate.ByteCode.Spring.ProxyFactoryFactory, NHibernate.ByteCode.Spring</property>
  </session-factory>
</hibernate-configuration>

Note you don’t need a connection.connection_string any more, because the provider picks up the connection string for Enterprise Library’s default database from its configuration:

<dataConfiguration defaultDatabase="XyzProd" />

<connectionStrings>
  <add name="XyzProd"
       providerName="System.Data.SqlClient"
       connectionString="server=localhost; database=AdventureWorks; UID=user;PWD=word;" />
</connectionStrings>

You can grab the code here: EntLibConnectionProvider.cs
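If you just want the gist, the provider boils down to something like this. It’s a minimal sketch assuming NHibernate 2.x (whose ConnectionProvider base class implements IConnectionProvider, with a protected ConfigureDriver helper) and Enterprise Library’s DatabaseFactory; the file above is the real thing.

using System.Collections.Generic;
using System.Data;
using Microsoft.Practices.EnterpriseLibrary.Data;
using NHibernate.Connection;

namespace Xyz
{
    public class EntLibConnectionProvider : ConnectionProvider
    {
        public override void Configure(IDictionary<string, string> settings)
        {
            // Skip the base class's connection.connection_string check
            // (Enterprise Library owns the connection string), but still
            // let NHibernate wire up its ADO.NET driver.
            ConfigureDriver(settings);
        }

        public override IDbConnection GetConnection()
        {
            // Build the connection from the same Enterprise Library
            // configuration the rest of the app uses.
            Database database = DatabaseFactory.CreateDatabase();
            IDbConnection connection = database.CreateConnection();
            connection.Open();
            return connection;
        }
    }
}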

I can’t decide if this is an application service, or a domain service!

Deciding whether a service belongs in the domain or service layer can be a very tough question.

I think it’s important to remember that just because a class:

  • Deals exclusively with objects in the domain model
  • Is called in turn by other application services
  • Has no dependencies on other non-domain services

…does not make it a domain service.

Try thinking of it not as the domain layer, but as the domain model. The domain model is a very strict abstraction of the business; the domain layer is just the tier in your app where the model resides. It’s a small change of words but a big difference in perspective.

Is your service actually part of the domain model, or just manipulating it?

SOLID ugly code

Today we are working on a system that, among other things, sends notification e-mails to employees when their attention is required. Getting an employee’s e-mail address is normally pretty simple, but this organisation has around 10,000 staff out in the field, many of whom don’t have access to a computer, let alone a work e-mail account.

To counter this problem we use some simple chain-of-command rules:

  1. If the Employee has an e-mail address, send the message to that.
  2. If he doesn’t have one, send it to his immediate manager. If his manager doesn’t have an e-mail address either, keep backtracking up the organisation until you find someone who does.
  3. If still no e-mail address is found, send the message to a system administrator, and they can get the word out via other channels.

The interface for this service is pretty simple. It takes an employee and returns an e-mail address:

/// <summary>
/// Service that can find an email address for an Employee... or the next best
/// alternative if they don't have one.
/// </summary>
public interface IEmailAddressResolver
{
    string GetEmailAddressFor(IEmployee employee);
}

So how am I going to implement it? With a T-SQL stored procedure, of course.

What? That may sound like a pretty bad idea: stored procedures are notorious for leaking application and domain logic into the persistence layer, and they are practically impossible to write tests for. But here is my justification:

  • This is a database-driven legacy app, and only one bounded context has been modeled using DDD so far. The organisational hierarchy is only accessible via SQL, and modeling and mapping the legacy schema with NHibernate would take a couple of weeks at least. Therefore the simplest way to query it is via stored procedure, or stored-procedure backed services.
  • I don’t want to add an e-mail property to Employee because that is an application concern, not part of the domain model. This needs to be done in a different layer, along with usernames, passwords and UI state, and we haven’t really thought about that yet.
  • We’re getting close to the final release date for this project and we have a massive backlog of work remaining. A stored procedure is about the quickest thing I can think of to implement, and everyone in the team is well-versed in SQL.

Putting it into practice, here’s the concrete implementation we wrote. It’s called via NHibernate, so at least we get caching:

// Implements IEmailAddressResolver using a stored proc.
public class EmailAddressResolver : IEmailAddressResolver
{
    readonly ISession session;

    public EmailAddressResolver(ISession session)
    {
        this.session = session;
    }

    public string GetEmailAddressFor(IEmployee employee)
    {
        if (employee == null)
            throw new ArgumentNullException("employee");

        return this.session.GetNamedQuery("employeeEmailAddress")
            .SetParameter("employee", employee)
            .SetCacheable(true)
            .UniqueResult<string>();
    }
}

I’m not even going to show you the stored proc.
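If you’re curious how the proc gets called, though, the named query is declared in an hbm.xml mapping, roughly like the sketch below. The stored procedure name and result column here are stand-ins I made up for illustration, not the real ones.

<?xml version="1.0" encoding="utf-8"?>
<!-- A sketch only: the proc name and column are hypothetical stand-ins. -->
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
  <sql-query name="employeeEmailAddress" cacheable="true">
    <return-scalar column="EmailAddress" type="string" />
    exec dbo.GetEmailAddressForEmployee :employee
  </sql-query>
</hibernate-mapping>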

SOLID lets you write ugly code when you have to

The point of this story is that sometimes you have to write ugly code. But when you do, SOLID lets you do it in a neat, decoupled manner. None of the callers of IEmailAddressResolver have any idea it’s actually just a dirty stored procedure, because the implementation details are all hidden behind an intention-revealing interface. One day we can write a better implementation, swap it in via the IoC container, and no one will be any the wiser.

Using NUnit to check domain event handlers are registered

We’ve been using Udi Dahan’s excellent Domain Events pattern in a project at work. It’s best to keep them as coarse-grained as possible, but we have already identified a dozen or so events that need to be raised by the domain and processed by our services layer.
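For context, the pattern only needs a couple of tiny abstractions, roughly like these. This is a sketch based on Udi Dahan’s article; the exact signatures in your codebase may differ.

// Marker interface for events raised by the domain model.
public interface IDomainEvent
{
}

// Implemented by services-layer handlers that react to a domain event.
public interface IHandles<T> where T : IDomainEvent
{
    void Handle(T args);
}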

Naturally, however, I keep forgetting to register some of the event handlers in our IoC container. So, as before with our domain services, I decided to write some integration tests to check everything is set up properly. This is very simple to achieve using NUnit’s trusty parameterized tests, ServiceLocator and a sprinkling of generics:

IEnumerable<Type> GetDomainEvents()
{
    var domain = Assembly.GetAssembly(typeof(Employee));

    return domain.GetTypes()
        .Where(typeof(IDomainEvent).IsAssignableFrom)
        .Where(t => !t.Equals(typeof(IDomainEvent)));
}

[Test]
public void Should_be_able_to_resolve_handlers_for_domain_event(
    [ValueSource("GetDomainEvents")] Type @event)
{
    var handler = typeof(IHandles<>).MakeGenericType(@event);

    ServiceLocator.Current.GetAllInstances(handler).Should().Not.Be.Empty();
}

This reveals a nice todo list of all the handlers we haven’t implemented yet. And the ones I forgot to register!

Law of Demeter is easy to spot when you need extra mocks

In code, the Law of Demeter (aka the one-dot rule) is a principle that basically states:

void DoSomething(IFoo foo)
{
    foo.GetStatus(); // good
}

void DoSomething(IFoo foo)
{
    foo.Profile.GetStatus(); // bad
}

In this example, DoSomething() knows intimate details about what an IFoo’s insides look like. This is bad because it adds coupling that will bog us down later: if we ever want to change the internal composition of IFoo, we will also have to update all the nosy methods like DoSomething() that depend on it.

In some situations this rule really doesn’t matter, e.g. for trivial or built-in types (datasets come to mind). But in other situations we definitely want to avoid this kind of coupling. One trick I have discovered for identifying trouble spots is a code smell you might encounter when isolating something for unit testing:

var foo = new Mock<IFoo>();
var profile = new Mock<IProfile>();

profile.Setup(p => p.GetStatus()).Returns(/* the thing we are testing */);
foo.SetupGet(f => f.Profile).Returns(profile.Object);

DoSomething(foo.Object); // assert etc

Does this code look familiar? In it we are setting up two mocks:

  • The Profile instance, which has a method we want to stub out and verify
  • The parent IFoo, which only exists to return the child Profile

The code smell is all the extra setup cruft required: two levels of nested mocks for just one parameter we are testing. If we apply the Move Method refactoring and provide a GetStatus() method directly on IFoo (that internally delegates to Profile), our test immediately becomes a lot clearer:

var foo = new Mock<IFoo>();
foo.Setup(p => p.GetStatus()).Returns(/* the thing we are testing */);

DoSomething(foo.Object); // assert etc
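For completeness, here’s a sketch of what the refactored code might look like, assuming a string status for illustration. The delegation to Profile becomes an implementation detail that callers (and tests) never see.

public interface IFoo
{
    string GetStatus();
}

public class Foo : IFoo
{
    readonly IProfile profile;

    public Foo(IProfile profile)
    {
        this.profile = profile;
    }

    public string GetStatus()
    {
        // The one dot lives here now, behind the interface,
        // instead of in every caller.
        return profile.GetStatus();
    }
}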

Does your Visual Studio run slow?

Recently I’ve been getting pretty annoyed with my Visual Studio 2008, which has been taking longer and longer to perform my favourite menu command, Window > Close All Documents. Today was the last straw: I decided 20 seconds to close four C# editor windows really isn’t acceptable on a machine with four gigs of RAM, so I went looking for some fixes.

Here are some of the good ones I found that worked. Use at your own risk of course!

Disable the customer feedback component

In some scenarios Visual Studio may try to collect anonymous statistics about your code when closing a project, even if you opted out of the customer feedback program. To stop this time-consuming behaviour, find this registry key:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\VisualStudio\9.0\Packages\{2DC9DAA9-7F2D-11d2-9BFC-00C04F9901D1}

and rename it to something invalid:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\VisualStudio\9.0\Packages\Disabled-{2DC9DAA9-7F2D-11d2-9BFC-00C04F9901D1}

Clear Visual Studio temp files

Deleting the contents of the following temp directories can fix a lot of performance issues with Visual Studio and web projects:

C:\Users\richardd\AppData\Local\Microsoft\WebsiteCache
C:\Users\richardd\AppData\Local\Temp\Temporary ASP.NET Files\siteName

Clear out the project MRU list

Apparently Visual Studio sometimes accesses the files in your recent projects list at random times, e.g. when saving a file. I have no idea why it does this, but it can have a big performance hit, especially if some of them are on a network share that is no longer available.

To clear your recent project list out, delete any entries from the following path in the registry:

HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\9.0\ProjectMRUList

Disable AutoRecover

In nearly four years, I have never once used Visual Studio’s AutoRecover feature to recover work. These days, source control and regular saving almost entirely eliminate the need for it.

To disable it and gain some performance (particularly with large solutions), go to Tools > Options > Environment > AutoRecover and uncheck Save AutoRecovery information. (Cheers Jake for the tip)

Three reasons why a unit test might fail

Here is a little convention I have adopted for check-in comments when fixing red lights in NUnit:

  • A failing test is when the issue lies in production code. It fails because the code it is testing has bugs or isn’t fully implemented to spec yet.
  • A buggy test is when the test itself has an issue, and the production code is totally fine. For us these most commonly occur when we forget to set something up, e.g. null test objects or unconfigured mocks.
  • A deprecated test fails because the behaviour of the code is now intentionally different than before, and the assertions it makes are no longer valid.

A fourth, more insidious case is the missing test, where possible paths through production code exist without any tests that specify what their behaviour should be. If you’re doing TDD this should never happen, but when it does it can be quite difficult to track down. A coverage tool like NCover can help, but only if you miss a whole block or method. Otherwise you need to use your programmer spidey sense and stay on the look-out for uncovered situations.

Hopefully most of your red lights are due to failing and deprecated tests. If you get a lot of buggy or missing ones, then it probably means you are skipping the red step of red-green-refactor.