This week I’ve been working on a brownfield Castle-powered WCF service that was creating a separate NHibernate session on every call to a repository object.

Abusing NHibernate like this was causing all sorts of problems for our app (e.g. TransientObjectExceptions), and prevented us from using transactions that matched a logical unit of work, so I set about refactoring it.

My goals were:
  • One session per WCF operation
  • One transaction per WCF operation
  • Direct access to ISession in my services
  • Rely on Castle facilities as much as possible
  • No hand-rolled code to plug everything together

There are a plethora of blog posts out there tackling this problem, but most of them require a lot of hand-rolled code. Here are a couple of good ones; they both create a custom WCF context extension to hold the NHibernate session, and initialize/dispose it via WCF behaviours:

These work well, but actually, there is a much simpler way that only requires the NHibernate and WCF Integration facilities.

Option one: manual Session.BeginTransaction() / Commit()

The easiest way to do this is to register NHibernate’s ISession in the container, with a per WCF operation lifestyle:

windsor.AddFacility("nhibernate", new NHibernateFacility(...));

windsor.Register(
    Component.For<ISession>()
        .UsingFactoryMethod(x => windsor.Resolve<ISessionManager>().OpenSession())
        .LifeStyle.PerWcfOperation());

If you want a transaction, you have to manually open and commit it. (You don’t need to worry about anything else because NHibernate’s ITransaction rolls back automatically on dispose):

public class MyWcfService : IMyWcfService
{
    readonly ISession session;

    public MyWcfService(ISession session)
    {
        this.session = session;
    }

    public void DoSomething()
    {
        using (var tx = session.BeginTransaction())
        {
            // do stuff

            tx.Commit();
        }
    }
}

(Note of course we are using WindsorServiceHostFactory, so Castle acts as a factory for our WCF services. And a disclaimer: I am not advocating putting data access and persistence directly in your WCF services here; in reality, ISession would more likely be injected into query objects and repositories, each with a per WCF operation lifestyle (you can use this to check for lifestyle conflicts). It is just an example for this post.)

Anyway, that’s pretty good, and allows a great deal of control. But developers must remember to use a transaction, or remember to flush the session, or else changes won’t be saved to the database. How about some help from Castle here?

Option two: automatic [Transaction] wrapper

Castle’s Automatic Transaction Facility allows you to decorate methods with [Transaction], and it will automatically wrap a transaction around them. IoC registration becomes simpler:

windsor.AddFacility("transactions", new TransactionFacility());
windsor.AddFacility("nhibernate", new NHibernateFacility(...));


And using it:

[ServiceBehavior, Transactional]
public class MyWcfService : IMyWcfService
{
    readonly ISessionManager sessionManager;

    public MyWcfService(ISessionManager sessionManager)
    {
        this.sessionManager = sessionManager;
    }

    [Transaction]
    public virtual void DoSomething()
    {
        // do stuff, e.g. sessionManager.OpenSession().Save(...)
    }
}

What are we doing here?

  • We decorate methods with [Transaction] (remember to make them virtual!) instead of manually opening and closing transactions. I put this attribute on the service method itself, but you could put it anywhere: on a CQRS command handler or a domain event handler, for example. Of course, this requires that the class bearing the [Transactional] attribute is instantiated via Windsor, so it can be proxied.
  • Nothing in the NHibernateFacility needs to be registered per WCF operation lifestyle. I believe this is because NHibernateFacility uses the CallContextSessionStore by default, which in a WCF service happens to be scoped to the duration of a WCF operation.
  • Callers must not dispose the session; that will be done by Castle after the transaction is committed. To discourage this, I am using it as a method chain: sessionManager.OpenSession().Save() etc.
  • Inject ISessionManager, not ISession. The reason for this is related to transactions: NHibernateFacility must construct the session after the transaction is opened, otherwise it won’t know to enlist it. (NHibernateFacility knows about ITransactionManager, but ITransactionManager doesn’t know about NHibernateFacility.) If your service depends on ISession, Castle will construct the session when MyWcfService and its dependencies are resolved (time of object creation), before the transaction has started (time of method dispatch). Using ISessionManager allows you to lazily construct the session after the transaction is opened.
  • In fact, for this reason, ISession is not registered in the container at all — it is only accessible via ISessionManager (which is automatically registered by the NHibernate Integration Facility).
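To make the ordering in the last two points concrete, here is a minimal sketch (ArchiveService and Document are hypothetical names, not from the real project):

```csharp
// Resolution vs. dispatch order, assuming the AutoTx facility proxies the service:
//   1. WCF dispatches a call; Windsor resolves the service and its dependencies.
//      If ISession were a constructor dependency, the session would be created
//      HERE, before any transaction exists, so it could not be enlisted.
//   2. The [Transaction] interceptor opens the transaction.
//   3. The method body runs; the first OpenSession() call happens inside the
//      transaction, so the facility enlists the session.
[Transactional]
public class ArchiveService
{
    readonly ISessionManager sessionManager;

    public ArchiveService(ISessionManager sessionManager)
    {
        // cheap to construct: no session has been opened yet
        this.sessionManager = sessionManager;
    }

    [Transaction]
    public virtual void Archive(int documentId)
    {
        // the session is created lazily, after step 2, so it is enlisted
        var doc = sessionManager.OpenSession().Get<Document>(documentId);
        // ... mutate doc; flushing and committing are handled by the facility ...
    }
}
```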

This gives us an NHibernate session per WCF operation, with automatic transaction support, without the need for any additional code.

August 17th, 2010 | 3 Comments

I ran into an interesting problem today with command-based SQL Notifications. We’ve recently introduced SysCache2 on a project as NHibernate’s level 2 cache provider, because of its ability to invalidate regions when underlying data changes, and I already wrote about some issues we had with it. Unfortunately we hit another road block today, this time with the queries for notification themselves.

Here’s the offending config:

<syscache2>
    <cacheRegion name="Tasks" relativeExpiration="9999999">
        <dependencies>
            <commands>
                <add command="SELECT ID FROM Task" connectionName="Xyz" />
                <add command="SELECT ID FROM TaskUser" connectionName="Xyz" />
            </commands>
        </dependencies>
    </cacheRegion>
</syscache2>

Can you spot the bug here that will result in an unhandled exception in ISession.Get()?

Nope? Neither could I for most of this afternoon.

Query for notification timeout errors

The problem is that the Task and TaskUser tables have four and six million rows respectively. SELECT ID FROM TaskUser takes over 90 seconds to execute. At this speed, by the time we had re-subscribed to the query notification, new data would already have been written by other users.

Depending on your exact scenario, you have several options:

  1. Refactor the database schema to remove rows from these tables that aren’t likely to change.
  2. Accept the slow query subscription.
  3. Enable caching, but ignore changes from these tables.
  4. Limit the command to only cover rows that are likely to change, e.g. SELECT ID FROM Task WHERE YEAR(DueDate) = 2009.
  5. Disable level 2 cache for these entities entirely.

Accepting the slow query subscription only works if you have very infrequent writes to the table, where it is worth caching rows for a long time.

For us, the high frequency of writes to these tables means that we would be invalidating the cache region all the time, and the limited sharing of data between users doesn’t give much benefit in caching. Also, blocking an HTTP request thread for 90 seconds is not feasible. So we chose the last option, and now don’t bother caching these tables at all.
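For completeness, opting an entity out of the level 2 cache is just a matter of leaving the cache element out of its mapping; a minimal sketch (mapping details assumed, not from the real project):

```xml
<class name="TaskUser" table="TaskUser">
  <!-- no <cache usage="read-write" region="Tasks" /> element here,
       so this entity never enters the second-level cache -->
  <id name="Id" column="ID">
    <generator class="native" />
  </id>
</class>
```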

By the way, while working on this problem, I submitted my first patch to NH Contrib that adds a commandTimeout setting to SysCache2.
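Assuming the patch is accepted in that form, usage would look something like this (the attribute placement and units are my assumption):

```xml
<!-- hypothetical: give the slow notification subscription more time -->
<add command="SELECT ID FROM TaskUser" connectionName="Xyz" commandTimeout="120" />
```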

October 15th, 2009 | 1 Comment

One of the projects I’m involved in at work is slowly migrating from classic ADO.NET to NHibernate. However, with around 2,200 stored procedures and 30 mapped classes, there is some overlap that cannot be avoided: a number of tables are written by one side but queried by the other.

For performance, we want to use NHibernate’s level 2 cache as much as possible. When rows in the underlying table are changed outside of the NHibernate session however (e.g. by a stored proc), the L2 cache needs to be flushed so it can repopulate with the new updated values.

SysCache2 is a drop-in cache provider for NHibernate that supports SqlDependencies, a SQL Server feature that notifies our application when data changes. It’s pretty easy to set up, but I found mine falling over immediately with the following error:

System.InvalidOperationException: When using SqlDependency without providing an options value, SqlDependency.Start() must be called prior to execution of a command added to the SqlDependency instance.

This isn’t mentioned in the documentation for SysCache2, and there’s not much on Google about it, but as the error message suggests, all you need to do is call SqlDependency.Start() at some point when your app starts. For example, in a web application, you can simply chuck it in global.asax.cs, before you configure the Session Factory.

public class Global : HttpApplication
{
    // connection string of the database being monitored for changes
    static readonly string connectionString =
        ConfigurationManager.ConnectionStrings["Xyz"].ConnectionString;

    protected void Application_Start(Object sender, EventArgs e)
    {
        SqlDependency.Start(connectionString);

        // create SessionFactory, set up container etc
    }

    protected void Application_End(Object sender, EventArgs e)
    {
        SqlDependency.Stop(connectionString);
    }
}
October 9th, 2009 | 1 Comment

Recently we have been introducing NHibernate and a domain layer to an older system that relies primarily on the Enterprise Library Data Access application block and stored procedures for database access.

This has mostly gone pretty smoothly, except in situations where we mix the two strategies inside a transaction. Enterprise Library and NHibernate both manage their own connections, and if we try to wrap them both in a TransactionScope, it gets promoted to a distributed transaction, causing all sorts of headaches. Wouldn’t it be great if NHibernate and Enterprise Library could just share a single SqlConnection instead?

Luckily NHibernate makes this sort of thing really easy to achieve. All you need to do is write a custom IConnectionProvider that wraps Enterprise Library’s Database.CreateConnection(). Then just drop it into your hibernate.cfg.xml:

<hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
    <session-factory>
        <property name="connection.provider">Xyz.EntLibConnectionProvider, Xyz</property>
        <property name="dialect">NHibernate.Dialect.MsSql2005Dialect</property>
        <property name="connection.driver_class">NHibernate.Driver.SqlClientDriver</property>
        <property name="proxyfactory.factory_class">NHibernate.ByteCode.Spring.ProxyFactoryFactory, NHibernate.ByteCode.Spring</property>
    </session-factory>
</hibernate-configuration>

Note you don’t need a connection.connection_string anymore because it retrieves it from the Enterprise Library’s dataConfiguration section:

<dataConfiguration defaultDatabase="XyzProd" />

<connectionStrings>
    <add name="XyzProd"
         connectionString="server=localhost; database=AdventureWorks; UID=user;PWD=word;" />
</connectionStrings>

You can grab the code here: EntLibConnectionProvider.cs
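The heart of the provider is tiny. Here is a rough sketch of what it does (assuming EntLib’s DatabaseFactory and NHibernate’s ConnectionProvider base class; see the linked file for the real thing):

```csharp
using System.Data;
using Microsoft.Practices.EnterpriseLibrary.Data;
using NHibernate.Connection;

// Delegates connection creation to Enterprise Library, so NHibernate and
// EntLib share the same configuration (and the same connection string).
public class EntLibConnectionProvider : ConnectionProvider
{
    public override IDbConnection GetConnection()
    {
        // uses the defaultDatabase from the <dataConfiguration> section
        Database database = DatabaseFactory.CreateDatabase();
        IDbConnection connection = database.CreateConnection();
        connection.Open();
        return connection;
    }
}
```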

September 12th, 2009 | 2 Comments

On my quest for DDD/TDD nirvana, I’m going to start documenting moments of insight on real projects when design problems are solved, principles suddenly make sense for the first time, or pitfalls are avoided. I want to do it partly for posterity, and partly to help others who are also learning. Here’s the first one.

One of my projects at the moment is developing a simple asset management system. It’s basically just a catalog of assets (computers, devices and office furniture) with various details (specs, serial numbers, notes on wear and tear etc). Each asset may be issued to a person.

I was writing some tests for my concrete repositories (mainly in order to check the database mappings are correct), when the question arose: what should happen to a person’s assets when the person is deleted?

My first thought was simply to edit the mapping XML and change a cascade option so Asset.AssignedToPersonID would be set back to NULL. However, I realised this is more than just a foreign key issue for NHibernate — returning a person’s assets when they leave the organisation is a significant domain concern.

If I take the shortcut and leave it implemented as a cascade option:

  • It’ll work fine, but the business rule will be implicit, rather than explicitly expressed somewhere as something that happens when a person is terminated. Other developers probably won’t even notice it at all.
  • The implementation won’t be testable without hitting a live SQL instance.
  • The business rule won’t be observable in other test cases unless they hit a live SQL instance, because mock repositories will likely not factor for it.
  • If we ever move away from NHibernate, or otherwise need to recreate the database mappings, this feature could be lost, and will probably only be rediscovered after FK constraint errors arise.

That’s no good. Let’s make this business rule an official part of the domain model instead, with a fully-fledged domain service:

public interface IPersonTerminatorService
{
    void TerminatePerson(Person person);
}

public class PersonTerminatorService : IPersonTerminatorService
{
    IPersonRepository personRepository;
    IAssetRepository assetRepository;

    public PersonTerminatorService(IPersonRepository personRepository,
        IAssetRepository assetRepository)
    {
        DesignByContract.Require(personRepository != null,
            "personRepository cannot be null.");
        DesignByContract.Require(assetRepository != null,
            "assetRepository cannot be null.");

        this.personRepository = personRepository;
        this.assetRepository = assetRepository;
    }

    public void TerminatePerson(Person person)
    {
        ReturnAllAssetsIssuedToPerson(person);
        // ... other termination concerns go here ...
    }

    public void ReturnAllAssetsIssuedToPerson(Person person)
    {
        IEnumerable<Asset> assets =
            assetRepository.GetAssetsIssuedTo(person); // repository query (name assumed)

        foreach (Asset asset in assets)
            asset.Return(); // domain method that clears the assignment (name assumed)
    }
}
The implementation is not tested yet (and will likely be refactored later because I think it does too much stuff) but here’s what it’s going to satisfy:

[Test]
public void Terminating_a_person_returns_all_assets_issued_to_them()
{
    Asset a = new Computer() { Name = "Apple Powerbook" };
    Asset b = new Computer() { Name = "Dell Dimension" };
    Asset c = new Computer() { Name = "IBM ThinkPad" };

    Person p = new Person("Richard");

    a.IssueTo(p); // member names below are assumed for this sketch
    b.IssueTo(p);
    c.IssueTo(p);

    IPersonTerminatorService terminatorService =
        new PersonTerminatorService(personRepository, assetRepository); // test doubles

    terminatorService.TerminatePerson(p);

    Assert.That(a.IssuedTo, Is.Null);
    Assert.That(b.IssuedTo, Is.Null);
    Assert.That(c.IssuedTo, Is.Null);
}
It’s a start!

January 20th, 2009 | No Comments Yet