Syntax for renaming a foreign key with sp_rename

Assuming the following tables:

CREATE TABLE Shopping.Product (CategoryID INT NOT NULL)
CREATE TABLE Shopping.Category (CategoryID INT PRIMARY KEY)

ALTER TABLE Shopping.Product
    ADD CONSTRAINT FK_Product_Caetgory FOREIGN KEY(CategoryID)
    REFERENCES Shopping.Category

There is a typo in the foreign key name. How would you fix this using sp_rename? Is it…

-- 1. Just the object name as seen in sys.objects:
sp_rename 'FK_Product_Caetgory', 'FK_Product_Category', 'OBJECT'

-- 2. Qualified with the schema:
sp_rename 'Shopping.FK_Product_Caetgory', 'FK_Product_Category', 'OBJECT'

-- 3. Qualified with the schema and table:
sp_rename 'Shopping.Product.FK_Product_Caetgory', 'FK_Product_Category', 'OBJECT'

If you picked #2, you are correct. This took me 10 mins to figure out.
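If you want to double-check the result, the constraint and its schema are visible in the catalog views. A quick sanity query (nothing sp_rename-specific here):

```sql
-- Constraints belong to a schema, not a table, which is why the
-- schema-qualified form in #2 is the one that resolves.
SELECT s.name AS [schema], fk.name AS [constraint]
FROM sys.foreign_keys fk
JOIN sys.schemas s ON s.schema_id = fk.schema_id
WHERE fk.name = 'FK_Product_Category';
```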

Domain-Driven Design and OOP: friction = traction

A few days ago I read a really great post by Seth Godin about Traction and Friction. He’s not a programmer (as far as I know), but he writes a lot of insightful articles about strategy and value that can be applied to everyday subjects.

This particular article struck a chord in me as a good metaphor for the benefits domain-driven design brings to object-oriented code. Here’s a snippet:

A big car on a wet frozen lake goes nowhere. No traction, no motion.

A small bug working its way across a gravel driveway takes forever. Too much friction, too little motion.

If you’re stuck, it’s probably because of one of these two challenges.

The power of online platforms is that they create traction. No, you can’t write more than 140 characters, no you can’t design any layout you want, no you can’t spam everyone with your inane sales pitch. You have something to leverage against, but it’s that thing, the friction, that makes it work.

Object-oriented languages like C# or Java are very powerful things. They provide an infinite canvas on which you can create just about anything, including a huge mess — like the car on the frozen lake — if you start coding without any rules to guide you.

DDD on the other hand provides principles and patterns you can leverage and push off with, like the side of a swimming pool. No you can’t put UI logic in the domain layer, no you can’t use a dataset to model an important business concept, no you can’t reference this entity without going through its aggregate root first. This friction is what makes DDD effective — it gives traction to your OO code.

SQL Notifications: not very practical for large data sets

I ran into an interesting problem today with command-based SQL Notifications. We’ve recently introduced SysCache2 on a project as NHibernate’s level 2 cache provider, because of its ability to invalidate regions when underlying data changes, and I already wrote about some issues we had with it. Unfortunately we hit another road block today, this time with the notification queries themselves.

Here’s the offending config:

<syscache2>
    <cacheRegion name="Tasks" relativeExpiration="9999999">
        <dependencies>
            <commands>
                <add command="SELECT ID FROM Task" connectionName="Xyz" />
                <add command="SELECT ID FROM TaskUser" connectionName="Xyz" />
                ...
            </commands>
        </dependencies>
    </cacheRegion>
</syscache2>

Can you spot the bug here that will result in an unhandled exception in ISession.Get()?

Nope? Neither could I for most of this afternoon.

Query for notification timeout errors

The problem is that the Task and TaskUser tables have four and six million rows respectively. SELECT ID FROM TaskUser takes over 90 seconds to execute. At this speed, by the time we have re-subscribed to the query notification, new data would already have been written by other users.

Depending on your exact scenario, you have several options:

  1. Refactor the database schema to remove rows from these tables that aren’t likely to change.
  2. Accept the slow query subscription.
  3. Enable caching, but ignore changes from these tables.
  4. Limit the command to only cover rows that are likely to change, e.g. SELECT ID FROM Task WHERE YEAR(DueDate) = 2009.
  5. Disable level 2 cache for these entities entirely.

Accepting the slow query subscription only works if you have very infrequent writes to the table, where it is worth caching rows for a long time.

For us, the high frequency of writes to these tables means that we would be invalidating the cache region all the time, and the limited sharing of data between users doesn’t give much benefit in caching. Also, blocking an HTTP request thread for 90 seconds is not feasible. So we chose the last option, and now don’t bother caching these tables at all.

By the way, while working on this problem, I submitted my first patch to NH Contrib that adds a commandTimeout setting to SysCache2.
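With that patch applied, the timeout can be raised for the subscription queries. Treat the snippet below as a sketch only — I’m assuming the setting surfaces as a commandTimeout attribute on the command element, which may differ from whatever shape it takes when the patch is merged:

```xml
<commands>
    <!-- commandTimeout in seconds; attribute name/placement assumed -->
    <add command="SELECT ID FROM Task" connectionName="Xyz" commandTimeout="300" />
</commands>
```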

Life inside an Aggregate Root, part 2

In part one I talked about how entities hold a reference to their parent aggregate root. Today I will talk about how new entities are added to the aggregate. Let’s have a look at the spec again for my example training system:

A Training Programme is comprised of Skills, arranged in Skill Groups. Skill Groups can contain Sub Groups with as many levels deep as you like.

Here’s our original code for adding Skill Groups to a Training Programme:

SkillGroup subGroup = new SkillGroup("First Aid", programme);
programme.Add(subGroup);
// add skills to subgroup etc

There are three things wrong here.

  1. We’re breaking aggregate root boundaries. Skill Groups can only be constructed against one Training Programme, but can be added to as many Training Programmes as you like via the Add(SkillGroup) method.
  2. Anemic domain model: Training Programmes and Skill Groups are just dumb containers here. The actual logic is being performed in the application services layer.
  3. The application services layer knows too much about Skill Groups (how to construct them).

Rule #5: New objects inside the aggregate are created by existing objects

These sorts of problems can be avoided if we move the responsibility for creating Skill Groups to the Training Programme itself. The new keyword is a bit of a smell here — as Udi Dahan recently stated in a blog post, customers don’t just appear out of thin air. Let’s flip it around:

SkillGroup subGroup = trainingProgramme.AddSubGroup("First Aid");
// add skills to subgroup etc

Now the Training Programme knows about creating and adding Skill Groups, and can hide away any factory construction or injection required to construct them. It also eliminates the situation where you could add the same Skill Group to multiple programmes:

SkillGroup subGroup = new SkillGroup("First Aid", programmeA);
programmeA.Add(subGroup);
programmeB.Add(subGroup); // what happens now?

Much better!
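Here is a minimal sketch of what that might look like inside the root. The internal constructor and the backing list are my assumptions, not code from the original system:

```csharp
using System.Collections.Generic;

public class TrainingProgramme
{
    private readonly List<SkillGroup> skillGroups = new List<SkillGroup>();

    public IEnumerable<SkillGroup> SkillGroups
    {
        get { return skillGroups; }
    }

    // The root both creates and registers the group, so a group can
    // never end up attached to two programmes.
    public SkillGroup AddSubGroup(string name)
    {
        var group = new SkillGroup(name, this);
        skillGroups.Add(group);
        return group;
    }
}

public class SkillGroup
{
    // Internal: only the aggregate root may construct Skill Groups.
    internal SkillGroup(string name, TrainingProgramme programme)
    {
        Name = name;
        Programme = programme;
    }

    public string Name { get; private set; }
    public TrainingProgramme Programme { get; private set; }
}
```

Making the constructor internal (or private with a nested factory) is what closes the loophole — there is simply no public way to build a SkillGroup against a second programme.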

Life inside an Aggregate Root, part 1

One of the most important concepts in Domain Driven Design is the Aggregate Root — a consistency boundary around a group of related objects that move together. To keep things as simple as possible, we apply the following rules to them:

  1. Entities can only hold references to aggregate roots, not entities or value objects within
  2. Access to any entity or value object is only allowed via the root
  3. The entire aggregate is locked, versioned and persisted together

It’s not too hard to implement these restrictions when you’re using a good object-relational mapper. But there are a couple of other rules that are worth mentioning because they’re easy to overlook.

Real-life example: training programme

Here’s a snippet from an app I am building at work (altered slightly to protect the innocent). Domain concepts are in bold:

A Training Programme is comprised of Skills, arranged in Skill Groups. Skill Groups can contain Sub Groups with as many levels deep as you like. Skills can be used for multiple Training Programmes, but you can’t have the same Skill twice under the same Training Programme. When a Skill is removed from a Training Programme, Individuals should no longer have to practice it.

Here’s what it looks like, with our two aggregate roots, Training Programme and Skill:

Pretty simple right? Let’s see how we can implement the two behaviours from the snippet using aggregate roots.

Rule #4: All objects have a reference back to the aggregate root

Let’s look at the first behaviour from the spec:

…you can’t have the same Skill twice under the same Training Programme.

Our first skill group implementation looked like this:

public class TrainingProgramme
{
    public IEnumerable<SkillGroup> SkillGroups { get; }

    ...
}

public class SkillGroup
{
    public SkillGroup(string name) { ... }

    public void Add(Skill skill)
    {
        // Error if the Skill is already added to this Skill Group.
        if (Contains(skill))
            throw new DomainException("Skill already added");

        skills.Add(skill);
    }

    public bool Contains(Skill skill)
    {
        return skills.Contains(skill);
    }

    ...

    private IList<Skill> skills;
}

What’s the problem here? Have a look at the SkillGroup’s Add() method. If you try to have the same Skill twice under a Skill Group, it will throw an exception. But the spec says you can’t have the same Skill twice anywhere in the same Training Programme.

The solution is to have a reference back from the Skill Group to its parent Training Programme, so you can check the whole aggregate instead of just the current entity.

public class TrainingProgramme
{
    public IEnumerable<SkillGroup> SkillGroups { get; }

    // Recursively search through all Skill Groups for this Skill.
    public bool Contains(Skill skill) { ... }

    ...
}

public class SkillGroup
{
    public SkillGroup(string name, TrainingProgramme programme)
    {
        ...
    }

    public void Add(Skill skill)
    {
        // Error if the Skill is already added under this Training Programme.
        if (programme.Contains(skill))
            throw new DomainException("Skill already added");

        skills.Add(skill);
    }

    ...

    private TrainingProgramme programme;
    private IList<Skill> skills;
}

Introducing circular coupling like this feels wrong at first, but is totally acceptable in DDD because the AR restrictions make it work. Entities can be coupled tightly to aggregate roots because nothing else is allowed to use them!
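For completeness, the recursive Contains that was elided above might be sketched like this. The subGroups collection and the Add helpers are assumptions based on the spec’s nested groups, not the original code:

```csharp
using System.Collections.Generic;
using System.Linq;

public class Skill
{
}

public class TrainingProgramme
{
    private readonly List<SkillGroup> skillGroups = new List<SkillGroup>();

    public SkillGroup AddGroup(SkillGroup group)
    {
        skillGroups.Add(group);
        return group;
    }

    // Recursively search through all Skill Groups for this Skill.
    public bool Contains(Skill skill)
    {
        return skillGroups.Any(g => g.Contains(skill));
    }
}

public class SkillGroup
{
    private readonly List<Skill> skills = new List<Skill>();
    private readonly List<SkillGroup> subGroups = new List<SkillGroup>();

    // True if the Skill is in this group or any nested Sub Group.
    public bool Contains(Skill skill)
    {
        return skills.Contains(skill)
            || subGroups.Any(g => g.Contains(skill));
    }

    public void AddSkill(Skill skill)
    {
        skills.Add(skill);
    }

    public SkillGroup AddSubGroup(SkillGroup group)
    {
        subGroups.Add(group);
        return group;
    }
}
```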

NHibernate.Caches.SysCache2: don’t forget to call SqlDependency.Start()

One of the projects I’m involved in at work is slowly migrating from classic ADO.NET to NHibernate. However, with around 2,200 stored procedures and 30 mapped classes, there is some overlap that cannot be avoided: a number of tables are written by one side but queried by the other.

For performance, we want to use NHibernate’s level 2 cache as much as possible. When rows in the underlying table are changed outside of the NHibernate session however (e.g. by a stored proc), the L2 cache needs to be flushed so it can repopulate with the new updated values.

SysCache2 is a drop-in cache provider for NHibernate that supports SqlDependencies, a SQL Server feature that notifies our application when data changes. It’s pretty easy to set up, but I found mine falling over immediately with the following error:

System.InvalidOperationException: When using SqlDependency without providing an options value, SqlDependency.Start() must be called prior to execution of a command added to the SqlDependency instance.

This isn’t mentioned in the documentation for SysCache2, and there’s not much on Google about it, but as the error message suggests, all you need to do is call SqlDependency.Start() at some point when your app starts. For example, in a web application, you can simply chuck it in global.asax.cs, before you configure the Session Factory.

public class Global : HttpApplication
{
    protected void Application_Start(Object sender, EventArgs e)
    {
        SqlDependency.Start(Config.DefaultConnectionString);

        // create SessionFactory, set up container etc
    }

    protected void Application_End(Object sender, EventArgs e)
    {
        SqlDependency.Stop(Config.DefaultConnectionString);
    }
}

A wee test helper for setting private ID fields

Every now and then I need to write tests that depend on the ID of a persistent object. IDs are usually private and read-only, assigned internally by your ORM, so I use a little fluent extension method to help set them via reflection.

public static class ObjectExtensions
{
    // Helper to set private ID
    public static T WithId<T>(this T obj, object id)
    {
        typeof(T)
            .GetField("id", BindingFlags.Instance | BindingFlags.NonPublic)
            .SetValue(obj, id);

        return obj;
    }
}

Tweak as desired to suit your naming conventions or base classes. Usage is as follows:

[TestFixture]
public class When_comparing_two_foos
{
    [Test]
    public void Should_not_be_the_same_if_they_have_different_ids()
    {
        var a = new Foo().WithId(4);
        var b = new Foo().WithId(42);

        a.Should().Not.Be.EqualTo(b);
    }
}
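One tweak worth noting: Type.GetField with BindingFlags.NonPublic does not return private fields declared on base classes, so if your id lives on a shared Entity base class the helper above will throw a NullReferenceException. A variant that walks up the hierarchy (the Entity/Foo classes here are hypothetical, and the field name "id" is still assumed):

```csharp
using System;
using System.Reflection;

public abstract class Entity
{
    private object id;
    public object Id { get { return id; } }
}

public class Foo : Entity
{
}

public static class ObjectIdExtensions
{
    // Like WithId, but searches base types for the private "id" field.
    public static T WithInheritedId<T>(this T obj, object id)
    {
        for (Type type = typeof(T); type != null; type = type.BaseType)
        {
            FieldInfo field = type.GetField(
                "id", BindingFlags.Instance | BindingFlags.NonPublic);

            if (field != null)
            {
                field.SetValue(obj, id);
                return obj;
            }
        }

        throw new InvalidOperationException("No private 'id' field found.");
    }
}
```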

Memory leak with Enterprise Library 4 Data block and Execute Reader

Yesterday we had a very strange problem in production with an old sproc-based ASP.NET web application. We had just switched from using a handful of “SqlHelper” classes to the Microsoft Enterprise Library Data Access Application block (DAAB).

Strangely though, after releasing these changes to production, the application became plagued by connection and memory leaks, causing connection pool overflows (over 200 connections to the SQL box at one point) and the IIS application pool to fall over every few minutes.

Memory leaks are notoriously difficult to diagnose. We eventually were able to narrow it down to this offending piece of code, used for running FOR XML queries:

public XmlDocument ExecuteNativeXml(string sprocName, params object[] parameters)
{
    SqlDatabase db = (SqlDatabase)DatabaseFactory.CreateDatabase();
    using (DbCommand command = db.GetStoredProcCommand(sprocName, parameters))
    {
        XmlDocument document = new XmlDocument();

        using (XmlReader reader = db.ExecuteXmlReader(command))
            document.Load(reader);

        return document;
    }
}

Can you spot the memory leak? Everything that implements IDisposable is wrapped in a using block, and DAAB takes care of opening/closing SqlConnections — our code doesn’t have any contact with them at all. So what’s the problem then?

This article about connection pools in SQLMag from 2003 explains it:

…My test application shows that even when you use this [CommandBehavior.CloseConnection] option, if you don’t explicitly close the DataReader (or SqlConnection), the pool overflows. The application then throws an exception when the code requests more connections than the pool will hold.

Some developers insist that if you set the CommandBehavior.CloseConnection option, the DataReader and its associated connection close automatically when the DataReader finishes reading the data. Those developers are partially right — but the option works this way only when you’re using a complex bound control in an ASP.NET Web application. Looping through a DataReader result set to the end of its rowset (that is, when the DataReader’s Read method returns false) isn’t enough to trigger automatic connection closing. However, if you bind to a complex bound control such as the DataGrid, the control closes the DataReader and the connection — but only if you’ve set the CommandBehavior.CloseConnection option.

If you execute a query by using another Execute method (e.g., ExecuteScalar, ExecuteNonQuery, ExecuteXMLReader), you are responsible for opening the SqlConnection object and, more importantly, closing it when the query finishes. If you miss a close, orphaned connections quickly accumulate.

The fix is pretty ugly and unexpected — you have to reach inside the DbCommand and explicitly close its connection yourself:

public XmlDocument ExecuteNativeXml(string sprocName, params object[] parameters)
{
    SqlDatabase db = (SqlDatabase)DatabaseFactory.CreateDatabase();
    using (DbCommand command = db.GetStoredProcCommand(sprocName, parameters))
    {
        XmlDocument document = new XmlDocument();

        using (XmlReader reader = db.ExecuteXmlReader(command))
            document.Load(reader);

        // If you do not explicitly close the connection here, it will leak!
        if (command.Connection.State == ConnectionState.Open)
            command.Connection.Close();

        return document;
    }
}

So the bug was not the fault of DAAB, but it was caused by its use of CommandBehavior.CloseConnection — the recommended technique. A nice trap for young players!

RSS: What I’m reading

Lately I’ve had a few people asking for resources and links to articles about TDD and DDD. I’m a big RSS junkie, so I thought I would just share the feeds I’m currently reading.

I mostly read programming blogs, but you can see a few hints of my hidden UX aspirations as well 🙂

  • 456 Berea Street
  • A List Apart
  • Ayende @ Rahien
  • CodeBetter.Com – Stuff you need to Code Better!
  • {codesqueeze}
  • Coding Horror
  • Coding Instinct
  • Devlicio.us
  • DotNetKicks.com
  • Hendry Luk — Sheep in Fence
  • HunabKu
  • James Newton-King
  • Jimmy Bogard
  • Joel on Software
  • Jon Kruger’s Blog
  • jp.hamilton
  • Lean and Kanban
  • Los Techies
  • Paul Graham: Essays
  • programming – top reddit links
  • Random Code
  • Scott Hanselman’s Computer Zen
  • Scott Muc
  • ScottGu’s Blog
  • Seth’s Blog
  • Smashing Magazine
  • Thoughts From Eric » Tech
  • Udi Dahan – The Software Simplist
  • UI Scraps
  • UX Magazine
  • xProgramming.com
  • you’ve been HAACKED

Architectural examples: how does your app fit together?

I had the pleasure of hearing a talk by Stewart Baird today, and I liked one of his ideas so much I thought I’d mention it here.

Object-relational mappers, IoC, ESBs, REST etc are all a big deal when you’re introducing them to teams or working with contractors who haven’t used them before.

To explain how it all fits together, you should create an architectural example — a reference system that implements a single, simple behaviour. To be effective, it should:

  • Be end-to-end. It should demonstrate how events and data flow from the user interface through to the database and back, including any and all services in between.
  • Be checked into source control. Publish it alongside the production code so everyone can see it.
  • Lead your production code. When you introduce a new component — e.g. a distributed cache — add it to the architectural example first, before rolling it out to your main system.

It’s always nice if you can explore and see everything working for yourself. An architectural example can help explain new topics to less-experienced developers, and provide a nice reference of how you want to see things implemented in future.