A programmer’s secret weapon: the humble to-do list

A while ago I wrote a post on how to lose traction on a personal software project, based on mistakes I have made myself that have slowed down, or even completely stopped, progress. Today I want to share a tip that has greatly improved my time management since I started doing it, and helped combat the many “programmer’s block” moments I am notorious for.

Coming back to work

After you take a break from a coding project, getting back into the groove can be difficult. Stopping for a while can be a great way to gain perspective and reevaluate what the application should and shouldn’t be, but it can also make it really hard to pick up where you left off.

If you know you will be leaving the project for a while and want to return to it later, you can try leaving a failing test, or some other obvious problem that needs fixing (like a syntax error), as a breadcrumb. But these tricks only keep you going in the short term; it can still be difficult choosing the next big piece of functionality to work on.
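For example, a deliberately failing placeholder test can point straight at tomorrow’s work (the names here are invented for illustration):

[Test]
public void Where_I_left_off()
{
    Assert.Fail("Next: wire the new CommentService into the story controller");
}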

Keeping an eye on the big picture

Have you ever been in the situation where, working on an application, you surround yourself with a forest of rich infrastructure code — configuration, data access, logging, UI widgets etc — and then stop and realise that’s all your application is: an empty shell that doesn’t do anything useful yet?

Or alternatively, do you ever find yourself getting lost in detail, giving unfair attention to making one little component perfect while neglecting the other 90% that needs to be up and running for the alpha release?

All those little extra “clean up” tasks you want to do

While coding I’m always spotting little things I want to clean up: refactorings, missing comments, source code formatting, and so on. I want to finish my current task before starting something new (and can’t until I check in my current work anyway), but I do want to get to these little clean-ups at some stage.

All these problems really boil down to one simple question — what should I work on next?

The programmers’ to-do list

I have found from my own personal experience that sitting down to cut code without a well-thought-out plan of attack is asking for trouble. Beyond the examples above, I generally skip from task to task, dabbling in things I find interesting, but not gaining much real traction towards a useful application.

Over the past few months, I have discovered that keeping a detailed to-do list is a great tool for combating these problems and staying on track. The premise is very simple: take a high-level block of work — e.g. implementing a single user story — and break it down into all the little programming steps you need to do to get there, no matter how insignificant or obvious.

Here’s a dumb example snippet of a to-do list for a web app I am working on in my spare time (the whole thing runs about four pages total). The ‘high level’ tasks get vaguer towards the end, because I haven’t planned them out yet, and right at the bottom there are unimportant clean-up tasks.
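A fragment might look something like this (details invented for illustration):

Users can comment on a story
  • Add Comment class and NHibernate mapping
  • Post-comment form under each story
  • Server-side validation (name, e-mail, body)
  • Show comments, newest first
Users can subscribe to comment feeds
  • Pick RSS vs e-mail notifications
  • …
Admin area
  • ?
Clean-ups
  • Rename StoryManager to StoryService
  • Move magic strings into constants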

The key here is to be fine-grained; you want to see all the little tasks that need to be done, and then tick them off to see your progress. Write them down even if they are always assumed, like validation. A to-do list is like a calendar: it’s only useful if you know all your appointments are in there. And it doesn’t really matter where your high-level tasks start, as long as together they add up to a program that will be useful to a user.

Defer little tasks

If you think of an easy little ‘clean up’ task you’d like to do, don’t do it now — just write it down instead. Why? Because:

  • If you do it now, it could sidetrack you from your current focus
  • It might be a low priority and a poor use of your time right now
  • Instead of doing it now, it could be an easy way to get back into the groove later

For example, if you’ve got half an hour free before going somewhere, and want to spend it on your application, you don’t want to be starting fresh on some giant new piece of work. Why not spend those minutes crossing off one or two of the little clean-up tasks you’ve been meaning to do?

Planning sessions

Alternatively, if you’re not in the mood to code, you could use the time as a planning session and try to think of the next big task you want to achieve, and all the small steps that make it up. This is just as helpful as cutting code, and makes you think about the big picture.

To-do list software

I’m a geek, so naturally, I want some flash tool for managing my to-do list. There are a lot of to-do list applications on the web, like Todoist and Remember the Milk. I tried a few, but eventually just went back to a bulleted, indented Word document because I found it by far the quickest to work with.

Using NUnit to check your IoC container is set up right

One small problem I encountered when getting into Dependency Injection (DI) and Inversion of Control (IoC) was that even though all my services were now beautifully SOLID and test-driven, quite often it was all wasted because I forgot to register them in my IoC container, causing massive errors that wouldn’t be detected until runtime (oops).

Luckily, it is very easy to write a test to check everything has been registered properly. All you need is a bit of reflection to find all the interfaces in an assembly, then try to resolve them all:

[Test]
public void Should_be_able_to_resolve_all_interfaces_in_domain()
{
    // Find every interface in the domain assembly...
    var domain = Assembly.GetAssembly(typeof(Book));
    var interfaces = domain.GetTypes().Where(t => t.IsInterface);

    // ...and assert the container can resolve each one.
    foreach (Type @interface in interfaces)
    {
        Assert.IsNotNull(ServiceLocator.Current.GetInstance(@interface));
    }
}

We can make this even nicer with NUnit 2.5. Instead of a foreach loop with multiple asserts inside one test case (hmm) that can only show one failure at a time (ugh), we can use NUnit parameterized tests to automagically generate a separate test case for each interface we need to resolve.

Each interface gets its own test case in the runner, which is heaps easier to read (otherwise Unity’s exceptions are very terse), and we can see multiple failures in one run. To make NUnit generate these cases without typing them all out by hand, all you need is the new ValueSource attribute, which lets you point at a method, field or property that returns an IEnumerable set of objects:

public IEnumerable<Type> GetDomainInterfaces()
{
    var domain = Assembly.GetAssembly(typeof(Book));
    return domain.GetTypes().Where(t => t.IsInterface);
}

[Test]
public void Should_be_able_to_resolve_domain_service(
    [ValueSource("GetDomainInterfaces")] Type @interface)
{
    Assert.IsNotNull(ServiceLocator.Current.GetInstance(@interface));
}

Note I include this with my integration tests, because it can take a few seconds to run (e.g. I keep my NHibernate ISession in the container, and building the session factory takes a long time).
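One way to keep it out of the fast unit-test run is NUnit’s Category attribute (assuming your build filters on categories):

[Test, Category("Integration")]
public void Should_be_able_to_resolve_domain_service(
    [ValueSource("GetDomainInterfaces")] Type @interface)
{
    Assert.IsNotNull(ServiceLocator.Current.GetInstance(@interface));
}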

Fluent Builder Pattern for classes with long-ish constructors

Last week I discovered a rather wonderful construct for objects with long constructors, e.g. immutable value types:

public class UserProfile
{
    public string City { get; protected set; }
    public string Country { get; protected set; }
    public Uri Url { get; protected set; }
    public string Email { get; protected set; }
    public string Tagline { get; protected set; }

    public UserProfile(string city, string country, Uri url, string email,
        string tagline)
    {
        ...
    }
}

This constructor has bad Connascence of Position (CoP); to construct a UserProfile instance, callers have to know the position of each parameter, and might mix up the city with the country, for example:

// Spot the bug!
var profile = new UserProfile("NZ", "Wellington",
    new Uri("https://richarddingwall.name"), "[email protected]", ".NET guy");

This won’t be a problem with named parameters in C# 4.0, which would let you write something like this:
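// C# 4.0 named arguments make the call site self-describing.
var profile = new UserProfile(
    city: "Wellington",
    country: "NZ",
    url: new Uri("https://richarddingwall.name"),
    email: "[email protected]",
    tagline: ".NET guy");

But until then, a nice alternative is a fluent builder class: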

UserProfile profile = new UserProfileBuilder()
    .WithCity("Wellington")
    .WithCountry("NZ")
    .WithUrl(new Uri("https://richarddingwall.name"))
    .WithEmail("[email protected]")
    .WithTagline(".NET guy");

Builders are very easy to implement. Each With method records its value and returns the current builder instance. Then we provide an implicit cast operator that finally constructs a UserProfile with all the parameters in the right places.

public class UserProfileBuilder
{
    internal string City { get; set; }
    internal string Country { get; set; }
    // ... etc

    public UserProfileBuilder WithCity(string city)
    {
        this.City = city;
        return this;
    }

    // ... etc

    public static implicit operator UserProfile(UserProfileBuilder builder)
    {
        return new UserProfile(builder.City, builder.Country, builder.Url,
            builder.Email, builder.Tagline);
    }
}

I really like this!

WPF: How to Combine Multiple Assemblies into a Single exe

At work, I am currently working on a small WPF app that we want to keep as a simple standalone exe. However, it has dependencies on several third-party assemblies: SharpZipLib, Unity, ServiceLocator, etc.

Microsoft has a handy tool called ILMerge that merges multiple .NET assemblies into a single dll or exe. Unfortunately, it doesn’t support WPF applications, because of the way XAML is compiled.

Instead, you can use another approach — include all your referenced third-party assemblies as embedded resources of the exe (set each DLL’s Build Action to Embedded Resource), then hook the AppDomain’s AssemblyResolve event to load them at runtime:

public partial class App : Application
{
    private void OnStartup(object sender, StartupEventArgs e)
    {
        // Called whenever .NET fails to locate a referenced assembly.
        AppDomain.CurrentDomain.AssemblyResolve +=
            new ResolveEventHandler(ResolveAssembly);
        // proceed starting app...
    }

    static Assembly ResolveAssembly(object sender, ResolveEventArgs args)
    {
        Assembly parentAssembly = Assembly.GetExecutingAssembly();

        // "SomeLibrary, Version=..." -> "SomeLibrary.dll"
        var name = args.Name.Substring(0, args.Name.IndexOf(',')) + ".dll";
        var resourceName = parentAssembly.GetManifestResourceNames()
            .First(s => s.EndsWith(name));

        // Load the embedded DLL from our own manifest resources.
        using (Stream stream = parentAssembly.GetManifestResourceStream(resourceName))
        {
            byte[] block = new byte[stream.Length];
            stream.Read(block, 0, block.Length);
            return Assembly.Load(block);
        }
    }
}

Whenever .NET can’t find a referenced assembly, it will call our code and we can provide an Assembly instance ourselves. Note this code expects a sane DLL-naming convention 🙂

TDD: How to Supersede a Single System Library Call

This morning I read an article by Karl Seguin on allowing clients to replace system calls with delegates (function pointers) for testing purposes — making the untestable testable.

It is a pattern I have used myself recently, but under a different name with a more formalized syntax. Imagine, for example, you have created an interface to decouple e-mail sending dependencies:

public interface IMailSender
{
    void Send(MailMessage message);
}

This allows me to swap out my IMailSender (via Dependency Injection) with a fake implementation for testing purposes – e.g. FakeMailSender or a mock.
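A minimal FakeMailSender might do nothing but record what was sent, so tests can assert against it. Something like:

public class FakeMailSender : IMailSender
{
    // Captured messages, for tests to inspect.
    public readonly List<MailMessage> Sent = new List<MailMessage>();

    public void Send(MailMessage message)
    {
        Sent.Add(message);
    }
}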

Otherwise, production code uses SmtpMailSender, a concrete class I wrote that implements IMailSender via a call to System.Net.Mail.SmtpClient.Send():

public class SmtpMailSender : IMailSender
{
    // The MailSent event is raised every time an email is sent.
    public event EventHandler<MailSentEventArgs> MailSent;

    public void Send(MailMessage message)
    {
        if (message == null)
            throw new ArgumentNullException("message");

        // Send the message using System.Net.Mail.SmtpClient.
        new SmtpClient().Send(message);

        // Notify observers that we just sent a msg.
        if (MailSent != null)
            MailSent(this, new MailSentEventArgs(message));
    }
}

Note this class is a little bit special — it also has an event MailSent that gets raised every time a message is sent. This lets me attach stuff like global e-mail logging very easily, but how can we write a unit test that asserts this event gets raised at the correct time? Because of the dependency on SmtpClient.Send(), my SmtpMailSender class is now facing the exact same problem IMailSender was designed to solve.

Injecting an Action in the constructor

System.Net.Mail.SmtpClient.Send() isn’t virtual, so we can’t mock it, and I don’t really want another interface just for this situation. One solution Karl suggests is injecting an Action that does the actual sending:

public class SmtpMailSender : IMailSender
{
    // The MailSent event is raised every time an email is sent.
    public event EventHandler<MailSentEventArgs> MailSent;

    // The method that actually sends the message.
    readonly Action<MailMessage> send;

    public SmtpMailSender(Action<MailMessage> send)
    {
        this.send = send;
    }

    public void Send(MailMessage message)
    {
        if (message == null)
            throw new ArgumentNullException("message");

        // Send the message using the injected action.
        send(message);

        // Notify observers that we just sent a msg.
        if (MailSent != null)
            MailSent(this, new MailSentEventArgs(message));
    }
}

This solves the dependency problem effectively and without creating new classes and interfaces. However, now users of the SmtpMailSender class are forced to provide the send action in the constructor.

var sender = new SmtpMailSender(new SmtpClient().Send);
sender.Send(...);

If you have a good IoC container it can take care of this, but other users may not be so lucky. There were a few other things I didn’t like as well:

  • 99% of this class’s functionality lies in this single action. A class where 99% of its functionality is injected in a single parameter raises the question of why it needs to exist at all.
  • The default implementation, SmtpClient.Send(), only needs to be overridden in a few test cases. Everyone else shouldn’t have to care about it.
  • Putting random delegates in a constructor makes me feel uncomfortable. Unless it is an intention-revealing named delegate, I don’t think this is a good pattern to be promoting.

Using Supersede Instance Variable instead

In Working Effectively with Legacy Code, Michael Feathers discusses a pattern called Supersede Instance Variable. Although it exists for technical reasons (replacing global dependencies in non-virtual C++ classes, where methods cannot be overridden via subclassing), I believe the pattern fits this usage example well.

Really, the only difference here is that unless a user performs the optional step of overriding the action via a special SupersedeSendAction method, a default is used:

public class SmtpMailSender : IMailSender
{
    public event EventHandler<MailSentEventArgs> MailSent;

    // The method that actually sends the message.
    Action<MailMessage> send;

    public Action<MailMessage> DefaultSend
    {
        get { return new SmtpClient().Send; }
    }

    public SmtpMailSender()
    {
        this.send = DefaultSend;
    }

    public void Send(MailMessage message)
    {
        if (message == null)
            throw new ArgumentNullException("message");

        this.send(message);

        if (MailSent != null)
            MailSent(this, new MailSentEventArgs(message));
    }

    /// <summary>
    /// Supersede the method that is used to send a MailMessage for testing
    /// purposes (supersede instance variable to break the dependency on the
    /// non-mockable SmtpClient.Send() method).
    /// </summary>
    public void SupersedeSendAction(Action<MailMessage> newSend)
    {
        this.send = newSend;
    }
}
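In a unit test, we can now supersede the send action and assert that the MailSent event is raised, without ever touching a real SMTP server. A quick sketch:

[Test]
public void Send_should_raise_MailSent()
{
    var sender = new SmtpMailSender();
    sender.SupersedeSendAction(message => { /* don't actually send anything */ });

    bool wasRaised = false;
    sender.MailSent += (s, e) => wasRaised = true;

    sender.Send(new MailMessage());

    Assert.IsTrue(wasRaised);
}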

It is only a small difference, but as Feathers states, using the supersede term indicates it is a rare deviation from default behaviour, and for special cases only:

One nice thing about using the word supersede as the method prefix is that it is kind of fancy and uncommon. If you ever get concerned about whether people are using the superseding methods in production code, you can do a quick search to make sure they aren’t.

Only supersede when you can’t inject

Remember, though, that Supersede Instance Variable should only be used for very special cases, like individual system library calls in low-level wrapper classes. Dependency Injection with intention-revealing interfaces is still a much better pattern, and should remain your primary tool in all other situations.

How to Lose Traction on a Personal Software Project

So you’ve started writing your first program — great stuff! Here are a few tips and traps to watch out for along the way.

Long periods of time when your program doesn’t compile and/or run at all

You enthusiastically start work on some big changes (e.g. re-architecting your application), but stop because you hit a brick wall, or don’t have the time to finish it. The source code is left in a crippled and unfinished state. You can’t do anything with any of your code until this is fixed, and the longer you leave it, the more you’ll forget and the harder it will be to get started again.

In Continuous Integration this is called a broken build, and is a big no-no because it impacts other people’s ability to work. Without the pressure of a team environment pushing you forward, having a roadblock like this in your path makes it very easy to lose faith and motivation.

Make massive changes on a whim without revision control or a backup

There’s nothing worse than changing your mind about the design of something after you’ve already thrown away the old one. When you do something that involves changing or deleting existing code, be sure to make some sort of backup in case things don’t work out.

If you haven’t taken the plunge with revision control yet, I highly recommend looking at some of the free SVN or Git hosting services out there — once you get into it, you’ll never look back.

Ignore the YAGNI principle and focus on the fun/easy bits

Focusing on things like validation, eye candy or even general purpose ‘utility’ functions is a great way to build up a large complex code base that doesn’t do anything useful yet. Focus on the core functionality of your software first — your main features should be complete before you start thinking about nice-to-haves.

Throw your code away and start from scratch

As Netscape famously discovered a few years ago, throwing away existing code to start afresh is almost never a good idea. Resist the urge and make a series of small, manageable code refactorings instead.

Start programming before you have a clear goal in mind

Instead of a command line tool, maybe my application would be better as a simple GUI application? Or, I was originally writing my homebrew game for my old Xbox 360, but now that we’ve bought a Wii, I’ll port it to that instead!

What are you actually trying to achieve? Spend some time with a pen and some paper coming up with a really clear vision of what you’re trying to create — e.g. screen mock-ups. If you don’t know what you’re writing from the start, the goal posts will keep moving as you change your mind, and you’ll have no chance of finishing it.

Get carried away with project hype before you’ve actually got anything to show for yourself

Spending hours trying to think of the perfect name for your software, designing an icon, choosing the perfect open-source license and making a website won’t get you any closer to having a working application. Get something up and running first, and worry about telling people about it later.

Start a million new features and don’t finish any of them

Jumping from one idea to another without finishing anything is like spinning a car’s wheels and not going anywhere. Make a list of features your program should have, and put them in order of most-to-least important. Work on them, one-at-a-time, in that order.

Capture the Output from a Scheduled Task

Today I had to diagnose a problem with a Windows Scheduled Task that was sporadically failing with a non-zero return code. The exe in question was a .NET console application that was throwing an exception before Main() got called; it was outside our try-catch block.

Anyway, if you ran the .exe from a command line yourself, you would see the error written to stderr. If you ran the Scheduled Task, the error was not logged anywhere.

To capture the output of the scheduled task, I redirected it to a text file with the following command:

before: NightlyBatchJob.exe
after: cmd /C NightlyBatchJob.exe >> NightlyBatchJob.output.txt 2>&1

The > symbol redirects the output to a file; >> makes it append instead of creating a new blank file each time it runs. 2>&1 makes it include the output from stderr with stdout — without it you won’t see any errors in your logs.

The whole command is run in a new cmd.exe instance, because just running an .exe directly from a scheduled task doesn’t seem to produce any console output at all.

IRepository: one size does not fit all

I’ve been spending a lot of time putting TDD/DDD into practice lately, and obsessing over all sorts of small-yet-important details as I try concepts and patterns for myself.

One pattern that has recently become popular in mainstream .NET is IRepository<T>. Originally documented in PoEAA, a repository is an adapter that can read and write objects from the database as if it were an in-memory collection like an array. This is called persistence ignorance (PI), and lets you forget about underlying database/ORM semantics.

Somewhere along the line however, someone realized that with generics you can create a single, shared interface that will support pretty much any operation you could ever want to do with any sort of object. They usually look something like this:

public interface IRepository<T>
{
    T GetById(int id);
    IEnumerable<T> GetAll();
    IEnumerable<T> FindAll(IDictionary<string, object> propertyValuePairs);
    T FindOne(IDictionary<string, object> propertyValuePairs);
    T SaveOrUpdate(T entity);
    void Delete(T entity);
}

If you need to add any custom behaviour, you can simply subclass it — e.g. IProjectRepository : IRepository&lt;Project&gt; — and add new methods there. That’s pretty handy for a quick forms-over-LINQ-to-SQL application.
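For example, something like this:

public interface IProjectRepository : IRepository<Project>
{
    IEnumerable<Project> GetProjectsForUser(User user);
}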

However, I don’t believe this is satisfactory for applications using domain-driven design (DDD), as repositories can vary greatly in their capabilities. For example, some repositories will allow aggregates to be added, but not deleted (like legal records). Others might be completely read-only. Trying to shoehorn them all into a one-size-fits-all IRepository&lt;T&gt; interface will simply result in a lot of leaky abstractions: you could end up with a Remove() method that is available but always throws InvalidOperationException, or team rules like “never call Save() on RepositoryX”. That would be pretty bad, so what else can we do instead?
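Picture a repository for legal records that must never be deleted (a contrived sketch):

public class LegalRecordRepository : IRepository<LegalRecord>
{
    // Forced on us by the interface, but never valid for this aggregate.
    public void Delete(LegalRecord entity)
    {
        throw new InvalidOperationException("Legal records cannot be deleted.");
    }

    // ... the rest of the IRepository<T> members
}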

Possibility #2: no common repository interface at all

The first alternative is simply dropping the IRepository&lt;T&gt; base, and making a totally custom repository for each aggregate.

public interface IProjectRepository
{
    Project GetById(int id);
    void Delete(Project entity);
    // ... etc
}

This is good, because we won’t inherit a bunch of crap we don’t want, but similar repositories will have a lot of duplicate code. Making method names consistent is now left to the developer — we could end up with one repository having Load(), another Get(), another Retrieve() etc. This isn’t very good.

Thinking about it, a lot of repositories are going to fit into one or two broad categories that share a few common methods — those that support reading and writing, and those that only support reading. What if we extracted these categories out into semi-specialized base interfaces?

Possibility #3: IReadOnlyRepository and IMutableRepository

What if we provided base interfaces for common repository types like this:
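Something like this, perhaps (a sketch; the exact members are up for debate):

public interface IReadOnlyRepository<TEntity, TKey>
{
    TEntity GetById(TKey id);
    IEnumerable<TEntity> GetAll();
}

public interface IMutableRepository<TEntity, TKey> :
    IReadOnlyRepository<TEntity, TKey>
{
    TEntity SaveOrUpdate(TEntity entity);
    void Delete(TEntity entity);
}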

This is better, but still doesn’t satisfy all needs. Providing a GetAll() method might be helpful for a small collection of aggregates that we enumerate over often, but it wouldn’t make so much sense for a database of a million customers. We still need to be able to include/exclude standard capabilities at a more granular level.

Possibility #4: Repository Traits

Let’s create a whole load of little fine-grained interfaces — one for each standard method a repository might have.

public interface ICanGetAll<T>
{
    IEnumerable<T> GetAll();
}

public interface ICanGetById<TEntity, TKey>
{
    TEntity GetById(TKey id);
}

public interface ICanRemove<T>
{
    void Remove(T entity);
}

public interface ICanSave<T>
{
    void Save(T entity);
}

public interface ICanGetCount
{
    int GetCount();
}

// ...etc

I am calling these Repository Traits. When defining a repository interface for a particular aggregate, you can explicitly pick and choose which methods it should have, as well as adding your own custom ones:

public interface IProjectRepository :
    ICanSave<Project>,
    ICanRemove<Project>,
    ICanGetById<Project, int>,
    ICanGetByName<Project>
{
    IEnumerable<Project> GetProjectsForUser(User user);
}

This lets you define repositories that can do as little or as much as they need to, and no more. If you recognize a new trait that may be shared by several repositories — e.g., ICanDeleteAll — all you need to do is define and implement a new interface.
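A new trait might be as simple as:

public interface ICanDeleteAll
{
    void DeleteAll();
}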

Side note: concrete repositories can still have generic bases

Out of interest, my concrete PersonRepository looks something like this (details elided):
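public class PersonRepository : NHibernateRepository<Person>, IPersonRepository
{
    // Nothing much to see here: Save(), Remove(), GetById() etc. are all
    // inherited from the generic NHibernateRepository<T> base class, which
    // satisfies the repository traits that IPersonRepository picks. Only
    // custom, aggregate-specific queries need to be added.
}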

There’s very little new code in it, because all of the common repository traits are already satisfied by a generic, one-size-fits-all NHibernateRepository base class (which must be completely hidden from external callers!). The IPersonRepository just defines which subset of its methods are available to the domain layer.

Gallio, the framework-agnostic test runner for .NET

Over the past week, I’ve been converting a bunch of unit tests written with NUnit to Microsoft’s MSTest framework, the tool of choice for this particular project.

I’m not a big fan of either the MSTest framework or Visual Studio’s built-in test runner. It feels like you’re running unit tests in Access. Plus, the last thing my Visual Studio needs is to be cluttered up with more windows and toolbars.

Anyway, here’s where Gallio comes in. It’s an open source, framework-agnostic test runner that supports the MbUnit, MSTest, NBehave, NUnit, and xUnit.net frameworks. Running tests is quite similar to NUnit: you add test assembly paths to a .gallio project file and fire them up in Icarus, Gallio’s graphical test runner.

As well as being a nicer test runner for MSTest, Gallio is ideal for managing tests in a mixed-framework environment. It also integrates with MSBuild, NAnt and CruiseControl.NET for automated testing during your build process.

Nested FOR XML results with SQL Server’s PATH mode

Today, while doing some work on a highly data-driven (not object-driven) .NET application, I needed to output a query as XML from the application’s SQL Server 2005 database. I wanted:

  • Nicely formatted and properly mapped XML (e.g. no <row> elements as found in FOR XML RAW mode)
  • To be able to easily map columns to XML elements and attributes
  • A single root node, so I can load it into an XmlDocument without having to create the root node myself
  • Nested child elements
  • Not to have to turn my elegant little query into a huge mess of esoteric T-SQL (as with [Explicit!1!Mode])

I discovered that all of this is surprisingly easy to achieve with SQL Server 2005’s FOR XML PATH mode. (I say surprisingly, because I’ve tried this sort of thing with FOR XML AUTO a few times before under SQL Server 2000, and gave up each time.)

Here’s a quick example I’ve created using the venerable AdventureWorks example database, with comments against all the important bits:

SELECT
    -- Map columns to XML attributes/elements with XPath selectors.
    category.ProductCategoryID AS '@id',
    category.Name AS '@name',
    (
        -- Use a subquery for child elements.
        SELECT
            ProductID AS '@id',
            Name AS '@name',
            ListPrice AS '@price'
        FROM
            SalesLT.Product
        WHERE
            ProductCategoryID = category.ProductCategoryID
        FOR
            XML PATH('product'), -- The element name for each row.
            TYPE -- Column is typed so it nests as XML, not text.
    ) AS 'products' -- The root element name for this child collection.
FROM
    SalesLT.ProductCategory category
FOR
    XML PATH('category'), -- The element name for each row.
    ROOT('categories') -- The root element name for this result set.

As you can see, we’ve mapped columns to attributes/elements with XPath selectors, and set row and root element names with PATH() and ROOT() respectively.

Plus, by specifying my own names for everything, I was also able to address the differences in capitalization, prefixing and pluralization style between the AdventureWorks table names and typical XML.

Running this query produces output in the following format. Note the root nodes for both outer and child collections:

<categories>
  <category id="4" name="Accessories" />
  <category id="24" name="Gloves">
    <products>
      <product id="858" name="Half-Finger Gloves, S" price="24.4900" />
      <product id="859" name="Half-Finger Gloves, M" price="24.4900" />
      <product id="860" name="Half-Finger Gloves, L" price="24.4900" />
      <product id="861" name="Full-Finger Gloves, S" price="37.9900" />
      <product id="862" name="Full-Finger Gloves, M" price="37.9900" />
      <product id="863" name="Full-Finger Gloves, L" price="37.9900" />
    </products>
  </category>
  <category id="35" name="Helmets">
    <products>
      <product id="707" name="Sport-100 Helmet, Red" price="34.9900" />
      <product id="708" name="Sport-100 Helmet, Black" price="34.9900" />
      <product id="711" name="Sport-100 Helmet, Blue" price="34.9900" />
    </products>
  </category>
</categories>