SP1 for Visual Studio 2008, .NET 3.5 and TFS 2008

Microsoft have released the following service packs this morning:

  • Visual Studio 2008 SP1: download bootstrapper (536KB) or redistributable (831MB)
  • .NET Framework 3.5 SP1: download bootstrapper (2.8MB) or redistributable (231MB)
  • Team Foundation Server 2008 SP1: download redistributable (133MB)

Note that if you have previously installed hotfixes for Visual Studio 2008, you must run the Hotfix Cleanup Utility before installing Service Pack 1.

These service packs add full support for SQL Server 2008 (which went RTM last week) in Visual Studio 2008. The ADO.NET Entity Framework is out of beta!

Are your ASP.NET MVC URLs consistent?

Over the past couple of months, I’ve finally started work on my first web application built using the ASP.NET MVC framework. It’s going pretty well now, but at the start, in typical fashion, I spent more time googling and trawling message boards than actually cutting code.

My first head-scratching moment was around the naming of controllers and actions that form the path at the end of a URL. Specifically:

  • Are controller names plural or singular?
  • In a route, which comes first: the ID or the action?
  • Should I call this action add, create, insert or new?

Don’t laugh. This stuff will come as naturally as breathing to Rails developers — in fact, all of those decisions are already made for you if you use a tool like the scaffold generator to stub your code out. But with ASP.NET MVC, insufficient guidance over these sorts of choices leads to uncertainty, which leads to indecision, which leads to me mass-refactoring entire projects while I obsess over getting my object names perfect.

As you can imagine, this sort of thing is something I prefer to avoid. Luckily, some people far brighter and more experienced than me have already thought about this stuff, and have answers for all of my questions.

Are controller names plural or singular?

The ASP.NET MVC Framework uses a REST-based request model; instead of serving up files, it serves up resources. Resources are abstract concepts – like articles on Wikipedia or videos on YouTube – that together comprise an application’s content.

Such resources are stored in repositories, which are implemented in ASP.NET MVC as controllers.

Repository names are plural – they represent a collection of objects. For example, you might have a controller called Products. A Product – singular – is the type of resource it serves. It’s similar to the name you’d give an array or database table.
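To make the convention concrete, here’s a hypothetical pair of URLs showing the repository/resource split (the names are illustrative, not from any real project):

```
/products            ->  ProductsController: the repository (plural)
/products/detail/5   ->  a single Product resource (singular)
```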

What order do I put the URL parameters?

I can’t find any reason why you’d want to deviate from the default here, although I did briefly try an OO-inspired controller/id/action route before I realised what was happening.

ASP.NET MVC is the same as Rails: first comes the controller name, then the action, then the ID of the resource in question:
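In route-pattern terms, using the placeholder names from the default ASP.NET MVC project template, that ordering looks like this:

```
{controller}/{action}/{id}

/products/edit/5   ->  ProductsController, Edit action, resource ID 5
```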


Some people have suggested additional standard parameters you might like to put after this. One is pagination – if you split a list (e.g. search results) across several pages:
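One possible shape for a paging parameter – a hypothetical route, with an illustrative segment name – is to hang the page number off the end:

```
{controller}/{action}/{page}

/products/search/2   ->  page 2 of the search results
```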


And another is alternative representation formats (e.g. RSS, XML, CSV etc):
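A format suffix could be sketched as a hypothetical route where the file extension maps to a format parameter:

```
{controller}/{action}.{format}

/products/list.rss   ->  the product list as an RSS feed
/products/list.csv   ->  the same list as comma-separated values
```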


If the output of your action can be represented in different formats, I think this is a pretty neat way of specifying which one you want.

How do I name my controller actions?

This depends on what route handler you’re using. If you’re using Adam Tybor’s Simply Restful Routing, then the decision is made for you – the action names are copied from Rails.

However, if you’re using the ASP.NET MVC default, you’ve got a bit more freedom. Stephen Walther, a product manager and ASP.NET MVC guru at Microsoft, has come up with a series of standard controller action names. These are based loosely on the seven Rails controller actions, but exclude the HTTP PUT and DELETE verbs, which aren’t supported by the default ASP.NET MVC route handler. They are as follows:

Controller action   HTTP verb   Description
-----------------   ---------   -----------
Detail              GET         Displays a single resource such as a database record.
Index               GET         Displays a collection of resources.
Create              GET         Displays a form for creating a new resource.
Insert              POST        Inserts a new resource into the repository.
Edit                GET         Displays a form for editing an existing resource.
Update              POST        Updates an existing resource.
Destroy             GET         Displays a page that confirms whether or not you want to delete a resource.
Delete              POST        Deletes a resource.

He also suggests some actions for non-resource-based controllers – e.g. Home:

Controller action   HTTP verb   Description
-----------------   ---------   -----------
Login               GET         Displays a login form.
Authenticate        POST        Authenticates a user name and password.
Logout              POST        Logs out a user.

If you follow these guidelines, you will have a consistent and predictable URL format. Enforcing a standard URL format means fewer decisions to make during development, and makes things clearer for end users.

Upgrading your jail-broken 1.1.4 iPhone to 2.0 firmware

iPhone home screen

On Sunday afternoon, I finally got around to upgrading my jail-broken first-generation iPhone to the version 2.0 firmware. Previously, it was running 1.1.4, unlocked with ZiPhone.

Overall, apart from forgetting to back up my contacts before I did the factory restore, the whole upgrade process was much easier than I expected. Here’s how I did it, using my Powerbook G4.

What you’ll need:

  • The latest version of iTunes.
  • The version 2.0 firmware update: iPhone1,1_2.0_5A347_Restore.ipsw (218MB).
  • PwnageTool 2.0 (19MB).
  • A cable to plug your iPhone into your Mac.

Let’s get started.

  1. Plug your iPhone in and back up everything you want to keep.
  2. Reset your iPhone to factory defaults. In iTunes, hold down Option and click Restore. Select the iPhone1,1_2.0_5A347_Restore.ipsw file you downloaded before. It will take a few minutes to restore.
  3. When the restore is complete and your phone has turned back on again, open up PwnageTool. Set the device to iPhone and enable Expert Mode. Load the same iPhone1,1_2.0_5A347_Restore.ipsw file from before.
  4. When prompted, disable the stupid pineapple and Steve Jobs logos. Everything else I left at default settings.
  5. Complete the Pwnage process and follow any onscreen instructions (e.g. entering DFU mode). It will generate a patched IPSW file which will be used to load onto the phone.
  6. Return to iTunes, hold Option and click Restore again. This time select the custom IPSW file that was created by PwnageTool.
  7. When it finishes restoring, your iPhone will restart and do some BootNeuter stuff. After that, it should be back up and running with version 2.0 of the operating system.

If you’re using the Vodafone New Zealand network, iTunes will prompt you to download and install a new carrier bundle. If you don’t have an iPhone data plan, this will break your default APN, and GPRS data will stop working. To fix this, browse to unlockit.co.nz on your iPhone and click the “Allow Data on any SIM” link.

C# 3.0’s var keyword: Jeff Atwood gets it all wrong

One of the new features of C# 3.0 is the var keyword, which can be used to implicitly declare a variable’s type. Instead of writing the type name, you can just write var, and the compiler will figure out what the type should be from the value assigned to it:

// A variable explicitly declared as a certain type.
string name = "Richard";

// A variable implicitly declared as the type being assigned to it.
var name = "Richard";

Note that a var isn’t a variant, so it’s quite different from, say, a var in JavaScript. C# vars are strongly typed, so you can’t re-assign them with values of a different type:

var name = "Richard";
name = 62; // error, can't assign an int to a string

Vars were introduced to support the use of anonymous types, which don’t have type names. For example:

var me = new { Name = "Richard", Age = 22 };

But there’s nothing to stop you from using vars with regular types. In a recent article on codinghorror.com, Jeff Atwood recommends using vars “whenever and wherever it makes [your] code more concise”. Guessing from his examples, this means everywhere you possibly can.

The logic behind this is that it improves your code by eliminating the redundancy of having to write type names twice — once for the variable declaration, and again for a constructor.

There’s always a tradeoff between verbosity and conciseness, but I have an awfully hard time defending the unnecessarily verbose way objects were typically declared in C# and Java.

BufferedReader br = new BufferedReader (new FileReader(name));

Who came up with this stuff?

Is there really any doubt what type of the variable br is? Does it help anyone, ever, to require another BufferedReader on the front of that line? This has bothered me for years, but it was an itch I just couldn’t scratch. Until now.

I don’t think he’s right on this one — my initial reaction is that using var everywhere demonstrates laziness by the programmer. And I don’t mean the good sort of laziness which works smarter, not harder, but the bad sort, which can’t be bothered writing proper variable declarations.

Jeff is right though – writing the type name twice does seem a little excessive. However, as Nicholas Paldino notes, the correct way to eliminate this redundancy would be to make the type name on the right-hand side optional, not the left:

While I agree with you that redundancy is not a good thing, the better way to solve this issue would have been to do something like the following:

MyObject m = new();

Or if you are passing parameters:

Person p = new("FirstName", "LastName");

Where in the creation of a new object, the compiler infers the type from the left-hand side, and not the right.

Microsoft’s C# language reference page for var also warns about the consequences of using var everywhere.

Overuse of var can make source code less readable for others. It is recommended to use var only when it is necessary, that is, when the variable will be used to store an anonymous type or a collection of anonymous types.

In the following example, the balance variable could now theoretically contain an Int32, Int64, Single (float), Double, Decimal, or even a ‘Balance’ object instance, depending on how the GetBalance() method works.

var balance = account.GetBalance();

This is rather confusing. Plus, unless you add an explicit cast, a var always takes the concrete type being constructed. Let’s go back to Jeff’s BufferedReader example, of which he asks, “is there really any doubt what type of the variable br is?”

BufferedReader br = new BufferedReader (new FileReader(name));

Actually, yes there is, because polymorphism is used quite extensively in .NET’s IO libraries. The fact that br is being implemented with a BufferedReader is most likely irrelevant. All we need is something that satisfies the Reader base class’s contract. So br might actually look like this:

Reader br = new BufferedReader (new FileReader(name));

Just because a language allows you to do something, doesn’t mean it’s a good idea to do so. Sometimes new features adapt well to solving problems they weren’t designed for, but this is not one of those situations. Stick to using vars for what they were designed for!

Boost: How do I write a unit test for a signal?

Today, while writing some unit tests, I encountered a challenge. The user story was that, when a Person’s details are updated, the display should be updated to reflect the changes.

I’d implemented this feature using a signal on the person class that is raised whenever any of the person’s details are updated:

class person
{
public:
    // Set the person's name.
    void name(const std::string & name)
    {
        name_ = name;
        updated(*this);
    }

    // A signal that will be called when the person's details are updated.
    boost::signal<void (const person & person)> updated;

private:
    // The person's name.
    std::string name_;
};

This is a fairly standard application of an observer pattern that you might find in any MVC application.

But the question is, using the Boost unit test framework, how can I test if my signal has been called?

The mock signal handler

To test the signal handler, we’ll use a functor as a mock signal handler that sets an internal flag when it gets called. In the functor’s destructor, we’ll check the flag to make sure it got set:

struct mock_handler
{
    mock_handler(const person & expected_person) :
        has_been_called_(false), expected_person_(expected_person) {}

    // The signal handler function.
    void operator()(const person & person)
    {
        has_been_called_ = true;
        BOOST_CHECK_EQUAL(&person == &expected_person_, true);
    }

    // This handler must be called before it goes out of scope.
    ~mock_handler()
    {
        BOOST_CHECK_EQUAL(has_been_called_, true);
    }

    bool has_been_called_;
    const person & expected_person_;
};

The test case

Once we’ve written a mock, the test case is pretty simple. Note that I wrap my handler with a boost::ref, so that it doesn’t get copied.

// Test that setting a new name triggers the person.updated signal.
BOOST_AUTO_TEST_CASE(setting_name_triggers_update_signal)
{
    person subject;
    mock_handler handler(subject);
    subject.updated.connect(boost::ref(handler));

    // Change the person's name, triggering the updated signal.
    subject.name("New name");
}

This works great. And if we comment out the updated signal call in person::name():

Running 1 test case... person_test.cpp(49): error in "setting_name_triggers_update_signal": check has_been_called_ == true failed [0 != 1]
*** 1 failure detected in test suite "tests"

…then the test case will fail accordingly.

Using the Boost Unit Test Framework with Xcode 3

When writing C++ code, I frequently use the Boost C++ libraries, which includes all sorts of great libraries to complement the C++ Standard Template Library (STL).

Recently I’ve been playing around with Boost’s Test library — in particular, the Unit Test Framework. Here’s a quick guide on how you can integrate it into a project in Xcode, Apple’s IDE.

You’ll need Xcode 3 and Boost 1.35 installed, and a C++ project to play with. For this example, I’m using a C++ Command Line Utility project.

Add a target for the tests executable

First, let’s create a new target for the executable that will run all our tests.

Right click on Targets, and click Add > New Target. Select Shell Tool from the list. Use “Tests” for the target’s name.

On the Build tab of the Target Info window, add Boost’s install paths to the Header and Library Search Paths. On this machine, I used MacPorts to install Boost, which uses /opt/local/include and /opt/local/lib, respectively.

Right click on the Tests target, and click Add > Existing Frameworks. Browse and select libboost_unit_test_framework-mt-1_35.dylib from your library search path. Add it to your Tests target.

Now the Boost unit test framework is ready to be used from your Xcode project. Let’s use it!

Writing some test suites

My application has two classes, called a and b. They both have bugs. I’ve created a new group called Tests, and written a simple test suite for each of them.

Here’s my b_tests.cpp file. The examples I’ve used aren’t important, but the case and suite declarations are.

// Tests for the 'b' class.
#include <boost/test/unit_test.hpp>

#include "b.h" // the class under test (header name assumed)

BOOST_AUTO_TEST_SUITE(b_tests)

BOOST_AUTO_TEST_CASE(subtract_test)
{
	// Ensure that subtracting 6 from 8 gives 2.
	BOOST_CHECK_EQUAL(b::subtract(8, 6), 2);
}

BOOST_AUTO_TEST_SUITE_END()
First, make sure all your .cpp files are added to the Tests target. Then, to tie all the test files together into a single executable, we’ll use the test framework’s automatic main() feature in our own file called tests_main.cpp:

#define BOOST_TEST_MODULE my application tests

#include <boost/test/unit_test.hpp>

Set the active target to Tests. To check everything got wired up correctly, open the debugger window and hit Build and Go. You should see a bunch of tests fail.

Running tests as part of the build process

Our tests are set up, so let’s integrate them into the build process. Right-click on the Tests target, and click Add > New Run Script Build Phase. Paste the following into the script field (this will resolve to the tests executable):

"${TARGET_BUILD_DIR}/${EXECUTABLE_PATH}"
To keep things obvious, I renamed our new Run Script phase to Run Tests.

Now let’s set up a new rule – the main target should only build after tests have been built and run successfully. Right-click on the MyApp target, click Get Info, and add Tests to the Direct Dependencies list.

Change the active target to MyApp, and hit Build. The tests should fail and return an error code, which Xcode will pick up as a script error.

If the tests don’t succeed, the build will fail, which is exactly what we want.

Parsing the test results

So far, we’ve got the tests running and integrated into the build process, but the test output is a bit rough. Let’s fix that.

Xcode will automatically parse script output if it’s prefixed the right way. Unfortunately, none of the Boost Unit Test Framework’s run-time parameters can produce this format. So, we’re going to have to roll up our sleeves and write our own custom unit_test_log_formatter instead.

struct xcode_log_formatter :
	public boost::unit_test::output::compiler_log_formatter
{
	// Produces an Xcode-friendly message prefix.
	void print_prefix(std::ostream & output,
		boost::unit_test::const_string file_name, std::size_t line)
	{
		output << file_name << ':' << line << ": error: ";
	}
};

I’ve chucked this in a file called xcode_log_formatter.hpp in the Tests group. In tests_main.cpp, we’ll use a global fixture to tell the unit test framework to use our Xcode formatter instead of the default.

// Set up the unit test framework to use an Xcode-friendly log formatter.
struct xcode_config
{
	xcode_config()
	{
		boost::unit_test::unit_test_log.set_formatter( new xcode_log_formatter );
	}

	~xcode_config() {}
};

// Call our fixture.
BOOST_GLOBAL_FIXTURE( xcode_config );

After we’ve got this wired up, the test results look much better:

As you can see, instead of reporting it as a generic script error, Xcode has now recognised both test failures as two individual errors. And instead of hunting around through different files trying to find the line where a test broke, you can simply click on an error, and Xcode will find and highlight it for you.

This makes for much easier development, and allows you to manage unit test failures as easily as standard compile errors.

Is the 80 character line limit still relevant?

Traditionally, it’s always been standard practice for programmers to wrap long lines of code so they don’t span more than 80 characters across the screen.


This is because, back in the bad old days, most computer terminals could only display 25 rows of 80 columns of text on screen at once. Any lines that were longer would simply trail off out of sight. To ensure this didn’t happen, programmers split up long lines of code so none of them exceeded 80 characters.

Today, however, it’s pretty unlikely that you or anyone will still be writing code on an 80-column-width terminal. So why do we keep limiting our code to support them?

The answer, of course, is that we don’t. An 80 character limit has no relevance any more with modern computer displays. The three-year-old Powerbook I am writing this post on, for example, can easily display over 200 characters across the screen, at a comfortable 10 point font size. That’s two and a half VT100s!

The reason this standard has stuck around all these years is because of the other benefits it provides.


Long lines that span too far across the monitor are hard to read. This is typography 101. The shorter your line lengths, the less your eye has to travel to see it.

If your code is narrow enough, you can fit two files on screen, side by side, at the same time. This can be very useful if you’re comparing files, or watching your application run side-by-side with a debugger in real time.

Plus, if you write code 80 columns wide, you can relax knowing that your code will be readable and maintainable on more-or-less any computer in the world.

Another nice side effect is that snippets of narrow code are much easier to embed into documents or blog posts.


Constraining the width of your code can sometimes require you to break up lines in unnatural places, making the code look awkward and disjointed. This particular problem is worse with languages like Java and .NET that tend to use long, descriptive identifier names.

Plus, the amount of usable space for code is also impacted by tab width. For example, if you’re using 8-space tabs and an 80-column page width, code within a class, a method, and an if statement already loses 24 of the 80 columns – almost a third of the available space – to indentation.


Why 80? At work, my current project team uses a 120-character limit. We’ve all got 24″ wide-screen LCD displays, and 120 characters seems to be a good fit for our .NET/Visual Studio development environment, while still leaving ample whitespace.

There are a few factors you should think about, however. The average length of a line of code depends on what language and libraries you’re using. C generally has much shorter identifier names, and subsequently much shorter lines than, say, a .NET language.

It also depends on what sort of project you’re working on. For private and internal projects, use discretion. Find out what works best for your team, and follow it.

For open-source projects, or other situations where you don’t know who’s going to be reading your source code, tradition dictates that you stick with 80.

Another possibility is to make the limit a guideline, rather than a concrete rule. Sometimes you might not care if a particular line continues out of sight. A long string literal, for example, isn’t going to cause the end of the world if you can’t see the whole thing on screen at once.

It may sound pedantic, but if you do decide to use something different, make sure everyone knows the rule, and obeys it. When there are unclear or conflicting rules, chaos ensues. You can end up with hilarious games like formatting tennis, where every time a developer works on a piece of code, they first waste time reformatting the whole thing to reflect their own preferred coding style.

Why bother?

Some of you might wonder why anyone would worry about such trivial details like the length of a line of code. And that’s cool. But, if like me, you believe that code isn’t finished until it not only works well, but looks beautiful too, balancing style with practicality is very important.

The finer points of .NET DirectoryServices


In the past few days I’ve been tasked with writing a .NET service, part of which must do the following:

  • Read all the user accounts from Active Directory.
  • Identify which accounts are disabled.
  • Find out what groups each user belongs to.

Achieving these three simple goals with .NET’s DirectoryServices proved to be surprisingly difficult. Here are workarounds for three common issues you might encounter.

First challenge: Getting more than 1000 results

The first problem I encountered was my DirectorySearcher object only returning 1000 results from FindAll(). I wanted all 6000. This limit was being imposed due to some funny behaviour with DirectorySearcher’s SizeLimit property, which defaults to 1000. If you try to set it higher, it will max out at the server’s limit, which (you guessed it) also defaults to 1000.

The trick is to set the DirectorySearcher’s PageSize to a non-zero value no greater than 1000. All the results will be silently paged back from the server in the background, ignoring the SizeLimit.

using (DirectorySearcher searcher = new DirectorySearcher(this.searchRoot))
{
    // Search for user objects...
    searcher.Filter = "(&(objectClass=user)(objectCategory=person))";

    // Set the PageSize between 0 and the max page size (1000) to return all
    // results at once (invisibly paged on demand in the background). Otherwise,
    // a limit of 1000 results is imposed.
    searcher.PageSize = 500;

    using (SearchResultCollection results = searcher.FindAll())
    {
        foreach (SearchResult result in results)
        {
            DirectoryEntry directoryEntry = result.GetDirectoryEntry();
            // ... process user
        }
    }
}

Second challenge: Identifying disabled accounts

DirectorySearcher and DirectoryEntry have a collection of properties that provide useful information from Active Directory like the sAMAccountName, memberOf list, location and e-mail address.

Unfortunately, no property exists to identify if the account has been disabled. To get around this, we have to use a bitwise AND on the UserAccountControl flags to see if ACCOUNTDISABLE is set.

// Check and see if the ACCOUNTDISABLE flag is set.
const int ACCOUNTDISABLE = 0x0002;
int flags = (int)directoryEntry.Properties["userAccountControl"].Value;
bool isDisabled = Convert.ToBoolean(flags & ACCOUNTDISABLE);

Third challenge: Finding all of a user’s groups

DirectoryEntry has a property called memberOf that, at first glance, looks like an easy-to-use list of all the user’s groups. Unfortunately, under Windows 2000, this collection excludes the user’s primary group.

To get an unabridged list of groups, I used a different method that assembles the account’s tokenGroups (a list of SIDs) into an OR-query, and enumerates the results. Here’s what it looked like:

directoryEntry.RefreshCache(new string[] { "tokenGroups" });

// Start building a new LDAP OR query.
StringBuilder sb = new StringBuilder();
sb.Append("(|");

// Attach each tokenGroup's SID to the query.
foreach (byte[] sid in directoryEntry.Properties["tokenGroups"])
    sb.AppendFormat("(objectSid={0})", BuildOctetString(sid));
sb.Append(")");

StringCollection groups = new StringCollection();
using (DirectorySearcher searcher = new DirectorySearcher(this.searchRoot))
{
    // Apply a filter from our query, and load the name property of each
    // object found.
    searcher.Filter = sb.ToString();
    searcher.PropertiesToLoad.Add("name");

    using (SearchResultCollection results = searcher.FindAll())
    {
        // Get each group's name, and add it to our StringCollection.
        foreach (SearchResult result in results)
            groups.Add(result.Properties["name"][0].ToString());
    }
}

...

// Helper function to convert a binary SID into a string format suitable for
// use in an LDAP query.
static string BuildOctetString(byte[] bytes)
{
    StringBuilder sb = new StringBuilder();
    foreach (byte b in bytes)
        sb.AppendFormat("\\{0}", b.ToString("X2"));
    return sb.ToString();
}

Note that with tokenGroups, you might get more groups than expected. It contains all nested security groups, not just the user’s immediate groups.

A bad name for a method

When prototyping new code I often leave a web browser with thesaurus.com open in the background. It may sound pedantic, but I sometimes find it very useful when deciding what name to use for a class, or what verb to use for a method name. Well-named code is easy to understand: each class’s name defines its role in the application, and each method’s name describes the function it performs.

One bad example, which has bugged me for a long time, can be found in Visual Studio. Visual Studio can automatically generate a method stub for you to handle an event. These methods are named after the object that raises the event, and the name of the event, with an underscore between them.

For example, to handle the Click event of a button called SaveButton, the following method stub would be generated:

void SaveButton_Click(object sender, EventArgs e)
{
    ...
}

This method’s name describes the circumstances under which it gets called, not what it actually does. Methods are supposed to do things. Just because it’s an event handler doesn’t mean the rules no longer apply. Where’s the verb here?

I would propose changing it to generate code that looks like this instead:

void HandleSaveButtonClick(object sender, EventArgs e)
{
    ...
}

This example is just as consistent (in fact, it conforms much more closely to the .NET naming guidelines), and it describes what the method does: it handles a SaveButton Click event.