The Performance of Exceptional Things

Following up from my previous blog post, I’ve had some cracking feedback from a number of people both for and against the use of exceptions – it’s one of those areas (as so many are in coding) that really does seem to have its own holy war.

On one side are those against the use of exceptions for ‘program flow’ (though I suspect if I looked at their use cases in detail, I probably would be too), who see exceptions as being for exceptional circumstances.  The approach favoured by this group tends to be to return state and to program defensively to avoid exceptions wherever possible.

I totally agree with that final statement – if I have a method ‘IsLoggedIn’ and the user isn’t, then a simple ‘false’ will do and I’ll program defensively in that method to ensure that simple things like NullReferenceExceptions etc. aren’t thrown.
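As a concrete sketch of that defensive style (entirely hypothetical names here – none of this is from the demo code):

```csharp
using System;

public class User
{
    public string Username { get; set; }
}

public class Session
{
    // Hypothetical session object; a null User means nobody is logged in.
    public User User { get; set; }
}

public static class Auth
{
    // Defensive: answer the question with a bool, never throw.
    public static bool IsLoggedIn(Session session)
    {
        // Guard the nulls rather than letting a NullReferenceException escape.
        if (session == null || session.User == null)
            return false;

        return !string.IsNullOrEmpty(session.User.Username);
    }
}
```

A caller then just branches on the bool – no try/catch in sight.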

The other group seem to like the concept of Business Exceptions as a means of handling logic, though (like me) they all wondered about the performance of that approach.

My Use Case

In the example code I put together for the last post, I used the business process of logging in a customer as my use case.  I could equally have used the concept of payments into the site, though that’s obviously a far more significant use case and would have had me writing demo code long after it made sense to do so!

In my exceptions (User Not Found, Password Mismatch, Account in various ‘no play’ states), I’ve just done an analysis of yesterday’s traffic to our site (which is hitting approx 1.8-2 million unique visitors per month), and we have the following errors (all day):

  • User Not Found – 1842
  • Password Mismatch – 1125
  • Account Self Excluded / Account Cooling Off / Account Disabled / Account Closed – 240

So basically, 3,207 things that in our new software will throw exceptions throughout a 24hr period, or roughly 134 per hour, or 2.2 per minute.

Obviously there are payment-type errors to take into account too, which I suspect will be busier – let’s say up to 20-30 exceptions per minute (tops).

So just how heavy are these exceptions?

I’ve updated the hosted code I used in the previous post, and have created two approaches to getting user data – one via models, one via exceptions.  The main web navigation at the top of the page will allow you to test with exceptions or test with models.

I basically set up a test to fail login (User Not Found) and iterated through it 10,000 times; the code is in there both for throwing exceptions and for returning models.

I then iterated over those 10,000 tests 10 times each.
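The shape of the harness was roughly this (a simplified sketch, not the hosted demo code – the real thing goes through the service layer, and LoginFailedException here stands in for the business exceptions):

```csharp
using System;
using System.Diagnostics;

public class LoginFailedException : Exception
{
    public LoginFailedException(string message) : base(message) { }
}

public static class LoginBenchmark
{
    // Model-style failure: just a return value.
    public static bool TryLogin(string username)
    {
        return false; // simulate 'User Not Found'
    }

    // Exception-style failure: thrown every time.
    public static void Login(string username)
    {
        throw new LoginFailedException("User Not Found");
    }

    // Times both approaches over the same number of failed logins.
    // Returns { modelMs, exceptionMs }.
    public static long[] Run(int iterations)
    {
        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            TryLogin("unknown");
        }
        long modelMs = sw.ElapsedMilliseconds;

        sw.Reset();
        sw.Start();
        for (int i = 0; i < iterations; i++)
        {
            try { Login("unknown"); }
            catch (LoginFailedException) { /* swallowed, as the test harness did */ }
        }
        long exceptionMs = sw.ElapsedMilliseconds;

        return new long[] { modelMs, exceptionMs };
    }
}
```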

Yup, I know this isn’t really as indicative a test as it could be, as it demonstrates best-possible outcomes (repeatedly throwing the same exception will obviously benefit from some form of optimisation that is beyond me!), but it’s helpful as one measure when the core thing people mention is performance.

And yup, there *is* a performance hit when throwing exceptions – no denying it.

But when you look at the numbers, failing login and returning a model (single run of 10,000 fails) averages out at 289.6ms, whereas with exceptions the same 10,000 iterations come out at 624.1ms.  That makes a single exception (my maths is shite, so happy to be corrected on this) take around 0.033ms more to throw.

Oops! Ignore the ticks figures below – I actually (stupidly) divided Ticks by 10,000 rather than Stopwatch.Frequency, so they’ll be slightly out – the milliseconds figures reflect reality though.

Run                    Measured in Ticks          Measured in Milliseconds
                       Exceptions    Models       Exceptions    Models
1                      2150098       1009757      628           290
2                      2165310       1018790      624           287
3                      2144660       1018190      622           288
4                      2136548       1012047      623           293
5                      2139677       1009204      621           289
6                      2154162       1011982      627           289
7                      2146923       1019645      623           290
8                      2167315       1026824      623           289
9                      2148493       1011428      626           291
10                     2156894       1008608      624           290
Avg Ticks              2151008       1014648
Avg Ms                 215.1008      101.4648     624.1         289.6
Ms per iteration       0.02151008    0.010146     0.06241       0.02896
Cost increase for Ex                 0.011364                   0.03345

Where are the real stats?

Well, this is where my naivety kicks in and I really must defer to cleverer people.  It’s odd to think I’m a senior dev when I can’t effectively dig any further into it than where I’m at currently, but I’ve found a few cracking posts that help me feel happy with the approach we’re taking with regards to Business Exceptions (I promise to post when this goes live to let you know if the performance hit took our site down though!).

Blog 1 – Rico Mariani

Rico is (as they say) the man, and he really knows his stuff – he certainly sits on the ‘don’t do this’ side of the holy war, and has good reasons.  He highlights that iterative testing like the above is certainly a ‘best case’ and wouldn’t demonstrate typical usage.

Blog 2 – Jon Skeet

I like this one, it kinda supports our approach! lol.  In particular, a great quote from him:

“If you ever get to the point where exceptions are significantly hurting your performance, you have problems in terms of your use of exceptions beyond just the performance.”

Blog 3 – Krzysztof Cwalina

This is *exactly* how I see our approach to exceptions, and I agree with Jon Skeet – I couldn’t have put it even 10% as well as Krzysztof has.  His bullet-point list of Do’s and Don’ts is brilliant.

Code Project Post – Vagif Abilov

I thought this one interesting as he’s gone into far more detail in terms of the tests than I have, and his conclusions are interesting.

Blog 4 – Eric Lippert

Not one so much on performance as a ‘don’t throw exceptions when you don’t need to’ – there are often ways around throwing exceptions if you code ‘well’.

Blog 5 – Krzysztof Cwalina

Another that I’ve linked to just for the quote which very much reflects my thinking:

“One of the biggest misconceptions about exceptions is that they are for “exceptional conditions.” The reality is that they are for communicating error conditions. From a framework design perspective, there is no such thing as an “exceptional condition”. Whether a condition is exceptional or not depends on the context of usage – but reusable libraries rarely know how they will be used. For example, OutOfMemoryException might be exceptional for a simple data entry application; it’s not so exceptional for applications doing their own memory management (e.g. SQL server). In other words, one man’s exceptional condition is another man’s chronic condition.”

Exception Management Guidance – Multiple authors

Some good feedback re: exceptions in this post.


I’ve updated the code on Google Code at: to cover both Exceptions and Models if anyone wants a looksy.

Again though, really interested in hearing thoughts on this.  I think from the performance testing I’ve done and the posts I’ve read, I’m happy with our approach, but I’m equally happy for someone to come along and shout NOOOOOOO! and tell me why I’m an idiot 🙂

Over to you guys, and thanks for all the feedback thus far!

Business Exceptions in C# (as I understand them!)

Thought I’d best caveat the post as this really is just a collection of thoughts from a number of very clever people, and I’ve come to wonder over the past few days (since #dddscot) whether this is a good way to handle business exceptions or not.

My approach was born out of a cracking talk by Jeffrey Richter at DevWeek this year (see the summary post elsewhere on my blog) where he talked about exceptions within your software and (as @plip did at #dddscot this year) about embracing them.  He talked about exceptions in the following way:

  1. Exceptions are not just for exceptional circumstances
  2. They are there as a means of saying ‘something hasn’t worked as expected, deal with it’
  3. They should be thrown when they can reliably be managed (be that logging or something else)
  4. They should be useful/meaningful

In my other post, I used the example of ProcessPayment as a method, and the various things that could go wrong during that method, but I thought I’d bring together a simple app that demonstrates how we are using exceptions currently.

The reason for this post

There was a lot of discussion after #dddscot about how folks handle this sort of thing, and really, there were some very clever people commenting!  It’s kinda made me nervous about the approach we’re taking – you all know the crack:

Dev1: “And that new method works even if the input is X, Y, and A?”

Dev2: “It did until you asked me, but now I’m going to have to test it all again!”

Ahhh, self doubt, you have to love it 🙂

Though I digress – basically, I would love to get some feedback from the community on this one.

Business information – what are the options?

Ok, if we take a simple method call, something like:

ProcessLogin(username, password)

How can we find out if that method fails, and if it does fail, why?  Was the username wrong?  Is their account disabled?  Did the password not match up?  This is a relatively straightforward method, which is why I’ve chosen it for the demo, though there are any number of things that can go wrong with it.

Option 1 – returning an enum or something that can identify the type of error

So the method signature could be:

public ProcessLoginResult ProcessLogin(string username, string password) {
	// stuff
}

public enum ProcessLoginResult {
	Success,
	UsernameMismatch,
	PasswordMismatch,
	SelfExcluded,
	CoolingOff,
	AccountDisabled,
	AccountClosed
}

You may feel like that’s a lot of fail states, but these are what I work with in my current environment so they have to be included.

Obviously then we have something from the calling code like:

var result = ProcessLogin(username, password);

if (result != ProcessLoginResult.Success) {
	switch (result) {
		case ProcessLoginResult.UsernameMismatch:
		case ProcessLoginResult.PasswordMismatch:
			ModelState.AddModelError("General", "We have been unable to verify your details, etc. etc.");
			break;
		case ProcessLoginResult.SelfExcluded:
			return RedirectToAction("SelfExcluded", "ErrorPages");
		// ... and a case per extra error state
	}
}

There are obvious pros to this approach from my point of view – the big one being that we’re not throwing exceptions!  People talk a lot about the performance overhead of actually throwing new exceptions – there’s generally a sucking in of teeth as they do.  I personally have no idea how “expensive” they are to raise, and it’s certainly something I’ll have to look into.

The difficulty here for me though is two-fold:

  1. If I want the richness of business information to return from my methods on failure, I need to come up with (almost) an enum per method to define the states it can return.
  2. If I have a method that returns data (e.g. GetUserById(userId)), my only option is to set up the method signature with the user as an out param or pass it in by reference.
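For point 2, that out-param shape is essentially the framework’s own Try pattern (à la int.TryParse) – a hypothetical sketch, not from the demo code:

```csharp
public class MyCompanyUser
{
    public int UserId { get; set; }
    public string Username { get; set; }
}

public class UserService
{
    // The 'out param' shape: the return value says whether it worked,
    // the out param carries the user (or null on failure).
    public bool TryGetUserById(int userId, out MyCompanyUser user)
    {
        // Hypothetical lookup; a real version would hit the DAL.
        if (userId == 1)
        {
            user = new MyCompanyUser { UserId = 1, Username = "tez" };
            return true;
        }

        user = null;
        return false;
    }
}
```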

Option 2 – Business Exceptions

And this is the approach I’ve taken, though again – feedback very much appreciated!  Each of the possible fail states becomes a potential exception.  So the ProcessLogin method becomes:

/// <summary>
/// Processes the login.  Steps are:
///  - Check the existence of the user
///  - Check the password matches (yup, we'd be hashing them here, no need for the demo)
///  - Check the account status
/// </summary>
/// <param name="username">The username.</param>
/// <param name="password">The password.</param>
public MyCompanyUser ProcessLogin(string username, string password)
{
	MyCompanyUser user;

	try
	{
		user = dal.GetUserByUsername(username);
	}
	catch (MyCompanyUserNotFoundException)
	{
		throw; // pass the exception up to the UI layer as it is most easily able to deal with it from a user perspective
	}

	if (user.Password != password)
	{
		MyCompanyUserWrongPasswordException ex = new MyCompanyUserWrongPasswordException("Password doesn't match");
		ex.Data.Add("Username", username);
		// potentially if you had an MD5 or something here you could add the hashed password to the data collection too

		throw ex;
	}

	switch (user.AccountStatus)
	{
		case AccountStatus.SelfExcluded:
		{
			MyCompanyUserSelfExcludedException ex = new MyCompanyUserSelfExcludedException("User self excluded");
			ex.Data.Add("Username", username);
			throw ex;
		}
		case AccountStatus.CoolingOff:
		{
			MyCompanyUserCoolingOffException ex = new MyCompanyUserCoolingOffException("User cooling off");
			ex.Data.Add("Username", username);
			throw ex;
		}
		case AccountStatus.Disabled:
		{
			MyCompanyUserAccountDisabledException ex = new MyCompanyUserAccountDisabledException("Account disabled");
			ex.Data.Add("Username", username);
			throw ex;
		}
		case AccountStatus.Closed:
		{
			MyCompanyUserAccountClosedException ex = new MyCompanyUserAccountClosedException("Account closed");
			ex.Data.Add("Username", username);
			throw ex;
		}
	}

	return user;
}

Obviously with this in place I can either log at this level or log at the UI layer (I don’t have a strong feeling architecturally either way).

The process login method call at the UI layer then becomes a little more convoluted:

try
{
	MyCompanyUser user = service.ProcessLogin(model.Username, model.Password);

	return RedirectToAction("LoggedIn", "Home");
}
catch (MyCompanyUserSelfExcludedException)
{
	return RedirectToAction("SelfExcluded", "ErrorPages");
}
catch (MyCompanyUserCoolingOffException)
{
	return RedirectToAction("CoolingOff", "ErrorPages");
}
catch (MyCompanyUserAccountDisabledException)
{
	return RedirectToAction("AccountDisabled", "ErrorPages");
}
catch (MyCompanyUserAccountClosedException)
{
	return RedirectToAction("AccountClosed", "ErrorPages");
}
catch (MyCompanyUserException)
{
	// if we're this far, it's either UserNotFoundException or WrongPasswordException, but we'll catch the base type (MyCompanyUserException)
	// we can log them specifically, handle them specifically, etc. though here we don't care which one it is, we'll handle them the same
	ModelState.AddModelError("General", "We have been unable to match your details with a valid login.  (friendly helpful stuff here).");
	return View(model);
}

I don’t know why, but I find this a more elegant solution – it certainly doesn’t generate any less code!  There is very much a need for good documentation with this one (each method documenting which types of exceptions can be thrown).
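On the documentation front, C#’s XML comments have an &lt;exception&gt; tag for exactly this, so the contract at least shows up in IntelliSense – a sketch (the method body here is a stub, not the real implementation):

```csharp
using System;

public class MyCompanyUser { }

public class MyCompanyUserNotFoundException : Exception { }
public class MyCompanyUserWrongPasswordException : Exception { }

public class LoginService
{
    /// <summary>Processes the login for the given credentials.</summary>
    /// <exception cref="MyCompanyUserNotFoundException">No user matches the supplied username.</exception>
    /// <exception cref="MyCompanyUserWrongPasswordException">The password doesn't match the stored one.</exception>
    public MyCompanyUser ProcessLogin(string username, string password)
    {
        // Stubbed: always the 'not found' path, just to show the documented contract above.
        throw new MyCompanyUserNotFoundException();
    }
}
```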

Want to see more?

I’ve put together a test VS2010 project using MVC2 and separate projects for the exception definitions and one for the models/services/dal stuff.

It’s rudimentary, but our core solution has Unity in there as an IoC container, it has interface-based Services and Repositories, it has unit tests etc., and it just wasn’t viable (or commercially acceptable) to make any of that available, so I’ve distilled it down to the basics in the solution.

What I’d love now is feedback – how do people feel about this approach (Business Exception led) as opposed to the other?  What other approaches are available?  Is it bad to use exceptions in this way?  (And I’m fine if the answer is ‘ffs tez, stop this now!’ so long as there’s a good reason behind it!)

The code is available on google code at:

and I’ve only created a trunk (subversion) at present at:

Feedback pleeeeeez!

Developer Developer Developer Scotland, or summer arrives early in Glasgow!

Who let the sun out?

What a stunning day we were all faced with for #dddscot this year – the drive up from Newcastle (albeit starting at an ungodly hour) was actually fun – great scenery on the way, I’d forgotten what it was like to get out of a built up area – plenty more trips out needed over the summer methinks.  I had high expectations of the event after attending #ddd8 earlier in the year and being overwhelmed by the content there, and the day didn’t disappoint.

Onto the talks I managed to get to:

HTML 5: The Language of the Cloud?

Craig Nicol – @craignicol

A good start to the day, and pertinent for my current role (we’re investigating what HTML5 can do to help us with alternate platform delivery, certainly with a focus on the mobile market).  Craig’s talk was animated (in both senses of the word!), and it was useful to see just where the ‘standards’ were at.  Safe to say at present, and Craig mentioned it a few times during this talk, that if you want to target HTML5 then you really do need to pick your target browser (or generate more work usually and target browserS), as the standards are still significantly in flux.  There is a lot of help out there, and those people creating mashups really are helping in showing which browsers support which elements.

I particularly liked the look of the XForms (forms 2.0) stuff – being able to define something as an ‘email’ field, or a ‘telephone’ or ‘uri’ I think adds significant context to the proceedings and will deliver (for the users) a far richer experience.

As with a lot of emerging technologies though, I certainly think it’s far too early for reliable deployment in all but very controlled environments – even if you implement progressive enhancement well.  Something to follow for sure though.

Overall a very well presented talk, a minimal smattering of the expected ‘this worked 10mins ago!’, but this is HTML5+bits, so to be expected.

Exception Driven Development

Phil Winstanley – @plip

plip was his usual exuberant self with this talk on exceptions, and it was a useful additional session to one I’d seen at DevWeek earlier in the year given by Jeffrey Richter.  The initial message was ‘exceptions happen’ – we have to learn how to live with them: what to do when they happen, which ones we should fix (and yup, I’m one of those people that hates warnings, so I suspect I’ll have to fix all of them!), which ones we should prioritise, how we make sure we’re aware of them, that sort of thing.

Two very useful additions to my current understanding – one was ‘Exception.Data’ which is essentially a dictionary of your own terms.  At present we’re throwing our own exceptions within our business software (more on that later), but .Data will give us far more information about what parameters were at play when the exception happened – utterly brilliant, and terrifying that I didn’t know about this!
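For anyone who (like me) hadn’t met it, Exception.Data is just an IDictionary hanging off every exception – a quick sketch of stuffing it at throw time and reading it back at log time (the values here are made up for illustration):

```csharp
using System;
using System.Collections;

public static class DataDemo
{
    public static string CaptureContext()
    {
        try
        {
            Exception ex = new InvalidOperationException("Login failed");
            // Stash whatever parameters were in play when it went wrong.
            ex.Data.Add("Username", "tez");
            ex.Data.Add("Attempts", 3);
            throw ex;
        }
        catch (Exception caught)
        {
            // At the logging layer, the dictionary travels with the exception.
            string log = "";
            foreach (DictionaryEntry entry in caught.Data)
            {
                log += entry.Key + "=" + entry.Value + ";";
            }
            return log;
        }
    }
}
```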

Another was the use of window.onerror in javascript – ensure that you http post (or whatever other mechanism works best for you) when your scripts don’t work – there’s nothing worse than your javascript borking and not being able to repeat it, so make sure you report upon these too.

Some key snippets (some common sense, some not), such as: never redirect to an aspx page on a site error (thar be dragons and potential infinite loops) – serve static HTML instead.

plip’s acronym at the end of the session made me chuckle; I shan’t repeat it, but it had an odd way of sticking in the consciousness 😉

The only thing I thought lacking in this talk (and it’s no real criticism of plip) was the concept that was covered in that talk earlier in the year at DevWeek.  The idea that Exceptions are *not* for exceptional circumstances, they’re there as a means of controlling program flow, of reporting when something didn’t work as expected, and of giving more effective information.

So for example, if I had a method called ‘ProcessLogin(username, password)’ and one of the first checks was ‘does this username exist in the DB’, if it doesn’t, throw new UserNotFoundException.

Of course, if plip had gone down the custom exceptions and business-defined exceptions route, the talk could comfortably have lasted two to three times longer, so I feel the DevWeek talk and plip’s complemented each other well.

Cracking talk though plip – really did get a lot out of this one, and I think this was the most useful session of the day for me.

A Guided Tour of Silverlight 4

Mike Taulty – @mtaulty

A reminder from Mike that I really need to spend some time looking into Silverlight 4.  I focus very heavily on web development and web technologies, and although I have little interest in desktop development, SL4 I think has a lot of interest in terms of as an intranet based tool with rich GUI.  Of course, I may be better going down the WPF route with that, but there’s something about the versatility of SL4 that appeals.

Cracking talk from Mike as per – always good to see one of the UK evangelists wax lyrical about their current focus, and this was no exception.

What ASP.NET (MVC) Developers can learn from Rails

Paul Cowan – not sure on twitter

I have to prefix this talk by saying that I thought Paul’s presentation style was great, and much as he maligned his Irish accent, he was cracking to listen to.

That said – rails… what a bag of shite! lol.  I suspect I may get a number of replies to this, but what I like about MVC2 is that I can focus on architecture and the important stuff, and ‘get the job done’ without too many interruptions.  Ok, I have to add views myself, and a ‘Customer’ entity doesn’t automatically get a controller/views/unit tests associated with it.  But I feel in complete control, and don’t feel constrained at all.

I spent too many years in a unix/perl/python environment, and I really do not miss the command line shite I had to go through to really add value to what I was doing in the programming language.

VS2010 + Resharper deliver a significant number of improvements in the ‘streamlining’ of application development, and I have none of the hassle that came about as part of that rails demo (no matter how much it delivered with just a simple command line).

So I really do apologise to Paul – his presentation was great, but it only reinforced for me that the love affair I’m having with MVC2 at present is well grounded.  God, I sound like such a fanboy!

Real World MVC Architectures

Ian Cooper – @icooper

A few teething troubles at the start (don’t you just hate it when a backup brings your system to its knees), but overall a good presentation – I’d seen Ian’s talk at #ddd8 (prior to really solidly working with MVC), and I thought I’d attend it again after spending 2 months solidly working with MVC2.  It has certainly reinforced that what I’m doing is ‘right’, or at least appears to be good practice.  I’m still sceptical about the overhead that CQRS brings when implemented in its purest sense, though the principles (don’t muddy up your queries with commands, and vice versa) are ones that obviously all should follow.

Ian had a bit of a mare with his demo code, though more to my benefit as I managed to nab some swag for being ‘that geek’ in the front row pointing it out – yay for swag!

The Close

Colin Mackay and the rest of the guys then spent some time wrapping up the day, handing out significant swag (yay, I won a ReSharper licence – or, if I can wing it as I already have one, a dotTrace licence!), and we had the obligatory Wrox Lollipop shot taken.

All in all, it was a cracking day, and well worth that early drive up from Newcastle – I think events like this work so well: getting a room or rooms full of enthusiastic devs, who all just want to be better at their art, being presented to by people who’ve spent some time working on that art.  There’s nothing finer in the geek world.

Thanks to all organisers and sponsors – great fun was had by all 🙂

Unit Testing with DataAnnotations outside of MVC

This past week has seen us start on a big project at work to re-architect the site into .net and MVC2.  Naturally we have our models in a separate project, and we have two separate test projects (Unit and Integration) setup to use NUnit.

As it’s early days for us, and this is our first “real” MVC project, I thought I’d write this up: a) as an aid to learning for me, but b) to try to gain feedback from the community on what they do with regards to validation on their models.

I can see a few different ways we could have done this (annotate the ViewModels we’ll use on the front end, build logic into our setters to validate, etc. etc.) but we’re now going down a route that so far feels ok.  That said, we’re focussing solidly on the modelling of our business logic at present, so haven’t yet brought the model “out to play” as it were.

Hopefully the above gives a wee bit of insight into where we are with it.

We’ve decided to plump for the MetaData model approach to keep the main objects slightly cleaner – an example for us would be:

namespace MyCompany.Models.Entities
{
	public class MyCompanyUser
	{
		public int UserId { get; set; }

		public string Username { get; private set; }

		public void SetUsername(string newUsername)
		{
			if (Username != null)
				throw new ArgumentException("You cannot update your username once set");

			//TODO: where do we ensure that a username doesn't already exist?
			Username = newUsername;
		}
	}
}

and then in a separate class:

namespace MyCompany.Models.Entities
{
	public class MyCompanyUserMetaData
	{
		[Required(ErrorMessage="Your password must be between 6 and 20 characters.")]
		[StringMinimumLength(6, ErrorMessage="Your password must be at least 6 characters.")]
		public string Password { get; set; }

		[Required(ErrorMessage="Your username must be between 6 and 20 characters.")]
		[StringLength(20, MinimumLength=6, ErrorMessage="Your username must be between 6 and 20 characters.")]
		[MyCompanyUserUsernameDoesNotStartWithCM(ErrorMessage="You cannot use the prefix 'CM-' as part of your username")]
		[CaseInsensitiveRegularExpression(@"^[\w\-!_.]{1}[\w\-!_.\s]{4,18}[\w\-!_.]{1}$", ErrorMessage = "Your username must be between 6 and 20 characters and can only contain letters, numbers and - ! _ . punctuation characters")]
		public string Username { get; set; }
	}
}

With all of this in place you’re all well and good for the MVC world, though unit testing just doesn’t care about your annotations, so a simple unit test like:

[Test]
public void SetUsername_UsernameTooShort_ShouldThrowExceptionAndNotSetUsername()
{
	// Arrange
	var testUser = new MyCompanyUser();

	// Act / Assert
	Assert.Throws<ValidationException>(() => testUser.SetUsername("12345")); // length = 5
	Assert.That(testUser.Username, Is.Null, "Invalid Username: Username is not null");
}

won’t give you the expected results as the logic of that is based upon the DataAnnotation.

What was our solution?

After much reading around (there didn’t seem to be an awful lot out there covering this) we took a two step approach.  First was to allow SetUsername to validate against the DataAnnotations like so:

public void SetUsername(string newUsername)
{
	if (Username != null)
		throw new ArgumentException("You cannot update your username once set");

	Validator.ValidateProperty(newUsername, new ValidationContext(this, null, null) { MemberName = "Username" });

	//TODO: where do we ensure that a username doesn't already exist?
	Username = newUsername;
}

Validator is well documented and there are a few examples out there of people doing this within their setters – essentially it validates the input against the annotations for a particular MemberName (Username in this case).
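A stripped-down version (annotations directly on the class rather than via a metadata buddy class, to keep it self-contained) shows the mechanics:

```csharp
using System;
using System.ComponentModel.DataAnnotations;

public class Account
{
    [StringLength(20, MinimumLength = 6)]
    public string Username { get; private set; }

    public void SetUsername(string newUsername)
    {
        // Throws ValidationException if newUsername fails the annotations on Username.
        Validator.ValidateProperty(
            newUsername,
            new ValidationContext(this, null, null) { MemberName = "Username" });

        Username = newUsername;
    }
}
```

A failing call throws ValidationException before the property is ever assigned, which is exactly the behaviour the unit tests want to see.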

The second step was necessary because of the approach we’d taken with the MetaData class above, and it was a mapping in the TestFixtureSetup within our unit tests:

TypeDescriptor.AddProviderTransparent(new AssociatedMetadataTypeTypeDescriptionProvider(typeof(MyCompanyUser), typeof(MyCompanyUserMetaData)), typeof(MyCompanyUser));

This line (though I’ve yet to look down at the source-code level) would appear to be a standard mapping to tell the type descriptor where to find the metadata/annotations for the class.

After putting those two things in place, the unit tests successfully validate against the annotations as well as any coded business logic, so job’s a good ’un!

Was it the right solution?

This is where I ask you, the person daft enough to suffer this blog post!  I have no idea if there is a better way to do this or how this will pan out as we propagate up to the MVC level – will I be causing us headaches taking this approach, will it simply not work because of overlap between the way MVC model binder validates versus what we’ve done down at the domain level?

It’s still early days for the project, and the above feels like a nice way to validate down at a business domain level, but how it pans out as we propagate wider and start letting other projects consume, hydrate and update the models… well that’s anyone’s guess!

Comments very much welcome on this one folks 🙂

Using T4 to generate enums from database lookup tables

I’m sure a fair few people will be working on projects like ours, where we have a database backend with referential integrity, including a number of lookup tables.  A lot of the time in this situation you also want to mirror the lookup values in your code (as enums, in our case).  Most of the time it’s relatively easy to just manually create both sets of entries, as they will rarely change once created.  Or so we hope!

I quite fancied learning about T4, and the first example I could think of was this tie up between database lookup tables and code enums. 

I love the idea that the output from your T4 work is available at compile time and available directly in your code once you’ve created the template – the synching of things between a database and your code base is an obvious first play.

So with that in mind, let’s crack on.

Initial Setup

I’ve created a simple console app and a simple DB with a couple of lookup tables – simple ‘int / string’ type values.  I installed T4 Toolbox to get extra code generation options within the ‘Add New…’ dialog, though it turns out my final solution didn’t actually require it – that said, the whole T4 Toolbox project looks very interesting, so I’ll keep an eye on that.


This will generate a file ‘’, and the base content of the file is:


Add a reference to your DB

At this point, I would have loved to use LINQ to SQL to generate my enums, as it’s a friendly/syntactically nice way of getting at data within the database.

That said, this proved far more difficult than I’d have hoped – any number of people had made comments about it, saying that if you ensure System.Core is referenced and you import System.Linq, job should be a good ’un.  It wasn’t in my case.

Thankfully, this wasn’t the end of the investigation.  I managed to find an example online that used a SQLConnection… old skool it was to be!

So what does the code look like…

The code I generated turned into the following, and I’m sure you’ll agree it ain’t that far away from the sort of code we’d write day in, day out.

<#@ template language="C#" hostspecific="True" debug="True" #>
<#@ output extension="cs" #>
<#@ assembly name="System.Data" #>
<#@ import namespace="System.Data" #>
<#@ import namespace="System.Data.SqlClient" #>
<#
    SqlConnection sqlConn = new SqlConnection(@"Data Source=tombola009;Initial Catalog=TeamDev;Integrated Security=True");
    sqlConn.Open();
#>
namespace MyCompany.Models.Enums
{
	public enum TicketType
	{
<#
        string sql = string.Format("SELECT Id, Name FROM LOOKUP_TABLE_1 ORDER BY Id");
        SqlCommand sqlComm = new SqlCommand(sql, sqlConn);

        IDataReader reader = sqlComm.ExecuteReader();

        System.Text.StringBuilder sb = new System.Text.StringBuilder();
        while (reader.Read())
            sb.Append(TidyName(reader["Name"].ToString()) + " = " + reader["Id"] + "," + Environment.NewLine + "\t\t");
        sb.Remove(sb.Length - 3, 3);
        reader.Close();
#>
		<#= sb.ToString() #>
	}

	public enum TicketCategory
	{
<#
        sql = string.Format("SELECT Id, Area, Name FROM LOOKUP_TABLE_2 ORDER BY Id");
        sqlComm = new SqlCommand(sql, sqlConn);

        reader = sqlComm.ExecuteReader();

        sb = new System.Text.StringBuilder();

        while (reader.Read())
            sb.Append(TidyName(reader["Area"].ToString()) + "_" + TidyName(reader["Name"].ToString()) + " = " + reader["Id"] + "," + Environment.NewLine + "\t\t");

        sb.Remove(sb.Length - 3, 3);
        reader.Close();
        sqlConn.Close();
#>
		<#= sb.ToString() #>
	}
}
<#+
    public string TidyName(string name)
    {
        string tidyName = name;

		tidyName = tidyName.Replace("&", "And").Replace("/", "And").Replace("'", "").Replace("-", "").Replace(" ", "");
        return tidyName;
    }
#>

The ‘TidyName’ method was in there just to try to tidy up the obvious string issues that could crop up.  I could have regex replaced anything that wasn’t a word character, though I think this gives me a bit more flexibility and allows customisable rules.
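For reference, the regex version would look something like this (a sketch – note it just deletes non-word characters, so you lose the ‘&’ → ‘And’ nicety, which is why I stuck with the explicit replaces):

```csharp
using System;
using System.Text.RegularExpressions;

public static class EnumNaming
{
    // Regex option: strip anything that isn't a word character,
    // leaving a legal C# identifier body.
    public static string TidyNameRegex(string name)
    {
        return Regex.Replace(name, @"\W", "");
    }

    // Explicit-replace option: keeps more meaning (e.g. "&" becomes "And").
    public static string TidyName(string name)
    {
        return name.Replace("&", "And").Replace("/", "And")
                   .Replace("'", "").Replace("-", "").Replace(" ", "");
    }
}
```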

This basically generates me the following .cs file:

namespace MyCompany.Models.Enums
{
	public enum TicketType
	{
		Problem = 1,
		MAC = 2,
	}

	public enum TicketCategory
	{
		Website_Affiliates = 1,
		Website_Blog = 2,
		Website_CentrePanel = 3,
		Website_CSS = 4,
		Website_Deposit = 5,
		Website_Flash = 6,
		Website_GameRules = 7,
		Website_GameChecker = 8,
		Website_HeaderAndFooter = 9,
		Website_HelpContent = 10,
		Website_Images = 11,
		Website_LandingPage = 12,
		Website_MiscPage = 13,
		Website_Module = 14,
		Website_Multiple = 15,
		Website_MyAccount = 16,
		Website_myTombola = 17,
		Website_Newsletters = 18,
		Website_Playmantes = 19,
		Website_Refresh = 20,
		Website_Registrations = 21,
		Website_Reports = 22,
		Website_TermsAndConditions = 23,
		Website_WinnersPage = 24,
		Website_Other = 25,
	}
}

From that point on, if there are extra lookup values added, a simple click of the ‘Transform All Templates’ button in the Solution Explorer toolbar will re-run the templates and re-generate the .cs files.


Next Steps

I’m sure there must be an easy way to use LINQ to SQL to generate the code above and I’m just missing it, so that’s the next play area.  I’m also going to be playing with the POCO stuff for EF4, and I think the above has given me a taster for it all.

As with all initial plays with this sort of thing, I’ve barely scratched the surface of what T4 is capable of, and I’ve had to rely upon a lot of existing documentation.  I’ll play with this far more over the coming weeks – I can’t believe I’ve not used it before!

Dev Week 2010

Well, I was lucky enough to have my employer pay for a trip for me and the dev team lead to DevWeek this year.  First time I’ve been to DevWeek, and hopefully not the last.  I attended the 3 days of the full conference and from the outset it showed itself as a very well organised and run conference.  The Barbican was a cracking venue, and I found the overall structure of the event very useful.

My mindset at the start of the week was very much about attending those sessions I felt I’d learn the most from (well duh!), and those that would have the most useful impact on the current dev stream I’m in.  We’re in the process of re-architecting the site, both hardware and software, and part of that re-write will be the move from a predominantly classic ASP (with elements thrown in for good measure) across to using: MVC2; Entity Framework 4/a.n.other ORM or indeed linq-to-sql; intelligent caching; automated builds/continuous integration; and bringing in elements like test first development, etc.  So the talks were about targetting those areas to maximise gain. 

I’ll cover my thoughts on the talks that had the most impact on me.

Day 1

Keynote – 97 Things Every Programmer Should Know

Kevlin Henney – Curbralan – @KevlinHenney

A cracking opening keynote, delivered under the above title – one of a few in the O’Reilly 97 Things series.  I confess to never having heard of the series, though I shall indeed be purchasing the book.  The 97 things are covered on their wiki, and Kevlin for his part distilled a number of them as part of the talk, interspersing them with personal experience (of which the guy has a bucketload!).

Some of the key premises that stuck for me in this talk were:

Do Lots of Deliberate Practice – very much the idea that performing a task with the sole aim of mastering and improving your ability to do that task is what makes us (essentially) better at our jobs.  The obvious geek term that springs to mind here is grokking, and I think that any developer who actively involves themselves with deliberate practice will be the richer for it.

Learn to Estimate – struck a chord merely because we all face this; I’ve never worked in a job where I haven’t had to estimate (and then subsequently commit to) timelines.  As Giovanni points out in his contribution, there’s a difference between an estimate (“that’ll take me 4 weeks”), a target (“I need that in 2 weeks”), and a commitment (“I can do that in 2 and a half weeks, but no less”). We as developers will always face this, and when we discuss timelines with our managers, what they are after is a commitment of time – it’s how we arrive at that commitment that is important.

There were many other points that sat well with me, and I’d really recommend grabbing the book to pickup on quite a stash of collective wisdom.

Objects of Desire

Kevlin Henney – Curbralan – @KevlinHenney

I don’t think I’ve seen a room so packed!  There were as many people on the floor as there were on chairs – clearly this one struck a major chord with us all.  The concept was an interesting one, especially for someone who’s programmed on the .net stack since 1.1, and has attempted to be OO since day one.  The aim was to relay (among other things) effective OO practice, pitfalls, and techniques.

This was one of those talks that helped reinforce that the direction I’m steadily taking (loose coupling, the effective use of patterns, etc.) isn’t far off the beaten track, and it really solidified my thinking.  These first two talks were the first time I’d seen Kevlin speak, and I hope to see more in future – an incredibly accomplished speaker with buckets of experience, and someone who clearly fully understands his subject matter.

Design for Testing

Kevin Jones – Rock Solid Knowledge – @KevinRJones

After reading through The Art of Unit Testing and playing around with testing myself, I found this talk again helped solidify that my thinking on the subject was going in the right direction – loose coupling, preferring interfaces/contracts over concrete types, separating your layers/concerns, testing a single element of the code at a time, and embracing dependency injection and inversion of control containers as your friends.

I’d not seen Unity in practice, so to see the syntax/usage of that was particularly useful, and I think I’ll give that one a whirl as we move forward with our re-architecture. Expect a separate blog post from me on this one as I document the setup we finally arrive at.

Great talk, great demos.

Day 2

Entity Framework in the .NET Framework 4 and Visual Studio 2010

Eric Nelson – Microsoft – @EricNel

Another cracking talk from Eric – it’s always nice to feel you’re getting a genuine (not marketing-led) talk on any Microsoft-related product: where it’s good there will be gushing, but where it’s not, you can be sure you’ll hear about it.  Entity Framework 1 clearly caused Microsoft no end of trouble in terms of adoption rates, and it seems to have used that as a clarion call to improve EF4 in every way possible.  Are there still issues? Yup, without doubt.  Does it now look like a viable provider for a solid data access strategy? Absolutely!

I think the key things from this talk for me were:

  • Play with it in more anger, and attempt to model our existing schema with it (thankfully a week’s holiday will allow this – I’ll try to spend some of it sat coding in the sun!)
  • Investigate POCO support – I like the idea that the domain model is designed and written as a separate concept away from the data access strategy, and then (with clever use of T4 and some other shenanigans that I’ll have to play with) the coupling is brought in without strictly ‘muddying’ your domain model
  • Investigate T4 – seriously, why haven’t I started playing with this?  Eric mentioned the Tangible T4 Editor, which I’ll have a look at, along with any other tooling that may help.

I feel almost convinced that the use of Entity Framework 4 is the data access strategy that we need as we re-architect our site, and I look forward to getting past the testing and proof of concept stages to see what it can really deliver.

Exception Handling

Jeffrey Richter – Wintellect – @JeffRichter

Superb talk, and it highlighted that I need to start thinking about exceptions differently to how I currently do.  He tackled the myth that “exceptions should only be used for exceptional situations”: an exceptional situation isn’t something that rarely happens, it’s when a method doesn’t do what it says it’s going to do.

ProcessPayment(Customer, Order, CardDetails)

If the above method does not actually process the payment (for any reason), then throw an exception – hopefully of a type that is useful/meaningful.  The caveat to that is that you should only catch exceptions that you can meaningfully do something about.

So what if the customers payment was declined?  What if the customer is blocked from placing orders? What if the card has some issue with it?

All of these feel very much like reasons to throw business related exceptions.
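To make that concrete, here’s a rough sketch of what I mean (all type and member names here are my own invention, not from the talk):

```csharp
using System;

// Hypothetical business exceptions - thrown when ProcessPayment
// can't do what its name promises.
public class PaymentDeclinedException : Exception
{
    public PaymentDeclinedException(string reason)
        : base("Payment declined: " + reason) { }
}

public class CustomerBlockedException : Exception
{
    public CustomerBlockedException(int customerId)
        : base("Customer " + customerId + " is blocked from placing orders.") { }
}

public static class PaymentService
{
    // Either the payment is processed, or an exception tells the caller why not.
    public static void ProcessPayment(int customerId, decimal amount,
                                      bool customerBlocked, bool cardApproved)
    {
        if (customerBlocked)
            throw new CustomerBlockedException(customerId);

        if (!cardApproved)
            throw new PaymentDeclinedException("card was refused by the provider");

        // ... take the payment and record it ...
    }
}
```

The caller then catches only the exceptions it can meaningfully act on – a PaymentDeclinedException might become a friendly message to the customer, while anything unexpected bubbles up.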

There was an awful lot more to this talk, and I feel bad distilling it to the above, but for me, this was the key gain from that talk – that I need to start thinking about exceptions in a wholly different way.

Day 3

Improving Quality with an Automated Build Process

Simon Brown – Coding the Architecture – @simonbrown

Another of those ‘it’s obvious, but I bet not many of us are doing it’ talks – this one focussed on Simon’s work on an internet banking site: what they used to help them improve quality, streamline their software delivery processes, and just generally make their lives easier.

There was a lot of discussion in this talk around tooling, and I shall have to do a thorough investigation of the tools available with regard to continuous integration and build management.  Simon used NAnt, NAnt Contrib, CruiseControl.NET, NCover, NCover Explorer and a number of other tools.

If I had one criticism of this talk, it’s that I’d have liked a bit more hands-on work with the tools, though I can only imagine how sensitive their code base is, so I can understand if that’s why it wasn’t possible.

Overall though a very thought provoking talk, and one that will see me doing an awful lot more reading and playing in the coming weeks.

Closing Thoughts?

What about the rest then?

The other talks I attended were all good too, though covering them all would have made this post stupidly long (well, it is stupidly long, but it would have been far more painful to read than it already is!).

One of the big things for me from all of this (and I apologise unreservedly to @ericnel for this), Azure – I’m just not convinced.  I’ve joined the UKAzure fan site in order to monitor developments on this, but it just doesn’t feel mature enough to be a production platform.  Strategically I can see where Microsoft are going with this, and over time clearly improvements will be made.  I’d just feel if I got involved now, I’d be beta testing in order to feedback for the next version of the product, and with so many other technologies available for me to play with, this one has been bumped off the list.

All in all though, a fantastic conference, and one that leaves me feeling significantly fired up about the technology that I’m going to be employing over the coming months – thanks!

MSBuild – the voyage of the noob

I’ve been determined to have a play with build automation/continuous integration for a while now and just have always found something more fun to play with (ORM, MVC, etc.), though I know as the team where I work move forward, there needs to be some control and some vision on how all of our work should hang together.  With that in mind, this weekend I started to read up on MSBuild (yup, I know there are other build managers out there, but I thought I’d start with that as my learning platform and move on from there).

Why do I need to modify the default build?

Why does anyone really I suppose, but I like what we get from it.  As we move forward, the following I think will be useful to us:

  • automating unit test runs on successful builds
  • auto-deploying to our development server
  • minifying and concatenating JavaScript and CSS
  • ensuring coding style rules are followed (once I setup a set of company rules for us)
  • other things I haven’t imagined… there will be lots!

So where do I learn?

This was my first stumbling block.  There are a lot of resources on MSBuild, and trudging through them to find the one that was right for my learning style and approach was a nightmare.  I thought I’d start out with the task that was at the forefront of my mind (concatenating/minifying JS/CSS), but I just didn’t find any resources that were straightforward (my failing more than the resources’, I’m sure!)

I’ve grabbed a few useful ones on my delicious bookmarks, and in particular, a significant thanks must go to the Hashimi brothers for their fantastic series on dnrTV and finishing up with some Stack Overflow discussion.

So what did I learn?

Firstly, a quick look at a post by Roy Osherove highlighted to me some of the tools available.  I found two other visual build tools: MSBuild Sidekick and MSBuild Explorer, both of which I found very useful in actually *seeing* the build process, but after watching those dnrTV vids, I thought I’d try something straightforward – concatenating CSS files into a single ‘deploy’ CSS file.

Get into the .csproj file

Unload your project, right-click on it, and select ‘Edit <projectname>.csproj’.


MSBuild projects seem to be broken down quite succinctly into Targets, Tasks, Items, and Properties.  For my particular need, I needed to look at Items and Targets.

The schema in MSBuild is incredibly rich – you get intellisense for the most part, but because you can define your own schema elements, you are never going to get 100% intellisense.

You have a number of different ‘DependsOn’ items (mostly defined in Microsoft.CSharp.targets file), so you can create tasks that hang onto some of these like so:
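Roughly, the hook looks like this (a sketch – the target name matches the description that follows; the property comes from Microsoft.CSharp.targets):

```xml
<PropertyGroup>
  <!-- prepend our target so it runs before the standard build steps -->
  <BuildDependsOn>
    ConcatenateCSS;
    $(BuildDependsOn)
  </BuildDependsOn>
</PropertyGroup>
```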


This is telling the build process that I have a target called ‘ConcatenateCSS’ that should run before the standard build targets, by prepending it to the ‘BuildDependsOn’ property (roughly speaking!)

I then created that target with the following:
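A sketch of that target, reconstructed from the description below (paths as described; the exact attributes I used may have differed slightly):

```xml
<Target Name="ConcatenateCSS">
  <ItemGroup>
    <!-- every .css file under _assets\css, except the generated site.css -->
    <InFiles Include="_assets\css\**\*.css" Exclude="_assets\css\site.css" />
  </ItemGroup>

  <Message Text="Concatenating CSS into _assets\css\site.css" Importance="high" />

  <!-- %(InFiles.Identity) batches over each file, accumulating into Lines -->
  <ReadLinesFromFile File="%(InFiles.Identity)">
    <Output TaskParameter="Lines" ItemName="Lines" />
  </ReadLinesFromFile>

  <WriteLinesToFile File="_assets\css\site.css" Lines="@(Lines)" Overwrite="true" />
</Target>
```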


Which, to me, looks bloody complex! I had to find some help on this one, naturally.  But essentially, we have created a target called ‘ConcatenateCSS’ which is going to execute before the build.  We create an ItemGroup (and this is where the intellisense falls over) called ‘InFiles’, and we tell it to include everything ending in .css under the _assets\css folder (it seems **\*.css is the wildcard for recursion too, though I may be wrong on this!), and we want to exclude _assets\css\site.css (more on this in a sec).

I then send a message (which will be seen in the ‘Output’ window during the build) to tell us it’s happening, and then use the combination of ‘ReadLinesFromFile’ and ‘WriteLinesToFile’.  The %(InFiles.Identity) in ReadLinesFromFile essentially turns this into a foreach loop (Identity is one of the MSBuild defaults): for each of the files we’ve identified, read the contents into the ‘Lines’ item.  We then write the whole lot back to our single file using the @(Lines) syntax.

Now, on each build, we generate a single css file (site.css) that our site can reference, but all edits go in via the broken files.  Yes, there are more elegant ways to do this, and yes, I will likely do that in time, but I’ve made a start!

Where next?

I’d be lying if I said I could do the above without some solid examples and help, so the next steps for me are creating a solid understanding of the core concepts, playing with the tools, and looking to solve some of our core business issues as we move forward in order to take some of the human elements out of the build process.  Obviously I have to investigate continuous integration and see where that all fits in too, but I’m happy with the start I’ve made.

A better way to check for validity in emails?

I’ve had a method that I’ve used from time to time to validate email addresses, trying to cater for the common problems that have been seen with addresses.  This weekend I had cause to look at it and thought there must be a better way of representing it all.

Couple of thoughts crossed my mind:

  1. I’m not throwing exceptions anywhere, and although I know the method (so I use it as I’d expect), perhaps I should be throwing a FormatException, or some others?
  2. It’d be easy to make this an extension method, but I guess it’d be an extension to System.String, and doesn’t really feel right as it serves such a focussed purpose.
  3. Should I be doing any other checks in the code that I’m not already?

I’ll have a read around and look at refactoring, but thought I’d post it here so that I have a record of the ‘before’ and ‘after’ views.

public static string ValidateEmail(string email, out string error)
{
	// requires: using System.Text.RegularExpressions;
	error = "";

	// Pre-formatting steps
	email	= email.Trim().Replace(" ", "");
	email	= email.Replace(",", ".");												// mostly, commas are full stops gone wrong
	email	= (email.EndsWith(".")) ? email.Substring(0, email.Length - 1) : email;	// kill any full stop at the end of an address
	email	= email.Replace(@"""", "");												// remove " in the email address
	email	= (email.StartsWith("'")) ? email.Substring(1) : email;					// remove ' at the start of the address
	email	= (email.EndsWith("'")) ? email.Substring(0, email.Length - 1) : email;	// remove ' at the end of the address

	// STEP 1	- No '@' symbol in email
	if (!email.Contains("@"))
	{
		error = "Email contains no '@' symbol.";
		return "";
	}

	// STEP 2	- More than one '@' symbol in email
	if (email.Split('@').Length > 2)
	{
		error = "Email contains too many '@' symbols.";
		return "";
	}

	// STEP 3	- No .com, etc. at end of address
	//			- Invalid characters ()<>,?/\|^!"£$%^&* in address
	Regex regex = new Regex(@"^[-\w._%+']+@[-\w.]+\.[\w]{2,4}$", RegexOptions.IgnoreCase);
	if (!regex.IsMatch(email))
	{
		error = "Email address appears invalid.";
		return "";
	}

	return email;
}

Orphaned SQL Server Users

Been blogged about all over the place, but I wanted a central place to remember it.

After restoring a database from another server, the database user account can often become orphaned from its SQL Server login.

The following sorts it:

sp_change_users_login 'update_one', 'orphaned_login', 'sql_username'

Job’s a good ’un.

Now I never need hunt again – huzzah 🙂
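Worth noting for the future: sp_change_users_login is deprecated in more recent versions of SQL Server, and the recommended equivalent (using the same placeholder names as above) is:

```sql
-- re-map the orphaned database user to the server login
ALTER USER orphaned_login WITH LOGIN = sql_username;
```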

Interesting times and justifying ones existence :)

Well, wasn’t yesterday an interesting day!  Had a conversation with a friend about what it is I actually do.  They didn’t feel that I was selling myself effectively enough via this blog, though thankfully this blog was never about that – it (I hope) shows that I’m keen to learn, keen to do more, and never content with knowing ‘enough’.  They asked in particular for me to clarify what work I had done whilst working on suite-e, and looking back over our work schedules, project documents, and just generally over the functionality in there, it turned out to be quite a list:

  • User Controls / Custom Server Controls
  • Using .net forms and role-based security with the membership and role providers
  • Use of Enterprise Library application blocks for Data Access, Exception Management and Logging
  • Extension methods
  • Significant Ajax use (both ASP.NET Ajax and Telerik Ajax wrappers)
  • Linq (minor)
  • Facade design pattern (5 tier solution, UI -> Business Facade -> Business -> Dal Facade -> Dal)
  • Significant use of inheritance throughout the data and UI layers
  • Interface use (minor, where necessary)
  • Custom/3rd Party Controls (Telerik)
  • Hand rolled URL routing for friendly URLs (sitting atop
  • Use of Themes and Masterpages, including browserCaps usage to allow more effective CSS targeting across browsers
  • jQuery use (my input minor)
  • CSS
  • Xhtml
  • > 80 table relational data model, significantly more stored procs, cursors, temp tables
  • Web services to authenticate licensing of the product
  • Windows services to manage email send from the CRM module
  • Significant and ongoing refactoring, including FxCop use when readying the solution for Microsoft testing
  • Upgrade from .net 1.1 through 2.0, and then incorporating 3.5 elements when applicable

Suite-e as a product has obviously had more than one developer work on it, though it felt good whilst writing this up to realise how far I’d come and what technologies I’d learned and put into practice during the implementation.  I essentially architected the vast majority of the product, both code and SQL schema from its early days through to the modular CMS, Product Catalogue, E-commerce, CRM and Events management solution that it is today, with over 170 files and 51k lines of code across 6 projects, over 80 tables, significantly more stored procedures… the list goes on 🙂

It’s entirely understandable that a blog gives people a perception of who you are – indeed this is a very personal blog, so it should – though I hope it also gives folks a perception of the sort of developer I am: one that will keep ploughing on and learning as much as I’m able, because that is what’s ‘fun’ to me.