Localisation of your ASP.NET MVC 3 Routes

Our core product has recently undergone a localisation exercise as we plan to launch it in other European countries.  One of the first things we needed was to localise the routes on a per-country basis.

We started out remarkably lucky in that every route we delivered in the app was already custom.  We didn’t like the default route handler’s (Controller/Action/{stuff}) URL structure, and although we could have gone down the custom route handler approach, there were a few things that steered us away from that.

  1. we wanted full flexibility from an SEO point of view – as we dev’d we had no idea what would work well, so having each potential route customisable to whatever our SEO company desired was going to be a bonus.
  2. longer term plans will see us delivering a content management system to deliver an awful lot of the content – at that point, we may well be delivering custom routes via the DB too, so having a flexible routing system was essential.

Why not the default routes?

An example of some of the ‘out of the box’ routes we’d have gotten with the default route handler, versus what we actually wanted:

/MyAccount/UpdatePersonalDetails –> my-account/personal-details

/Winners/ByGame/{GameName} –> winners/by-game/{game-name}

Although the conversion was generally a hyphen between capitals and a full lowercasing, we found that replacing the default route handler with a custom ‘HyphenateAndLowercaseRouteHandler’ just didn’t answer enough of our use cases.
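To illustrate why: here’s a minimal sketch of the sort of conversion such a handler would do (the helper below is hypothetical, not our production code):

```csharp
using System.Text.RegularExpressions;

public static class RouteHyphenator
{
    // Insert a hyphen before each capital that follows a lowercase
    // letter or digit, then lowercase the lot.
    public static string Hyphenate(string pascalCased)
    {
        return Regex.Replace(pascalCased, "(?<=[a-z0-9])(?=[A-Z])", "-")
                    .ToLowerInvariant();
    }
}
```

Hyphenate("UpdatePersonalDetails") gives "update-personal-details" – close, but not the "personal-details" we actually wanted, which is exactly why a blanket convention couldn’t cover all of our cases.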

I’m sure Google, Bing and the other search engines will happily look at Pascal-cased words and discern them, though I as a human find it easier to read /our-new-game-has-paid-out-3-million-so-far than /OurNewGameHasPaidOut3MillionSoFar.

One of the big selling points for not using the default routing was flexibility – we can change the routes without having to refactor/rename controllers or action methods, so there is a real separation there.

So, we started to build up our routing table with custom entries for each controller/action such as:

 routes.MapRoute("GameHistory", "game-history/{gameName}/{room}",
 						new
 						{
 							controller = "BingoGamesHistory",
 							action = "Index",
 							gameName = "Bandit",
 							room = "the-ship"
 						}, namespaces);

and to date, across the whole front end application we have 183 custom routes.

Localising the Routes

It almost feels sham-like to be writing a blog post about this, though I still see questions on Stack Overflow about it, so I thought I’d write it up.

What we did in the above example was replace the route string (“game-history/{gameName}/{room}”) with a localised resource – we now have a LocalisedRoutes resource file which has something like the following:


and the routes.MapRoute command in global.asax replaces the string representation of the route with LocalisedRoutes.GameHistory_General.
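As a sketch, mirroring the earlier registration (the resource class is the standard .resx-generated one):

```csharp
// LocalisedRoutes.GameHistory_General holds the culture-specific route
// template, e.g. "game-history/{gameName}/{room}" for en-GB.
routes.MapRoute("GameHistory", LocalisedRoutes.GameHistory_General,
                new
                {
                    controller = "BingoGamesHistory",
                    action = "Index",
                    gameName = "Bandit",
                    room = "the-ship"
                }, namespaces);
```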

Obviously from this point on, it’s then just a matter of adding a LocalisedRoutes.GameHistory.it, or LocalisedRoutes.GameHistory.es etc. to get the representation of the routes for those countries, and in our CI deployment the plan is to alter the web.config depending upon the deployment:

<globalization uiCulture="it-IT" culture="it-IT" />
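Resource fallback then does the rest: with the uiCulture set to it-IT, .NET picks the Italian resource file if one exists and falls back to the neutral one otherwise. Something like this (the Italian string below is purely illustrative, not a real translation from our codebase):

```
LocalisedRoutes.resx     GameHistory_General = "game-history/{gameName}/{room}"
LocalisedRoutes.it.resx  GameHistory_General = "cronologia-giochi/{gameName}/{room}"
```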

Job’s a good ’un!

What next?

As I say, the next big phase of our project will include a content management system, so may well require us to have runtime routes injected into the routing table – I’ve never done it, but it’s something to be aware of. 

Sample Project

I’ve put together a simple project that demonstrates the above, which should be something folks can base their solutions upon if they are having difficulty with the description.  The example only localises routes, so the UI still remains in English, but you get the idea.

Download the example at Google Code

Unit testing complex scenarios – one approach

This is another of those ‘as much for my benefit’ as it is for the community posts.  On a sizeable project at work we’ve hit a ‘catch up on tests’ phase.  We don’t employ TDD, though obviously understand that testing is very important to the overall product (both in confidence on release, and confidence that changes to the code will break the build if functionality changes).  Our code coverage when we started this latest phase of work was terrible (somewhere around 20% functionality coverage with 924 tests) – after a couple of weeks of testing we’re up to fairly high coverage on our data access/repositories (>90%) and have significantly more tests (2,600 at last count).

We are following a UI –> Service –> Repository type pattern for our architecture which works very well for us – we’re using IoC, though perhaps only because of testability – the loose coupling benefits are obviously there.

We’re now at the point of testing our service implementations, and have significantly more to think about.  At the data access layer, external dependencies were literally only the database.  At service layer, we have other services as external dependencies, as well as repositories, so unit testing these required considerably more thought/discussion.  Thankfully I work with a very good team, so the approach we’ve taken here is very much a distillation of the outcomes from discussion with the team.

Couple of things about our testing:

Confidence is King

The reasons we write unit tests are manifold, but if I were to try to sum it up, it’s confidence.

Confidence that any changes to code that alter functionality break the build.

Confidence that the code is working as expected.

Confidence that we have solidly documented (through tests) the intent of the functionality and that someone (more often than not another developer in our case) has gone through a codebase and has reviewed it enough to map the pathways through it so that they can effectively test it.

Confidence plays a huge part for us as we implement a Continuous Integration process, and the longer term aim is to move towards Continuous Delivery.  Without solid unit testing at its core, I’d find it difficult to maintain the confidence in the build necessary to be able to reliably deploy once, twice or more per day.

Test Functionality, Pathways and Use Cases, Not Lines of Code

100% code coverage is a lofty ideal, though I’d argue that if that is your primary goal, you’re thinking about it wrong.  We have often achieved 100% coverage, but done so via the testing of pathways through the system rather than focussing on just the lines of code.  We use business exceptions and very much take the approach that if a method can’t do what it advertises, an exception is thrown.

Something simple like ‘ValidateUserCanDeposit’ can throw the following:

/// <exception cref="PaymentMoneyLaunderingLimitException">Thrown when the user is above their money laundering limit.</exception>
/// <exception cref="PaymentPaymentMethodChangingException">Thrown when the user is attempting to change their payment method.</exception>
/// <exception cref="PaymentPaymentMethodExpiredException">Thrown when the expiry date has already passed</exception>
/// <exception cref="PaymentPaymentMethodInvalidStartDateException">Thrown when the start date hasn't yet passed</exception>
/// <exception cref="PaymentPlayerSpendLimitException">Thrown when the user is above their spend limit.</exception>
/// <exception cref="PaymentPlayerSpendLimitNotFoundException">Thrown when we are unable to retrieve a spend limit for a user.</exception>
/// <exception cref="PaymentOverSiteDepositLimitException">Thrown when the user is over the sitewide deposit limit.</exception>

and these are calculated often by calls to external dependencies (in this case there are 4 calls away to external dependencies) – the business logic for ‘ValidateUserCanDeposit’ is:

  1. Is the user over the maximum site deposit limit?
  2. Validate the user has remaining spend based upon responsible gambling limits
     – paymentRepository.GetPlayerSpendLimit
     – paymentRepository.GetUserSpendOverTimePeriod
  3. Get the current payment method
     – paymentRepository.GetPaymentMethodCurrent
     – paymentRepository.GetCardPaymentMethodByPaymentMethodId
     – OR paymentRepository.GetPaypalPaymentMethodByPaymentMethodId
  4. If we’re changing payment method, ensure we’re not over the money laundering limit

So testing a pathway through this method, we can pass and fail at each of the lines listed above.  A pass is often denoted as silence (our code only gets noisy when something goes wrong), but each of those external dependencies themselves can throw potentially multiple exceptions.

We employ logging of our exceptions so again, we care that logging was called.

Testing Framework and Naming Tests

NUnit is our tool of choice for writing unit tests – the syntax is expressive, and it generally allows for very readable tests.  I’m a big fan of the test explaining the author’s intent – being able to read and understand unit tests is a skill for sure, though once you’ve read a unit test, having it actually do what the author intended is another validator.

With that in mind, we tend to take the approach ‘MethodUnderTest_TestState_ExpectedOutcome’ approach.  A few examples of our unit test names:

  • GetByUserId_ValidUserId_UserInCache_GetVolatileFieldsReturnsValidData_ReturnsValidUserObject
  • GetPlaymateAtPointInTime_GivenCorrectUserAndDate_ValidPlaymateShouldExist
  • GetCompetitionEntriesByCompetitionId_NoEntries_ShouldReturnEmptyCollection
  • GetTransactionByUserIdAndTransactionId_DbException_ShouldThrowCoreSystemSqlException

Knowing what the author intended is half the battle when coming back to a test 3 months from now because it’s failing after some business logic update.

Mocking Framework

We use Moq as a mocking framework, and I’m a big fan of the power it brings to testing – yup, there are quite a number of steps to jump through to effectively setup and verify your tests, though again, these add confidence to the final result.

One note about mocking in general, and any number of people have written on this in far more eloquent terms than I.  Never just mock enough data to pass the test.

If we have a repository method called ‘GetTransactionsByUserAndDate’, ensure that your mocked transactions list also includes transactions from other users as well as transactions for the same user outside of the dates specified – getting a positive result when that is the only data that exists is one thing, getting a positive result when you have a diverse mocked data set with things that should not be returned again adds confidence that the code is doing specifically what it should be.
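A hedged sketch of the idea (the names and DTOs here are hypothetical, and how the seeded rows reach the repository under test depends on your fixture):

```csharp
// Deliberately include rows that must NOT be returned: a different user,
// and the right user outside the requested dates.
var seeded = new List<TransactionDto>
{
    new TransactionDto { UserId = ValidUserId, Date = new DateTime(2011, 3, 10) }, // expected
    new TransactionDto { UserId = OtherUserId, Date = new DateTime(2011, 3, 10) }, // wrong user
    new TransactionDto { UserId = ValidUserId, Date = new DateTime(2009, 1, 1) },  // outside dates
};

var results = repository.GetTransactionsByUserAndDate(
    ValidUserId, new DateTime(2011, 3, 1), new DateTime(2011, 3, 31));

// A single row back proves the filtering is exercised, not just the happy path.
Assert.That(results.Count, Is.EqualTo(1));
```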

Verifying vs. Asserting

We try very much to maintain a single assert per test (and only break that when we feel it necessary) – it keeps the mindset on testing a very small area of functionality, and makes the test more malleable/less brittle.

Verifying, on the other hand (a construct supported by Moq and other frameworks), is something that we are more prolific with.

For example, if ‘paymentRepository.GetPlayerSpendLimit’ above throws an exception, I want to verify that ‘paymentRepository.GetUserSpendOverTimePeriod’ is not called – I also want to verify that we logged that exception.

The Assert from all of that is that the correct exception is thrown from the main method, but the verifies that are called as part of that test add confidence.

In our [TearDown] method we tend to place our ‘mock.Verify()’ calls, to ensure that everything marked verifiable is checked after each test.
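In NUnit terms that looks something like this (a sketch; the mock fields are whatever your fixture declares):

```csharp
[TearDown]
public void VerifyMocks()
{
    // Moq's parameterless Verify() fails the test if any setup marked
    // .Verifiable() was never actually invoked - every test gets this
    // check for free.
    paymentRepository.Verify();
    logger.Verify();
}
```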

Enough waffle – where’s the code?

That one method above ‘ValidateUserCanDeposit’ has ended up with 26 tests – each one models a pathway through that method.  There is only one success path through that method – every other test demonstrates error paths.  So for example:

[Test]
public void ValidateUserCanDeposit_GetPaymentMethodCurrent_ThrowsPaymentMethodNotFoundException_UserBalanceUnderMoneyLaunderingLimit_ShouldReturnPaymentMethod()
{
	var user = GetValidTestableUser(ValidUserId);
	user.Balance = 1m;

	// remaining spend
	paymentRepository.Setup( x => x.GetPlayerSpendLimit(ValidUserId)).Returns( new PlayerSpendLimitDto { Limit = 50000, Type = 'w' }).Verifiable();
	paymentRepository.Setup( x => x.GetUserSpendOverTimePeriod(ValidUserId, It.IsAny<DateTime>(), It.IsAny<DateTime>())).Returns( 0 ).Verifiable();

	// current payment method
	paymentRepository.Setup( x => x.GetPaymentMethodCurrent(ValidUserId))
						.Throws( new PaymentMethodNotFoundException("") ).Verifiable();

	IPaymentMethod paymentMethod = paymentService.ValidateUserCanDeposit(user, PaymentProviderType.Card);

	Assert.That(paymentMethod.CanUpdatePaymentMethod, Is.True);
	paymentRepository.Verify( x => x.GetCardPaymentMethodByPaymentMethodId(ValidCardPaymentMethodId), Times.Never());
	LogVerifier.VerifyLogExceptionCalled(logger, Times.Once());
}

That may seem like a complex test, but I’ve got the following from it:

  • The author’s intent from the method signature: upon calling ValidateUserCanDeposit,

    a call within that to GetPaymentMethodCurrent has thrown a PaymentMethodNotFoundException,

    at that point, the user’s balance is below the money laundering limit for the site,

    so the user should get a return that indicates that they can update their payment method.

  • That those methods that I expect to be hit *are* hit (using Moq’s .Verifiable())
  • That those methods that should not be called aren’t (towards the end of the test, Times.Never() verifies this)
  • That we have logged the exception once and only once

Now that this test (plus the other 25) is in place, if a developer is stupid enough to bury an exception or remove a logging statement, the build will fail.

Is this a good approach to testing?

I guess this is where the question opens up to you guys reading this.  Is this a good approach to testing?  The tests don’t feel brittle.  They feel like they’re focussing on one area at a time.  They feel like they are sure about what is going on in the underlying code.


Ways to improve them?

How do you unit test in this sort of situation? What software do you use? What problems have you faced?  Keen to get as much information as possible and hopefully help inform each other.

I’d love to get feedback on this.  It feels like it’s working well for us, but that doesn’t necessarily mean it’s right/good.

MSBuild, YuiCompressor and making CSS and Javascript titchy

Both Google (via PageSpeed) and Yahoo (via YSlow and their Developer Network guidelines), among many others, tell us that how quickly our page loads and how optimised the site is for fast download matters – Google have announced that page speed now factors into their rankings, and Yahoo have a number of guidelines highlighting the same thing.

I’ve done a lot of work in other areas (distributed caching to avoid hitting the DB, output caching of pages to avoid any unnecessary parsing, etc.) but I thought I’d start to focus on the performance of the web side of things.

After almost 2 hours googling and a final chat on twitter, I thought I’d have a look at YUICompressor first.  I quite like the ‘being part of the build’ nature of it all, and it fits in with my desire to learn a little more about MSBuild to help in the ongoing mission to ensure we have a good Continuous Integration process within the workplace.

The documentation for YUI isn’t bad so long as you want to piggyback onto the Pre and Post-build events (found from the properties window in your web project/build tab), but this has never really felt clean to me, so I thought I’d piggyback onto the ‘AfterBuild’ target within the .csproj file instead.

Here’s what I did.

Download and Setup YUICompressor

YUICompressor is available from here, and comes as a zip with a couple of DLLs in it.

In order to tie in with the CI process, I keep a ‘lib’ folder within my solution folder for our project for all external dependencies and these get checked into source control along with your solution – one of the early goals of CI is repeatability, and including (and referencing) local resources allows you to build the project on any clean machine.

Tie into your MSBuild file for your project

We use a ‘Web.Resources’ project, which acts as a pseudo-cdn, so all static resources (scripts, css, images, flash) go into this and keep the core web solution a little cleaner.  It’s another task that assists in speeding up your site too as some older browsers have a limit of 2 concurrent requests per domain – splitting your static resources into another domain (even a sub domain) increases the concurrency of downloads and hence speeds up load times.

In Visual Studio, right click on the project containing your CSS/JavaScript, select ‘Unload Project’, then right click again and choose ‘Edit’.  You’ll be presented with the .csproj file (which, as anyone reading this will already know, is also an MSBuild file).

Towards the end of the file, you will see a section like this:

<!-- To modify your build process, add your task inside one of the targets below and uncomment it. 
     Other similar extension points exist, see Microsoft.Common.targets.
<Target Name="BeforeBuild">
</Target>
<Target Name="AfterBuild">
</Target>
-->
We’re going to move the ‘AfterBuild’ target outside of the comment and replace it with the following:

  <UsingTask TaskName="CompressorTask" AssemblyFile="../lib/Yahoo.Yui.Compressor/Yahoo.Yui.Compressor.dll" />
  <Target Name="AfterBuild">
    <PropertyGroup>
      <MainSiteCssOutputFile Condition=" '$(MainSiteCssOutputFile)'=='' ">tombola/css/tombola.compiled.css</MainSiteCssOutputFile>
      <MicroSiteCssOutputFile Condition=" '$(MicroSiteCssOutputFile)'=='' ">cinco/css/cinco.compiled.css</MicroSiteCssOutputFile>
      <!--<JavaScriptOutputFile Condition=" '$(JavaScriptOutputFile)'=='' ">JavaScriptFinal.js</JavaScriptOutputFile>-->
    </PropertyGroup>
    <ItemGroup>
      <!-- Single files, listed in order of dependency -->
      <MainSiteCssFiles Include="$(SourceLocation)css/core/reset.css" />
      <MainSiteCssFiles Include="$(SourceLocation)css/core/site.css" />
      <MainSiteCssFiles Include="$(SourceLocation)tombola/css/site.css" />
      <MicroSiteCssFiles Include="$(SourceLocation)css/core/reset.css" />
      <MicroSiteCssFiles Include="$(SourceLocation)css/core/site.css" />
      <MicroSiteCssFiles Include="$(SourceLocation)cinco/css/site.css" />
      <!--<JavaScriptFiles Include="$(SourceLocation)jquery-1.3.2.js"/>-->
      <!-- Alternatively, all the files at once (they will be handled, I assume, alphabetically):
      <CssFiles Include="$(SourceLocation)*.css" />
      <JavaScriptFiles Include="$(SourceLocation)*.js" />
      -->
    </ItemGroup>
    <CompressorTask CssFiles="@(MainSiteCssFiles)" DeleteCssFiles="false" CssOutputFile="$(MainSiteCssOutputFile)" CssCompressionType="YuiStockCompression" PreserveAllSemicolons="True" DisableOptimizations="Nope" EncodingType="Default" LineBreakPosition="-1" LoggingType="ALittleBit" ThreadCulture="en-gb" IsEvalIgnored="false" />
    <CompressorTask CssFiles="@(MicroSiteCssFiles)" DeleteCssFiles="false" CssOutputFile="$(MicroSiteCssOutputFile)" CssCompressionType="YuiStockCompression" PreserveAllSemicolons="True" DisableOptimizations="Nope" EncodingType="Default" LineBreakPosition="-1" LoggingType="ALittleBit" ThreadCulture="en-gb" IsEvalIgnored="false" />
  </Target>

You’ll see there’s a lot going on there, so let’s break it down.  Firstly, you will see I’ve commented out the JavaScript minification and concatenation – you should (after working through the CSS stuff) be able to come up with your own use case for JavaScript on your site.


  <UsingTask TaskName="CompressorTask" AssemblyFile="../lib/Yahoo.Yui.Compressor/Yahoo.Yui.Compressor.dll" />

This is basically creating a ‘task’ by pointing to the functionality within an assembly (in this case, our local Yahoo.Yui.Compressor install).  In my own head I have this down as a ‘using’ statement for MSBuild (in a similar way to Import), though I know you can do a lot more with the UsingTask MSBuild command that I haven’t yet delved into.

  <Target Name="AfterBuild">
    <PropertyGroup>
      <MainSiteCssOutputFile Condition=" '$(MainSiteCssOutputFile)'=='' ">tombola/css/tombola.compiled.css</MainSiteCssOutputFile>
      <MicroSiteCssOutputFile Condition=" '$(MicroSiteCssOutputFile)'=='' ">cinco/css/cinco.compiled.css</MicroSiteCssOutputFile>
    </PropertyGroup>

This basically builds up the variables $(MainSiteCssOutputFile) and $(MicroSiteCssOutputFile). I could have just as easily called them $(Bob) and $(Fred), though I’m a fan of self-documenting variables 😉

      <!-- Single files, listed in order of dependency -->
      <MainSiteCssFiles Include="$(SourceLocation)css/core/reset.css" />
      <MainSiteCssFiles Include="$(SourceLocation)css/core/site.css" />
      <MainSiteCssFiles Include="$(SourceLocation)tombola/css/site.css" />
      <MicroSiteCssFiles Include="$(SourceLocation)css/core/reset.css" />
      <MicroSiteCssFiles Include="$(SourceLocation)css/core/site.css" />
      <MicroSiteCssFiles Include="$(SourceLocation)cinco/css/site.css" />

A few arrays of items to be included in the compression

    <CompressorTask CssFiles="@(MainSiteCssFiles)" DeleteCssFiles="false" CssOutputFile="$(MainSiteCssOutputFile)" CssCompressionType="YuiStockCompression" PreserveAllSemicolons="True" DisableOptimizations="Nope" EncodingType="Default" LineBreakPosition="-1" LoggingType="ALittleBit" ThreadCulture="en-gb" IsEvalIgnored="false" />
    <CompressorTask CssFiles="@(MicroSiteCssFiles)" DeleteCssFiles="false" CssOutputFile="$(MicroSiteCssOutputFile)" CssCompressionType="YuiStockCompression" PreserveAllSemicolons="True" DisableOptimizations="Nope" EncodingType="Default" LineBreakPosition="-1" LoggingType="ALittleBit" ThreadCulture="en-gb" IsEvalIgnored="false" />

The clever stuff 🙂 This is where the CompressorTask picks up the CSS files to compress/join, @(MainSiteCssFiles), and compresses them down into the CssOutputFile, $(MainSiteCssOutputFile).

The options on this CompressorTask are numerous, and I’d recommend referring to the main YUICompressor site to get the settings correct for your environment – I’ve stuck with the defaults, but you can increase the amount of compression, delete original files, etc. etc. etc.

What about the Javascript?

I’ve left the javascript parts of the CompressorTask commented out in the main post above, and I’m about to go and play with those now, though it seems like it’ll be pretty much an identical process to that above.

Does it make any difference?

I’ve moved from an 85 rating to an 87 rating on YSlow – wow, you say, was it really worth it?  When we know that even an extra second on load speed can significantly affect revenue for companies (god, I wish I could find decent resources to back that up after seeing people show them in talks on this sort of thing) it very much is an ‘every little helps’ approach.  The jump of 2 was without the concat or join of JavaScript, so I hope to achieve perhaps another point there too.  From there, smushing of images, perhaps spriting up images that can be, and generally just trying to eke out every last ounce of performance without any additional hardware costs.

What’s next?

After chatting to a few of those ‘clever people™’ that I follow on Twitter, in particular @red_square and @stack72, they both recommended Chirpy, which looks very interesting, and I really like the idea of .LESS and the concept of variables within CSS, so I may well look at that next (it will need more buy-in from the team as we’ll all have to install it, but that’s never been a problem).

At least for now, our build is automated with the optimised versions of our static resources.

We’re hiring, and we quite like .net developers

I normally wouldn’t use my blog for this sort of thing, though we don’t really have an outlet on our corporate site, so this is the easiest place to do it.  I quite like working at my employer, so thought I’d use this as one of the channels to get the job advertised – I’ve tweeted about it too, please RT if you see it!

We’re hiring, and unlike Expensify, we do really quite like .net developers.  We do our best to ‘do good things’, and although we don’t pass the Joel Test – a score of roughly 7ish, with an aim to improve upon our build automation/daily builds/continuous deployment – I personally find it a good place to work and learn a lot.  There is a team here who care about the work they undertake, they try to learn from one another, and they do their best to leave the code base in a better state than when they found it…

We like to attend UK Developer community events (@NEBytes, @DeveloperDay, @scottishdevs, etc.) and try to better ourselves in any way we can find.

Who are we?

We’re an online bingo company based in Sunderland, though I wouldn’t let any of that put you off 😉  Bingo isn’t my life, though as with any business, you can love the job without having to love the subject matter…

There’s a decent sized team here – 8 .net devs (2 ‘game’ guys, 6 web app), infrastructure team (5), flash/client team (5), and a creative team (4) all contribute so it’s a pretty good place to bounce ideas.

We’re re-architecting the current site (MVC3 front end, business/data access tier, DI/IoC, Linq, distributed caching) and that will be going live fairly soon. 

We’re moving into Europe with similar technology and this role would focus on the delivery of that.

The job spec – verbatim

Job Summary

In order to support the growing business, new developers will be required to work on projects building all new website applications for the UK and future European businesses.

Job Responsibilities & Tasks:

A technical specialist within the tombola Operations team, focusing on providing:

  • Website application development.
  • Backend application development.
  • SQL development.
  • Problem and Incident management.
  • Support and maintenance of existing software
  • Keep abreast of industry developments in the technical arena and make recommendations to management where appropriate.


  • Website application development.
  • Developing applications for Windows based systems and Web Applications using IIS 7+.
  • Developing applications accessing an SQL Server backend.
  • Object Oriented programming and design.
  • Visual Studio


  • C#
  • TSQL
  • ASP.NET MVC 3 / 2
  • JavaScript / jQuery
  • Ajax
  • LINQ
  • Internationalisation of .net apps
  • Continuous Integration / Build Management
  • Understanding of theory and application of design patterns


  • Must be passionate about their chosen career path, you’ll be working with people that love what they do – you should too.
  • Must have strong team working and communication skills as well as being confident in working alone.
  • Must be able to work well under pressure, and meet deadlines.
  • Must be highly motivated.
  • Must have good time management skills and be able to multi-task.

Details, details?

We’re looking for 2 people, and salary range will very much depend upon skillset but realistically we’re looking at £30-35k.


Please get in touch with me initially on twitter and I’ll give you corporate email addresses to find out more.

IIS, Optimising Performance, 304 status codes, and one stupid browser…

Well, I thought I’d start my play in earnest after last week’s DevWeek, experimenting with various performance improvements that came out of Robert Boedigheimer’s (@boedie) talk.

First up, a play with expiry of content.  We host all of our ‘assets’ (images, css, javascript, and flash) from a content delivery network style setup.  We don’t currently use a CDN for anything other than our games, but the concept is the same – so long as the assets are hosted on a separate URL to the content, then the location of those assets isn’t an issue.

Setting Up Expiry for Assets in IIS 7

In IIS manager left click on the website, folder or indeed file that you wish to set expiry on.


From the ‘IIS’ section in the main pane (make sure you’re on Features View for this) double click on ‘HTTP Response Headers’.


You will see in the right hand pane the option to ‘Set Common Headers…’


This gives you the following dialog:



You can see here, I’ve enabled ‘Expire Web content’ and am expiring it after 20 days.  You can set a fixed expiry time too, though I’ve never done this – I can imagine it’s more maintenance overhead to ensure you always keep content ‘cached’ at various points in time, though it’s there if your use case demands it.


Fiddlers 3(04)

Ok, so that’s all yes? Well, yup, that’s it.  Fire up fiddler and load up one of your assets – you should see something like the following:


So, all is well on first load – we get a status code 200 and we can see that caching is enabled and has a max-age of 1728000 (20d * 24h * 60m * 60s), so we know that our next request should be cached and shouldn’t hit the server.

Hit Refresh…


Erm… why are you still hitting my server even if only to be told 304 (not modified), resource hasn’t changed…  So why the round trip?

Turns out hitting refresh on any browser will indeed make that round trip – the refresh button is almost an override for the local browser cache and says ‘go and double check for me’ – I’ve tested in IE8/9, Firefox and Chrome and they all do this.
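What the refresh actually sends is a conditional GET – the browser asks the server to revalidate its cached copy, and the server answers 304 with no body.  An illustrative exchange (headers trimmed, dates made up):

```
GET /css/site.css HTTP/1.1
If-Modified-Since: Tue, 15 Mar 2011 09:00:00 GMT

HTTP/1.1 304 Not Modified
Cache-Control: max-age=1728000
```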

So how do I avoid the round trip for non modified resources?

Turns out you’re already doing it.  Instead of clicking refresh, click into the address bar and press return.  You will find the resource loads up again no problem, but there is now no round trip to the server and no 304 response.  Well, it loads up no problem in IE or Firefox, but there’s another pesky browser on the block…

Hmmm, Google Chrome? Why won’t you play ball?

Seems that pressing return in Chrome behaves in the same way as if you’d hit the ‘refresh’ button and still issues the request (and naturally gets the 304).

Should I worry?

Well, no – we have only been testing single resources here.  If I navigate to a page by typing in the URL and pressing return (first run) you’ll get the usual 200 status codes, and you’ll get the usual assets caching.  If you press return to ‘reload’ that page in chrome, then you will get all of the round trips back and forth with the corresponding 304s.

But, if you navigate to that page (after you’ve had your 200s) via either a bookmark or a Google search (essentially, via a hyperlink) then job’s a good ’un and it doesn’t issue the requests.

I’m not sure why Chrome behaves differently to the other browsers in this regard – I could understand if it didn’t have a ‘refresh’ button, but it has.


I write this up as it caused an hour or so’s pain as I played around with IIS caching, having recently switched to Chrome as my default browser.  It was only by chance that I tried the assets in the other browsers when Chrome wasn’t doing as it claimed it should be that I realised it was just a Chrome side effect, and it really only came into play when I was ‘debugging’.

Hope it’s useful to others.

DevWeek 2011 – A Week of Geek

Another year, another incredibly lucky software developer gets to spend a week at the DevWeek conference.  For regular readers of the blog (hey mum!) you’ll know I attended last year and you can see my write up of it here.

I look at my own career over the past 12 months and realise just how much DevWeek helped hone and solidify a lot of my thinking – I’ve moved on massively since then and have put a great deal of last year’s learning into practice within products that I’ve worked on, so it was definitely a worthwhile spend (thankfully my employer also thinks so!)

I had to choose my sessions far more carefully this year – a downside of returning, I feel, as there was a lot of repetition in the sessions – but that said, there was still a great deal of new content (and I enjoyed @JeffRichter’s talk on exceptions so much last year that I re-attended this year – sad, eh!).

Day 1 Highlights

Ajax and ASP.NET MVC

K. Scott Allen – @OdeToCode

The lightbulb went on at the start of this talk and just didn’t switch off – I’ve not done so much on the Ajax side of MVC, though he made it all incredibly easy, and although I’ve watched the videos he delivered via pluralsight, it was far easier to contextualise and sank in better being in the room (and able to ask those stupid questions!).

He covered the client validation updates in MVC3 (the main reason we switched to MVC3 when it first launched), and covered a great deal around the topic as folks asked questions.

Massive tangibles to take directly onto a project I’m working on and I look forward to getting some of this into code asap.

NoSQL, Is it the future?

Gary Short – @GaryShort

Oh how I wish I’d gone to that day on NoSQL last year in Scotland…  Presented brilliantly (as ever – after seeing his presentations at DDD, this is the usual craic with Gary).  He covered the various products available, the general use cases for them, and obviously the history of NoSQL.  I could waffle more but I’d only highlight my ignorance – realistically the cleverness in the talk, and the outcome for me, was that I have to try this: I have to grab a small project at work that I can pick up in my spare time and experiment with it.  I work in an environment where it’s easy to test the limits of our RDBMS solution as we have a massive amount of traffic, especially writes, and I think it’d be worthy of investigation – definitely more on this to follow from me!

Day 2 Highlights

Model Binding in ASP.NET MVC 3

K. Scott Allen – @OdeToCode

I’m going to start to sound like an @OdeToCode fan boy, but his presentation style and knowledge of the topic when asked a question just rocks – one of those people who qualifies in my book as a Dev Rock Star, so catching up with him at lunch to chat through the questions I’d asked during the session was just superb!

There was a great deal of validation (personal, not model) in this session on the ways we’re currently doing model binding, though again, there were significant tangibles around the extensibility in MVC3 that may well make certain aspects of what we do around model binding easier.

Modern Javascript

K. Scott Allen – @OdeToCode

Ok, ok, it’s turning into the Scott Allen show… doh!  I can only hope the reticence to ask questions during this session meant that others were similarly thinking ‘Oh Christ, I don’t do any of this!’.

I got that functions are heavily important (I sort of got this from consuming jquery, I’d just not written my own in the same vein).  Closures are an area I really need to read up on, and in general I just need to work at this one.

For any developer who consumes jQuery and just ‘gets things done’, but has never written their own class, implemented their own closure, etc., I’d say do as I plan to do and get learning – there are so many opportunities that I’ve utterly missed.  Exciting times.

Day 3 Highlights

Do’s and Don’ts of ASP.NET MVC

Hadi Hariri – @hhariri

Again, as much a personal validation session, but loads of little tangibles from this one – lots around IDependencyResolver and NuGet that I just need to *do* – we use Unity in ControllerFactory guise currently, so I really need to get this sorted.  NuGet I’ve been consistently, positively surprised by.  There was a lot more in this session and I’ve got a bucket full of notes to work through, but a cracking session.

Improving Website Performance and Scalability While Saving Money

Robert Boedigheimer – @boedie

Possibly the best talk of the week for me – and there wasn’t a design pattern, a mention of SOLID, or indeed a unit test in sight.  Robert presented a lot of absolutely practical steps that can be taken to improve performance on your website, he highlighted some of the tools he used (and how he uses them), and it was just practical advice after practical advice.  There is content from this that absolutely will be going into my every day work from next week onwards, and the team are going to despair as I wave the flag for the tools/concepts I got from this talk.

Closing Thoughts?

Well, the above (as well as just being in a focused environment where I got to talk dev) was superb.  I personally think that if I can continue to be funded for it, I’d return every 2 years rather than annually, to give the talks a full chance to rotate.  That’s not a criticism of the hosts or those people who give talks – I’ve seen the same on other speaker circuits (DeveloperDeveloperDeveloper events for example), and it’s one of those ‘either shut up and do a talk Tez or let them get on with it’ situations – I really have no desire to do so, so will be happy with what I get!  Thanks to all of the organisers and speakers, it has absolutely rejuvenated my geek batteries.

Onwards and upwards now, fired up geek on board!

DDD9 – or 10 hours in a car and lots of learning

This weekend saw myself and a few of the developers from work take the long drive from the relative comfort of the north east down to that there Reading for a day full of community-driven talks about our craft.  I won’t give you any real background on DDD9 as @philpursglove has a cracking post on it here.

The sessions I attended were as follows:

.Net Collections Deep Dive

Gary Short (@GaryShort)

Having not seen Gary present before, this talk came as a fantastic start to the day – his presentation style is superb, and the content was perfect for me.

He covered the different collection types, and I got an awful lot from the flow of the talk on which collection types to use in different situations.

Key takeaways for me on this one were:

  • The performance aspects of growing collections (the doubling of the arrays and, more problematically, the CopyTo that goes on to enable that growing).
  • If you know the size of a collection before you initialise it, specify it (IList<Stuff> stuff = new List<Stuff>(12)).
  • Prefer .AddRange() to looping through a collection and using .Add().

He covered a lot more in this talk (sorting algorithms and when to consider using your own sort etc.) but all in all, cracking start to the day.
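Those takeaways can be sketched in a few lines of C# (the sizes and numbers here are purely illustrative):

```csharp
using System;
using System.Collections.Generic;

class CollectionGrowthDemo
{
    static void Main()
    {
        // A List<T> created with no capacity starts small and doubles as it
        // grows – each doubling allocates a new internal array and copies
        // the old elements across.
        var grown = new List<int>();
        for (int i = 0; i < 1000; i++)
        {
            grown.Add(i);
        }
        // Capacity will have doubled past 1000, with copies along the way.
        Console.WriteLine(grown.Capacity >= 1000);

        // If you know the size up front, specify it – no intermediate
        // allocations or copies.
        var presized = new List<int>(1000);

        // Prefer AddRange to looping with Add – it can grow once and
        // copy in bulk.
        presized.AddRange(grown);
        Console.WriteLine(presized.Count == 1000);
    }
}
```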

CQRS – Fad or Future

Ian Cooper (@ICooper)

I arrived at this talk having seen Ian present before on different subjects so knowing he was good, and having utter cynicism on CQRS (Command Query Responsibility Segregation).  It seemed to me (note the past tense) like an utter waste of time and something that allowed/supported a weaker architecture.  Very much a toy for the cool kids.

Well, the talk has convinced me that I need to look into this, as a lot of what Ian said resonated with me.  Having a thin ‘read’ layer that (in our case) would return our View Models completely away from the domain objects makes so much sense – do I really need a 7-table-join ‘User’ object *just* to present the pertinent details to a user so they can change their password (caching in place or no!)?

The domain objects would get involved on the commands – the changing of something – and at this point the domain model is a godsend, naturally, as it validates whether or not what you are doing is valid.

I like that it simplifies down our Service and Repository layers, and can see some real potential from it all.

It’s not something I’m going to rush out and implement tomorrow, but it’s something that has now hit the reading list as something that I must understand with an aim to be better informed when I make architectural design decisions in our software – I will no longer rule it out as a fad that’s for sure.
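As a rough sketch of the idea (all the type names here are hypothetical, not from any real codebase): the read side returns a flat view model shaped for one screen, while commands go through the domain model for validation.

```csharp
using System;

// Thin read side: a flat view model, queried straight from the data
// store – no 7-table-join domain object involved.
public class ChangePasswordViewModel
{
    public int UserId { get; set; }
    public string Email { get; set; }
}

// Write side: a command handled by the full domain model,
// which validates whether the change is legal.
public class ChangePasswordCommand
{
    public int UserId { get; set; }
    public string NewPassword { get; set; }
}

public class UserDomainModel
{
    public bool Handle(ChangePasswordCommand command)
    {
        // Domain validation lives here (password policy, user state, etc.)
        return !string.IsNullOrEmpty(command.NewPassword)
            && command.NewPassword.Length >= 8;
    }
}

class Demo
{
    static void Main()
    {
        var domain = new UserDomainModel();
        Console.WriteLine(domain.Handle(new ChangePasswordCommand
        {
            UserId = 1,
            NewPassword = "longenough1"
        }));
    }
}
```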

Functional Alchemy – how to keep your c# code DRY

Mark Rendle (@markrendle)

This guy was a great speaker – fantastic talk, his knowledge of the functional aspects introduced to the .net framework was huge, and he certainly used them well in his examples.

What this talk highlighted is just how shit I am!  There are aspects of the .net framework that I have very little understanding of (am I supposed to admit this publicly? lol).  I use and consume Action<T> and Func<T, TResult> on a day to day basis with no problem (and love them).  Does not knowing how to ‘roll my own’ affect my day to day job?  Not really.  Do I want to know them to add to my toolbelt?  Damn sure.  Some of the stuff he was doing with (in his own words) abuse of Actions and Funcs was incredible.  There are some really nice patterns that come out of this, and it’s linked for me to the Monads talk that Mike Hadlow gave as things I ‘must try harder’ at.
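For anyone in the same boat, a minimal C# refresher on those delegate types and on closures (the example itself is mine, not from the talk):

```csharp
using System;

class ClosureDemo
{
    static void Main()
    {
        // A Func<T, TResult> takes a T and returns a TResult...
        Func<int, int> square = x => x * x;
        // ...while an Action<T> takes a T and returns nothing.
        Action<string> log = message => Console.WriteLine(message);

        // A closure: the lambda captures the local variable 'count',
        // which lives on after MakeCounter returns.
        Func<int> counter = MakeCounter();
        counter();
        counter();
        log(square(4) + " " + counter()); // prints "16 3"
    }

    static Func<int> MakeCounter()
    {
        int count = 0;
        return () => ++count;
    }
}
```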

I suck :)

Is your code SOLID?

Nathyn Gloyn (@nathyngloyn)

Very good talk covering the origins and usages of SOLID, including code samples that demonstrated it.

I would encourage anyone that doesn’t understand these principles to read up on them (a google search will reveal much), they very much apply to all software development and even a high level understanding of them *will* make you a better software developer.  They’re not all prescriptive, they are tools to use as and when you feel it applicable, but having them in your mind while you design software is gold dust.

Thankfully the talk helped confirm for me that the approaches we are taking at my current employer are the ‘right’ ones, and it was a ‘warm blanket’ type talk that made me feel like I wasn’t an utter numpty.

CSS is code, how do we avoid the usual code problems?

Helen Emerson (@helephant)

I felt like a lone sheep during this one, as I use (and really like) reset.css files (though I understand why Helen doesn’t).  I thankfully work in an environment where backward compatibility means that if it doesn’t have rounded corners in IE6 but it ‘works’ then job’s a good ’un, so I very much felt Helen’s pain when she explained some of the hoops she has to jump through to keep things as similar as possible across browsers.

Although a good talk, the key gain for me came at the end when the community chipped in.  For example, there are the following products/projects I need to look at:

  • dotLess/SASS – means of treating CSS as a programming language with variables/mixins etc.
  • Selenium, Browserlabs and Mogotest as means of testing UIs

What I did also get from this was Helen’s blog, which I’ve subscribed to – she has a lot of very good stuff to say and I’d recommend that anyone with an interest in front end work have a read of it.

Beginners Guide to Continuous Integration

Paul Stack (@stack72)

This guy has been helping me a lot over Christmas with setting up our CI build, so it was nice to finally meet him and say hello (surprising how many people just recognised me yesterday – who’d have thought that I’d stand out at 6’ 9”! Lol).

He didn’t really cover anything new for me, though really again helped solidify that continuous integration is the ‘Right Thing’ to do.

Most importantly from this talk (and the conversations I had with him over Christmas) – do it in small steps – don’t try to achieve everything straight off.  Set it up, get it checking out and building first – light stays green?  Do a run through of unit tests next.  Light stays green? Look at code coverage and other tasks.  Light stays green? Deployment… etc. etc. etc.

This way, over time you’ll build up the build that you *want* without all that significant up front cost.


Another brilliant day run by the community, for the community, and I can only thank all the organisers and speakers for taking the time out to organise it, along with a huge thanks to Microsoft for the venue – the cookies were ace!

Loads of learning points for me, but that’s a huge positive, not a negative!

It was incredible to see on one of the first slides of the day that the North East is getting a DDD event in the Autumn – yay, I can do the geek dinners, the beers, and still get home to a comfy bed!

Roll on DDD Scotland!

2010 – A year in geek

I’ve found it incredibly cathartic to read a few others’ blog posts summarising not only the year that has gone, but their aims for the year ahead – this has been an incredibly busy year for me in geek terms, and I thought I’d write it up, as another hopefully cathartic exercise.

The year starts…

2010 started for me after only four months in a new job, having escaped an agency environment in August last year.  I honestly didn’t know how badly I had it in my previous role until I started in my current one.  I took quite a hefty pay cut to switch jobs, but the previous role (I should really say roles, as I was stupidly doing both the IT Manager and Dev Team Lead jobs) had me working comfortably 60-hour weeks in the last 6 months of it – I was knackered, home life was suffering, I couldn’t switch off, I was stressed (and anyone who knows me knows I just don’t do stress).

My current role is pretty much idyllic for me – job description is Senior Developer, but we all know that hides a multitude of sins.  Basically, I get to specify technical direction, I get to do staff mentoring/staff support, I get to be involved in the community, but (best of all) I get to spend about 75% of my usable time developing.  Pig in shit I believe is the term they use ;)

Legacy Code

Oddly, the first real achievement this year involved minor improvements to our payment system (based upon legacy code – classic ASP – ewww!).  I write it here not because I’m proud of the technology, but because of the analytical approach we took, the change process we had in place for the little-and-often changes, and the overall effect of those changes – conservative estimates by our financial officer put us at just over 1% extra turnover.  Now that doesn’t sound a lot, until you see how much the company turns over – needless to say, they were very happy with the work!

Site Rewrite

This has been the big focus for me from around April, and it’s been huge – our existing site is a mix of a lot of classic ASP with a number of .net projects dotted around.  The technical debt in there is huge, and changes – be they new functionality or modifications to existing functionality – are just incredibly costly.  The aim (and I’ve read any number of posts that say this is a bad idea) was to re-write the whole thing into something that was:

a) more maintainable

b) easier to deploy

c) of a far higher overall quality

d) minimised technical debt

e) easier to extend

With that in mind, the technologies that myself and the team have worked on this year have been wide ranging.


The move away from web forms and into MVC has been a revelation.  I lament now the occasional need to maintain our legacy code, as once you grok the separation of concerns involved in MVC2 (I heartily recommend both the Steve Sanderson book and the TekPub video series as learning resources), moving back to web forms (especially legacy) is a mare.  I’d say out of all the things covered this year, this is the biggest ‘win’ for me – I can see me using this pattern (and asp.net mvc) for a long time to come as my primary means of delivery over the web.

Testing Software (Unit, Integration, n’all that jazz)

I daren’t call this test driven development as we tend to write our tests after we’ve got the functionality in place – our specifications and business rules in most areas of the rewrite haven’t carried over verbatim, so writing unit tests ahead of time was rarely practicable.  That said, the project is now up to 290+ unit/integration tests, and I suspect before launch that number will nearly double.

It’s very easy during code reviews for team members to validate the logic, outcomes and goals in the unit tests up front so that they form almost a code contract which then goes on to define behaviour and functionality within the code.  It also (assuming business knowledge of the area under test) allows people to highlight potential missing tests or modifications to existing tests.
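As a hypothetical illustration of a test forming that kind of contract (the VoucherService rule and all the names are invented for the example, and plain Console assertions stand in for a test framework):

```csharp
using System;

public class VoucherService
{
    // Business rule under test: baskets below the minimum spend
    // cannot redeem a voucher.
    public bool CanRedeem(decimal basketTotal, decimal minimumSpend)
    {
        return basketTotal >= minimumSpend;
    }
}

class VoucherServiceTests
{
    static void Main()
    {
        var service = new VoucherService();

        // Each assertion states a behaviour the team can validate in
        // code review before the implementation is even discussed.
        Console.WriteLine(service.CanRedeem(10m, 5m) == true);
        Console.WriteLine(service.CanRedeem(4.99m, 5m) == false);
    }
}
```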

Learning wise, blogs have been the most use during the year for unit testing, though I would say a must purchase is ‘The Art of Unit Testing’ by Roy Osherove.  It got me thinking about unit testing in a very different way and has led (I hope) to me simplifying unit tests but writing more of them, using mocking frameworks to deliver mocks/doubles, and generally being a big advocate of testing.

Design Patterns

Obviously MVC goes without saying, though this year has seen me read a lot around software design and the patterns used therein.  I feel I now have a solid handle on a great deal more software design from an implementation point of view (the theory was never really that difficult, but turning that into an implementation…).  We’ve used the Service pattern extensively, along with Repository, the Factory pattern, and (I’d like to think) Unit of Work in a few places.  They’ve all seen the light of day (necessarily so) in this project.
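A bare-bones sketch of the Repository pattern in that vein – the types here are purely illustrative, not from the project:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Winner
{
    public int Id { get; set; }
    public string GameName { get; set; }
}

// The repository abstracts the data store away from the service layer...
public interface IWinnerRepository
{
    IEnumerable<Winner> GetByGame(string gameName);
}

// ...so an in-memory implementation can stand in for the database in tests.
public class InMemoryWinnerRepository : IWinnerRepository
{
    private readonly List<Winner> _winners = new List<Winner>
    {
        new Winner { Id = 1, GameName = "bingo" },
        new Winner { Id = 2, GameName = "slots" }
    };

    public IEnumerable<Winner> GetByGame(string gameName)
    {
        return _winners.Where(w => w.GameName == gameName);
    }
}

class Demo
{
    static void Main()
    {
        IWinnerRepository repository = new InMemoryWinnerRepository();
        Console.WriteLine(repository.GetByGame("bingo").Count());
    }
}
```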

There’s a fantastic post by Joel Spolsky about the Duct Tape Programmer which I’d urge everyone to read if they haven’t done so – it’s about the balancing act between software design for software design’s sake (the pure view) and getting the job done.  There’s always a balance to be struck, and hopefully I’ve stayed on the right line with regards to this.  It’s very easy when focussing on the design of the software to over-engineer or over-complicate something that should be (and is) a relatively straightforward task.

Uncle Bob must get a mention this year, as his SOLID principles have been a beacon – you don’t always adhere to them, you don’t always agree where they apply, but you can’t deny that as underlying principles of OOD they are a good foundation.

Business Exceptions

Two talks immediately spring to mind when I look at the approach we’ve taken with business exceptions, the first was delivered at DevWeek which I was lucky enough to attend in April (see the post here), the second was delivered by Phil Winstanley (@plip) at DDD Scotland this year.

We’re very much using exceptions as a means of indicating fail states in methods now, and I love it – coupled with our logging, it feels like we will rarely have unhandled exceptions (and when we do, they are logged), and the overall software quality feels far superior because of it.

I understand the concerns that have been raised around the performance of exceptions (cost to raise etc.) and the desire not to use exceptions for program flow, though I think we’ve struck a happy balance, and my testing (albeit rudimentary) earlier in the year suggested that their performance just wasn’t a concern.
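A minimal sketch of the business-exception approach (the exception type and the rule are hypothetical, not from our codebase):

```csharp
using System;

public class InsufficientFundsException : Exception
{
    public InsufficientFundsException(decimal balance, decimal amount)
        : base(string.Format("Balance {0} cannot cover withdrawal {1}",
                             balance, amount))
    {
    }
}

public class Account
{
    public decimal Balance { get; private set; }

    public Account(decimal openingBalance)
    {
        Balance = openingBalance;
    }

    // The method signals its fail state with an exception rather than a
    // magic return value – callers can't silently ignore it, and the
    // logging layer sees every failure.
    public void Withdraw(decimal amount)
    {
        if (amount > Balance)
        {
            throw new InsufficientFundsException(Balance, amount);
        }
        Balance -= amount;
    }
}

class Demo
{
    static void Main()
    {
        var account = new Account(50m);
        try
        {
            account.Withdraw(100m);
        }
        catch (InsufficientFundsException ex)
        {
            Console.WriteLine("Logged: " + ex.Message);
        }
    }
}
```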

Continuous Integration

Something that’s been on the back burner for too long now – only in the past week have I made any headway with it, but already it’s a love affair.  I suspect the quality of the information we get out of the system as we move forward will pay dividends, and as we begin to automate deployment/code coverage, and I get more heavily into MSBuild, this is going to be something that I don’t think I’ll want to give up on larger projects.


I now subscribe to approximately 160 blogs, which sounds like a lot, but thankfully not everyone posts as often as @ayende, so job’s a good ’un with regards to keeping up.  I find 5-10 mins at the end of the day lets me have a quick scan through the posts that have come in, discount the ones I’m not interested in, skim-read the ones I am, and star them (Google Reader stylee) ready for a more thorough read when I get to work the next day.  This may seem a large commitment, but remember I’ve come from a job where I was ‘working’ approximately 60 hrs a week (not geeking I hasten to add – just client liaison, product delivery, bug fixing, and sod all innovation).  I now find my working week is down to approx 40 hrs, plus between 5 and 15 hrs per week on geek stuff depending on the week and what’s on.  The extra time I get for self development is just my investment in my career really, and I talk to so many other people on twitter who do exactly the same.


Getting our own local Microsoft tech user group (@NEBytes, http://www.nebytes.net) has been fantastic this year – we’ve had some superb speakers, and I know that once a month I get to catch up with some cracking geeks and just talk random tech.  The guys who run it – Andrew Westgarth (@apwestgarth), Jon Noble (@jonoble), Ben Lee (@bibbleq) and Damian Foggon (@foggonda) – do a fantastic job, and I look forward to more of this in 2011.

I managed to attend DevWeek this year, and wrote up a number of things from it, but it was a fantastic week.  Thankfully work saw the benefit so are sending me again in 2011, so hopefully I’ll meet up with folks there and learn as much as I did this year.

Developer Developer Developer days.  These are superb!  Hopefully we can get one organised closer to home in 2011, but the two I attended this year (Scotland and Reading earlier in the year) were packed full of useful stuff, and the organisers need to be praised for them.

Geek Toys – The iPad

I couldn’t round off the year without humbly admitting that I was wrong about the iPad when it launched – I didn’t see the point at all, and was adamant it was going to flop.  Then in October I found myself the owner of one (erm… I actually paid for it too – I have no idea what was going on there!).

Well, revelation doesn’t do it justice – it’s the ultimate geek tool!  Thankfully a lot of the books I buy are available as ebooks too, and I’ve found more and more that I’m moving away from print and reading geek books on my iPad – ePub format is best (for annotations and the like), though PDF works a treat too.  Aside from that, TweetDeck is a cracking app on the iPad, and it lets me stay in touch with geeks more regularly than I would otherwise have done.  Reeder is my final tool of choice, and the way it presents blogs you’ve not yet read is fantastic.

I’d suggest any geek who loves quick access to their blogs, their books, and TweetDeck (though naturally the iPad does a whole lot more) have a play with one and see if it could be the answer for you too – I’m hooked.

And what of 2011?

Well, I’m over the moon with the way 2010 has gone really – all I can ask is to maintain my geek mojo, my thirst for learning, and a cracking bunch of people to work with, and life will be grand :)


TeamCity – Install and Setup (Basics)

It’s been a while since I posted, and I thought the past few days warranted getting my thoughts down, as we’ve just set up our first foray into Continuous Integration/Build Automation with TeamCity.  We’re in the process of rewriting the corporate site from classic ASP/ASP.NET into an MVC2 front end with some solid (though not always SOLID) design behind it.  We’ve written a lot of unit tests (though many more to go), and thought it was about time we looked at the whole CI/build side of things.  I’d hasten to add, the following post will remain at a fairly basic level, as that’s where I’m at at the moment – hopefully something in here will be useful, though it’s as much about documenting the steps for the team I work with, and whenever I write something like this down it always helps solidify it in the grey matter.

Why Continuous Integration/Build Automation?

The answers for us fit pretty much into the standard reasons behind CI – primarily ensuring quality, though easing the deployment burden was certainly a part of it.  CI completes the circle really – you’ve written your quality code, you’ve written your unit tests (and any other tests, integration, UI, etc.), so why not have an easy way to get all of that validated across your whole team, making sure that the quality remains and that you don’t have the manual task of pushing the code out to your dev servers? 

Continuous Integration helps with all of this, and a whole lot more, though the ‘more’ part is something that will come in time for us I think – we now have a working checkin build (I’ll detail the steps I went through) so that at least gives us ongoing feedback.

TeamCity was the immediate choice for us as we don’t really qualify for a TFS setup, and CruiseControl.NET seemed to have a higher learning curve (I may be misrepresenting it here, mind).

Before going through the detail of the install, a quick shout out to Paul Stack (@stack72), the fantastic Continuous Integration book from Manning, and the as yet unread MS Build book from Microsoft – these as well as the blog posts from many have helped massively in getting this setup.

Team City 6 – Install

Generally, the defaults in the setup were fine.  I made sure that all options were enabled with regards to the service etc. – I can’t see the use case where you wouldn’t want this, but it’s worth stating.


I changed the path to the build server configuration to a more general location – it initially suggested my own windows account user area, though I was unsure (and couldn’t find easy documentation) on whether this would make the configuration available to others, so I defaulted to a different path.


With regards to server port (for the TeamCity web administration tool) I changed the default port too.  Although it’s recommended that the build server remains as a single purpose box, I felt uncomfortable going with the default port 80 just in case we ever chose to put a web server on there for any other purpose.


I also chose to stick with the default and ran the service under the SYSTEM account – it doesn’t seem to have affected anything adversely and I’d rather do that than have to create a dedicated account.

Team City – Initial Setup

Initially you are asked to create an administrator account – do so, though if you’re in a team there is an option later to create user accounts for each member – far better to do that and leave the admin account separate.  In the free version you can have up to 20 users, so it’s ideal for small teams.

Create a Project

The first step in linking up your project to TeamCity is to create the project.


Here, you can give the project any name and description – it can (though doesn’t have to) match the project name in Visual Studio.


TeamCity from this point on holds your hand fairly effectively.


Oh, ok – thanks :) <click>

The build configuration page has a lot of options; here are the pertinent ones (at least early doors – once you have more experience, which I don’t, the others will certainly come into play).

Name your build – I named ours ‘checkin build’ as I intend for it to happen after every checkin… does what it says on the tin kinda thing.

Build number format – I left this as the default ‘{0}’ – it may be prudent to tie it in later on with the Subversion version number, but for now, we want to get a working CI process.

Artifact Paths – very much steered clear of this at the moment – it seems there’s a lot of power in these, though I haven’t touched on them enough.

Fail Build If – I went with the defaults plus a couple of others – ‘build process exit code is not zero’, ‘at least one test failed’, ‘an error message is logged by the build runner’.

Other than that, I pretty much stuck with the defaults.

Version Control Settings


I deliberately selected checkout to the agent as I suspect this’ll give me more scalability in future – from what I understand, the build server can have multiple build agents on other machines (kinda the distributed computing model?), and those agents can handle the load if there are very regular/large team checkins.  I think there are limitations on build agents in the free version, but again – if we use this solidly and need more, the commercial licence isn’t too badly priced.

I also chose a different checkout directory to the default, just because – no solid reason here other than I have a lot of space on the D: drive.

Our project is significant – 24 VS projects at last count, a lot of them testing projects (1 unit and 1 integration per main project).  Initially I experimented with ‘clean all files before build’, but the overall build was taking approximately 8 mins (delete all, checkout all, build, unit test).  I’m going to try not cleaning the files and doing a ‘revert’ instead, but at present I don’t have any experience of which is better – certainly cleaning all files worked well, but 8 mins seemed a while.

Attach to a VCS Root

The important part – linking up your project to your source control (subversion in our case).


Click ‘Create and attach…’.  Most of the settings in here are defaults, but you will notice further down the page that it defaults to Subversion 1.5 – we’re using 1.6, so double check your own setup.


I also experimented with enabling ‘revert’ on the agent ala:


With the aim of bringing down the overall build time – I haven’t played enough to warrant feedback yet, though I suspect the revert will work better than a full clean and checkout.

Build Steps

The CI build will be broken into a number of steps, but firstly we need to get the core project building on the agent.  There will be a lot more to learn on this one, but for now, what worked well for us was the following:


Our solution file contains all the information we need to work out what needs to be built, and TeamCity supports it, so job’s a good ’un.  As I extend the base build this method will still just work, as I’ll be modifying the .csproj files belonging to the solution anyway.

Build Step 2

This one was slightly more convoluted, but basically, giving relative paths to the DLLs that contain the unit tests is the way forward here.


Make sure you target the right framework version (I didn’t initially, though the error messages from TeamCity are pretty good in letting you figure it out).

Build Triggering

We want this all to trigger whenever we checkin to our source control system (in our case, subversion), so when we click on ‘build triggering’ and ‘add trigger’, selecting ‘VCS Trigger’ will get you everything you need:


Are we there yet?

Well, just about – you will see the admin interface has a ‘run’ button against this configuration (top right of browser), so let’s do an initial run and see what the problems are (if any).  You can monitor the build by clicking on the ‘agents’ link at the top of the page and then clicking on the ‘running’ link under the current build.

Should you get the message:

… Microsoft.WebApplication.targets" was not found…

This basically happens because you don’t have Web Deployment Projects (or indeed VS2010) installed on the build server.  The path of least resistance is to copy the C:\Program Files\MSBuild folder over to the build machine’s Program Files folder (if x64, make sure you put it in the x86 one).  You should find the build just works after that.

Ok, Build is working – Tell me about it!

Notifications were the last thing I setup (make sure you’ve setup a user account for yourself before you do this, the admin account shouldn’t necessarily have notifications switched on).  Click on ‘My Settings & Tools’ at the top and then ‘Notification Rules’.

I’ve set up an email notifier (which will apparently tell me of the first failed build, but only the first after a successful one), and I’ve downloaded the Windows tray notifier (My Settings & Tools, General, right hand side of the page), which is set up likewise.

Next Steps?

There are a lot of other tasks I want to get out of this, not just from a CI point of view.  I’ve deliberately (as @stack72 suggested) kept the initial setup ‘simple’ – getting a working setup was far more important than getting an all-encompassing setup that does everything I want from the off.  I can now see the guys doing their checkins and the tests passing, I’m now far more aware if someone has broken the build (and let’s face it, we’ll all deliberately break it to see that little tray icon turn red), and I know there’s so much more that I can do.

Next priorities are:

  1. Learn MSBuild so that I can perform tasks more efficiently in the default build – e.g. I want to concatenate and minify all CSS on the site, I want to minify our own Javascript, etc.
  2. Setup deployment on the checkin build – I suspect this will use Web Deployment Projects (which themselves generate MSBuild files so are fully extensible) to get our checked in code across to our dev servers.
  3. Setup a nightly build that runs more tests.  As you can see above, we build and run unit tests for our checkin build – I want to run nightlies that perform both unit and integration tests – I want the nightly to deploy to dev also, but to promote the files necessary to our staging server (not publish them) so that we can at any point promote a nightly out to our staging and then (gulp) live servers.

I’d urge anyone working on projects where deployment is a regular and pain-in-the-arse task, or where there are a few of you and you’ve taken to unit testing and TDD (be that test first or just good solid functionality coverage): my view now is that Continuous Integration is the tool you need.

It’s the new Santa Claus – it knows when you’ve been coding, it knows when you’re asleep, it knows if you’ve been hacking or not, so code good for goodness’ sake!

As per all of my other posts, the above is from a CI novice – any feedback that anyone can give, any little nuggets of advice, any help at all, I’ll soak it up like a sponge.  This has been a lot of fun, and there’s definitely a warm glow knowing it’s now in place, but there’s a long way to go – feedback *very* welcome!

Windows Phone 7: Why I’m not ready for it yet

I promised folks who were asking on Twitter that I’d write up why I wasn’t going to stick with my Windows Phone 7 device at the moment.  Firstly, a quick background (anyone who actually reads the shite that I pour out on Twitter can ignore this first bit).


Anyone who knows me knows I’ve been hoping for good things for WP7 for months now.  I really fancied getting into WP7 dev (I still do), and with the contract on my iPhone running out in September 2010, I was ripe to pick up a new phone.

Android (even in its sexiest guise, the HTC Desire) didn’t appeal – behind the HTC wrapper was what felt to me like a clunky interface.  Sure, it’s massively configurable and there’s almost always an ‘app for that’, but it didn’t wow me at all.

iPhone 4 annoyed me – I wanted it to rock, and it does to some extent, but I refused to have to wear a condom just to make calls – stupid, stupid design, and every time I tried one I saw the signal plummet, as I have hands like a sasquatch.

So, as the hype built around WP7, this was me – this was where I was going, and this was the toy that was going to cover my next 18 months of being tied to a carrier!

I followed the build-up enthusiastically: I read developer blog posts, attended dev events whenever there was a sniff of WP7, and loved the enthusiasm from the developer community at the impending launch.

So what went wrong?

I, like quite a few others, hunted high and low for a phone on launch day – thankfully I got the last one in the O2 shop in Newcastle, and rushed it home to start integrating it into my lifestyle.

I’d like to start out not by saying what went wrong, but what went right.

What Windows Phone 7 does well

The phone UI is stunning

It’s odd to think that a relatively simple, text-focussed interface could be considered ‘stunning’, but I think it’s that simplicity that makes it work so well.  I’d add a massive caveat and say ‘do NOT just look at screenshots of this thing if you’re considering getting one – get into a store and demo one’.  The animations/transitions and overall flow of the UI rock, and really shouldn’t be underestimated from screenshots alone – have a play.

The hubs are superb

This is going to sound odd after you read one of the reasons why I’m returning the phone, but in all seriousness, the hubs are implemented very well, and with more integration options (discussed later) will absolutely rock as a means of centralising functionality – they’re superbly thought out.  If you’re happy using Facebook and Windows Live to supplement your contacts and pictures, then they really do do a cracking job.


Email

I loved the handling of email – the interface worked well, and little touches like ‘tap to the left of the message to mark it’ really do become intuitive immediately – very good UX here.  The fact that I couldn’t mark my POP account as ‘leave mail on server’ was a pain in the arse until I worked out I could use IMAP – I hope those stuck on POP3 get the ability to ‘leave mail’ soon though, or this may piss people off.

Phone speed

Bloody fast – ’nuff said?  Seriously, the transitions, the animations, the starting up of the core apps – all really fast, and they put my iPhone 3G (as well as the iPhone 4) to shame.

What didn’t work well for me

Occasional crashes aside (I mention them first, but only to discount them) – my iPhone 3G still crashes from time to time (app crashes etc.), so let’s not hold those against WP7.  I’ll caveat the below by saying these are my findings, and it may be that they’re incorrect (if so, I’d love to hear it).

Picture Hub

I have a Flickr Pro account with over 500 images on there (random family stuff and the occasional lucky shot of something arty!).  I really didn’t want to have to download all of my images just to re-upload them to Windows Live so I could see them in the picture hub.  Yup, Live does have a ‘Flickr link’, but it only posts an update to your status saying you’ve uploaded some pictures – it’s not tied into the picture hub.  This unfortunately made the picture hub fairly useless for me, which is a massive shame, as it really is a nicely implemented bit of software.  In a ‘maximising market share’ mindset, I think Microsoft should focus future updates on opening up other means of social media into the hubs (as you’ll see below, my problem with the people hub is similar).

People Hub

Love it, love it, love it.  Hate that it only integrates with Facebook, and hate that I don’t have enough control over that flow.

I’m a person (as anyone who knows me will testify) who uses Twitter as their primary means of social media.  Facebook is nice enough, though I only dip in and out of it to see how old school friends are doing, and how many more thousand muscle-bound men my sister has added as ‘friends’ so I can take the piss.  It’s not my primary means of staying in touch.

Integrating Facebook works well, and you can even tell it only to link up the people you’ve already got as contacts on your phone, so you don’t see *every* idiot you’ve ever added.  Great!  Except when you go into the ‘what’s new’ section of the people hub you still see updates from those eejits, even though they’re not strictly in your contacts – it would be nice to have that limited.

Another failing is Twitter integration – c’mon guys, aside from Facebook it’s the primary social media platform!  I’d love to have my people hub littered with random Scouse expletives from @philsherry, catch up on geek things (or indeed how many cups of tea have been consumed today) from @apwestgarth, see what workmates are tweeting, etc.

This one wasn’t really a deal breaker on its own for me, but added up to the overall effect.

Applications List

Swipe right from the start page and you’re into the list of apps you’ve installed (as well as those that come with the phone).

I feel this section of the phone was an afterthought after the utterly brilliant start/home screen.

Seriously guys, the only way I can see my apps is as an alphabetised single-column list?

I’d only installed about 10–12 apps and this page was already pissing me off – I dread to think what’d happen if I had the 30–40 I’d had on my iPhone!  People definitely need more control over this page – perhaps breaking it off into separate pages, keeping core ‘phone’ stuff on its own page, or something like that – either way, it needs thought.


The Zune software

This one may be the most controversial of all, but after 18 months with an iPhone (and therefore iTunes) I think the Zune software is clunky and nowhere near as friendly to use.  I could go into an awful lot of detail, but it’d turn into a holy war of comments I suspect; suffice to say I personally prefer iTunes by a long margin.

Is that it?

Well, yes – I know it doesn’t sound a lot, and on the whole I think this phone is superb.  It’s just these things adding up that make me nervous about a £99 outlay and then £35/month for 18 months, on the hope that a future software update will improve things.

So it’s enough for me at least (someone who really does have to consider what he spends his money on) to return to my out-of-contract iPhone on my £30/month tariff for the time being.

What next?  Well, if the above are addressed, I’ll definitely be back.  So long as I don’t feel like I have to sell my soul to Windows Live to enjoy the key aspects of the phone (and this from a Microsoft fanboy!), I’d happily own the phone again in future.

Android with Gingerbread sounds interesting, but the proof will be in the pudding on that one – I’ll want to be sure they haven’t continued with the clunk.

Short term, I’ve decided to buy an iPad instead.  I’ve bought a lot of ebooks through Manning, Apress etc. and I subscribe to a lot of blogs – having an easy-to-use device in a nice form factor that lets me co-ordinate all of that will, I think, help my ‘geek’, so that’s short term.

Long term, I’d like to return to Windows Phone 7 (hopefully with some app dev experience under my belt) – fingers crossed they build upon what is a great core with some of the things I’m after.


As per usual, your comments would be great on this – I’m guessing it has the potential to generate a lot of debate (though people may just not care, of course!).  Hopefully this hasn’t come across as any sort of ‘Windows Phone 7 is shit because…’ post – I really do like the first iteration of the OS, and I think it’s got massive potential to generate competition in the market.  It just doesn’t feel ready yet for my personal needs.