#DDDnorth 2 write up – October 2012 – Bradford

#dddNorth crowd scene, waiting for swag!

Stolen from Craig Murphy (@camurphy) as it’s the only pic I saw with me on it (baldy bugger, green t-shirt front right) – thanks Craig!

Another 5:45am alarm woke me on a cold morning to signal the start of another day’s travelling on a Saturday for a Developer Developer Developer event, this time with Ryan Tomlinson, Steve Higgs, Phil Hale and Dominic Brown from work.  I’ve been to a fair few of these now, and it still overwhelms me that so many people (speakers and delegates alike) are willing to give up their Saturdays and spend a day away from friends, family (and bed!) to gather with their peers and learn from each other.

Lions and tigers and hackers! Oh my!

Phil Winstanley, @plip

Phil highlighted that the threat landscape has changed and is still changing – we’re moving away from paper and coin as our means of transaction, and everything now exists in the online space: it’s virtual, and it’s instantaneous.  Identity has become a commodity, and we all now exist in the online space somewhere – Facebook makes the money it does because our identities, and those of our relationships, are rich with information about who we are and what we like.

He brought over some very good anecdotal evidence from Microsoft around the threat landscape and how it’s growing exponentially.  There are countries and terrorist organisations involved in this (more in the disruption/extraction space), but everyone is at risk – an estimated 30% of machines have some form of malware on them, and a lot of the time it’s dormant.

Groups like Anonymous are the ones folks should be most scared of – at least when a country hacks you there are some morals involved, whereas groups like Anonymous don’t really care about the fallout or who and what they affect; they’re just trying to make a point.

The takeaway from this rather sobering talk for me was to read the Security Development Lifecycle – we all agreed as developers that although we attempt to write secure software, none of us was actually confident enough to say that we categorically do create secure software.

I’ve seen Phil give presentations before and really like his presentation style, and this talk was no different – a cracking talk with far more useful information than I could distil in a write up.

Async C# 5.0 – patterns for real world use

Liam Westley, @westleyl

I’ve not done anything async before and although I understand the concepts, what I really lacked was some real world examples, so this talk was absolutely perfect for me.

Liam covered a number of patterns from the ‘Task-based Asynchronous Pattern’ white paper, in particular the .WhenAll (all things are important) and .WhenAny (which covers a lot of other use cases like throttling, redundancy, interleaving and early bailout) patterns.  More importantly, he covered these with some cracking examples that made each use case very clear and easy to understand.
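
For anyone who hasn’t seen these in action, here’s a minimal sketch of my own of the two patterns (the mirror URLs and method names are made up for illustration – they aren’t Liam’s examples):

using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

public static class QuoteFetcher
{
    // .WhenAny for redundancy/early bailout: ask three mirrors, use whichever answers first.
    public static async Task<string> FirstQuoteAsync(HttpClient client)
    {
        var requests = new[]
        {
            client.GetStringAsync("http://mirror1.example.com/quote"),
            client.GetStringAsync("http://mirror2.example.com/quote"),
            client.GetStringAsync("http://mirror3.example.com/quote")
        };

        Task<string> winner = await Task.WhenAny(requests); // completes as soon as any request does
        return await winner;                                // observe the winner's result (or exception)
    }

    // .WhenAll when every result matters: wait for all of them before continuing.
    public static async Task<string[]> AllQuotesAsync(HttpClient client, params string[] urls)
    {
        return await Task.WhenAll(urls.Select(url => client.GetStringAsync(url)));
    }
}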

Do I fully understand how I’d apply async to operations in my workplace after this talk? No, though that wasn’t the aim of it (I need to spend more time with async/await in general to do that).

Do I have use cases for those patterns that he demoed and want to apply them?  Absolutely, and I can’t wait to play!

Fantastically delivered talk, well communicated, and has given me loads to play with – what more could you want from a talk?

BDD – Look Ma, No Frameworks

Gemma Cameron, @ruby_gem

I approached this talk with some scepticism – I’ve read a lot about BDD in the past, I saw a talk by Gojko Adzic very recently at Lean Agile Scotland on ‘busting the myths’ in BDD, and although the concepts are fine, I just haven’t found BDD compelling.  Gemma’s talk (although very well executed) didn’t convince me any further, but the more she talked, the more I realised that the important part in all of this is DISCUSSION (something I feel we do quite well at my workplace).  I guess we as a community of developers aren’t always great at engaging the product owner/customer and fully understanding what they want, and it was primarily this point that was drilled home early in the talk.  Until you bring the stakeholders together early on and arrive at a common understanding and vocabulary, how can you possibly deliver the product they want?  I buy this 100%.

This is where the talk diverged for some, it seems – a perhaps misplaced comment that ‘frameworks are bad’ was (I feel) misinterpreted as ‘all frameworks are bad’, whereas to me it really meant ‘frameworks aren’t the answer, they’re just a small part of the solution’.  It jumps back to the earlier point about discussion – you need to fully understand the problem before you can possibly look at technology, frameworks and the like.  I’m personally a big fan of frameworks when there is a use case for them (I like mocking frameworks for what they give me, for example), but I think this point perhaps muddied the waters for some.  She did mention the self shunt pattern, which I’ll have to read more on to see if it could help us in our testing.

A very thought provoking talk, and I can imagine this will generate some discussion on Monday with work colleagues – in particular about engagement with the client (product owner/customer) to ensure we are capturing requirements correctly – hopefully we’re doing everything we need to be doing here.

Web Sockets and SignalR

Chris Alcock, @calcock

I’m sure Chris won’t mind a plug for his Morning Brew – a fantastic daily aggregation of some of the biggest blog posts from the previous day.  This was my first opportunity to see Chris talk, and it’s odd how, after subscribing to Morning Brew for years, you feel like you know someone (thankfully I got to chat to him at the end of the session and ask a performance related question).

I’ve played with SignalR recently in a personal project, so I had a little background to it already, though that wasn’t necessary for this talk.  Chris did a very good job of distilling WebSockets, both the ‘how’ and the ‘what’, and covered examples of them in use at the HTTP level, which was very useful.  He then moved on to SignalR, covering both the Persistent Connection (low level) and Hub (high level) APIs.  It’s nice to see that the ASP.NET team are bringing SignalR under their banner and officially supporting it as a product (version 1 anticipated later this year).
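
As a flavour of the Hub API, here’s a minimal sketch of my own against the 0.5-era API, where Clients is a dynamic (PriceHub and priceUpdated are made-up names, not anything from Chris’s demos):

using SignalR.Hubs; // pre-1.0 namespace (SignalR 0.5.x)

public class PriceHub : Hub
{
    // Callable from the JavaScript client as $.connection.priceHub.updatePrice(...);
    // pushes the new price out to every connected client.
    public void UpdatePrice(string game, decimal price)
    {
        Clients.priceUpdated(game, price); // dynamic dispatch to a client-side callback
    }
}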

This was a great talk for anyone who hasn’t really had any experience of SignalR and wants to see just what it can do – like me, I’m sure that once you’ve seen it there will be a LOT of use cases you can think of in your current work where SignalR would give the users a far nicer experience.

Event Driven Architectures

Ian Cooper, @ICooper

The talk I was most looking forward to on the day, and Ian didn’t disappoint.  We don’t have many disparate systems (or indeed disparate service boundaries) within our software, but for those that do exist, we’re currently investigating messaging/queues/service buses etc. as a means of passing messages effectively between (and across) those boundaries.

Ian distilled Service Oriented Architecture (SOA) well and went on to cover different patterns within Event Driven Architectures (EDA), and although the content is indeed complex, it was delivered as effectively as it could have been.  I got very nervous when he talked about the caching of objects within each system and the versioning of them, though I can see entirely the point of it, and after further discussion it felt like a worthy approach to making the messaging system more efficient/lean.

The further we at work move towards communication between systems/services, the more applicable the points in this talk will become – it has only helped validate the approach we were thinking of taking.

This talk wins my ‘talk of the day’ award* (please allow 28 days for delivery, terms and conditions apply) as it took a complex area of distributed architecture and distilled into 1 hour what I’ve spent months reading about!

And Ian – that’s the maddest beard I’ve ever seen on a speaker 😉

Summary

Brilliant brilliant day.  Lots of discussion in the car on the way home and a very fired up developer with lots of new things to play with, lots of new discussion for work, and lots of new ideas.  Isn’t this why we attend these events?

Massive thanks to Andrew Westgarth and all of the organisers of this, massive thanks to the speakers who gave up their time to come and distil this knowledge for us, and an utterly huge thanks to the sponsors who help make these events free for the community.

I’ll be at DunDDD in November, and I’m looking forward to more of the same there – I’ll be there on the Friday night with Ryan Tomlinson, Kev Walker and Andrew Pears from work – looking forward to attending my first geek dinner!

ASP.NET MVC4 – Using WebForms and Razor View Engines in the same project for mobile template support

NOTE: All content in this post refers to ASP.NET MVC 4 (Beta) and although it has a go live license, it has not gone RTM yet.  Although the process has been remarkably smooth, please work on a branch with this before considering it in your products!

 

We’ve been presented with an opportunity to create a mobile friendly experience for our Italian site.  The front end of our Italian offering is an ASP.NET MVC 3 site using the WebForms view engine (we started the project before Razor was even a twinkling in Microsoft’s eye), and it’s pretty standard in terms of setup.

There are a number of different ways of making a site mobile friendly – Scott Hanselman has written a number of great articles on how he achieved it on his blog, responsive design is very much a hot topic in web design at the moment (and that is a cracking book), and there are a lot of resources out there (both Microsoft stack and otherwise) for learning the concepts.

Our Italian site, although div based and significantly more semantically laid out than our UK site (sorry!), would still have been a considerable task to turn into a responsive design as a first pass.  Our mobile site *will not* need to have every page that the non-mobile site has though – the purpose of the site is different, and the functionality in the site will be also.

Along comes ASP.NET MVC 4 (albeit still in beta, but it has a go live license) with its support for mobile.  I really should care about how it works under the covers (perhaps a follow up post), though for now the gist is: if you have a view (Index.aspx), then placing a mobile equivalent alongside it (Index.mobile.aspx) allows you to provide a generic mobile version of that page.

Upgrade your MVC3 Project to MVC4

Basically, follow: http://www.asp.net/whitepapers/mvc4-release-notes#_Toc303253806

There were no problems in this step for us – we have a large solution, and there were a number of dependent projects that were based upon MVC3, but these were all easily upgraded following the steps at that URL.

Setting up your view engines

We had previously removed Razor as a view engine from the project to avoid some of the checks that go on when attempting to resolve a page, so our Global.asax had the following:

// we're not currently using Razor, though it can slow down the request pipeline so removing it
ViewEngines.Engines.Clear();
ViewEngines.Engines.Add(new WebFormViewEngine());

and it now has:

ViewEngines.Engines.Clear();
ViewEngines.Engines.Add(new RazorViewEngine());
ViewEngines.Engines.Add(new WebFormViewEngine());

The order is important – if you want your mobile views to use Razor in a WebForms view engine project, then Razor must be the first view engine the framework looks to.  If however you want to stick with WebForms (or indeed you are only using Razor) then your settings above will be different/non-existent.

Creating the mobile content

We started by creating Razor layout pages in Shared in exactly the same way that you would add a master page.  Open Views/Shared, right click, Add Item, and select an MVC 4 Layout Page.  Call this _Mobile.cshtml, and set up the differing sections that you will require.

To start with, as a trial I thought I’d replace the homepage, so navigate to Views/Home, right click, and ‘Add View…’ – create ‘Index.mobile’ and select Razor as the view engine – select the _Mobile.cshtml page as the layout.

Ok, we now have a non-mobile (WebForms view engine) and a mobile (Razor view engine) page – how do we test?

Testing your mobile content

The asp.net website comes to help again.  They have a great article on working with mobile sites in ASP.NET MVC 4 (which indeed is far better than the above, though doesn’t cover the whole ‘switching view engines’ aspect).

I installed the tools listed in that article, loaded up the site in the various emulators, and was presented with the following:

[Screenshot]

That’s Chrome in the background rendering out the standard site, upgraded to MVC4 but still very much using the webforms view engine and master pages, and Opera Mobile Emulator (pretending to be a HTC Desire) in the foreground using Razor view engine and layout pages.

Conclusion

The rest, as they say, is just hard work 🙂  We very much intend to make the mobile site responsive, and our CSS/HTML will be far more flexible around this, though with media queries (some example media queries) and the book above in hand, that will be the fun part.

The actual process of using both the Razor and WebForms view engines in the same project was a breeze, and it means that the longer term move over to Razor for our core site should be far more straightforward once we’ve worked through any teething troubles around the work above.  Razor as a view engine is far more concise and (dare I say it!) prettier than WebForms and the gator tags, so I look forward to using it in anger on a larger project like this.

Longer term there may be pages on the site that lend themselves to not having duplicate content, in which case we will investigate making the core design more responsive in places, but for now we have a workable solution to creating mobile content thanks to the mobile support in ASP.NET MVC 4.

 

Hope that was useful.

Visual Studio 11 (Beta) – Two week review

I’ve been using VS11 as my primary dev environment for a few weeks now, so thought I’d write up my findings on the IDE as a replacement for VS2010.  It’s worth stating my general usage and projects so that you get a feel for where my thinking is coming from – if I get waffly (anyone who knows me knows I can tend to) then feel free to skip to the TL;DR section.

Machine Spec

Running under Win 7 Professional on a Dell M6500 Precision laptop – i5 2.66GHz with 8GB RAM.  Both OS and Visual Studio are running on an SSD (though it’s not a particularly fast one, it’s still considerably faster than a 7200RPM disk).

Plugins

My predominant development is MVC front ends over C# domain/services/repositories (yup, you heard it, repositories 😛) – for front end work we use Chirpy (CSS concat/minification) and Web Workbench (for working with SCSS files).  Source control is sorted with AnkhSVN.

On top of that, what installation of Visual Studio would be complete without ReSharper?  So I’ve been keeping that up to date via the ReSharper 7 EAP.

User Interface

This is the element that seems to have generated the most discussion and the most feedback from the community.  When I first saw the screenshots, I thought the whole thing looked a bit bland, though very much saw where they were trying to go with it – it just didn’t look as ‘pretty’ as VS2010.

For the first few days after install, while I was still using both IDEs, I was still in that mindset and found the UI a little bland.

Over the past two weeks though, the more I’ve worked in the environment, the more I see the point of it – the tooling just blends into the background.  It still very much feels like it’s working as hard for me as VS2010 was (if not harder), but it’s not so apparent.  I can very much just get on with the job of coding.

I love the more minimalist UI – I’ve now grabbed a plugin for VS2010 to hide the file/edit etc. menus as it feels nicer not having them (like other products, a quick tap of the Alt key brings you back into a comfort zone).

It felt odd that the toolbars seemed (at least initially) to be separate/jumpy – the toolbars differed between solution explorer having focus and the editor having focus, and I had to customise this from the default in order to get a UI that I was happy with.  After two weeks, though, I’ve gone to having no toolbars and I don’t miss them (I never used much on them that often, and keyboard shortcuts are indeed ftw!)

Solution Explorer

I still have uncertainty about the solution explorer – it’s the one area where things feel like a step backwards.  The glyph approach in VS11 certainly keeps the UI subtle, but in a very large solution (is 81 projects very large? it feels it!) the distinction between projects, solution folders, folders, files, modified files, etc. very much all blends into one.

This is definitely an area where, when someone skins up a ‘VS2010 solution explorer’ theme for VS11, I’ll install it.

Don’t get me wrong, I can work with it, and the search tools make it far easier to find something (though with ReSharper that was never hard anyway – CTRL+T for the win), but overall it doesn’t feel like as much of a win in this one window.

 

Performance

I should prefix this by saying it’s very much a beta, though I don’t know why I’m bothering, as the performance on the whole is far better.

Startup time from ‘open solution’ to being ready I’ve timed as being a smidge longer (it’s a technical term) – not enough to worry me, but certainly noticeable.

I really like the concept of what they seem to be trying to achieve – the count of loaded projects ticks down as VS works through the loading, and you get a visual indicator in Solution Explorer of which projects are loaded and which aren’t.  I suspect the intention is to let you get cracking on those projects that are loaded while it loads up the others.  In reality though, the UI is a little too sluggish at this point, so I find it better to just wait until the solution has loaded.

Once you have everything loaded/available though, the story changes entirely – everything feels more responsive, quicker to navigate and just generally ‘better’ – because of the cleaning up of the UI mentioned above, I feel far more ‘in the code’ than I ever did in VS2010.

Build Times

If there is one reason and one reason only to upgrade to VS11, this is it.  If you ever have a boss who talks about the cost/benefit of anything, demonstrate a big project build via VS2010 versus VS11.

Initial/Clean build on our 81 project solution in VS2010 takes between 40 and 60 seconds depending upon what else the machine has going on and how long VS2010 has been running.  VS11 is faster on initial/clean build, but not massively – 30-45 seconds.

Where we really gain is in subsequent builds – the build manager in VS11 seems to have taken parallelisation very seriously.  In VS2010 we’re still looking at around 40 seconds for a build after a code change down in the guts of the solution; in VS11 that’s averaging around 8-10 seconds.

This is the productivity carrot that makes it easy to sell this as a product really.  I find myself building more often simply because it’s not so impactful, and I find myself spending more time just ‘Getting Things Done’.

There is a caveat to this – I noticed a few occasions whereby in a sequential build all was tickety boo, but on a parallel build we were getting occasional (but not consistent) build errors with unmet reference dependencies.

Turns out it was our fault, and the project really didn’t have a reference to that dependency, but because in a sequential build the dependency was built before the build got to that project, it never generated an error, whereas I suspect in the parallel build world we were getting something akin to a race condition.

I can’t confirm this – it could be just my lack of understanding (most likely!) but anyway, adding the references to the project that was generating the intermittent build errors has resolved the problem.

Tooling

I’ve very much tortured myself here – part of me wishes I hadn’t, though I thought I’d see fully what it had to offer, so I’m trialling the Ultimate SKU of VS11 (I only have a license for Professional in 2010).

I know that some of this tooling exists within VS2010, though I can’t comment on it in there so this is a ‘clean’ review of it in VS11.

Code Clones

I was dreading running this – much as I’m very active in ensuring code quality, and we have a cracking team working on the product, it’s a product that is coming up to 2 years old, so I expected this to find some laziness dotted here and there.

Overwhelmingly, there wasn’t as much as I’d thought, and a lot of the issues reported were around some of our commonality in exception handling/logging (which in fairness should be AOP’d at some point, but it’s not what I’d consider duplication in the traditional sense).

The first run through builds up an index so that subsequent runs are a lot faster (I assume it takes a delta of changes to the codebase or something similar).

The way it highlights the issues is very elegant, and it picked up a few issues that were indeed laziness/unawareness on the part of devs (myself included!) which we’ve managed to refactor nicely, improving the code quality without any real cost to productivity – larger scale code reviews for things like this can take an age!

You can look through your ‘Exact’ or ‘Strong’ matches in no time at all, and it all just feels like a streamlining and easing of quality management.

Love it, and will be sure to run it now periodically as part of my ‘ensure the boy scouts have been in the code base’ reviews.

Calculate Code Metrics for Solution

I’m aware that this at least is in the VS2010 Ultimate SKU too, so I don’t know what (if anything) is different between the versions, but again, this is another tool that really allows me to see the codebase at a ‘big picture’ level very easily – I don’t have to monitor checkins so closely because the code metrics will identify code that has a bad maintainability index, areas where cyclomatic complexity has gone a bit awry, etc. – bloody useful, and again one that will come into my regular reviews to save me time.

Unit Test Code Coverage

I’ve used other solutions for this in the past, so again, it’s nice to see it baked in – though it’s a shame it’s Ultimate only.  I very much take the approach with unit testing of ‘ensure the functionality is covered’ while not worrying too much about the percentage (other than as an indicator of where the former may be suffering), and the interface on this makes it very easy to look through and pick out areas that are lacking testing.

I’ve added > 30 // TODO UNIT TEST comments to the codebase today, and again, it took no time at all to find those.

TL;DR

With this iteration of Visual Studio it feels very much like the workflow/lifecycle of what we do as developers has been at the forefront, and it’s difficult to find anything (other than the solution explorer) that doesn’t feel like a significant improvement over the previous iteration.  I’ve only scratched the surface over the past two weeks of running with it, but I will very much be convincing our management to upgrade when it releases, and will do my best to get them to justify the cost of a few Ultimate SKUs for all of the bells and whistles that come with that edition.

Positives

  • Build times have to sit at number one – productivity, productivity, productivity!
  • Tooling is superb in assisting you in ‘getting things done’
  • The UI chrome just blends into the background letting you ‘get things done’
  • Did I mention that I feel far more able to just ‘get things done’? 🙂

Negatives

  • Solution Explorer on the whole is better/faster/more responsive, though this is one area where the lack of chrome (imo) makes things a bit harder
  • Startup time – although not significantly slower, it is certainly a little slower

We’re hiring!

We’re looking for 2 developers to join a growing team in Sunderland who support and build software for tombola bingo – currently the UK’s largest online bingo site.  We are branching out into Italy, Spain and elsewhere, which has seen significant expansion to the team.  We are looking to place the developers within the web team.

There are 11 of us on the web team, and we are developers who love to be good at what we do – we’re passionate about delivering quality and are looking for likeminded people.

We don’t score too badly on the Joel Test or by others’ standards:

  • We use Subversion for source control
    We realise that some people would have us apologise for this, though we’re going to debate the use of DVCS soon, and SVN works (most of the time) for us
  • We have made our own custom build scripts with MSBuild
    And although we don’t yet practice continuous deployment, we’re not that far off
  • We don’t make daily builds, but we do have TeamCity
    We use it actively for deployments/testing/automated smoke testing and are constantly improving, looking at code quality metrics etc.
  • We do have a bug database and we try to fix bugs before writing new code
    You know how it is… 😉
  • We employ Scrum to manage our projects
    We do dailies, we work to (generally) two week sprints, we use Scrumwise to help with the overall picture, and we’re big fans of the transparency and flexibility of agile.
  • We do regular code reviews and mentoring
    If you have a skills shortfall, chances are someone has that skill and will be willing to support you
  • We talk a lot
    We all get stuck, we all need to bounce ideas off people, we all need opinions from time to time.  As a team, we talk to each other a lot to resolve difficulties or just get a perspective on potential solutions
  • We grok good software design principles
    We may not always apply all of the SOLID principles, and our code isn’t littered with “cool design pattern X”, but we know how and when to apply these and our codebase stands up well to scrutiny

The Roles

The roles are primarily for C# ASP.NET MVC developers with the usual skillset: ASP.NET MVC 3 (or 4, or 2!), decent HTML/JavaScript/CSS skills (though don’t sweat this one!), decent SQL skills – and some understanding of some of the above bullets would make you stand out.

With the expansion of the company there are always opportunities to get into new technology too, and we’re always playing with new stuff – HTML5 for games is a hot topic at the moment (along with all of the associated technologies), and social media (ewww, we all hate that term, but you know what we mean!).

The only real caveat aside from the above is that you must be passionate about what you do – you will obviously want to make good software, and want to work with people who enjoy doing the same.

Salaries for the roles are competitive (and negotiable) and based upon experience.

There’s obviously fear in the market at the moment – people don’t want to move when they’re in safe positions, and the current economic climate is making people hesitant.  We’re a company operating in one of the few growth areas and are only ever seeing a need for expansion.

Get in touch for a chat

If any of the above sounds interesting, or you have any questions, please get in touch.  We’re pretty nice guys/girls and if that’s as far as it goes, at least we’ll have had a chance to meet you and you us 🙂

Contact either myself, Terry Brown (Project Lead, @terry_brown, 07968 765 139)

or Ian Walshaw (Operations Manager, @ian_walshaw1973, 07850 507 629).

Agencies – we have a preferred supplier list, please don’t contact us if you’re not on it.

Localisation of your ASP.NET MVC 3 Routes

Our core product has recently undergone a localisation exercise as we plan to launch it in other European countries.  One of the first things we needed was to localise the routes on a per-country basis.

We were remarkably lucky in that every route we delivered in the app was already custom.  We didn’t like the default route handler’s (Controller/Action/{stuff}) URL structure, and although we could have gone down the custom route handler approach, there were a few things that steered us away from that.

  1. We wanted full flexibility from an SEO point of view – as we dev’d we had no idea what would work well for SEO, so having each potential route customisable to whatever our SEO company desired was going to be a bonus.
  2. Longer term plans will see us delivering a content management system for an awful lot of the content – at that point, we may well be delivering custom routes via the DB too, so having a flexible routing system was essential.

Why not the default routes?

An example of some of the ‘out of the box’ routes we’d have gotten with the default route handler, versus what we actually wanted:

/MyAccount/UpdatePersonalDetails –> my-account/personal-details

/Winners/ByGame/{GameName} –> winners/by-game/{game-name}

Although generally the conversion was a hyphen between capitals and a full lowercasing, we found that replacing the default route handler with a custom ‘HyphenateAndLowercaseRouteHandler’ just didn’t answer enough of our use cases.
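
For illustration, the string conversion at the heart of such a handler is roughly this (a hypothetical sketch – the class and method names are mine, not our actual handler):

using System.Text.RegularExpressions;

public static class RouteHyphenator
{
    // "UpdatePersonalDetails" -> "update-personal-details"
    public static string HyphenateAndLowercase(string pascalCased)
    {
        // insert a hyphen at each lower-case/digit to upper-case boundary, then lowercase the lot
        return Regex.Replace(pascalCased, "(?<=[a-z0-9])(?=[A-Z])", "-").ToLowerInvariant();
    }
}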

I’m sure Google, Bing and the other search engines will happily look at Pascal cased words and discern them, though I as a human find it easier to read /our-new-game-has-paid-out-3-million-so-far than /OurNewGameHasPaidOut3MillionSoFar.

One of the big selling points for not using the default routing was flexibility – we can change the routes without having to refactor/rename controllers or action methods, so there is a real separation there.

So, we started to build up our routing table with custom entries for each controller/action such as:

routes.MapRoute("GameHistory", "game-history/{gameName}/{room}",
    new
    {
        controller = "BingoGamesHistory",
        action = "Index",
        gameName = "Bandit",
        room = "the-ship"
    }, namespaces);

and to date, across the whole front end application we have 183 custom routes.

Localising the Routes

It almost feels sham-like to be writing a blog post about this, though I still see questions on Stack Overflow about it, so thought I’d write it up.

What we did in the above example was replace the route string (“game-history/{gameName}/{room}”) with a localised resource – we now have a LocalisedRoutes resource file which has something like the following:

[Screenshot: the LocalisedRoutes resource file, with one named string entry per route]

and the routes.MapRoute command in Global.asax replaces the string representation of the route with LocalisedRoutes.GameHistory_General.
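
So the GameHistory entry from earlier becomes something like the following (a sketch, assuming LocalisedRoutes is a standard .resx-backed resource class with one named string per route):

routes.MapRoute("GameHistory", LocalisedRoutes.GameHistory_General,
    new
    {
        controller = "BingoGamesHistory",
        action = "Index",
        gameName = "Bandit",
        room = "the-ship"
    }, namespaces);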

Obviously from this point on, it’s then just a matter of adding a LocalisedRoutes.GameHistory.it or LocalisedRoutes.GameHistory.es etc. to get the representation of the routes for those countries, and in our CI deployment the plan is to alter the web.config depending upon the deployment:

<globalization uiCulture="it-IT" culture="it-IT" />

Job’s a good ’un 🙂

What next?

As I say, the next big phase of our project will include a content management system, so it may well require us to have runtime routes injected into the routing table – I’ve never done it, but it’s something to be aware of.

Sample Project

I’ve put together a simple project that demonstrates the above, which should give folks something to base their solutions upon if they are having difficulty with the above description.  The example only localises routes, so the UI still remains in English, but you get the idea.

Download the example at Google Code.

Unit testing complex scenarios – one approach

This is another of those posts that is as much for my benefit as it is for the community.  On a sizeable project at work we’ve hit a ‘catch up on tests’ phase.  We don’t employ TDD, though we obviously understand that testing is very important to the overall product (both for confidence on release, and confidence that changes to the code will break the build if functionality changes).  Our code coverage when we started this latest phase of work was terrible (somewhere around 20% functionality coverage with 924 tests) – after a couple of weeks of testing we’re up to fairly high coverage on our data access/repositories (>90%) and have significantly more tests (2,600 at last count).

We follow a UI –> Service –> Repository type pattern for our architecture, which works very well for us – we’re using IoC, though perhaps only because of testability; the loose coupling benefits are obviously there.

We’re now at the point of testing our service implementations, and have significantly more to think about.  At the data access layer, the only external dependency was literally the database.  At the service layer, we have other services as external dependencies, as well as repositories, so unit testing these required considerably more thought/discussion.  Thankfully I work with a very good team, so the approach we’ve taken here is very much a distillation of the outcomes of discussion with the team.

A couple of things about our testing:

Confidence is King

The reason we write unit tests is manifold, but if I were to try to sum it up, it’s confidence.

Confidence that any changes to code that alter functionality break the build.

Confidence that the code is working as expected.

Confidence that we have solidly documented (through tests) the intent of the functionality and that someone (more often than not another developer in our case) has gone through a codebase and has reviewed it enough to map the pathways through it so that they can effectively test it.

Confidence plays a huge part for us as we implement a Continuous Integration process, and the longer term aim is to move towards Continuous Delivery.  Without solid unit testing at its core, I’d find it difficult to maintain the confidence in the build necessary to be able to reliably deploy once, twice or more per day.

Test Functionality, Pathways and Use Cases, Not Lines of Code

100% code coverage is a lofty ideal, though I’d argue that if that is your primary goal, you’re thinking about it wrong.  We have often achieved 100% coverage, but done so via the testing of pathways through the system rather than focussing on just the lines of code.  We use business exceptions and very much take the approach that if a method can’t do what it advertises, an exception is thrown.

Something simple like ‘ValidateUserCanDeposit’ can throw the following:

/// <exception cref="PaymentMoneyLaunderingLimitException">Thrown when the user is above their money laundering limit.</exception>
/// <exception cref="PaymentPaymentMethodChangingException">Thrown when the user is attempting to change their payment method.</exception>
/// <exception cref="PaymentPaymentMethodExpiredException">Thrown when the expiry date has already passed</exception>
/// <exception cref="PaymentPaymentMethodInvalidStartDateException">Thrown when the start date hasn't yet passed</exception>
/// <exception cref="PaymentPlayerSpendLimitException">Thrown when the user is above their spend limit.</exception>
/// <exception cref="PaymentPlayerSpendLimitNotFoundException">Thrown when we are unable to retrieve a spend limit for a user.</exception>
/// <exception cref="PaymentOverSiteDepositLimitException">Thrown when the user is over the sitewide deposit limit.</exception>

and these are often calculated by calls to external dependencies (in this case there are 4 calls away to external dependencies) – the business logic for ‘ValidateUserCanDeposit’ is:

  1. Is the user over the maximum site deposit limit?

  2. Validate the user has remaining spend based upon responsible gambling limits
     – paymentRepository.GetPlayerSpendLimit
     – paymentRepository.GetUserSpendOverTimePeriod

  3. Get the current payment method
     – paymentRepository.GetPaymentMethodCurrent
     – paymentRepository.GetCardPaymentMethodByPaymentMethodId
     – OR paymentRepository.GetPaypalPaymentMethodByPaymentMethodId

  4. If we’re changing payment method, ensure:
     – we are not over the money laundering limit

So testing a pathway through this method, we can pass and fail at each of the lines listed above.  A pass is often denoted as silence (our code only gets noisy when something goes wrong), but each of those external dependencies themselves can throw potentially multiple exceptions.

We employ logging of our exceptions so again, we care that logging was called.

Testing Framework and Naming Tests

NUnit is our tool of choice for writing unit tests – the syntax is expressive, and it generally allows for very readable tests.  I’m a big fan of the test explaining the author’s intent – being able to read and understand unit tests is a skill for sure, though once you’ve read a unit test, having it actually do what the author intended is another validator.

With that in mind, we tend to take the ‘MethodUnderTest_TestState_ExpectedOutcome’ approach.  A few examples of our unit test names:

  • GetByUserId_ValidUserId_UserInCache_GetVolatileFieldsReturnsValidData_ReturnsValidUserObject
  • GetPlaymateAtPointInTime_GivenCorrectUserAndDate_ValidPlaymateShouldExist
  • GetCompetitionEntriesByCompetitionId_NoEntries_ShouldReturnEmptyCollection
  • GetTransactionByUserIdAndTransactionId_DbException_ShouldThrowCoreSystemSqlException

Knowing what the author intended is half the battle when coming to a test 3 months from now because it’s failing after some business logic update.

Mocking Framework

We use Moq as a mocking framework, and I’m a big fan of the power it brings to testing – yup, there are quite a number of hoops to jump through to effectively set up and verify your tests, though again, these add confidence to the final result.

One note about mocking in general, and any number of people have written on this in far more eloquent terms than I: never just mock enough data to pass the test.

If we have a repository method called ‘GetTransactionsByUserAndDate’, ensure that your mocked transactions list also includes transactions from other users, as well as transactions for the same user outside of the dates specified – getting a positive result when the only data present is what should be returned is one thing; getting a positive result from a diverse mocked data set containing things that should not be returned adds confidence that the code is doing specifically what it should.
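
As a sketch of the idea (every name here is hypothetical – the point is that the mocked data deliberately contains records that must not come back):

var fromDate = new DateTime(2012, 1, 1);
var toDate = new DateTime(2012, 1, 31);

var transactions = new List<TransactionDto>
{
    new TransactionDto { UserId = ValidUserId, Date = new DateTime(2012, 1, 10) }, // should be returned
    new TransactionDto { UserId = OtherUserId, Date = new DateTime(2012, 1, 10) }, // wrong user
    new TransactionDto { UserId = ValidUserId, Date = new DateTime(2011, 6, 1) }   // outside the date range
};

// the mocked repository filters the diverse list, standing in for the real data store
transactionRepository
    .Setup(x => x.GetTransactionsByUserAndDate(It.IsAny<int>(), It.IsAny<DateTime>(), It.IsAny<DateTime>()))
    .Returns((int userId, DateTime start, DateTime end) =>
        transactions.Where(t => t.UserId == userId && t.Date >= start && t.Date <= end).ToList());

var result = transactionService.GetTransactionsByUserAndDate(ValidUserId, fromDate, toDate);

Assert.That(result.Count, Is.EqualTo(1)); // only the genuinely matching record survives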

Verifying vs. Asserting

We try very much to maintain a single assert per test (and only break that when we feel it necessary) – it keeps the mindset on testing a very small area of functionality, and makes the test more malleable/less brittle.

Verifying on the other hand (a construct supported by Moq and other frameworks) is something that we are more prolific with.

For example, if ‘paymentRepository.GetPlayerSpendLimit’ above throws an exception, I want to verify that ‘paymentRepository.GetUserSpendOverTimePeriod’ is not called – I also want to verify that we logged that exception.

The assert from all of that is that the correct exception is thrown from the main method, but the Verify calls that run as part of that test add confidence.

In our [TearDown] method we tend to place our ‘mock.Verify()’ calls, so that after each test we verify everything that can be verified.
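
Something like the following (a sketch, assuming paymentRepository and logger are the Mock&lt;T&gt; fields created in [SetUp]):

[TearDown]
public void TearDown()
{
    // asserts that every setup marked .Verifiable() during the test was actually invoked
    paymentRepository.Verify();
    logger.Verify();
}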

Enough waffle – where’s the code?

That one method above ‘ValidateUserCanDeposit’ has ended up with 26 tests – each one models a pathway through that method.  There is only one success path through that method – every other test demonstrates error paths.  So for example:

[Test]
public void ValidateUserCanDeposit_GetPaymentMethodCurrent_ThrowsPaymentMethodNotFoundException_UserBalanceUnderMoneyLaunderingLimit_ShouldReturnPaymentMethod()
{
	var user = GetValidTestableUser(ValidUserId);
	user.Balance = 1m;

	// remaining spend
	paymentRepository.Setup( x => x.GetPlayerSpendLimit(ValidUserId)).Returns( new PlayerSpendLimitDto { Limit = 50000, Type = 'w' }).Verifiable();
	paymentRepository.Setup( x => x.GetUserSpendOverTimePeriod(ValidUserId, It.IsAny<DateTime>(), It.IsAny<DateTime>())).Returns( 0 ).Verifiable();

	// current payment method
	paymentRepository.Setup( x => x.GetPaymentMethodCurrent(ValidUserId))
						.Throws( new PaymentMethodNotFoundException("") ).Verifiable();

	IPaymentMethod paymentMethod = paymentService.ValidateUserCanDeposit(user, PaymentProviderType.Card);

	Assert.That(paymentMethod.CanUpdatePaymentMethod, Is.True);
	paymentRepository.Verify( x => x.GetCardPaymentMethodByPaymentMethodId(ValidCardPaymentMethodId), Times.Never());
	LogVerifier.VerifyLogExceptionCalled(logger, Times.Once());
}

That may seem like a complex test, but I’ve got the following from it:

  • The author’s intent from the method signature: upon calling ValidateUserCanDeposit,
    a call within that to GetPaymentMethodCurrent has thrown a PaymentMethodNotFoundException;
    at that point, the user’s balance is below the money laundering limit for the site,
    so the user should get a return that indicates that they can update their payment method
  • That those methods that I expect to be hit *are* hit (using Moq’s .Verifiable())
  • That those methods that should not be called aren’t (towards the end of the test, Times.Never() verifies this)
  • That we have logged the exception once and only once

Now that this test (plus the other 25) is in place, if a developer is stupid enough to bury an exception or remove a logging statement, the build will fail.

Is this a good approach to testing?

I guess this is where the question opens up to you guys reading this.  Is this a good approach to testing?  The tests don’t feel brittle.  They feel like they’re focussing on one area at a time.  They feel like they are sure about what is going on in the underlying code.

Overkill?

Ways to improve them?

How do you unit test in this sort of situation? What software do you use? What problems have you faced?  Keen to get as much information as possible and hopefully help inform each other.

I’d love to get feedback on this.  It feels like it’s working well for us, but that doesn’t necessarily mean it’s right/good.

MSBuild, YUICompressor and making CSS and JavaScript titchy

Both Google (via PageSpeed) and Yahoo (via YSlow and their Developer Network guidelines), among many others, tell us that how quickly our page loads and how optimised the site is for fast download is important – Google have announced that the speed at which your page loads matters to them, and Yahoo have a number of guidelines highlighting the same thing.

I’ve done a lot of work in other areas (distributed caching to avoid hitting the DB, output caching of pages to avoid any unnecessary parsing, etc.), but I thought I’d start to focus on the performance of the web side of things.

After almost 2 hours googling and a final chat on Twitter, I thought I’d have a look at YUICompressor first.  I quite like the ‘being part of the build’ nature of it all, and it fits in with my desire to learn a little more about MSBuild to help in the ongoing mission to ensure we have a good Continuous Integration process within the workplace.

The documentation for YUI isn’t bad so long as you want to piggyback onto the pre and post-build events (found from the properties window in your web project, on the Build Events tab), but this has never really felt clean to me, so I thought I’d piggyback onto the ‘AfterBuild’ target within the .csproj file instead.

Here’s what I did.

Download and Setup YUICompressor

YUICompressor is available from here, and comes as a zip with a couple of DLLs in it.

In order to tie in with the CI process, I keep a ‘lib’ folder within my solution folder for all external dependencies, and these get checked into source control along with the solution – one of the early goals of CI is repeatability, and including (and referencing) local resources allows you to build the project on any clean machine.

Tie into your MSBuild file for your project

We use a ‘Web.Resources’ project, which acts as a pseudo-CDN, so all static resources (scripts, CSS, images, flash) go into this and keep the core web solution a little cleaner.  It’s another thing that assists in speeding up your site too, as some older browsers have a limit of 2 concurrent requests per domain – splitting your static resources onto another domain (even a sub domain) increases the concurrency of downloads and hence speeds up load times.

In Visual Studio, right click on the project containing your CSS/JavaScript, unload the project, then right click again and choose ‘Edit’.  You’ll be presented with the .csproj file (which, as anyone reading this will already know, is also an MSBuild file).

Towards the end of the file, you will see a section like this:

<!-- To modify your build process, add your task inside one of the targets below and uncomment it. 
  
     Other similar extension points exist, see Microsoft.Common.targets.
<Target Name="BeforeBuild">
</Target>
<Target Name="AfterBuild">
</Target>
-->

We’re going to move the ‘AfterBuild’ target outside of the comment, and replace it with the following:

  <UsingTask TaskName="CompressorTask" AssemblyFile="../lib/Yahoo.Yui.Compressor/Yahoo.Yui.Compressor.dll" />
  
  <Target Name="AfterBuild">
    <PropertyGroup>
      <MainSiteCssOutputFile Condition=" '$(MainSiteCssOutputFile)'=='' ">tombola/css/tombola.compiled.css</MainSiteCssOutputFile>
      <MicroSiteCssOutputFile Condition=" '$(MicroSiteCssOutputFile)'=='' ">cinco/css/cinco.compiled.css</MicroSiteCssOutputFile>
      <!--<JavaScriptOutputFile Condition=" '$(JavaScriptOutputFile)'=='' ">JavaScriptFinal.js</JavaScriptOutputFile>-->
    </PropertyGroup>
    <ItemGroup>
      <!-- Single files, listed in order of dependency -->
      <MainSiteCssFiles Include="$(SourceLocation)css/core/reset.css" />
      <MainSiteCssFiles Include="$(SourceLocation)css/core/site.css" />
      <MainSiteCssFiles Include="$(SourceLocation)tombola/css/site.css" />
      <MicroSiteCssFiles Include="$(SourceLocation)css/core/reset.css" />
      <MicroSiteCssFiles Include="$(SourceLocation)css/core/site.css" />
      <MicroSiteCssFiles Include="$(SourceLocation)cinco/css/site.css" />
      <!--<JavaScriptFiles Include="$(SourceLocation)jquery-1.3.2.js"/>-->
      <!-- All the files. They will be handled (I assume) alphabetically. -->
      <!-- <CssFiles Include="$(SourceLocation)*.css" />
            <JavaScriptFiles Include="$(SourceLocation)*.js" />
            -->
      <!--
          JavaScriptFiles="@(JavaScriptFiles)"
          JavaScriptOutputFile="$(JavaScriptOutputFile)"
          ObfuscateJavaScript="True"
          DeleteJavaScriptFiles="false"
-->
    </ItemGroup>
    <CompressorTask CssFiles="@(MainSiteCssFiles)" DeleteCssFiles="false" CssOutputFile="$(MainSiteCssOutputFile)" CssCompressionType="YuiStockCompression" PreserveAllSemicolons="True" DisableOptimizations="Nope" EncodingType="Default" LineBreakPosition="-1" LoggingType="ALittleBit" ThreadCulture="en-gb" IsEvalIgnored="false" />
    <CompressorTask CssFiles="@(MicroSiteCssFiles)" DeleteCssFiles="false" CssOutputFile="$(MicroSiteCssOutputFile)" CssCompressionType="YuiStockCompression" PreserveAllSemicolons="True" DisableOptimizations="Nope" EncodingType="Default" LineBreakPosition="-1" LoggingType="ALittleBit" ThreadCulture="en-gb" IsEvalIgnored="false" />
  </Target>

You’ll see there’s a lot going on there, so let’s break it down.  Firstly, you will see I’ve commented out the JavaScript minification and concatenation – you should (after working through the CSS stuff) be able to come up with your own use case for the JavaScript on your site.

So:

  <UsingTask TaskName="CompressorTask" AssemblyFile="../lib/Yahoo.Yui.Compressor/Yahoo.Yui.Compressor.dll" />

This is basically creating a ‘task’ by pointing to the functionality within an assembly (in this case, our local Yahoo.Yui.Compressor install).  In my own head I have this down as a ‘using’ statement for MSBuild (in a similar way to Import, though I know you can do a lot more with the UsingTask MSBuild command than I have yet delved into).

  <Target Name="AfterBuild">
  
    <PropertyGroup>
      <MainSiteCssOutputFile Condition=" '$(MainSiteCssOutputFile)'=='' ">mainsite/css/tombola.compiled.css</MainSiteCssOutputFile>
      <MicroSiteCssOutputFile Condition=" '$(MicroSiteCssOutputFile)'=='' ">microsite/css/cinco.compiled.css</MicroSiteCssOutputFile>
    </PropertyGroup>

This basically builds up the variables $(MainSiteCssOutputFile) and $(MicroSiteCssOutputFile).  I could have just as easily called them $(Bob) and $(Fred), though I’m a fan of self-documenting variable names 😉

 
    <ItemGroup>
      <!-- Single files, listed in order of dependency -->
      <MainSiteCssFiles Include="$(SourceLocation)css/core/reset.css" />
      <MainSiteCssFiles Include="$(SourceLocation)css/core/site.css" />
      <MainSiteCssFiles Include="$(SourceLocation)tombola/css/site.css" />
      <MicroSiteCssFiles Include="$(SourceLocation)css/core/reset.css" />
      <MicroSiteCssFiles Include="$(SourceLocation)css/core/site.css" />
      <MicroSiteCssFiles Include="$(SourceLocation)cinco/css/site.css" />
    </ItemGroup>

A few arrays of items to be included in the compression

  
    <CompressorTask CssFiles="@(MainSiteCssFiles)" DeleteCssFiles="false" CssOutputFile="$(MainSiteCssOutputFile)" CssCompressionType="YuiStockCompression" PreserveAllSemicolons="True" DisableOptimizations="Nope" EncodingType="Default" LineBreakPosition="-1" LoggingType="ALittleBit" ThreadCulture="en-gb" IsEvalIgnored="false" />
    <CompressorTask CssFiles="@(MicroSiteCssFiles)" DeleteCssFiles="false" CssOutputFile="$(MicroSiteCssOutputFile)" CssCompressionType="YuiStockCompression" PreserveAllSemicolons="True" DisableOptimizations="Nope" EncodingType="Default" LineBreakPosition="-1" LoggingType="ALittleBit" ThreadCulture="en-gb" IsEvalIgnored="false" />

The clever stuff 🙂  This is where the CompressorTask picks up the CSS files to compress/join – @(MainSiteCssFiles) – and compresses them down into the CssOutputFile, $(MainSiteCssOutputFile).

The options on the CompressorTask are numerous, and I’d recommend referring to the main YUICompressor site to get the settings correct for your environment – I’ve stuck with the defaults, but you can increase the amount of compression, delete the original files, etc.

What about the JavaScript?

I’ve left the JavaScript parts of the CompressorTask commented out in the main post above, and I’m about to go and play with those now, though it seems like it’ll be pretty much an identical process to the above.

Does it make any difference?

I’ve moved from an 85 rating to an 87 rating on YSlow – wow, you say, was it really worth it?  When we know that even an extra second on load speed can significantly affect revenue for companies (god, I wish I could find decent resources to back that up after seeing people show them in talks on this sort of thing), it very much is an ‘every little helps’ approach.  The jump of 2 was without the concatenation or minification of the JavaScript, so I hope to achieve perhaps another point there too.  From there: smushing of images, perhaps spriting up those images that can be, and generally just trying to eke out every last ounce of performance without any additional hardware costs.

What’s next?

After chatting to a few of those ‘clever people™’ that I follow on twitter, in particular @red_square and @stack72, they both recommended Chirpy, which looks very interesting, and I really like the idea of .LESS and the concept of variables within CSS, so I may well look at that next (it will need more buy-in from the team as we’ll all have to install it, but that’s never been a problem).

At least for now, our build is automated with the optimised versions of our static resources.

We’re hiring, and we quite like .net developers

I normally wouldn’t use my blog for this sort of thing, though we don’t really have an outlet on our corporate site, so this is the easiest place to do it.  I quite like working at my employer, so thought I’d use this as one of the channels to get the job advertised – I’ve tweeted about it too, please RT if you see it!

We’re hiring, and unlike Expensify, we do really quite like .net developers.  We do our best to ‘do good things’, and although we don’t pass the Joel Test, a score of roughly 7ish – with an aim to improve upon our build automation/daily builds/continuous deployment – means that I personally find it a good place to work and learn a lot.  There is a team here who care about the work they undertake, they try to learn from one another, and they do their best to leave the code base in a better state than when they found it…

We like to attend UK Developer community events (@NEBytes, @DeveloperDay, @scottishdevs, etc.) and try to better ourselves in any way we can find.

Who are we?

We’re an online bingo company based in Sunderland, though I wouldn’t let any of that put you off 😉  Bingo isn’t my life, though as with any business, you can love the job without having to love the subject matter…

There’s a decent sized team here – 8 .net devs (2 ‘game’ guys, 6 web app), an infrastructure team (5), a flash/client team (5), and a creative team (4) all contribute, so it’s a pretty good place to bounce ideas.

We’re re-architecting the current site (MVC3 front end, business/data access tier, DI/IoC, Linq, distributed caching) and that will be going live fairly soon. 

We’re moving into Europe with similar technology and this role would focus on the delivery of that.

The job spec – verbatim

Job Summary

In order to support the growing business, new developers will be required to work on projects building all new website applications for the UK and future European businesses.

Job Responsibilities & Tasks:

A technical specialist within the tombola Operations team, focusing on providing:

  • Website application development.
  • Backend application development.
  • SQL development.
  • Problem and Incident management.
  • Support and maintenance of existing software
  • Keep abreast of industry developments in the technical arena and make recommendations to management where appropriate.

Knowledge/Experience:

  • Website application development.
  • Developing applications for Windows based systems and Web Applications using IIS 7+.
  • Developing applications accessing an SQL Server backend.
  • Object Oriented programming and design.
  • Visual Studio

Skills:

  • C#
  • TSQL
  • ASP.NET MVC 3 / 2
  • HTML / XHTML
  • JavaScript / jQuery
  • Ajax
  • LINQ
  • Internationalisation of .net apps
  • Continuous Integration / Build Management
  • Understanding of theory and application of design patterns

Competencies

  • Must be passionate about their chosen career path, you’ll be working with people that love what they do – you should too.
  • Must have strong team working and communication skills as well as being confident in working alone.
  • Must be able to work well under pressure, and meet deadlines.
  • Must be highly motivated.
  • Must have good time management skills and be able to multi-task.

Details, details?

We’re looking for 2 people, and salary range will very much depend upon skillset but realistically we’re looking at £30-35k.

Interested?

Please get in touch with me initially on twitter and I’ll give you corporate email addresses to find out more.

IIS, Optimising Performance, 304 status codes, and one stupid browser…

Well, I thought I’d start my play in earnest after last week’s DevWeek, experimenting with various performance improvements that came out of Robert Boedigheimer’s (@boedie) talk.

First up, a play with expiry of content.  We host all of our ‘assets’ (images, CSS, JavaScript and flash) from a content delivery network style setup.  We don’t currently use a CDN for anything other than our games, but the concept is the same – so long as the assets are hosted on a separate URL from the content, the location of those assets isn’t an issue.

Setting Up Expiry for Assets in IIS 7

In IIS manager left click on the website, folder or indeed file that you wish to set expiry on.

[Screenshot: IIS Manager with the site selected]

From the ‘IIS’ section in the main pane (make sure you’re on features view for this) double click on ‘HTTP Response Headers’.

[Screenshot: the HTTP Response Headers feature]

You will see in the right hand pane the option to ‘Set Common Headers…’

[Screenshot: the ‘Set Common Headers…’ option in the Actions pane]

This gives you the following dialog:

[Screenshot: the Set Common HTTP Response Headers dialog]

 

You can see here that I’ve enabled ‘Expire Web content’ and am expiring it after 20 days.  You can set a fixed expiry time too, though I’ve never done this – I can imagine it’s more maintenance overhead to ensure you always keep content ‘cached’ at various points in time, though it’s there if your use case demands it.
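
As an aside, the same 20 day expiry can be kept in web.config rather than clicked in through IIS manager – handy if you want the setting in source control.  A minimal sketch:

<system.webServer>
  <staticContent>
    <!-- UseMaxAge emits Cache-Control: max-age; the value is in d.hh:mm:ss format -->
    <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="20.00:00:00" />
  </staticContent>
</system.webServer>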


Fiddlers 3(04)

OK, so is that all? Well, yup, that’s it.  Fire up Fiddler and load up one of your assets – you should see something like the following:

[Screenshot: Fiddler showing the first request returning a 200 with caching headers]

So, all is well on first load – we get a status code 200 and we can see that caching is enabled with a max-age of 1728000 seconds (20d * 24h * 60m * 60s), so we know the asset should now be served from the local cache rather than hitting the server.
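For reference, the interesting bits of that first response look something like this in Fiddler’s raw view (content type, dates and sizes here are made up for illustration):

    HTTP/1.1 200 OK
    Cache-Control: max-age=1728000
    Content-Type: text/css
    Last-Modified: Mon, 14 Mar 2011 09:00:00 GMT
    Content-Length: 4096

The max-age directive is the browser’s instruction to serve the asset from its local cache for the next 20 days without asking us again.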

Hit Refresh…

[Screenshot: Fiddler showing the refresh request returning a 304]

Erm… why are you still hitting my server, even if only to be told 304 (Not Modified) because the resource hasn’t changed?  So why the round trip?

It turns out hitting refresh in any browser will indeed make that round trip – the refresh button is almost an override for the local browser cache that says ‘go and double-check for me’.  I’ve tested in IE8/9, Firefox and Chrome, and they all do this.
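What’s actually happening under the hood is a conditional request: the browser sends the validator it holds, and the server replies with an empty 304 if nothing has changed.  Roughly (hostname and dates invented for illustration):

    GET /assets/site.css HTTP/1.1
    Host: assets.example.com
    If-Modified-Since: Mon, 14 Mar 2011 09:00:00 GMT

    HTTP/1.1 304 Not Modified
    Cache-Control: max-age=1728000

So a refresh is cheaper than a full 200 (no body is sent), but it’s still a round trip per asset.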

So how do I avoid the round trip for unmodified resources?

It turns out you’re already doing it.  Instead of clicking refresh, click into the address bar and press return.  You will find the resource loads up again no problem, but there is now no round trip to the server and no 304 response.  Well, it loads up no problem in IE or Firefox, but there’s another pesky browser on the block…

Hmmm, Google Chrome? Why won’t you play ball?

It seems that pressing return in Chrome behaves in the same way as if you’d hit the ‘refresh’ button – it still issues the request (and naturally gets the 304).

Should I worry?

Well, no – we have only been testing single resources here.  If you navigate to a page by typing in the URL and pressing return (first run) you’ll get the usual 200 status codes, and you’ll get the usual asset caching.  If you press return to ‘reload’ that page in Chrome, then you will get all of the round trips back and forth with the corresponding 304s.

But if you navigate to that page (after you’ve had your 200s) via either a bookmark or a Google search (essentially, via a hyperlink) then job’s a good ‘un and it doesn’t issue the requests.

I’m not sure why Chrome behaves differently to the other browsers in this regard – I could understand if it didn’t have a ‘refresh’ button, but it does.

I’m writing this up as it caused an hour or so’s pain while I played around with IIS caching, having recently switched to Chrome as my default browser.  It was only by chance that I tried the assets in other browsers when Chrome wasn’t behaving as it claimed it should, and I realised it was just a Chrome quirk that really only came into play while I was ‘debugging’.

Hope it’s useful to others.

DevWeek 2011 – A Week of Geek

Another year, another incredibly lucky software developer gets to spend a week at the DevWeek conference.  For regular readers of the blog (hey mum!) you’ll know I attended last year, and you can see my write-up of it here.

I look at my own career over the past 12 months and realise just how much DevWeek helped hone and solidify a lot of my thinking – I’ve moved on massively since then and have put a great deal of last year’s learning into practice within products that I’ve worked on, so it was definitely a worthwhile spend (thankfully my employer also thinks so!)

I had to choose my sessions far more carefully this year – a downside of returning, I feel, as there was a lot of repetition in the sessions.  That said, there was still a great deal of new content (and I enjoyed @JeffRichter’s talk on exceptions so much last year that I re-attended this year – sad, eh!).

Day 1 Highlights

Ajax and ASP.NET MVC

K. Scott Allen – @OdeToCode

The lightbulb went on at the start of this talk and just didn’t switch off – I’ve not done much on the Ajax side of MVC, but he made it all incredibly easy, and although I’ve watched the videos he delivered via Pluralsight, it was far easier to contextualise and it sank in better being in the room (and able to ask those stupid questions!).

He covered the client validation updates in MVC3 (the main reason we switched to MVC3 when it first launched), and explored a great deal around the topic as folks asked questions.
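For anyone wanting to flick the same switches, client validation in MVC3 is driven by two appSettings in web.config – this is the standard MVC3 mechanism, shown here as a sketch rather than anything specific from the session:

    <appSettings>
      <!-- Turn on client-side validation and the unobtrusive jQuery adapters -->
      <add key="ClientValidationEnabled" value="true" />
      <add key="UnobtrusiveJavaScriptEnabled" value="true" />
    </appSettings>

With those set (and the jquery.validate scripts referenced), data annotations on your models are emitted as data-* attributes and validated client-side for free.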

Massive tangibles to take directly onto a project I’m working on, and I look forward to getting some of this into code ASAP.

NoSQL, Is it the future?

Gary Short – @GaryShort

Oh how I wish I’d gone to that day on NoSQL last year in Scotland…  Presented brilliantly (as ever, really – after seeing his presentations at DDD this is the usual craic with Gary).  He covered the history of NoSQL, the various products available, and the general use cases for each.  I could waffle more but I’d only highlight my ignorance – realistically, the cleverness of the talk, and the outcome for me, is that I have to try this: grab a small project at work that I can pick up in my spare time and experiment with it.  I work in an environment where it’s easy to test the limits of our RDBMS solution as we have a massive amount of traffic, especially writes, and I think it’d be worthy of investigation – definitely more on this to follow from me!

Day 2 Highlights

Model Binding in ASP.NET MVC 3

K. Scott Allen – @OdeToCode

I’m going to start to sound like an @OdeToCode fanboy, but his presentation style and knowledge of the topic when asked a question just rocks – he’s one of those people who qualifies in my book as a Dev Rock Star, so catching up with him at lunch to chat through the questions I’d asked during the session was just superb!

There was a great deal of validation (personal, not model) in this session of the ways we’re currently doing model binding, though again there were significant tangibles around the extensibility in MVC3 that may well make certain aspects of what we do around model binding easier.
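To make the extensibility point concrete for anyone who hasn’t played with it: a custom binder is just an IModelBinder implementation registered at startup.  A hypothetical example (my own illustration, not something from the session) that trims whitespace from every bound string:

    using System.Web.Mvc;

    // Hypothetical MVC3 model binder: trims incoming string values.
    public class TrimStringModelBinder : IModelBinder
    {
        public object BindModel(ControllerContext controllerContext,
                                ModelBindingContext bindingContext)
        {
            // Look up the raw submitted value for this model name.
            var result = bindingContext.ValueProvider
                                       .GetValue(bindingContext.ModelName);
            return result == null ? null : result.AttemptedValue.Trim();
        }
    }

    // Registered once, e.g. in Application_Start:
    // ModelBinders.Binders.Add(typeof(string), new TrimStringModelBinder());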

Modern Javascript

K. Scott Allen – @OdeToCode

OK, OK, it’s turning into the Scott Allen show… doh!  I can only hope the reticence to ask questions during this session meant that others were similarly thinking ‘Oh Christ, I don’t do any of this!’.

I got that functions are hugely important (I sort of got this from consuming jQuery; I’d just not written my own in the same vein).  Closures are an area I really need to read up on, and in general I just need to work at this one.

For any developer who consumes jQuery and just ‘gets things done’, but has never written their own class, implemented their own closure, etc., I’d say do as I plan to do and get learning – there are so many opportunities that I’ve utterly missed.  Exciting times.
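For anyone else in the same boat, this is the kind of thing meant by a closure – not an example from the talk, just a minimal illustration of a function capturing a variable from its enclosing scope:

    // makeCounter returns a function that 'closes over' the local
    // variable count - each counter gets its own private copy.
    function makeCounter() {
        var count = 0;
        return function () {
            count += 1;
            return count;
        };
    }

    var next = makeCounter();
    next(); // 1
    next(); // 2 - count survives between calls, invisible to outside code

Once you spot this pattern, an awful lot of jQuery plugin code suddenly makes sense.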

Day 3 Highlights

Do’s and Don’ts of ASP.NET MVC

Hadi Hariri – @hhariri

Again, as much a personal validation session as anything, but loads of little tangibles from this one – lots around IDependencyResolver and NuGet that I just need to *do* – we currently use Unity in ControllerFactory guise, so I really need to get this sorted.  NuGet I’ve been consistently, positively surprised by.  There was a lot more in this session and I’ve got a bucket full of notes to work through, but a cracking session.
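For context, moving from a custom ControllerFactory to IDependencyResolver with Unity looks roughly like the sketch below – my own illustration rather than anything from the session, assuming an existing IUnityContainer:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Web.Mvc;
    using Microsoft.Practices.Unity;

    // Minimal MVC3 dependency resolver backed by Unity.
    // GetService must return null (not throw) for types MVC asks for optionally.
    public class UnityDependencyResolver : IDependencyResolver
    {
        private readonly IUnityContainer container;

        public UnityDependencyResolver(IUnityContainer container)
        {
            this.container = container;
        }

        public object GetService(Type serviceType)
        {
            try { return container.Resolve(serviceType); }
            catch (ResolutionFailedException) { return null; }
        }

        public IEnumerable<object> GetServices(Type serviceType)
        {
            try { return container.ResolveAll(serviceType); }
            catch (ResolutionFailedException) { return Enumerable.Empty<object>(); }
        }
    }

    // Wired up once in Application_Start:
    // DependencyResolver.SetResolver(new UnityDependencyResolver(container));

The nice part is that MVC then resolves controllers (and much else) through the container without a custom factory at all.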

Improving Website Performance and Scalability While Saving Money

Robert Boedigheimer – @boedie

Possibly the best talk of the week for me – and there wasn’t a design pattern, a mention of SOLID, or indeed a unit test in sight.  Robert presented a lot of absolutely practical steps that can be taken to improve performance on your website, highlighted some of the tools he uses (and how he uses them), and delivered practical advice after practical advice.  There is content from this that absolutely will be going into my everyday work from next week onwards, and the team are going to despair as I wave the flag for the tools and concepts I got from this talk.

Closing Thoughts?

Well, the above (as well as just being in a focused environment where I got to talk dev) was superb.  I personally think that, if I can continue to be funded for it, I’d return every two years rather than annually to give the talks a full chance to rotate.  That’s not a criticism of the hosts or of the people who give talks – I’ve seen the same on other speaker circuits (DeveloperDeveloperDeveloper events, for example), and it’s one of those ‘either shut up and do a talk yourself, Tez, or let them get on with it’ situations – I really have no desire to do a talk, so I’ll be happy with what I get!  Thanks to all of the organisers and speakers – it has absolutely rejuvenated my geek batteries.

Onwards and upwards now, fired up geek on board!