#DDDnorth 2 write-up – October 2012 – Bradford

#dddNorth crowd scene, waiting for swag!

Stolen from Craig Murphy (@camurphy) as it’s the only pic I saw with me on it (baldy bugger, green t-shirt front right) – thanks Craig!

Another 5:45am alarm woke me on a cold morning to signal the start of another day’s travelling on a Saturday for a Developer Developer Developer event, this time with Ryan Tomlinson, Steve Higgs, Phil Hale and Dominic Brown from work.  I’ve been to a fair few of these now, and it still overwhelms me that so many people (speakers and delegates alike) are willing to give up their Saturdays and spend a day away from friends, family (and bed!) to gather with their peers and learn from each other.

Lions and tigers and hackers! Oh my!

Phil Winstanley, @plip

Phil highlighted that the threat landscape has changed and is still changing – we’re moving away from paper and coin as our means of transaction, and everything now exists in the online space; it’s virtual, and it’s instantaneous.  Identity has become a commodity, and we all now exist in the online space somewhere – Facebook makes the money it does because our identities and those of our relationships are rich with information about who we are and what we like.

He brought over some very good anecdotal evidence from Microsoft around the threat landscape and how it’s growing exponentially.  There are countries and terrorist organisations involved (more in the disruption/extraction space), but everyone is at risk – an estimated 30% of machines have some form of malware on them, and a lot of the time it’s dormant.

Groups like Anonymous are the ones folks should be most scared of – at least when a country hacks you there are some morals involved, whereas groups like Anonymous don’t really care about the fallout or who and what they affect; they’re just trying to make a point.

The takeaway from this rather sobering talk for me was to read the Security Development Lifecycle – we all agreed as developers that although we attempt to code secure software, none of us was actually confident enough to say that we categorically do create secure software.

I’ve seen Phil give presentations before and really like his presentation style, and this talk was no different – a cracking talk with far more useful information than I could distil in a write-up.

Async C# 5.0 – patterns for real world use

Liam Westley, @westleyl

I’ve not done anything async before and although I understand the concepts, what I really lacked was some real world examples, so this talk was absolutely perfect for me.

Liam covered a number of patterns from the ‘Task-based Asynchronous Pattern’ white paper, in particular the .WhenAll (all things are important) and .WhenAny (which covers a lot of other use cases like throttling, redundancy, interleaving and early bailout) patterns.  More importantly, he covered these with some cracking examples that made each use case very clear and easy to understand.
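As an illustration of the flavour of these (my own sketch rather than Liam’s code – the mirror URLs and method are made up), here’s the redundancy pattern using .WhenAny: fire the same request at two mirrors and take whichever answers first.

using System.Net.Http;
using System.Threading.Tasks;

public static async Task<string> GetQuoteFromFastestMirrorAsync()
{
    var client = new HttpClient();

    // Kick both requests off in parallel...
    Task<string> primary = client.GetStringAsync("http://mirror1.example.com/quote");
    Task<string> secondary = client.GetStringAsync("http://mirror2.example.com/quote");

    // ...then take the first to complete - the slower response is simply ignored.
    Task<string> winner = await Task.WhenAny(primary, secondary);
    return await winner;
}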

Do I fully understand how I’d apply async to operations in my workplace after this talk? No, though that wasn’t the aim of it (I need to spend more time with async/await in general to do that).

Do I have use cases for those patterns that he demoed and want to apply them?  Absolutely, and I can’t wait to play!

Fantastically delivered talk, well communicated, and has given me loads to play with – what more could you want from a talk?

BDD – Look Ma, No Frameworks

Gemma Cameron, @ruby_gem

I approached this talk with some scepticism – I’ve read a lot about BDD in the past, I saw a talk by Gojko Adzic very recently at Lean Agile Scotland around ‘busting the myths’ in BDD, and although the concepts are fine, I just haven’t found BDD compelling.  Gemma’s talk (although very well executed) didn’t convince me any further, but the more she talked, the more I realised that the important part in all of this is DISCUSSION (something I feel we do quite well at my workplace).  I guess we as a community (developers) aren’t always ideal at engaging the product owner/customer and fully understanding what they want, and it was primarily this point which was drilled home early in the talk.  Until you bring stakeholders together early on and arrive at a common understanding and vocabulary, how can you possibly deliver the product they want?  I buy this 100%.

This is where the talk diverged for some, it seems – a perhaps misplaced comment that ‘frameworks are bad’ was (I feel) misinterpreted as ‘all frameworks are bad’, whereas to me it really felt like ‘frameworks aren’t the answer, they’re just a small part of the solution’ – it jumps back to the earlier point about discussion: you need to fully understand the problem before you can possibly look at technology/frameworks and the like.  I’m personally a big fan of frameworks when there is a use case for them (I like mocking frameworks for what they give me, for example), but I think this point perhaps muddied the waters for some.  She did mention the self shunt pattern, which I’ll have to read more on to see if it could help us in our testing – something like the sketch below, as I understand it.
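For my own notes, a hypothetical example of the self shunt as I understand it – the test fixture itself plays the role of the dependency, so no mocking framework is needed (IEmailSender and RegistrationService are made-up names, not anything from the talk):

public interface IEmailSender
{
    void Send(string to, string body);
}

[TestFixture]
public class RegistrationTests : IEmailSender
{
    private string sentTo;

    // The shunt: the fixture implements the dependency and records the call.
    public void Send(string to, string body)
    {
        sentTo = to;
    }

    [Test]
    public void Register_ValidUser_SendsWelcomeEmail()
    {
        var service = new RegistrationService(this); // pass the fixture in as the collaborator

        service.Register("bob@example.com");

        Assert.That(sentTo, Is.EqualTo("bob@example.com"));
    }
}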

A very thought provoking talk, and I can imagine this will generate some discussion on Monday with work colleagues – in particular about engagement with the client (product owner/customer) in order to ensure we are capturing the requirements correctly – hopefully we’re doing everything we need to be doing here.

Web Sockets and SignalR

Chris Alcock, @calcock

I’m sure Chris won’t mind a plug for his Morning Brew – a fantastic daily aggregation of some of the biggest blog posts from the previous day.  This is the first opportunity I’ve had to see Chris talk, and it’s odd that after subscribing to Morning Brew for years you feel like you know someone (thankfully I got to chat to him at the end of the session and ask a performance related question).

I’ve played recently with SignalR in a personal project so had a little background to it already, though that wasn’t necessary for this talk.  Chris did a very good job of distilling websockets, both the ‘how’ and the ‘what’, and covered examples of them in use at the HTTP level, which was very useful.  He then moved on to SignalR, covering both the Persistent Connection (low level) and Hub (high level) APIs.  It’s nice to see that the ASP.NET team are bringing SignalR under their banner and it’s being officially supported as a product (version 1 anticipated later this year).
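For flavour, roughly what the high-level Hub API looked like in the pre-1.0 builds (this is from memory, so treat it as a sketch – Clients was dynamic at this point, with addMessage resolved against a callback registered client-side):

using SignalR.Hubs;

public class ChatHub : Hub
{
    public void Send(string message)
    {
        // Broadcast to the addMessage callback on every connected client.
        Clients.addMessage(message);
    }
}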

This was a great talk for anyone who hasn’t really had any experience of SignalR and wants to see just what it can do – like me, I’m sure that once you’ve seen it there will be a LOT of use cases you can think of in your current work where SignalR would give the users a far nicer experience.

Event Driven Architectures

Ian Cooper, @ICooper

The talk I was most looking forward to on the day, and Ian didn’t disappoint.  We don’t have many disparate systems (or indeed disparate service boundaries) within our software, but for those that do exist, we’re currently investigating messaging/queues/service buses etc. as a means of passing messages effectively between (and across) those boundaries.

Ian distilled Service Oriented Architecture (SOA) well and went on to cover different patterns within Event Driven Architecture (EDA), and although the content is indeed complex, it was delivered as effectively as it could have been.  I got very nervous when he talked about each system caching and versioning its own copies of objects, though I can entirely see the point of it, and after further discussion it felt like a worthy approach to making the messaging system more efficient/lean.
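To make that concrete, a hypothetical sketch of the idea as I understood it (all names here are mine, not Ian’s) – the event carries the entity’s version, so a consuming system can keep its own cached copy and spot stale updates:

public class CustomerAddressChanged
{
    public Guid CustomerId { get; set; }
    public int Version { get; set; }        // bumped by the owning service on every change
    public string NewAddress { get; set; }
}

// Consumer side: only apply the event if it's newer than our cached copy.
if (message.Version > cachedCustomer.Version)
{
    cachedCustomer.Address = message.NewAddress;
    cachedCustomer.Version = message.Version;
}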

The further we at work move towards communication between systems/services, the more applicable the points in this talk will become – it has only helped validate the approach we were thinking of taking.

This talk wins my ‘talk of the day’ award* (please allow 28 days for delivery, terms and conditions apply) as it took a complex area of distributed architecture and distilled into 1 hour what I’ve spent months reading about!

And Ian – that’s the maddest beard I’ve ever seen on a speaker ;)

Summary

Brilliant brilliant day.  Lots of discussion in the car on the way home and a very fired up developer with lots of new things to play with, lots of new discussion for work, and lots of new ideas.  Isn’t this why we attend these events?

Massive thanks to Andrew Westgarth and all of the organisers of this, massive thanks to the speakers who gave up their time to come and distil this knowledge for us, and an utterly huge thanks to the sponsors who help make these events free for the community.

I’ll be at DunDDD in November, and I’m looking forward to more of the same there – I’ll be there on the Friday night with Ryan Tomlinson, Kev Walker and Andrew Pears from work, and I’m looking forward to attending my first geek dinner!

ASP.NET MVC4 – Using WebForms and Razor view engines in the same project for mobile template support

NOTE: All content in this post refers to ASP.NET MVC 4 (Beta) and although it has a go live license, it has not gone RTM yet.  Although the process has been remarkably smooth, please work on a branch with this before considering it in your products!

 

We’ve been presented with an opportunity to create a mobile friendly experience for our Italian site.  Our Italian offering’s front end is an ASP.NET MVC 3 site using the WebForms view engine (we started the project before Razor was even a twinkling in Microsoft’s eye), and is pretty standard in terms of setup.

There are a number of different ways of making a site mobile friendly – Scott Hanselman has written a number of great articles on how he achieved it on his blog, responsive design is very much a hot topic in web design at the moment (and that is a cracking book), and there are a lot of resources out there (both Microsoft stack and otherwise) for learning the concepts.

Our Italian site, although div based and significantly more semantically laid out than our UK site (sorry!), would still have been a considerable task to turn into a responsive design as a first pass.  Our mobile site *will not* need to have every page that the non-mobile site has though – the purpose of the site is different, and the functionality in the site will be also.

Along comes ASP.NET MVC 4 (albeit still in beta, but with a go live license) with its support for mobile.  I really should care about how it works under the covers (perhaps a follow up post), though for now: basically, if you have a view (Index.aspx), placing a mobile equivalent alongside it (Index.mobile.aspx) allows you to provide a generic mobile version of that page.

Upgrade your MVC3 Project to MVC4

Basically, follow: http://www.asp.net/whitepapers/mvc4-release-notes#_Toc303253806

There were no problems in this step for us – we have a large solution, and there were a number of dependent projects that were based upon MVC3, but these were all easily upgraded following the steps at that URL.

Setting up your view engines

We previously had removed Razor as a view engine from the project to remove some of the checks that go on when attempting to resolve a page, so our Global.asax had the following:

// we're not currently using Razor, though it can slow down the request pipeline so removing it
ViewEngines.Engines.Clear();
ViewEngines.Engines.Add(new WebFormViewEngine());

and it now has:

ViewEngines.Engines.Clear();
// Razor first, so *.mobile.cshtml views are found ahead of the WebForms equivalents
ViewEngines.Engines.Add(new RazorViewEngine());
ViewEngines.Engines.Add(new WebFormViewEngine());

The order is important – if you want your mobile views to use Razor in a WebForms view engine project, then Razor must be the first view engine the framework looks to.  If however you want to stick with WebForms (or indeed you are only using Razor) then your settings above will be different/non-existent.

Creating the mobile content

We started by creating Razor layout pages in Views/Shared in exactly the same way that you would add a master page.  Open Views/Shared, right click, Add Item, and select an MVC4 Layout Page.  Call this _Mobile.cshtml, and set up the differing sections that you will require.

To start with, as a trial, I thought I’d replace the homepage, so navigate to Views/Home, right click, and ‘Add View…’ – create ‘Index.mobile’, select Razor as the view engine, and select the _Mobile.cshtml page as the layout.
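For illustration, a minimal sketch of the two files (the markup and titles here are just examples, not our real pages):

@* Views/Shared/_Mobile.cshtml *@
<!DOCTYPE html>
<html>
<head>
    <meta name="viewport" content="width=device-width" />
    <title>@ViewBag.Title</title>
</head>
<body>
    @RenderBody()
</body>
</html>

@* Views/Home/Index.mobile.cshtml *@
@{
    Layout = "~/Views/Shared/_Mobile.cshtml";
    ViewBag.Title = "Home";
}
<h1>Our mobile homepage</h1>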

Ok, we now have a non-mobile (webforms view engine) and a mobile (razor view engine) page – how do we test?

Testing your mobile content

The ASP.NET website comes to help again.  They have a great article on working with mobile sites in ASP.NET MVC4 (which indeed is far better than the above, though doesn’t cover the whole ‘switching view engines’ aspect).

I installed the tools listed in that article, and loaded up the site in the various testing tools and was presented with the following:

[screenshot]

That’s Chrome in the background rendering the standard site, upgraded to MVC4 but still very much using the WebForms view engine and master pages, and Opera Mobile Emulator (pretending to be an HTC Desire) in the foreground using the Razor view engine and layout pages.

Conclusion

The rest, as they say, is just hard work :)  We very much intend to make the mobile site responsive, and our CSS/HTML will be far more flexible around this, though with media queries and the book above in hand, that will be the fun part.

The actual process of using both the Razor and WebForms view engines in the same project was a breeze, and means that longer term the move over to Razor for our core site should be far more straightforward once we’ve worked through any teething troubles around the work above.  Razor as a view engine is far more concise and (dare I say it!) prettier than WebForms and the gator tags, so I look forward to using it in anger on a larger project like this.

Longer term there may be pages on the site that lend themselves to not having duplicate content, in which case we will investigate making the core design more responsive in places, but for now we have a workable solution for creating mobile content thanks to the mobile support in ASP.NET MVC4.

 

Hope that was useful.

Unit testing complex scenarios – one approach

This is another of those posts that is as much for my benefit as it is for the community.  On a sizeable project at work we’ve hit a ‘catch up on tests’ phase.  We don’t employ TDD, though we obviously understand that testing is very important to the overall product (both for confidence on release, and confidence that changes to the code will break the build if functionality changes).  Our code coverage when we started this latest phase of work was terrible (somewhere around 20% functionality coverage with 924 tests) – after a couple of weeks of testing we’re up to fairly high coverage on our data access/repositories (>90%) and have significantly more tests (2,600 at last count).

We are following a UI –> Service –> Repository type pattern for our architecture, which works very well for us – we’re using IoC, though perhaps only because of testability; the loose coupling benefits are obviously there.

We’re now at the point of testing our service implementations, and have significantly more to think about.  At the data access layer, external dependencies were literally only the database.  At the service layer, we have other services as external dependencies, as well as repositories, so unit testing these required considerably more thought/discussion.  Thankfully I work with a very good team, so the approach we’ve taken here is very much a distillation of the outcomes of discussion with the team.

A couple of things about our testing:

Confidence is King

The reasons we write unit tests are manifold, but if I were to try to sum it up, it’s confidence.

Confidence that any changes to code that alter functionality break the build.

Confidence that the code is working as expected.

Confidence that we have solidly documented (through tests) the intent of the functionality and that someone (more often than not another developer in our case) has gone through a codebase and has reviewed it enough to map the pathways through it so that they can effectively test it.

Confidence plays a huge part for us as we implement a Continuous Integration process, and the longer term aim is to move towards Continuous Delivery.  Without solid unit testing at its core, I’d find it difficult to maintain the confidence in the build necessary to be able to reliably deploy once, twice or more per day.

Test Functionality, Pathways and Use Cases, Not Lines of Code

100% code coverage is a lofty ideal, though I’d argue that if that is your primary goal, you’re thinking about it wrong.  We have often achieved 100% coverage, but done so via the testing of pathways through the system rather than focussing on just the lines of code.  We use business exceptions and very much take the approach that if a method can’t do what it advertises, an exception is thrown.

Something simple like ‘ValidateUserCanDeposit’ can throw the following:

/// <exception cref="PaymentMoneyLaunderingLimitException">Thrown when the user is above their money laundering limit.</exception>
/// <exception cref="PaymentPaymentMethodChangingException">Thrown when the user is attempting to change their payment method.</exception>
/// <exception cref="PaymentPaymentMethodExpiredException">Thrown when the expiry date has already passed</exception>
/// <exception cref="PaymentPaymentMethodInvalidStartDateException">Thrown when the start date hasn't yet passed</exception>
/// <exception cref="PaymentPlayerSpendLimitException">Thrown when the user is above their spend limit.</exception>
/// <exception cref="PaymentPlayerSpendLimitNotFoundException">Thrown when we are unable to retrieve a spend limit for a user.</exception>
/// <exception cref="PaymentOverSiteDepositLimitException">Thrown when the user is over the sitewide deposit limit.</exception>

and these are often calculated by calls to external dependencies (in this case there are 4 calls out to external dependencies) – the business logic for ‘ValidateUserCanDeposit’ is:

  1. Is the user over the maximum site deposit limit?
  2. Validate the user has remaining spend based upon responsible gambling limits
    - paymentRepository.GetPlayerSpendLimit
    - paymentRepository.GetUserSpendOverTimePeriod
  3. Get the current payment method
    - paymentRepository.GetPaymentMethodCurrent
    - paymentRepository.GetCardPaymentMethodByPaymentMethodId
    - OR paymentRepository.GetPaypalPaymentMethodByPaymentMethodId
  4. If we’re changing payment method, ensure:
    - not over the money laundering limit

So testing pathways through this method, we can pass or fail at each of the steps listed above.  A pass is often denoted by silence (our code only gets noisy when something goes wrong), but each of those external dependencies can itself throw potentially multiple exceptions.

We employ logging of our exceptions, so again, we care that logging was called.
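(You’ll see a LogVerifier helper in the test further down – it boils down to something like the following, where ILogger and LogException are our own abstractions, so treat the names as assumptions:)

public static class LogVerifier
{
    public static void VerifyLogExceptionCalled(Mock<ILogger> logger, Times times)
    {
        // Did the mocked logger see an exception the expected number of times?
        logger.Verify(l => l.LogException(It.IsAny<Exception>()), times);
    }
}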

Testing Framework and Naming Tests

NUnit is our tool of choice for writing unit tests – the syntax is expressive, and it generally allows for very readable tests.  I’m a big fan of the test explaining the author’s intent – being able to read and understand unit tests is a skill for sure, though once you’ve read a unit test, having it actually do what the author intended is another validator.

With that in mind, we tend to take the ‘MethodUnderTest_TestState_ExpectedOutcome’ approach.  A few examples of our unit test names:

  • GetByUserId_ValidUserId_UserInCache_GetVolatileFieldsReturnsValidData_ReturnsValidUserObject
  • GetPlaymateAtPointInTime_GivenCorrectUserAndDate_ValidPlaymateShouldExist
  • GetCompetitionEntriesByCompetitionId_NoEntries_ShouldReturnEmptyCollection
  • GetTransactionByUserIdAndTransactionId_DbException_ShouldThrowCoreSystemSqlException

Knowing what the author intended is half the battle when coming to a test 3 months from now because it’s failing after some business logic update.

Mocking Framework

We use Moq as our mocking framework, and I’m a big fan of the power it brings to testing – yup, there are quite a number of hoops to jump through to effectively set up and verify your tests, though again, these add confidence to the final result.

One note about mocking in general (and any number of people have written on this in far more eloquent terms than I): never just mock enough data to pass the test.

If we have a repository method called ‘GetTransactionsByUserAndDate’, ensure that your mocked transactions list also includes transactions from other users, as well as transactions for the same user outside of the dates specified – getting a positive result when the only data that exists is data that should be returned is one thing; getting a positive result from a diverse mocked data set containing things that should not be returned adds confidence that the code is doing specifically what it should be.
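A hypothetical sketch of what that looks like (the names are illustrative, not our real code): seed the fake data source with decoy rows that must NOT match, then assert that only the matching transaction comes back.

var allTransactions = new List<Transaction>
{
    new Transaction { UserId = 1, Date = new DateTime(2012, 10, 5) },  // should be returned
    new Transaction { UserId = 2, Date = new DateTime(2012, 10, 5) },  // decoy: wrong user
    new Transaction { UserId = 1, Date = new DateTime(2011, 6, 1) }    // decoy: outside the date range
};
// dataContext is a Mock of our (hypothetical) data context interface
dataContext.Setup(x => x.Transactions).Returns(allTransactions.AsQueryable());

var results = repository.GetTransactionsByUserAndDate(1,
    new DateTime(2012, 10, 1), new DateTime(2012, 10, 31));

// The decoy rows prove the filtering - a count of 1 means nothing leaked through.
Assert.That(results.Count(), Is.EqualTo(1));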

Verifying vs. Asserting

We try very much to maintain a single assert per test (and only break that when we feel it necessary) – it keeps the mindset on testing a very small area of functionality, and makes the tests more malleable/less brittle.

Verifying, on the other hand (a construct supported by Moq and other frameworks), is something we are more prolific with.

For example, if ‘paymentRepository.GetPlayerSpendLimit’ above throws an exception, I want to verify that ‘paymentRepository.GetUserSpendOverTimePeriod’ is not called – I also want to verify that we logged that exception.

The assert from all of that is that the correct exception is thrown from the main method, but the verifies called as part of that test add confidence.

In our [TearDown] method we tend to place our ‘mock.Verify()’ calls to ensure that everything that can be verified after each test is.
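Roughly how our fixtures are laid out (names are illustrative):

private Mock<IPaymentRepository> paymentRepository;

[SetUp]
public void SetUp()
{
    paymentRepository = new Mock<IPaymentRepository>();
}

[TearDown]
public void TearDown()
{
    // Any setup marked .Verifiable() during the test is checked here,
    // so every test gets its verifications run without repeating the calls.
    paymentRepository.Verify();
}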

Enough waffle – where’s the code?

That one method above, ‘ValidateUserCanDeposit’, has ended up with 26 tests – each one models a pathway through the method.  There is only one success path through it – every other test demonstrates an error path.  So for example:

[Test]
public void ValidateUserCanDeposit_GetPaymentMethodCurrent_ThrowsPaymentMethodNotFoundException_UserBalanceUnderMoneyLaunderingLimit_ShouldReturnPaymentMethod()
{
	var user = GetValidTestableUser(ValidUserId);
	user.Balance = 1m;

	// remaining spend
	paymentRepository.Setup( x => x.GetPlayerSpendLimit(ValidUserId)).Returns( new PlayerSpendLimitDto { Limit = 50000, Type = 'w' }).Verifiable();
	paymentRepository.Setup( x => x.GetUserSpendOverTimePeriod(ValidUserId, It.IsAny<DateTime>(), It.IsAny<DateTime>())).Returns( 0 ).Verifiable();

	// current payment method
	paymentRepository.Setup( x => x.GetPaymentMethodCurrent(ValidUserId))
						.Throws( new PaymentMethodNotFoundException("") ).Verifiable();

	IPaymentMethod paymentMethod = paymentService.ValidateUserCanDeposit(user, PaymentProviderType.Card);

	Assert.That(paymentMethod.CanUpdatePaymentMethod, Is.True);
	paymentRepository.Verify( x => x.GetCardPaymentMethodByPaymentMethodId(ValidCardPaymentMethodId), Times.Never());
	LogVerifier.VerifyLogExceptionCalled(logger, Times.Once());
}

That may seem like a complex test, but I’ve got the following from it:

  • The author’s intent from the method signature: upon calling ValidateUserCanDeposit, a call within that to GetPaymentMethodCurrent has thrown a PaymentMethodNotFoundException; at that point, the user’s balance is below the money laundering limit for the site, so the user should get a return that indicates they can update their payment method.
  • That those methods that I expect to be hit *are* hit (using Moq’s .Verifiable())
  • That those methods that should not be called aren’t (towards the end of the test, Times.Never() verifies this)
  • That we have logged the exception once and only once

Now that this test (plus the other 25) is in place, if a developer is stupid enough to bury an exception or remove a logging statement, the build will fail.

Is this a good approach to testing?

I guess this is where the question opens up to you guys reading this.  Is this a good approach to testing?  The tests don’t feel brittle.  They feel like they’re focussing on one area at a time.  They feel like they are sure about what is going on in the underlying code.

Overkill?

Ways to improve them?

How do you unit test in this sort of situation? What software do you use? What problems have you faced?  Keen to get as much information as possible and hopefully help inform each other.

I’d love to get feedback on this.  It feels like it’s working well for us, but that doesn’t necessarily mean it’s right/good.

2010 – A year in geek

I’ve found it incredibly cathartic to read a few others’ blog posts summarising not only the year that has gone but also their aims for the year ahead – this has been an incredibly busy year for me in geek terms, and I thought I’d write it up as another hopefully cathartic exercise.

The year starts…

2010 started for me after only four months in a new job, having escaped an agency environment in August the previous year – I honestly didn’t know how bad I had it in my previous role until I started in my current one.  I took quite a hefty pay cut to switch jobs, but the previous role (I should really say roles, as I was stupidly doing both the IT Manager and dev team lead jobs) had me working comfortably 60 hour weeks for the last 6 months of it – I was knackered, home life was suffering, I couldn’t switch off, and I was stressed (and anyone who knows me knows I just don’t do stress).

My current role is pretty much idyllic for me – the job description is Senior Developer, but we all know that hides a multitude of sins.  Basically, I get to specify technical direction, I get to do staff mentoring/staff support, I get to be involved in the community, but (best of all) I get to spend about 75% of my usable time developing.  Pig in shit I believe is the term they use ;)

Legacy Code

Oddly, the first real achievement this year involved minor improvements to our payment system (based upon legacy code – classic ASP – ewww!).  I write it here not because I’m proud of the technology, but because of the analytical approach we took, the change process we had in place for the little and often changes to it, and the overall effect of those changes – conservative estimates by our financial officer put us at just over 1% extra turnover.  Now that doesn’t sound a lot, until you see how much the company turns over – needless to say, they were very happy with the work!

Site Rewrite

This has been the big focus for me from around April, and it’s been huge – our existing site is a mix of a lot of classic ASP with a number of .NET projects dotted around, and the technical debt in there is huge: changes, be they new functionality or modifications to existing functionality, are just incredibly costly.  The aim (and I’ve read any number of posts that say this is a bad idea) was to re-write the whole thing into something that was:

a) more maintainable

b) easier to deploy

c) of a far higher overall quality

d) minimised technical debt

e) easier to extend

With that in mind, the technologies that myself and the team have worked on this year have been wide ranging.

ASP.NET MVC2

The move away from web forms and into MVC has been a revelation.  I lament now the occasional need to maintain our legacy code, as once you grok the separation of concerns involved in MVC2 (I heartily recommend both the Steve Sanderson book and the TekPub video series as learning resources), moving back to web forms (especially legacy) is a mare.  I’d say out of all the things covered this year, this is the biggest ‘win’ for me – I can see me using this pattern (and ASP.NET MVC) for a long time to come as my primary means of delivery over the web.

Testing Software (Unit, Integration, n’all that jazz)

I daren’t call this test driven development as we tend to write our tests after we’ve got the functionality in place – our specifications and business rules in most areas of the rewrite haven’t carried over verbatim, so writing unit tests ahead of time was rarely practicable.  That said, the project is now up to 290+ unit/integration tests, and I suspect before launch that number will nearly double.

It’s very easy during code reviews for team members to validate the logic, outcomes and goals in the unit tests up front so that they form almost a code contract which then goes on to define behaviour and functionality within the code.  It also (assuming business knowledge of the area under test) allows people to highlight potential missing tests or modifications to existing tests.

Learning-wise, blogs have been the most use during the year for unit testing, though I would say a must-purchase is ‘The Art of Unit Testing’ by Roy Osherove.  It got me thinking about unit testing in a very different way and has led (I hope) to me simplifying unit tests but writing more of them, using mocking frameworks to deliver mocks/doubles, and generally being a big advocate of testing.

Design Patterns

Obviously MVC goes without saying, though this year has also seen me read a lot around software design and the patterns used therein.  I feel I now have a solid handle on a great deal more software design from an implementation point of view (the theory was never really that difficult, but turning that into an implementation…).  We’ve used the Service pattern extensively, plus the Repository pattern, Unit of Work (in a few places, I’d like to think) and the Factory pattern – they’ve all seen the light of day (necessarily so) in this project.

There’s a fantastic post by Joel Spolsky about the Duct Tape Programmer which I’d urge everyone to read if they haven’t done so – it’s about finding the balance between software design for software design’s sake (the pure view) and getting the job done.  There’s always a balancing act to be had, and hopefully I’ve stayed on the right side of the line with regards to this – it’s very easy when focussing on the design of the software to over engineer or over complicate something that should be (and is) a relatively straightforward task.

Uncle Bob must get a mention this year, as his SOLID principles have been a beacon – you don’t always adhere to them, you don’t always agree where they apply, but you can’t deny that as underlying principles of OOD they are a good foundation.

Business Exceptions

Two talks immediately spring to mind when I look at the approach we’ve taken with business exceptions: the first was delivered at DevWeek, which I was lucky enough to attend in April (see the post here); the second was delivered by Phil Winstanley (@plip) at DDD Scotland this year.

We’re very much using exceptions as a means of indicating fail states in methods now, and I love it – coupled with our logging, it feels like we will rarely have unhandled exceptions (and when we do, they are logged), and the overall software quality feels far superior because of it.

I understand the concerns that have been raised around the performance of exceptions (cost to raise etc.) and the desire not to use exceptions for program flow, though I think we’ve struck a happy balance, and my testing (albeit rudimentary) earlier in the year suggested that the performance of these things just wasn’t a concern.

Continuous Integration

Something that’s been on the back burner for too long now, and only in the past week have I made any headway with it, but already it’s a love affair.  I suspect the quality of the information we get out of the system as we move forward will pay dividends, and as we begin to automate deployment/code coverage, and I get more heavily into MSBuild, this is going to be something I won’t want to give up on larger projects.

Community

I now subscribe to approximately 160 blogs, which sounds like a lot, but thankfully not everyone posts as often as @ayende, so job’s a good ’un with regards to keeping up – I find 5-10 mins at the end of the day lets me have a quick scan through the posts that have come in, discount the ones I’m not interested in, skim read the ones I am, and star them (Google Reader stylee) ready for a more thorough read when I get to work the next day.  This may seem a large commitment, but remember I’ve come from a job where I was ‘working’ approximately 60hrs a week (not geeking, I hasten to add – just client liaison, product delivery, bug fixing, and sod all innovation).  I now find my working week is down to approx 40hrs, plus between 5 and 15hrs per week on geek stuff depending on the week and what’s on – the extra time I get for self development is just my investment in my career really, and I talk to so many other people on Twitter who do exactly the same.

Events

Getting our own local Microsoft tech user group (@NEBytes, http://www.nebytes.net) has been fantastic this year – we’ve had some superb speakers, and I know that once a month I get to catch up with some cracking geeks and just talk random tech.  The guys who run it – Andrew Westgarth (@apwestgarth), Jon Noble (@jonoble), Ben Lee (@bibbleq) and Damian Foggon (@foggonda) – do a fantastic job, and I look forward to more of this in 2011.

I managed to attend DevWeek this year, and wrote up a number of things from it, but it was a fantastic week.  Thankfully work saw the benefit so are sending me again in 2011, so hopefully I’ll meet up with folks there and learn as much as I did this year.

Developer Developer Developer days.  These are superb!  Hopefully we can get one organised closer to home in 2011, but the two I attended this year (Scotland and Reading earlier in the year) were packed full of useful stuff, and the organisers need to be praised for them.

Geek Toys – The iPad

I couldn’t round off the year without humbly admitting that I was wrong about the iPad when it launched – I didn’t see the point at all, and was adamant it was going to flop.  Then in October I found myself the owner of one (erm… I actually paid for it too – I have no idea what was going on there!).

Well, revelation doesn’t do it justice – it’s the ultimate geek tool!  Thankfully a lot of the books I buy are available as ebooks too, and I’ve found more and more that I’m moving away from print and reading geek books on my iPad – ePub format is best (for annotations and the like), though PDF works a treat too.  Aside from that, TweetDeck is a cracking app on the iPad, and it lets me stay in touch with geeks more regularly than I would otherwise have done.  Reeder is my final tool of choice, and the way it presents blogs you’ve not read yet is fantastic.

I’d suggest any geek who loves quick access to their blogs, their books and TweetDeck (though naturally the iPad does a whole lot more) have a play with one and see if it could be the answer for you too – I’m hooked.

And what of 2011?

Well, I’m over the moon with the way 2010 has gone really – all I can ask is to maintain my geek mojo and my thirst for learning, and with a cracking bunch of people to work with, life will be grand :)

A very quick PS to add a Technorati tag VBXP4MC892BG so that I can claim my blog via them.

TeamCity – Install and Setup (Basics)

Been a while since I posted, and I thought the past few days warranted getting my thoughts down, as we’ve just set up our first foray into Continuous Integration/Build Automation with TeamCity.  We’re in the process of rewriting the corporate site from classic ASP/ASP.NET into an MVC2 front end with some solid (though not always SOLID) design behind it.  We’ve written a lot of unit tests (though many more to go), and thought it was about time we looked at the whole CI/build side of things.  I’d hasten to add, the following post will remain at a fairly basic level, as that’s where I’m at at the moment – hopefully something in here will be useful, though it’s as much about documenting the steps for the team I work with, and whenever I write something like this down it always helps solidify it in the grey matter.

Why Continuous Integration/Build Automation?

The answers for us fit pretty much into the standard reasons behind CI – primarily ensuring quality, though easing the deployment burden was certainly a part of it.  CI completes the circle really – you’ve written your quality code, you’ve written your unit tests (and any other tests: integration, UI, etc.), so why not have an easy way to get all of that validated across your whole team, making sure that the quality remains and that you don’t have the manual task of pushing the code out to your dev servers?

Continuous Integration helps with all of this, and a whole lot more, though the ‘more’ part is something that will come in time for us I think – we now have a working checkin build (I’ll detail the steps I went through) so that at least gives us ongoing feedback.

TeamCity was the immediate choice for us as we don’t really qualify for a TFS setup, and CruiseControl.NET seemed to have a higher learning curve (I may be misrepresenting it here, mind).

Before going through the detail of the install, a quick shout out to Paul Stack (@stack72), the fantastic Continuous Integration book from Manning, and the as yet unread MSBuild book from Microsoft – these, as well as blog posts from many others, have helped massively in getting this set up.

Team City 6 – Install

Generally, the defaults in the setup were fine.  I made sure that all options were enabled with regards to the service etc. – I can’t see the use case where you wouldn’t want this, but it’s worth stating.

[screenshot]

I changed the path to the build server configuration to a more general location – it initially suggested my own Windows account’s user area, though I was unsure (and couldn’t find easy documentation on) whether this would make the configuration available to others, so I defaulted to a different path.

[screenshot]

With regards to the server port (for the TeamCity web administration tool), I changed the default too.  Although it’s recommended that the build server remains a single purpose box, I felt uncomfortable going with the default port 80 just in case we ever chose to put a web server on there for any other purpose.

[screenshot]

I also chose to stick with the default and ran the service under the SYSTEM account – it doesn’t seem to have affected anything adversely and I’d rather do that than have to create a dedicated account.

Team City – Initial Setup

Initially you are asked to create an administrator account – do so, though if you’re in a team of people, there is an option later to create user accounts for each user – far better to do that and leave the admin account separate.  In the free version you can have up to 20 users, so it’s ideal for small teams.

Create a Project

The first step in linking up your project to TeamCity is to create the project itself.

[screenshot]

Here, you can give the project any name and description – it can (though doesn’t have to) match the project name in Visual Studio.

[screenshot]

TeamCity from this point on holds your hand fairly effectively.

[screenshot]

oh, ok – thanks :) <click>

The build configuration page has a lot of options, but here are the pertinent ones early doors (once you have more experience, which I don’t, the others will certainly come into play).

Name your build – I named ours ‘checkin build’ as I intend for it to happen after every checkin… does what it says on the tin kinda thing.

Build number format – I left this as the default ‘{0}’ – it may be prudent to tie it in later on with the Subversion revision number, but for now we want to get a working CI process.

Artifact paths – I very much steered clear of these at the moment – it seems there’s a lot of power in them, though I haven’t touched on them enough.

Fail Build If – I went with the defaults plus a couple of others – ‘build process exit code is not zero’, ‘at least one test failed’, ‘an error message is logged by the build runner’.

Other than that, I pretty much stuck with the defaults.

Version Control Settings

[screenshot]

I deliberately selected checkout to the agent as I suspect this’ll give me more scalability in future – the build server can have multiple build agents on other machines from what I understand (kinda the distributed computing model?), and those agents can handle the load if there are very regular/large team checkins.  I think there are limitations on build agents in the free version, but again – if we use this solidly and need more, the commercial licence isn’t too badly priced.

I also chose a different checkout directory to the default, just because – no solid reason here other than I have a lot of space on the D: drive.

Our project is significant (24 VS projects at last count, a lot of them testing projects – 1 unit and 1 integration per main project), and initially I experimented with ‘clean all files before build’, but the overall build was taking approximately 8 mins (delete all, checkout all, build, unit test).  I’m going to try not cleaning the files and doing a ‘revert’ instead, but at present I don’t have any experience of which is better – certainly cleaning all files worked well, but 8 mins seemed a while.

Attach to a VCS Root

The important part – linking up your project to your source control (Subversion in our case).

[screenshot]

Click ‘Create and attach…’.  Most of the settings in here are defaults, but you will notice further down the page that it defaults to Subversion 1.5 – we’re using 1.6, so double check your own setup.

[screenshot]

I also experimented with enabling ‘revert’ on the agent:

[screenshot]

with the aim of bringing down the overall build time – I haven’t played enough to give feedback yet, though I suspect the revert will work better than a full clean and checkout.

Build Steps

The CI build will be broken into a number of steps, but first we need to get the core project building on the agent.  There will be a lot more to learn on this one, but for now, what worked well for us was the following:

[screenshot]

Our solution file contains all the information we need to work out what needs to be built, and TeamCity supports it, so job’s a good ’un.  As I extend the base build, this method will still just work, as I’ll be modifying the .csproj files belonging to the solution anyway.

Build Step 2

This one was slightly more convoluted, but basically giving relative paths to the DLLs that contain the unit tests is the way forward here.

[screenshot]

Make sure you target the right framework version (I didn’t initially, though the error messages from TeamCity are pretty good in letting you figure it out).

Build Triggering

We want this all to trigger whenever we check in to our source control system (in our case, Subversion), so click on ‘build triggering’ then ‘add trigger’ – selecting ‘VCS Trigger’ will get you everything you need:

[screenshot]

Are we there yet?

Well, just about – you will see the admin interface has a ‘run’ button against this configuration (top right of browser), so let’s do an initial run and see what the problems are (if any).  You can monitor the build by clicking on the ‘agents’ link at the top of the page and then clicking on the ‘running’ link under the current build.

Should you get the message:

… Microsoft.WebApplication.targets" was not found…

This basically happens because you don’t have web deployment projects (or indeed VS2010) installed on the build server.  The path of least resistance is to copy the C:\Program Files\MSBuild folder over to the build machine’s Program Files folder (if x64, make sure you put it in the x86 one).  You should find the build just works after that.

Ok, Build is working – Tell me about it!

Notifications were the last thing I set up (make sure you’ve created a user account for yourself before you do this – the admin account shouldn’t necessarily have notifications switched on).  Click on ‘My Settings & Tools’ at the top and then ‘Notification Rules’.

I’ve set up an email notifier (which will apparently tell me of the first failed build, but only the first after a successful one), and I’ve downloaded the Windows tray notifier (My Settings & Tools, General, right hand side of the page), which is set up likewise.

Next Steps?

There are a lot of other tasks I want to get out of this, not just from a CI point of view.  I’ve deliberately (as @stack72 suggested) kept the initial setup ‘simple’ – getting a working setup was far more important than an all-encompassing one that does everything I want from the off.  I can now see the guys doing their checkins and the tests passing, I’m now far more aware if someone has broken the build (and let’s face it, we’ll all deliberately break it to see that little tray icon turn red), and I know there’s so much more that I can do.

Next priorities are:

  1. Learn MSBuild so that I can perform tasks more efficiently in the default build – e.g. I want to concatenate and minify all CSS on the site, I want to minify our own Javascript, etc.
  2. Setup deployment on the checkin build – I suspect this will use Web Deployment Projects (which themselves generate MSBuild files so are fully extensible) to get our checked in code across to our dev servers.
  3. Setup a nightly build that runs more tests.  As you can see above, we build and run unit tests for our checkin build – I want to run nightlies that perform both unit and integration tests – I want the nightly to deploy to dev also, but to promote the files necessary to our staging server (not publish them) so that we can at any point promote a nightly out to our staging and then (gulp) live servers.

If you’re working on a project where deployment is a regular pain in the arse, or there are a few of you and you’ve taken up unit testing and TDD (be that test first or just good solid functionality coverage), my view now is that Continuous Integration is the tool you need.

It’s the new Santa Claus – It knows when you’ve been coding, it knows when you’re asleep, it knows if you’ve been hacking or not, so code good for goodness sake!

As per all of my other posts, the above is from a novice CI person – any feedback that anyone can give, any little nuggets of advice, any help at all – I’ll soak it up like a sponge – this has been a lot of fun, and there’s definitely a warm glow knowing it’s now in place, but there’s a long way to go – feedback *very* welcome!

The Performance of Exceptional Things

Following up from my previous blog post, I’ve had some cracking feedback from a number of people both for and against the use of exceptions – it’s one of those areas (as so many are in coding) that really does seem to have its own holy war.

On one side are those who are against the use of exceptions for ‘program flow’ (though I suspect if I looked at their use cases in detail, I probably would be too) and see exceptions as being more for exceptional circumstances.  The approach favoured by this group tends to be returning state and programming defensively to avoid exceptions wherever possible.

I totally agree with that final statement – if I have a method ‘IsLoggedIn’ and the user isn’t, then a simple ‘false’ will do and I’ll program defensively in that method to ensure that simple things like NullReferenceExceptions etc. aren’t thrown.

The other group seem to like the concept of business exceptions as a means of handling logic, though (like me) they all wondered about the performance of that approach.

My Use Case

In the example code I put together for the last post, I used the business process of logging in a customer as the use case.  I could equally have used the concept of payments into the site, though that’s obviously a far more significant use case that would have had me writing demo code long after it made sense to do so!

To put my exceptions (User Not Found, Password Mismatch, Account in various ‘no play’ states) in context, I’ve just done an analysis of yesterday’s traffic to our site (which is hitting approx 1.8–2 million unique visitors per month), and we had the following errors (across the whole day):

  • User Not Found – 1842
  • Password Mismatch – 1125
  • Account Self Excluded / Account Cooling Off / Account Disabled / Account Closed – 240

So basically, 3207 things that in our new software will throw exceptions over a 24hr period – that’s roughly 134 per hour, or 2.2 per minute.

Obviously there are payment type errors to take into account too, which I suspect will be busier – let’s say up to 20-30 exceptions per minute (tops).

So just how heavy are these exceptions?

I’ve updated the hosted code I used in the previous post and created two approaches to getting user data – one via models, one via exceptions.  The main navigation at the top of the page will allow you to test with exceptions or test with models.

I basically set up a test to fail login (User Not Found) and iterated through it 10,000 times – the code is in there both for exceptions and for returning models.

I then iterated over those 10,000 tests 10 times each.
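The shape of the harness is roughly the following (reconstructed here from memory rather than pasted verbatim – the real thing is in the Google Code link at the end of the post):

var stopwatch = Stopwatch.StartNew();

for (int i = 0; i < 10000; i++)
{
    try
    {
        service.ProcessLogin("unknown-user", "password"); // always fails
    }
    catch (MyCompanyUserNotFoundException)
    {
        // swallow - we only care about the cost of the throw/catch here
    }
}

stopwatch.Stop();
Console.WriteLine("10,000 failed logins: {0}ms", stopwatch.ElapsedMilliseconds);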

Yup, I know this isn’t really as indicative a test as it could be, as it demonstrates best possible outcomes (repeatedly throwing the same exception will obviously benefit from some form of optimisation that is beyond me!), but it’s helpful as one measure, given that the core thing people mention is performance.

And yup, there *is* a performance hit when throwing exceptions – no denying it.

But when you look at the numbers, failing login and returning a model (a single run of 10,000 fails) averages out at 289.6ms, whereas with exceptions the same 10,000 iterations come out at 624.1ms.  That makes a single exception take (624.1 − 289.6) ÷ 10,000 ≈ 0.033ms more to throw (my maths is shite, so happy to be corrected on this).

Oops! Ignore the ticks figures below – I actually (stupidly) divided Ticks by 10,000 rather than Stopwatch.Frequency, so they’ll be slightly out – the milliseconds figures reflect reality though.

| Run | Exceptions (ticks) | Models (ticks) | Exceptions (ms) | Models (ms) |
|-----|--------------------|----------------|-----------------|-------------|
| 1   | 2150098 | 1009757 | 628 | 290 |
| 2   | 2165310 | 1018790 | 624 | 287 |
| 3   | 2144660 | 1018190 | 622 | 288 |
| 4   | 2136548 | 1012047 | 623 | 293 |
| 5   | 2139677 | 1009204 | 621 | 289 |
| 6   | 2154162 | 1011982 | 627 | 289 |
| 7   | 2146923 | 1019645 | 623 | 290 |
| 8   | 2167315 | 1026824 | 623 | 289 |
| 9   | 2148493 | 1011428 | 626 | 291 |
| 10  | 2156894 | 1008608 | 624 | 290 |
| Average | 2151008 | 1014648 | 624.1 | 289.6 |
| Avg ‘ms’ (ticks ÷ 10,000 – the erroneous figures noted above) | 215.1008 | 101.4648 | – | – |
| Ms per iteration | 0.02151008 | 0.010146 | 0.06241 | 0.02896 |
| Cost increase for exceptions | 0.011364 | – | 0.03345 | – |

Where are the real stats?

Well, this is where my naivety kicks in and I really must defer to cleverer people.  Odd to think I’m a senior dev when I can’t effectively dig any further into it than where I’m at currently, but I’ve found a few cracking posts that really help me feel happy with the approach we’re taking with regards to business exceptions (I promise to post when this goes live to let you know if the performance hit took our site down though!).

Blog 1 – Rico Mariani

Rico is (as they say) the man, and he really knows his stuff – he certainly sits on the ‘don’t do this’ side of the holy war, and has good reasons.  He highlights that iterative testing like the above is certainly a ‘best case’ and wouldn’t demonstrate typical usage.

http://blogs.msdn.com/ricom/archive/2006/09/25/771142.aspx

Blog 2 – Jon Skeet

I like this one – it kinda supports our approach! lol.  In particular, a great quote from him:

“If you ever get to the point where exceptions are significantly hurting your performance, you have problems in terms of your use of exceptions beyond just the performance.”

http://yoda.arachsys.com/csharp/exceptions2.html

Blog 3 – Krzysztof Cwalina

This is *exactly* how I see our approach to exceptions, and I agree with Jon Skeet – I couldn’t have put it even 10% as well as Krzysztof has.  His bullet point list of dos and don’ts is brilliant.

http://blogs.msdn.com/kcwalina/archive/2005/03/16/396787.aspx

Code Project Post – Vagif Abilov

I thought this one worth including as he’s gone into far more detail in terms of the tests than I have, and his conclusions are interesting.

http://www.codeproject.com/KB/exception/ExceptionPerformance.aspx

Blog 4 – Eric Lippert

Not one so much on performance as a ‘don’t throw exceptions when you don’t need to’ – there are often ways around throwing exceptions if you code ‘well’.

http://blogs.msdn.com/ericlippert/archive/2008/09/10/vexing-exceptions.aspx

Blog 5 – Krzysztof Cwalina

Another that I’ve linked to just for the quote, which very much reflects my thinking:

“One of the biggest misconceptions about exceptions is that they are for “exceptional conditions.” The reality is that they are for communicating error conditions. From a framework design perspective, there is no such thing as an “exceptional condition”. Whether a condition is exceptional or not depends on the context of usage, — but reusable libraries rarely know how they will be used. For example, OutOfMemoryException might be exceptional for a simple data entry application; it’s not so exceptional for applications doing their own memory management (e.g. SQL server). In other words, one man’s exceptional condition is another man’s chronic condition.”

http://blogs.msdn.com/kcwalina/archive/2008/07/17/ExceptionalError.aspx

Exception Management Guidance – Multiple authors

Some good feedback re: exceptions in this post.

http://www.guidanceshare.com/wiki/.NET_2.0_Performance_Guidelines_-_Exception_Management

Closing

I’ve updated the code on Google Code at: http://code.google.com/p/business-exception-example/ to cover both Exceptions and Models if anyone wants a looksy.

Again though, I’m really interested in hearing thoughts on this.  I think from the performance testing I’ve done and the posts I’ve read, I’m happy with our approach, but I’m equally happy for someone to come along and shout NOOOOOOO! and tell me why I’m an idiot :)

Over to you guys, and thanks for all the feedback thus far!

Business Exceptions in C# (as I understand them!)

Thought I’d best caveat the post as this really is just a collection of thoughts from a number of very clever people, and I’ve come to wonder over the past few days (since #dddscot) whether this is a good way to handle business exceptions or not.

My approach was born out of a cracking talk by Jeffrey Richter at DevWeek this year (see the summary post elsewhere on my blog) where he talked about exceptions within your software and (as @plip did at #dddscot this year) about embracing them.  He talked about exceptions in the following way:

  1. Exceptions are not just for exceptional circumstances
  2. They are there as a means of saying ‘something hasn’t worked as expected, deal with it’
  3. They should be thrown when they can reliably be managed (be that logging or something else)
  4. They should be useful/meaningful

In my other post, I used the example of ProcessPayment as a method, and the various things that could go wrong during that method, but I thought I’d bring together a simple app that demonstrates how we are using exceptions currently.

The reason for this post

There was a lot of discussion after #dddscot about how folks handle this sort of thing, and really, there were some very clever people commenting!  It’s kinda made me nervous about the approach we’re taking – you all know the crack:

Dev1: “And that new method works even if the input is X, Y, and A?”

Dev2: “It did until you asked me, but now I’m going to have to test it all again!”

Ahhh, self doubt, you have to love it :)

Though I digress – basically, I would love to get some feedback from the community on this one.

Business information – what are the options?

Ok, if we take a simple method call, something like:

ProcessLogin(username, password)

How can we find out if that method fails, and if it does fail, why?  Was the username wrong?  Is their account disabled?  Did the password not match up?  This is a relatively straightforward method, which is why I’ve chosen it for the demo, though there are any number of things that can go wrong with it.

Option 1 – returning an enum or something that can identify the type of error

So the method signature could be:

public ProcessLoginResult ProcessLogin(string username, string password) {
	// stuff
}

public enum ProcessLoginResult {
	Success,
	Fail_UsernameMismatch,
	Fail_PasswordMismatch,
	Fail_AccountDisabled,
	Fail_AccountCoolingOff,
	Fail_AccountSelfExcluded,
	Fail_AccountClosed
}

You may feel like that’s a lot of fail states, but these are what I work with in my current environment so they have to be included.

Obviously then we have something from the calling code like:

var result = ProcessLogin(username, password);

if (result != ProcessLoginResult.Success) {
	switch(result) {
		case ProcessLoginResult.Fail_UsernameMismatch:
		case ProcessLoginResult.Fail_PasswordMismatch:
			// deliberately the same message for both, so we don't leak which part was wrong
			ModelState.AddModelError("General", "We have been unable to verify your details, etc. etc.");
			break;
		case ProcessLoginResult.[errorstate1]:
			return RedirectToAction("ErrorState1", "ErrorPages");
		case ... [for each extra error state]
	}
}

There are obvious pros to this approach from my point of view – one is that we’re not throwing exceptions!  People talk a lot about the performance overhead of actually throwing new exceptions – there’s generally a sucking in of teeth as they do so.  I personally have no idea how “expensive” they are to raise, and it’s certainly something I’ll have to look into.
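For anyone equally curious, here’s a quick-and-dirty way to get a rough feel for that cost (very much a back-of-an-envelope sketch, not a proper benchmark):

var sw = System.Diagnostics.Stopwatch.StartNew();
for (int i = 0; i < 100000; i++)
{
	try { throw new InvalidOperationException("test"); }
	catch (InvalidOperationException) { /* swallow - we only care about the raw cost */ }
}
sw.Stop();
Console.WriteLine("100,000 throw/catch cycles took {0}ms", sw.ElapsedMilliseconds);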

The difficulty here for me though is two-fold:

  1. If I want that richness of business information returned from my methods on failure, I need to come up with (almost) an enum per method to define the states it can return.
  2. If I have a method that needs to return something (e.g. GetUserById(userId)), my only option is to set up the method signature with the user as an out param or pass it in by reference – see the sketch below.
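To make that second point concrete, here’s a sketch of what option 1 forces on a query-style method (GetUserResult is a hypothetical enum, invented purely for illustration):

public enum GetUserResult {
	Success,
	Fail_UserNotFound
}

public GetUserResult GetUserById(int userId, out MyCompanyUser user) {
	// the actual user has to come back via the out param, because the
	// return slot is already taken by the result enum
	user = null;
	// stuff
	return GetUserResult.Fail_UserNotFound;
}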

Option 2 – Business Exceptions

And this is the approach I’ve taken, though again – feedback very much appreciated!  Each of the possible fail states becomes a potential exception.  So the ProcessLogin method becomes:

/// <summary>
/// Processes the login.  Steps are:
///  - Check the existence of the user
///  - Check the password matches (yup, we'd be hashing them here, no need for the demo)
///  - Check the account status
/// </summary>
/// <param name="username">The username.</param>
/// <param name="password">The password.</param>
/// <returns>The logged-in user.</returns>
public MyCompanyUser ProcessLogin(string username, string password)
{
	MyCompanyUser user;

	try
	{
		user = dal.GetUserByUsername(username);
	}
	catch (MyCompanyUserNotFoundException)
	{
		//TODO: LOGGING
		throw; // but then pass the exception up to the UI layer as it is most easily able to deal with it from a user perspective
	}

	if (user.Password != password)	
	{
		//TODO: LOGGING
		MyCompanyUserWrongPasswordException ex = new MyCompanyUserWrongPasswordException("Password doesn't match");
		ex.Data.Add("Username", username);
		// potentially if you had an MD5 or something here you could add the hashed password to the data collection too

		throw ex;
	}
	
	switch(user.AccountStatus)
	{
		case AccountStatus.SelfExcluded:
		{
			//TODO: LOGGING
			MyCompanyUserSelfExcludedException ex = new MyCompanyUserSelfExcludedException("User self excluded");
			ex.Data.Add("Username", username);
			throw ex;
 		}	
		case AccountStatus.CoolingOff:
		{
			//TODO: LOGGING
			MyCompanyUserCoolingOffException ex = new MyCompanyUserCoolingOffException("User cooling off");
			ex.Data.Add("Username", username);
			throw ex;
		}	
		case AccountStatus.Disabled:
		{
			//TODO: LOGGING
			MyCompanyUserAccountDisabledException ex = new MyCompanyUserAccountDisabledException("Account disabled");
			ex.Data.Add("Username", username);
			throw ex;
		}	
		case AccountStatus.Closed:
		{
			//TODO: LOGGING
			MyCompanyUserAccountClosedException ex = new MyCompanyUserAccountClosedException("Account closed");
			ex.Data.Add("Username", username);
			throw ex;
		}	
	}
	return user;
}

Obviously with this in place I can either log at this level or log at the UI layer (I don’t have a strong feeling either way architecturally).

The process login method call at the UI layer then becomes a little more convoluted:

try
{
	MyCompanyUser user = service.ProcessLogin(model.Username, model.Password);

	return RedirectToAction("LoggedIn", "Home");
}
catch (MyCompanyUserSelfExcludedException)
{
	return RedirectToAction("SelfExcluded", "ErrorPages");
}
catch (MyCompanyUserCoolingOffException)
{
	return RedirectToAction("CoolingOff", "ErrorPages");
}
catch (MyCompanyUserAccountDisabledException)
{
	return RedirectToAction("AccountDisabled", "ErrorPages");
}
catch (MyCompanyUserAccountClosedException)
{
	return RedirectToAction("AccountClosed", "ErrorPages");
}
catch (MyCompanyUserException)
{
	// if we're this far, it's either a UserNotFoundException or a WrongPasswordException, but we'll catch the base type (MyCompanyUserException)
	// we could log and handle them specifically, though here we don't care which one it is - we'll handle them the same
	ModelState.AddModelError("General", "We have been unable to match your details with a valid login.  (friendly helpful stuff here).");
}

I don’t know why I find this the more elegant solution – it certainly doesn’t generate any less code!  There is very much a need for good documentation with this approach (each method documenting which types of exceptions it can throw).
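For completeness, the custom exception types themselves aren’t shown above – a minimal sketch of how one might look (deriving from MyCompanyUserException, the base type the UI layer catches):

[Serializable]
public class MyCompanyUserWrongPasswordException : MyCompanyUserException
{
	public MyCompanyUserWrongPasswordException(string message)
		: base(message)
	{
	}
}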

Want to see more?

I’ve put together a test VS2010 project using MVC2 and separate projects for the exception definitions and one for the models/services/dal stuff.

It’s rudimentary – our core solution has Unity in there as an IoC container, interface-based Services and Repositories, unit tests etc., and it just wasn’t viable (or commercially acceptable) to make any of that available, so I’ve distilled it down to the basics in the solution.

What I’d love now is feedback – how do people feel about this approach (Business Exception led) as opposed to the other?  What other approaches are available?  Is it bad to use exceptions in this way (and I’m fine if the answer is ‘ffs tez, stop this now!’ so long as there’s a good reason behind it!)

The code is available on google code at: http://code.google.com/p/business-exception-example/

and I’ve only created a trunk (subversion) at present at: http://code.google.com/p/business-exception-example/source/browse/#svn/trunk

Feedback pleeeeeez!

Developer Developer Developer Scotland, or summer arrives early in Glasgow!

Who let the sun out?

What a stunning day we were all faced with for #dddscot this year – the drive up from Newcastle (albeit starting at an ungodly hour) was actually fun – great scenery on the way, I’d forgotten what it was like to get out of a built up area – plenty more trips out needed over the summer methinks.  I had high expectations of the event after attending #ddd8 earlier in the year and being overwhelmed by the content there, and the day didn’t disappoint.

Onto the talks I managed to get to:

HTML 5: The Language of the Cloud?

Craig Nicol – @craignicol

A good start to the day, and pertinent to my current role (we’re investigating what HTML5 can do to help us with alternate platform delivery, certainly with a focus on the mobile market).  Craig’s talk was animated (in both senses of the word!), and it was useful to see just where the ‘standards’ were at.  Safe to say at present – and Craig mentioned it a few times during the talk – if you want to target HTML5 then you really do need to pick your target browser (or, more usually, generate more work and target browserS), as the standards are still significantly in flux.  There is a lot of help out there, and the people creating mashups really are helping in showing which browsers support which elements.

I particularly liked the look of the new forms stuff (the Web Forms 2.0 work) – being able to define a field as an ‘email’, ‘telephone’ or ‘url’ type adds significant context to proceedings and will deliver (for the users) a far richer experience.

As with a lot of emerging technologies though, I certainly think it’s far too early for reliable deployment in all but very controlled environments – even if you implement progressive enhancement well.  Something to follow for sure though.

Overall a very well presented talk, a minimal smattering of the expected ‘this worked 10mins ago!’, but this is HTML5+bits, so to be expected.

Exception Driven Development

Phil Winstanley – @plip

plip was his usual exuberant self with this talk on exceptions, and it was a useful companion session to the one I’d seen at DevWeek earlier in the year, given by Jeffrey Richter.  The initial message was ‘exceptions happen’ – we have to learn how to live with them: what to do when they happen, which ones we should fix (and yup, I’m one of those people that hates warnings, so I suspect I’ll have to fix all of them!), which ones we should prioritise, how we make sure we’re aware of them, that sort of thing.

Two very useful additions to my current understanding – one was ‘Exception.Data’, which is essentially a dictionary hanging off every exception that you can stuff your own key/value pairs into.  At present we’re throwing our own exceptions within our business software (more on that later), but .Data will give us far more information about what parameters were at play when the exception happened – utterly brilliant, and terrifying that I didn’t know about it!

Another was the use of window.onerror in javascript – make sure you HTTP POST the details (or use whatever other mechanism works best for you) when your scripts don’t work.  There’s nothing worse than your javascript borking and not being able to reproduce it, so make sure you report on these too.

Some key snippets too (some common sense, some not), such as never redirect to an aspx page on a site error (thar be dragons and potential infinite loops) – go serve static html instead.

plip’s acronym at the end of the session made me chuckle, I shant repeat it, but it had an odd way of sticking in the consciousness ;)

The only thing I thought lacking in this talk (and it’s no real criticism of plip) was the concept covered in that DevWeek talk earlier in the year: the idea that exceptions are *not* just for exceptional circumstances – they’re there as a means of controlling program flow, of reporting when something didn’t work as expected, and of giving more effective information.

So for example, if I had a method called ‘ProcessLogin(username, password)’ and one of the first checks was ‘does this username exist in the DB?’ – if it doesn’t, throw new UserNotFoundException.

Of course, if plip had gone down the custom exceptions and business-defined exceptions route, the talk could comfortably have lasted two to three times longer, so I feel the DevWeek talk and plip’s complemented each other well.

Cracking talk though plip – really did get a lot out of this one, and I think this was the most useful session of the day for me.

A Guided Tour of Silverlight 4

Mike Taulty – @mtaulty

A reminder from Mike that I really need to spend some time looking into Silverlight 4.  I focus very heavily on web development and web technologies, and although I have little interest in desktop development, I think SL4 has a lot of potential as an intranet-based tool with a rich GUI.  Of course, I may be better going down the WPF route for that, but there’s something about the versatility of SL4 that appeals.

Cracking talk from Mike as per – always good to see one of the UK evangelists wax lyrical about their current focus, and this was no exception.

What ASP.NET (MVC) Developers can learn from Rails

Paul Cowan – not sure on twitter

I have to prefix this talk by saying that I thought Paul’s presentation style was great, and much as he maligned his irish accent, he was cracking to listen to.

That said – rails… what a bag of shite! lol.  I suspect I may get a number of replies to this, but what I like about MVC2 is that I can focus on architecture and the important stuff, and ‘get the job done’ without too many interruptions.  Ok, I have to add views myself, and a ‘Customer’ entity doesn’t automatically get a controller/views/unit tests associated with it.  But I feel in complete control, and don’t feel constrained at all.

I spent too many years in a unix/perl/python environment, and I really do not miss the command line shite I had to go through to really add value to what I was doing in the programming language.

VS2010 + Resharper deliver a significant number of improvements in the ‘streamlining’ of application development, and I have none of the hassle that came about as part of that rails demo (no matter how much it delivered with just a simple command line).

So I really do apologise to Paul – his presentation was great, but it only reinforced for me that the love affair I’m having with MVC2 at present is well grounded.  God, I sound like such a fanboy!

Real World MVC Architectures

Ian Cooper – @icooper

A few teething troubles at the start (don’t you just hate it when a backup brings your system to its knees?), but overall a good presentation – I’d seen Ian’s talk at #ddd8 (prior to really working with MVC), and I thought I’d re-attend after spending two months working solidly with MVC2.  It has certainly reinforced that what I’m doing is ‘right’, or at least appears to be good practice.  I’m still sceptical about the overhead CQRS brings when implemented in its purest sense, though the principle (don’t muddy up your queries with commands, and vice versa) is one that obviously all should follow.
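For anyone unfamiliar, that separation principle boils down to something like this (a minimal sketch of the idea, not Ian’s code):

// queries: return data, change nothing
public interface IUserQueries
{
	MyCompanyUser GetUserById(int userId);
}

// commands: change state, return nothing
public interface IUserCommands
{
	void DisableAccount(int userId);
}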

Ian had a bit of a mare with his demo code, though more to my benefit as I managed to nab some swag for being ‘that geek’ in the front row pointing it out – yay for swag!

The Close

Colin Mackay and the rest of the guys then spent some time wrapping up the day, handing out significant swag (yay, I won a ReSharper licence – or, if I can wing it as I already have one, a dotTrace licence!), and we had the obligatory Wrox Lollipop shot taken.

All in all, it was a cracking day, and well worth that early drive up from Newcastle.  I think events like this work so well – a room or rooms full of enthusiastic devs, who all just want to be better at their art, being presented to by people who’ve spent some time working on that art.  There’s nothing finer in the geek world.

Thanks to all organisers and sponsors – great fun was had by all :)

Unit Testing with DataAnnotations outside of MVC

This past week has seen us start on a big project at work to re-architect the site into .net and MVC2.  Naturally we have our models in a separate project, and we have two separate test projects (Unit and Integration) setup to use NUnit.

As it’s early days for us, and our first “real” MVC project I thought I’d write this up, a) as an aid to learning for me, but b) to try to gain feedback from the community on what they do with regards validation on their models.

I can see a few different ways we could have done this (annotate the ViewModels we’ll use on the front end, build validation logic into our setters, etc.) but we’re now going down a route that so far feels ok.  That said, we’re focussing solidly on the modelling of our business logic at present, so haven’t yet brought the model “out to play” as it were.

Hopefully the above gives a wee bit of insight into where we are with it.

We’ve decided to plump for the MetaData model approach to keep the main objects slightly cleaner – an example for us would be:

namespace MyCompany.Models.Entities
{
	/// <summary>
	/// 
	/// </summary>
	[MetadataType(typeof(MyCompanyUserMetaData))]
	public class MyCompanyUser
	{
		public int UserId { get; set; }

		public string Username { get; private set; }
		...

		public void SetUsername(string newUsername)
		{
			if (Username != null)
				throw new ArgumentException("You cannot update your username once set");

			//TODO: where do we ensure that a username doesn't already exist?
			Username = newUsername;
		}
	}
}

and then in a separate class:

namespace MyCompany.Models.Entities
{
	public class MyCompanyUserMetaData
	{
		[Required(ErrorMessage="Your password must be between 6 and 20 characters.")]
		[StringMinimumLength(6, ErrorMessage="Your password must be at least 6 characters.")]
		public string Password { get; set; }

		[Required(ErrorMessage="Your username must be between 6 and 20 characters.")]
		[StringLength(20, MinimumLength=6, ErrorMessage="Your username must be between 6 and 20 characters.")]
		[MyCompanyUserUsernameDoesNotStartWithCM(ErrorMessage="You cannot use the prefix 'CM-' as part of your username")]
		[CaseInsensitiveRegularExpression(@"^[\w\-!_.]{1}[\w\-!_.\s]{4,18}[\w\-!_.]{1}$", ErrorMessage = "Your username must be between 6 and 20 characters and can only contain letters, numbers and - ! _ . punctuation characters")]
		public string Username {get;set;}
	}
}

With all of this in place you’re all well and good for the MVC world.  Unit testing, though, just doesn’t care about your annotations, so a simple unit test like this:

[Test]
public void SetUsername_UsernameTooShort_ShouldThrowExceptionAndNotSetUsername()
{
	// Arrange
	var testUser = new MyCompanyUser();

	// Act / Assert
	Assert.Throws<ValidationException>(() => testUser.SetUsername("12345")); // length = 5
	Assert.That(testUser.Username, Is.Null, "Invalid Username: Username is not null");
}

won’t give you the expected results, as the length rule lives in the DataAnnotation rather than in the setter.

What was our solution?

After much reading around (there didn’t seem to be an awful lot out there covering this) we took a two-step approach.  The first was to allow SetUsername to validate against the DataAnnotations like so:

public void SetUsername(string newUsername)
{
	if (Username != null)
		throw new ArgumentException("You cannot update your username once set");

	Validator.ValidateProperty(newUsername, new ValidationContext(this, null, null) { MemberName = "Username" });
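	// Validator here is System.ComponentModel.DataAnnotations.Validator - it throws a ValidationException if the value fails any annotation for the given MemberName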

	//TODO: where do we ensure that a username doesn't already exist?
	Username = newUsername;
}

Validator is well documented, and there are a few examples out there of people doing this within their setters – essentially it validates the input against the annotations for a particular MemberName (Username in this case).
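As an aside, if you’d rather collect failures than have an exception thrown, the same class offers a Try variant – a quick sketch:

var results = new List<ValidationResult>();
bool isValid = Validator.TryValidateProperty(
	newUsername,
	new ValidationContext(this, null, null) { MemberName = "Username" },
	results);

if (!isValid)
{
	// results now holds a ValidationResult (with an ErrorMessage) per failed annotation
}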

The second step was necessary because of the approach we’d taken with the MetaData class above, and it was a mapping in the TestFixtureSetup within our unit tests:

TypeDescriptor.AddProviderTransparent(new AssociatedMetadataTypeTypeDescriptionProvider(typeof(MyCompanyUser), typeof(MyCompanyUserMetaData)), typeof(MyCompanyUser));

This line (though I’ve yet to look at it down at the source code level) appears to be a standard mapping for the class, telling the type descriptor where to find the metadata/annotations.
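For context, that registration sits in the fixture setup – something like this (a sketch; the method name is mine):

[TestFixtureSetUp]
public void FixtureSetup()
{
	// tell the type descriptor that MyCompanyUser's annotations live on the buddy class
	TypeDescriptor.AddProviderTransparent(
		new AssociatedMetadataTypeTypeDescriptionProvider(typeof(MyCompanyUser), typeof(MyCompanyUserMetaData)),
		typeof(MyCompanyUser));
}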

After putting those two things in place, the unit tests successfully validate against the annotations as well as any coded business logic, so job’s a good ’un!

Was it the right solution?

This is where I ask you, the person daft enough to suffer this blog post!  I have no idea if there is a better way to do this, or how it will pan out as we propagate up to the MVC level – will I be causing us headaches taking this approach?  Will it simply not work because of overlap between the way the MVC model binder validates and what we’ve done down at the domain level?

It’s still early days for the project, and the above feels like a nice way to validate down at a business domain level, but how it pans out as we propagate wider and start letting other projects consume, hydrate and update the models… well that’s anyone’s guess!

Comments very much welcome on this one folks :)

Using T4 to generate enums from database lookup tables

I’m sure a fair few people will be working on projects like us where we have a database backend with referential integrity, including a number of lookup tables.  A lot of the time in this situation you also want to mirror the lookup values in your code (as enums for us).  Most of the time, it’s relatively easy to just manually create both sets of entries as they will rarely change once created.  Or so we hope!

I quite fancied learning about T4, and the first example I could think of was this tie-up between database lookup tables and code enums.

I love the idea that the output from your T4 work is generated at design time and available directly in your code before you even compile – syncing things between a database and your code base is an obvious first play.

So with that in mind, lets crack on.

Initial Setup

I’ve created a simple console app and a simple DB with a couple of lookup tables – simple ‘int / string’ type values.  I installed T4 Toolbox to get extra code generation options within the ‘Add New…’ dialog, though it turns out my final solution didn’t actually require it – that said, the whole T4 Toolbox project looks very interesting, so I’ll keep an eye on that.

[screenshot: adding the new text template via the ‘Add New…’ dialog]

This will generate a file ‘GenerateCommonEnums.tt’, and the base content of the file is:

[screenshot: the default content of the new .tt file]
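From memory, the base content of a freshly added template is just the directives – something like this (approximate, so treat as a sketch):

<#@ template debug="false" hostspecific="false" language="C#" #>
<#@ output extension=".txt" #>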

Add a reference to your DB

At this point, I would have loved to use linq to sql to generate my enums, as it’s a friendly/syntactically nice way of getting at data within the database.

That said, this proved far more difficult than I’d hoped – any number of people had commented on it, saying that if you ensure System.Core is referenced and you import System.Linq, the job should be a good ’un.  It wasn’t in my case.

Thankfully, this wasn’t the end of the investigation.  I managed to find an example online that used a SQLConnection… old skool it was to be!

So what does the code look like…

The code I generated turned into the following, and I’m sure you’ll agree it ain’t that far away from the sort of code we’d write day in, day out.

<#@ template language="C#" hostspecific="True" debug="True" #>
<#@ output extension="cs" #>
<#@ assembly name="System.Data" #> 
<#@ import namespace="System.Data" #>
<#@ import namespace="System.Data.SqlClient" #>
<#
    SqlConnection sqlConn = new SqlConnection(@"Data Source=tombola009;Initial Catalog=TeamDev;Integrated Security=True");
    sqlConn.Open();
#>
namespace MyCompany.Models.Enums
{
	public enum TicketType
	{
		<#
		string sql = "SELECT Id, Name FROM LOOKUP_TABLE_1 ORDER BY Id";
        SqlCommand sqlComm = new SqlCommand(sql, sqlConn);

        IDataReader reader = sqlComm.ExecuteReader();

        System.Text.StringBuilder sb = new System.Text.StringBuilder();
        while (reader.Read())
        {
            sb.Append(TidyName(reader["Name"].ToString()) + " = " + reader["Id"] + "," + Environment.NewLine + "\t\t");
        }
        sb.Remove(sb.Length - 3, 3);

        reader.Close();
        sqlComm.Dispose();
		#>
<#= sb.ToString() #>
	}
	
	public enum TicketCategory
	{
		<#
		sql = "SELECT Id, Area, Name FROM LOOKUP_TABLE_2 ORDER BY Id";
        sqlComm = new SqlCommand(sql, sqlConn);

        reader = sqlComm.ExecuteReader();

        sb = new System.Text.StringBuilder();

        while (reader.Read())
        {
            sb.Append(TidyName(reader["Area"].ToString()) + "_" + TidyName(reader["Name"].ToString()) + " = " + reader["Id"] + "," + Environment.NewLine + "\t\t");
        }

        sb.Remove(sb.Length - 3, 3);

        reader.Close();

        sqlComm.Dispose();
		#>
<#= sb.ToString() #>
	}
}
<#
	// tidy up - close the connection now that both enums have been generated
	sqlConn.Close();
#>

<#+
	
    public string TidyName(string name)
    {
        string tidyName = name;

		tidyName = tidyName.Replace("&", "And").Replace("/", "And").Replace("'", "").Replace("-", "").Replace(" ", "");
		
        return tidyName;
    }

#>

The ‘TidyName’ method is in there just to tidy up the obvious string issues that could crop up.  I could have regex-replaced anything that wasn’t a word character, though I think this approach gives me a bit more flexibility and allows customisable rules.
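For reference, the regex alternative mentioned above would look something like this (a sketch – it trades the customisable rules for brevity):

public string TidyName(string name)
{
	// strip anything that isn't a letter, digit or underscore
	return System.Text.RegularExpressions.Regex.Replace(name, @"\W", "");
}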

This basically generates me the following .cs file:

 
namespace MyCompany.Models.Enums
{
	public enum TicketType
	{
		Problem = 1,
		MAC = 2,

	}
	
	public enum TicketCategory
	{
		Website_Affiliates = 1,
		Website_Blog = 2,
		Website_CentrePanel = 3,
		Website_CSS = 4,
		Website_Deposit = 5,
		Website_Flash = 6,
		Website_GameRules = 7,
		Website_GameChecker = 8,
		Website_HeaderAndFooter = 9,
		Website_HelpContent = 10,
		Website_Images = 11,
		Website_LandingPage = 12,
		Website_MiscPage = 13,
		Website_Module = 14,
		Website_Multiple = 15,
		Website_MyAccount = 16,
		Website_myTombola = 17,
		Website_Newsletters = 18,
		Website_Playmantes = 19,
		Website_Refresh = 20,
		Website_Registrations = 21,
		Website_Reports = 22,
		Website_TermsAndConditions = 23,
		Website_WinnersPage = 24,
		Website_Other = 25,
	}
}

From that point on, if extra lookup values are added, a simple click of the highlighted button below re-runs the templates and re-generates the .cs files.

[screenshot: the ‘Transform All Templates’ button in the Solution Explorer toolbar]

Next Steps

I’m utterly sure there must be an easy way to use linq to sql to generate the code above and that I’m just missing it, so that’s the next play area.  I’m also going to be playing with the POCO stuff for EF4, and I think the above has given me a taster for it all.

As with all initial plays with this sort of thing, I’ve barely scratched the surface of what T4 is capable of, and I’ve had to rely upon a lot of existing documentation.  I’ll play with this far more over the coming weeks – I can’t believe I’ve not used it before!