LeadDev London 2019

This marks my second time at LeadDev London, and as you’ll see from my tweet stream, I get somewhat enthusiastic. The conference was (if possible) even more inclusive, diverse, and welcoming this year – made even more so by its positioning during Pride month (Meri highlighted it was a fluke, though she also said that in previous years about bigger political upheavals that overlapped with the conference – I feel there are too many coincidences here! :))

I have to call out the badge adornments. Upgrading to a Pride lanyard, and giving both my communication preferences and pronouns on my badge, was an awesome touch – I truly hope all conferences follow this model. And this was only a drop in the ocean of the inclusivity effort – a crèche, an alcohol-free room, a quiet room, a prayer room – so much to absolutely adore about the approach and care taken in looking after humans.

My leadership journey over the past 12 months between LeadDev 2018 and this year has been an interesting one, and has seen me move from an engineering lead in a relatively small company to one of the biggest global pharma companies. Lots of personal growth, lots of exposure, loads of challenges – it’s been a blast.

This year’s LeadDev for me was underpinned by a few core themes, and each resonated:

  • Inclusion and Diversity – Let’s get it out in the open – I’m from the demographic that is part of the problem. I’m a cis-gendered, middle-class, middle-aged white man. I’m an ally to the LGBTQ+ community as actively as I’m able (my trans son helps a lot with that), and I thought I was an ally to the rest of the diversity spectrum, though this conference has really highlighted that in my position of privilege there is so much more I can do. I was lucky enough after the conference to attend and be welcomed at a talk on feedback at the London Tech Ladies user group, and it was tragic to hear just how bad some of the ladies there had it in the workplace. I&D is, thankfully, handled very well at GSK, but LeadDev coupled with the meetup has reminded me that I need to be more proactive and engaged in my support (with permission) of more diverse groups.
  • Psychological Safety and Org Culture – it was heartening to see time and again in talks these referenced or referred to. Anyone who’s had a conversation with me knows I’ll talk till the cows come home around these subjects. Vulnerability, courage, and helping people to bring their whole selves to work is waaaay more important than the actual tech. It ran as a golden thread through the whole conference and was hugely comforting to hear. It’s essential to any effective organisation.
  • Feedback and High Performing Teams – it felt great to reflect on a lot of the practices and approaches highlighted by speakers. Things that seemed to work well for them with regards to engaging and investing in the people within the org, and a lot of them were aspects we’d already adopted (or adapted) to some success. It feels like there’s perhaps a tale from the trenches from our own experiences here that may well be valid to listen to.
  • Artificial Intelligence – a subject I have little awareness of, but that I’ve seen a few times since joining GSK. I have a healthy tech scepticism when interacting with a bot that tells me I’m “looking peaky” – cold chills abound at this. Happy to talk to AI when interacting with my bank or tracking a missing order on Amazon, but I draw the line at my healthcare. S’the future though innit 🙂

Below are some of the best talks for me that are worth highlighting. I’ll include links to the videos as soon as they’re available.

Navigating Team Friction

Lara Hogan (@Lara_Hogan) (video)

Some great lessons based around Tuckman’s stages of group development when navigating team friction, and some great references to emotional intelligence as a basis of communication (the amygdala hijack has had us all as its slave for far too long!). Rarely is there a tiger in the room when we react badly to something – the classic conundrum demonstrated being the emotive subject of ‘desk moves’! A great example followed about surgeons who perform the same procedure, multiple times, across multiple hospitals – yet performance only really ever seemed to rise significantly in the surgeon’s home hospital. The outcome highlights the importance of team interaction, trust, workflows etc. – performance is so much more than just hiring a group of “rockstars” and hoping for the best.

I loved the BICEPS model highlighted when looking at our core needs at work:

  • Belonging – community, connection, a group, a tribe
  • Improvement/Progress – towards purpose, improving the lives of others, meeting KPIs/OKRs
  • Choice – flexibility, autonomy, decision making
  • Equality/Fairness – access to resources/info, equal reciprocity. When something isn’t fair we will as a species literally riot
  • Predictability – of resources, time, direction, and future challenges. Be careful though, too much and we get bored. Too little, and our amygdala comes out to play!
  • Significance – status, visibility, recognition

“Knowing and addressing people’s core needs is a shortcut to making others feel understood and valued”

I’ve followed Lara’s 1:1 guidance for some time, and found the question ‘how do you like to receive feedback?’ really useful in all 1:1s with new staff. The model that Lara proposed for feedback feels similar to others I’ve seen, but is immediately applicable.

  • Observation – of a behaviour or action. Make it purely factual
  • Impact – of that behaviour. Share how you feel, but share it in a way they will care about it.
  • Open Question or Request – in talking to adults, they’re already thinking about next steps, so you can ask a genuinely open/curious question “What do you hope will be the result of your actions in the meeting?” Generally, asking a question like this reveals so much more to feed into the shared pool of meaning in that conversation. You can make a request at this stage, but Lara highlighted an open question often worked best.

Self Care
Take care of yourself – distance yourself from an unhealthy work environment. Everything covered will help navigate team friction, but it won’t fix a bad environment – and your interpersonal safety should not be in jeopardy just because you’re in one.

Bottom up with OKRs

Whitney O’Banner (@WooBanner) (video)

The best t-shirt of the day (and I was wearing my ‘Software is Made by Humans’ t-shirt!) – Whitney was wearing the amazing “Make tech more black” t-shirt! Any talk on Objectives and Key Results (OKRs) that starts with “Google got it wrong!” is going to grab anyone’s attention. This was an amazing talk, and I’d urge you to watch it. It challenged the status quo on a number of commonly held conceptions around OKRs and really made me think (and will absolutely generate discussion and change in our OKR process at work!)

Key takeaways

  1. Skip Individual OKRs
    Thankfully we already do this – OKRs down to the individual level just felt like bureaucracy and overly heavy process. Individual OKRs were likened to corporate meetings – they slow us down and don’t give us the bigger picture. How can we improve them? We can’t – just ditch them. If you need something at that level, use ‘tasks’ instead.
  2. Ignore the metrics
    When first introducing OKRs, it’s hard enough to get the structure, the approach, and the alignment right – don’t sweat the measurement detail too much. You want to improve X by 20%? Why 20%? Is that a stretch goal? Why not 30%? Whitney used the term SWAGs (Sophisticated Wild-Ass Guesses) – awesome!
  3. Avoid Cascading Goals
    Heresy! This was Whitney’s key point – if you drop nothing else, drop this! Shouldn’t our goals align with the company’s? No… (just kidding, yes). You should absolutely want to achieve your company goals. Laszlo Bock (Google’s former VP of People Operations) said: “Having goals improves performance. Spending hours cascading goals up and down the company, however, does not. It takes way too much time and it’s too hard to make sure all the goals line up.” Managers tend to overvalue their ideas by 42%; frontline employees tend to undervalue theirs by 11%. Let the frontline employees decide on ‘how’ they’ll align and add value. Bottoms up!

Inclusion starts with I

Dora Militaru (@DoraMilitaru) (video)

I wish this had been a longer talk (it was one of the 10-minute slots) – it was, for me as a middle-aged, able-bodied, cis-gendered white man, a revelation. I’ve felt lucky to have had more chance to support and be an ally to the LGBTQ+ community, but this talk really highlighted how much more I need to do across all of the inclusion and diversity spectrum. Tech can still be a hostile place. We have male-dominated job descriptions and male-dominated interview processes. Some people have to work 10 times as hard in tech to do well. The FT released an I&D report that I shall be reading and using to action my own improvements.

If you read this and feel that things like diversity quotas are anti-meritocratic, double check your privilege – you may well be part of the problem. There were, however, two problems highlighted with quotas:

  • Fixing diversity is not the HR team’s responsibility, and quotas tend to lie with them
  • They can trick us into thinking we’ve fixed the problem when we’ve hit the target

Nine out of ten women still work in jobs where the men are paid more for the same job. I loved the quote: “Quotas may be the only way of achieving, eventually, a world where quotas are obsolete”. Paying lip service to diversity has to end – implementation is key.

Tech seems to be particularly bad for this – a recent Stack Overflow survey highlighted that 34% of respondents said that diversity was their lowest priority when looking for a new job.
So many references flooded out at the end:

  • If minorities aren’t applying, fix it – you can do so
  • Advocate actively
  • Become an ally!
  • pronoun.is and tampon.club were examples given
  • If you worry that you don’t know enough about something, ask – the worst thing you can do is remain silent
  • If you’re in a room making a decision and everyone in the room looks like you, that’s a warning sign
  • Celebrate diversity!

Ola Sitarska (@OlaSitarska)’s talk on an inclusive hiring process resonated a lot with the above, added some further context for me too, and shouldn’t go without mention.

Facilitation Techniques 202

Neha Batra (@NerdNeha) (video)

I’m not even going to try to write this one up, but will update as soon as the video is out. I made so many scrambled notes, but it was awesome. If you facilitate anything in your working life, this had so much practical, tangible, actionable advice. How can a talk that mentions ‘Post-It Super Sticky Portable Tabletop Easel Pad with Dry Erase Panel’ be wrong? Honestly, watch it 🙂

Special Mention

To the end-of-day-one storytelling of Nickolas Means (@Nmeans), who gave us a thorough history of Eiffel’s Tower and tied it back to organisational politics and relationship building.

The end of day two saw the amazing Clare Sudbery (@ClareSudbery) and Dr Sal Freudenberg (@SalFreudenberg) talk about an incredible array of topics pivoting around a hackathon where they invented time travel… no, really! Watch it – both amazing speakers, and they delivered their content superbly. I learned of the spoon theory during this, and have already used it to describe my own energy levels in a conversation after three back-to-back days in London – loved it. Oh, and they came onto the stage to the amazing Poly Styrene and X-Ray Spex’s ‘Oh Bondage, Up Yours!’ – what’s not to love!

Daring to Dare Greatly

I’ve laughed, cried, and felt great affinity while reading Brené Brown’s ‘Daring Greatly’. There are so many takeaways, though it is important for us to remember that we are enough, and sometimes the bravest thing we can do is show up.

Her leadership manifesto should be at the heart of every organisation, not only for its humanity, but as an indicator of just how emotionally intelligent and empathic that organisation is.

I couldn’t recommend the book more highly.

DDD Scotland 2016 – A sunny day in Edinburgh

Saturday saw an early start to travel from the north east up to Edinburgh for the first DDD Scotland in a number of years and the hashtag was already seeing activity, so it was clearly going to be a good day.

First, and only, disappointment of the day – no coffee at the venue for breakfast!  They made up for it later in the day, though a cup of rocket fuel would have helped after starting at 5am.


Some very good talks, and some real takeaways from the day – as with any event of this type, a re-firing of the engines is definitely part of the aim – new ideas, new directions, new approaches – and the day most certainly delivered on this.

I was particularly impressed with the guys from Skyscanner – both culturally and technically they have identified their problems well, and effectively delivered solutions and organisational change in order to minimise pain – a success story in the making.

A Squad Leader’s Tale – the Skyscanner squads model

Keith Kirkhope (@kkirkhope)

The squads model from Spotify has been widely discussed, and Skyscanner adopted (and adapted) that model during a period of organisational and architectural change – it would appear that they have done so to good effect.

They had the typical growing pains of any organisation that has grown significantly from early success:

  • They’d hit product gridlock
  • Their source control/release process was unwieldy
  • They were broken into functional teams (front end, etc.)

They were able to apply a lot of thinking around the theory of constraints, and indeed highlighted that they realised they were only as fast as their slowest unit. 

They adopted the squads model, though included a few modifications to better fit their organisational structure (they included squad leads at the top of each squad, but also brought in a tribe lead, a tribe engineering lead, and a tribe product lead to give better oversight across each squad).

Each squad is essentially self-managing and self-directing – they come up with their own goals, metrics, and success criteria (an example given was ‘to help users find their ideal flight(s) and book it as easily as possible’).

Some really positive side effects came from this empowerment – for example, project leads become redundant and the product owner becomes one of the key foci.

I managed to ask a number of questions in this talk to better grasp the model, though unfortunately the one I didn’t ask was around Conway’s Law.  This organisational change seemed to be fundamental to their move from monolith to microservices, and I suspect that without it they could not have so effectively broken down that monolith.  This change was top-down led, and it’d be fascinating to learn more about the drivers behind it.  It’s the first time I’ve seen the communication structures of an organisation directly impacting the design of its systems.

Breaking the monolith

Raymond Davies (@radyrad88)

Another talk from Skyscanner, this one a very detailed history of Skyscanner’s journey from inception through to the current day, covering a great deal of the technical decisions made along the way – some of which I shall investigate as part of our own journey at tombola, as we face some similar issues/growing pains.

They moved from classic ASP, through WebForms/MVC, to MVC, and eventually arrived at their current architecture, which is evolving towards a microservice model.

Some key takeaways from this one:

  • Aggressively decommission older/legacy systems – ‘kill it with fire’
  • The theory of constraints played a big part in their evolution
  • They weren’t afraid to look at alternative team structures and architectures

Some technologies to look at:

  • Varnish and ESI, Riverbed Traffic Manager

This and the squads talk were the high point of the day for me.

Other talks

Windows brings Docker goodness – what does it mean for .NET developers (Naeem Sarfraz)

A great talk, and the speaker was very knowledgeable – the current state of the nation for Docker looks good.  Certainly not yet at the point where you’d want to deploy (far from it), but the technology is maturing nicely.

Versions are evil – how to do without in your APIs (Sebastian Lambla)

Holy wars on versioning RESTful endpoints – his points were very well argued.  Worth seeing the slides below.

Slides From Talks

ASP.NET Core 1.0 Deep Dive – Christos Matskas

You Keep Using the word agile… – Nathan Gloyn

Versions are evil – how to do without in your API – Sebastian Lambla

“Advanced” Functional Programming for Absolute Beginners – Richard Dalton

CQRS and how it can make your architecture better – Max Vasilyev

Ladies and Gentlemen the plane is no longer the problem – Chris McDermott

Breaking the Monolith (video) – Raymond Davies

BuildStuff Lithuania 2015

Just returned from a fantastic trip to Lithuania to attend BuildStuff 2015, and thought I’d get my notes down into a blog post to help distil them and to build a brown-bag session for the team at work.

The focus this year seems to have been heavily around a few key topics:

  • Functional programming played a big part and it was clear from even those talks that weren’t functional that there is a shift across to this paradigm in a lot of people’s work.
  • Agile process and approaches featured heavily as an underpinning, and indeed one of the best talks of the conference for me was Liz Keogh’s talk on ‘Why building the right thing means building the thing right’
  • Micro services (micro services, micro services, micro services) is still the hipster buzzword, though at least there were hints that the golden goose’s egg has cracks in it (they’re still seen as a very positive thing, though they’re not without their own costs and limitations)
  • APIs naturally featured heavily in a few talks as people move more towards service orientation/micro services, and there is now a healthy set of talks on how to do this part right
  • Continuous Integration/Continuous Delivery seems to have become less popular/less cool as a topic, but I was able to get some very useful insights at the conference that helped a lot.

You can see the full list of talks I attended here.

My tweet stream for the conference is here, and the full tweet stream for the #BuildStuffLT hashtag is here.

I attended some talks based upon the calibre of the speaker, and in some cases that was a disappointment – I of course won’t mention names, though there were a few of the bigger personalities who disappointed in their presentations.

A couple of the talks I took more notes at (in chronological order):

5 Anti-Patterns in Designing APIs – Ali Kheyrollahi (@aliostad)

I loved the visual metaphor presented early in this talk of the public API as an iceberg where the vast majority of the activity is under the surface in either private APIs or business logic, and the public facing element is a small part of it.

The anti-patterns were listed as follows:

  • The transparent server – Exposing far too much information about the internals or the implementation. Having to request resources with your userId in the URL (/get-details/12345 instead of /get-details/me), for example.
  • The chauvinist server – Designing the API from the server’s perspective and needs, and pushing that thinking and process onto any clients that wish to consume it. Interestingly, Ali came off the fence and suggested HATEOAS as an anti-pattern in this regard – I’m not convinced, but it was refreshing to see a strong opinion on this.
  • The demanding client – where certain limitations are enforced from a client perspective (e.g. forcing versioning into the URL as opposed to the headers)
  • The assuming server – where the server assumes knowledge on issues that are inherently client concerns. A good example here was pagination – /get-winners?page=1 versus /get-winners?take=20&skip=0 – we don’t know anything about the form factor on the server, so a ‘page’ has no context.
  • The presumptuous client – a client taking on responsibilities that it cannot fulfil (e.g. client implementing an algorithm that the server should handle, client acting as an authority for caching/authorisation etc.)
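The ‘assuming server’ point is the one I keep running into, so here’s a tiny sketch of my own (purely illustrative – not Ali’s code, and the endpoint name is hypothetical) of letting the client own pagination:

```python
# Client-driven pagination sketch: the client says how many items it
# wants (take) and how many to skip; the server never assumes a 'page'
# size it can't know.

def get_winners(winners, take=20, skip=0):
    """Return a slice of winners plus the metadata a client needs to
    build its own notion of a page."""
    if take < 1 or skip < 0:
        raise ValueError("take must be >= 1 and skip must be >= 0")
    return {
        "items": winners[skip:skip + take],
        "take": take,
        "skip": skip,
        "total": len(winners),
    }

# A phone client might ask for 10 at a time, a desktop client for 50 -
# the form factor stays a client concern.
mobile_page = get_winners(list(range(100)), take=10, skip=20)
```

The server just reports `total` and echoes `take`/`skip` back; whether that makes one ‘page’ or three is entirely the client’s business.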

Another analogy I liked was in thinking of the API like a restaurant. The front of house is pristine, controlled, serene, structured. How the food arrives at the table is unimportant, and the kitchen (the server side of the API) could be a bed of chaos and activity, so long as the delivery to the front of house is pristine.

Service Discovery and Clustering for .net developers – Ian Cooper (@icooper)

This was listed as being for .NET developers, though in reality the concepts apply equally across other technology stacks – but it was nice to see code examples in .NET for some of these.

He covered some of the fallacies of distributed computing:

  • The network is reliable
  • Latency is zero
  • Bandwidth is infinite
  • The network is secure
  • Topology doesn’t change
  • There is one administrator
  • Transport cost is zero
  • The network is homogeneous

And also covered a number of things around Fault Recovery:

  • Assume a timeout will happen at some point
  • Retry pattern (http status code 429 – ‘Retry-after’)
  • Circuit breaker pattern (another mention for Polly here, which is an awesome library)
  • Introduce redundancy (be careful where state is stored)
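To make the circuit breaker idea concrete, here’s a toy sketch of my own (in .NET you’d reach for Polly itself rather than rolling this by hand): after a run of consecutive failures the circuit opens and calls fail fast until a cool-off period passes:

```python
import time

class CircuitOpenError(Exception):
    """Raised when the circuit is open and we fail fast."""

class CircuitBreaker:
    """Toy circuit breaker: after `threshold` consecutive failures the
    circuit opens, and calls fail fast until `reset_after` seconds pass."""

    def __init__(self, threshold=3, reset_after=30.0, clock=time.monotonic):
        self.threshold = threshold
        self.reset_after = reset_after
        self.clock = clock
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if self.clock() - self.opened_at < self.reset_after:
                raise CircuitOpenError("circuit open - failing fast")
            # Cool-off elapsed: go half-open and allow one trial call.
            self.opened_at = None
            self.failures = 0
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = self.clock()  # trip the breaker
            raise
        self.failures = 0  # success closes the circuit again
        return result
```

The point of failing fast is to stop hammering a struggling downstream service and give it room to recover – a retry loop (honouring a 429’s ‘Retry-after’) sits naturally around the outside of this.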

Discovery was discussed at length (naturally), and he covered both server- and client-side discovery, as well as the general tooling available to help manage this (Consul, ZooKeeper, Airbnb SmartStack, Netflix Eureka, etcd, SkyDNS), and covered the importance of self-registration/de-registration of services.

A lot of practical/good content in here and a cracking speaker. Really liked the way he delivered demos via screencast so that he could talk rather than type – I think a lot of speakers could benefit from this approach.

Why Building the Right Thing means Building the Thing Right – Liz Keogh (@lunivore)

A lot of this talk focussed around Cynefin, a framework from Dave Snowden that describes a system for understanding and evaluating complex systems as they evolve. This talk covered a number of concepts known to me, but in a new way, so it very much hit upon my ‘must learn more about this’. It covered massively more than I could do justice to (though the linked talk from Liz is very similar to the one she presented), and she covered a whole pathway through an organisation’s agile fluency.

One of two talks at the conference that really gave me takeaways to go and learn from and get better at – so massively happy I attended.

ASP.NET 5 on Docker – Mark Rendle (@markrendle)

This is the first time I’ve seen Mark present and I hope it shan’t be the last. Brilliantly clever bloke, fantastic presentation style, and clearly knows his topic areas well (he gave a closing keynote too which was equally good).

I played with vNext of ASP.NET in early beta, so it was incredible to see how far it’s come since then. He had brought it all the way up to date (RC1 of the framework had been launched the day before, and he included it in the talk), and the flow and interaction have become really polished.

I have to admit to being behind the curve with regards to Docker – I understand it conceptually and have kicked up a few Docker images, but nothing anywhere near production or usable at any scale. I don’t really have any solid need for it right now, though the talk did demo how easy it is to fire up and deploy code to a Docker container, and it’s possibly something to look at once the container/unikernel platform settles down.

All of the demos were given on Linux/Mono, though that evening (tragic, I know) I re-worked through the talk on OS X and it all worked a treat, so it does indeed seem like Microsoft has the open source/multi-platform delivery message correct here. I’ll do a follow-up post on this, as it’s now the topic that will take up most of my play time in the evenings.

Continuous Delivery – The Missing Parts – Paul Stack (@stack72)

I talk with Paul at most conferences and have been to his talks in the past, so I hadn’t really thought I’d attend this one (I’ve heard all he has to say!) – so glad I did. It started after a pre-talk Twitter conversation with him and Ryan Tomlinson around where the complexity in micro-services exists (away from the code, and more towards the wiring/infrastructure of it all). Thankfully, Paul’s talk focussed around exactly those topics, and it was almost a rant towards the micro-services fandom exhibited heavily at conferences currently.

He covered the key tenets of Continuous Delivery:

  • Build only once (never ever build that ‘same’ binary again once you’ve shipped it)
  • Use precisely the same mechanism to deploy to every environment – that doesn’t mean you can use right click, publish to push up to production 😉
  • Smoke test your deployment – this is key – how do you know it works?
  • If anything fails, stop the line! It’s imperative at any stage that you can interject on a deploy that fails
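The smoke test point is worth sketching – here’s a minimal post-deploy check of my own (not from Paul’s talk; the endpoint and response fields are hypothetical), with the HTTP call injected so any client library can be used:

```python
# Post-deploy smoke test sketch: hit a health endpoint and stop the
# line if the service is unreachable, unhealthy, or serving the wrong
# version. `fetch` is any callable returning (status_code, json_body).

def smoke_test(fetch, expected_version):
    """Return True if the deployment looks healthy; False means 'stop
    the line' - fail the pipeline, don't promote the build."""
    try:
        status, body = fetch("/health")
    except Exception:
        return False  # an unreachable service is a failed deploy
    return status == 200 and body.get("version") == expected_version

# Stubbed fetcher standing in for the freshly deployed service:
ok = smoke_test(lambda path: (200, {"version": "1.4.2"}), "1.4.2")
```

Checking the reported version as well as the status code answers both halves of ‘how do you know it works?’ – the service is up, and it’s actually running the build you just shipped.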

Covered some common misconceptions about continuous delivery:

  • It’s something only startups can do – it’s true that starting in greenfield makes it easier to build upon, but anyone can move towards continuous delivery
  • It’s something that only works for nodeJS, Ruby, Go developers – any ecosystem can be pushed through a continuous delivery pipeline
  • We can hire a consultant to help us implement it – domain knowledge is crucial here, and someone without it cannot come in and help you solve the pain points
  • Continuous delivery is as simple as hooking up github to your TC account – all parts of the pipeline really need to be orchestrated and analysed

There was a really good example of successful continuous delivery in a quote from Facebook: they deploy new functionality to 17% of the female population of New Zealand. Basically, by the time the major metropolitan cities come online, they already know whether that feature is working or not.

Some other key takeaways from this talk – you have to ensure you deliver upon the four building blocks of DevOps (Culture, Automation, Measurement, and Sharing) in order to ensure you have a strong underpinning. Again, this harks back to the micro-services talks – just moving your auth system into a separate service doesn’t give you a micro-service. You need solid infrastructure underpinning it, you need orchestration, you need instrumentation and logging, you need some way for that service to be discovered, etc. etc.

Continuous Delivery (to me) feels like a solid building block that needs to be in place and working well in order to act as a feeder for the things that micro-services hinge upon.

He mentioned the Continuous Delivery Maturity Model, and it’s worth everyone reviewing it to see where they sit in each category. One of the key things for my organisation is to review our cycle time to see just what our flow looks like, and whether there are any key areas we can improve upon.

CraftConf 2015 – did someone say microservices?

I’ve just returned (well, I’m sitting on a balcony overlooking the Danube enjoying the sunshine) from two days at CraftConf 2015 and thought I’d share my thoughts on my first attendance at this conference (it’s in its second year).  Firstly, let’s get the cost aspect out of the way – this conference is incredibly good value for money.  Flights, conference ticket, and hotel came to less than the cost of most two-day conference tickets in London, yet the speaker line-up is incredible and the content is not diminished because of this economy – if you have difficulty getting business sign-off on conferences in the UK, you could do worse than look at this.  That said, a conference is all about the content, so let’s talk about the talks.

Themes – Microservices, microservices, microservices

Thankfully, more than one person did say that microservices for them was ‘SOA done properly’ – the talks I gravitated toward tended to be around scaling, performance, cloud, automation, and telemetry, and each of these naturally seemed to incorporate elements of microservice discussion.  Difficult to escape, though I guess, based on the number of people in the room who haven’t yet adopted ‘small, single-purpose services within defined bounded contexts’ (ourselves included), it was a topic ripe for the picking.


I won’t cover all of the talks, as there was a lot of ground covered over the two days – thankfully they were all recorded, so they will be available to stream from the Ustream site (go via the CraftConf link above) once they’re all put together.

That said, there were some that stood out for me:

Building Reliable Distributed Data Systems

Jeremy Edberg (Netflix, @jedberg)

I’m a long-time fan of Netflix’s technology blog, so seeing them give a talk was awesome. I think this one sat up there as one of the best of the conference for me. A number of key points from the talk:

  • Risk in distributed systems – on releasing, teams often look at risk to their own systems, and risks in terms of time of day, but often overlooked is the risk to the overall ecosystem – our dependencies are often not insignificant, and awareness of these is key to effective releasing
  • A lot of patterns were discussed – bulkheading, backpressure, circuit breakers, and caching strategies that I really must read more around.
  • Queuing – the approach of queuing anything you’re writing to a datastore was discussed – you can monitor queue length and gain far better insight into your system’s activity.
  • Automate ‘all the things’ – from configuration and application startup to code deployment and system deployment – making it easy and quick to get a repeatable system up and running is key.
  • ‘Build for 3’ – when building and thinking about scale, always build with three in mind – a lot of the problems that come from having three systems co-ordinate and interact well continue on and are applicable once you scale up.  Building for two doesn’t pose the same problems, and so bypasses a number of the difficult points you’ll encounter when trying to co-ordinate between three (or more).
  • Monitoring – an interesting sound bite: alert on failure, not the absence of success.  I think in our current systems at work we’re mostly good at this and follow the pattern, though we can, as always, do better.
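The queuing point maps nicely to a write-behind sketch (my own toy illustration – nothing like Netflix’s actual tooling): writes land on a queue, a worker drains them to the datastore, and queue depth becomes the health metric you alert on:

```python
from collections import deque

class QueuedWriter:
    """Write-behind queue: callers append cheaply, a background worker
    drains to the datastore, and queue depth is the health signal."""

    def __init__(self, datastore, alert_depth=100):
        self.datastore = datastore  # anything with a .write(item) method
        self.queue = deque()
        self.alert_depth = alert_depth

    def write(self, item):
        self.queue.append(item)  # fast for the caller, decoupled from the store

    def drain(self, batch=10):
        # Run by a background worker; the datastore absorbs writes at
        # its own pace, one batch at a time.
        for _ in range(min(batch, len(self.queue))):
            self.datastore.write(self.queue.popleft())

    def healthy(self):
        # Depth growing past the threshold means the store can't keep
        # up - alert before users notice.
        return len(self.queue) < self.alert_depth
```

A steadily climbing queue length tells you the datastore is falling behind long before writes start timing out – which is exactly the insight the talk was after.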

Everything will break!

Deserving of its own section, as this really has to be handed to Netflix as an incredible way of validating their systems in live.  They have a suite of tools called the Simian Army, which are purposely designed to introduce problems into their live systems.  The mantra is ‘You don’t know you’re ready unless you break it yourself, intentionally and repeatedly’ – they have a number of different monkeys within this suite, and some of them are run more regularly than others, but this is an astonishing way of ensuring that all of the services in a distributed architecture are designed around not being a single point of failure, and around handling things like transient faulting well.

It is seen as an acceptable operational risk (and indeed he confirmed they had taken it) to take out customer-affecting live services if the end goal is to improve those services and add more resilience and tolerance to them.  Amazing!

Incident Reviews

Their approach to these fitted well with what I’d hope to achieve so thought I’d cover them:

It was all about asking the key questions (of humans):

  • What went wrong?
  • How could we have detected it sooner?
  • How could we have prevented it?
  • How can we prevent this class of problem in the future?
  • How can we improve our behaviour?

It really does fit in with the ‘blameless postmortem’ well.

The New Software Development Game: Containers, microservices, and contract tests

Mary Poppendieck (poppendieck llc, @mpoppendieck)

A lot of interesting discussion in this keynote on day two, but the key points were around the interactions between dev and ops and the differing personality types between them.  The personality types were broadly broken down into two: safety focussed and promotion focussed.  The best approach is to harness both personalities within a team, and ensure that they interact.

Safety focussed

These people are about failure prevention – asking ‘is it safe?’ and if not, what is the safest way that we can deliver this?  Motivated by duty and obligation.  They find that setbacks cause them to redouble their efforts whereas praise causes a ‘leave it all alone’ approach.

Promotion focussed

‘All the things!’ – all about creating gains in the ‘let’s do it’ mindset.  They will likely explore more options (including those new and untested).  Setbacks cause them to become disheartened, whereas praise focuses and drives them.

As a ‘promotion focussed’ person primarily, I’ve oft looked over the fence at the safety focussed and lamented – though really, understanding that our goals are the same and only our approaches differ is something I could learn from here.

From monolith to microservices – lessons from Google and eBay

Randy Shoup (consulting cto, @randyshoup)

Some interesting content in this one – his discussion around the various large providers and their approaches:


  • eBay
    • 5th complete rewrite
    • monolith Perl -> monolith C++ -> Java -> microservices
  • Twitter
    • 3rd generation today
    • monolithic Rails -> JS / Rails / Scala -> microservices
  • Amazon
    • Nth generation today
    • monolithic C++ -> Java / Scala -> microservices

All of these have moved from the monolithic application over to smaller, bounded context services that are independently deployable and managed.

He was one of the first (though not the last) to clarify that the ‘microservices’ buzzword was, for him, ‘SOA done properly’.  I get that microservices has its own set of connotations and implications, though I think it’s heartening to hear this, as it’s a view I’ve held for a while now and it seems others see it the same way.

Some anti-patterns were covered as well.

  • The ‘mega service’
    • overall area of responsibility is so broad it becomes difficult to reason about change
    • leads to more upstream/downstream dependencies
  • Shared persistence
    • breaks encapsulation, encourages backdoor interface violations
    • unhealthy and near invisible coupling of services
    • this was the initial eBay SOA effort (bad)
  • ‘Leaky abstraction’ service
    • interface reflects the provider’s model of the interaction, not the consumer’s model
    • the consumer’s model is more aligned with the domain – simpler, more abstract
    • leaking the provider’s model in the interface constrains evolution of the implementation
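To make the ‘leaky abstraction’ point concrete, here’s a small hypothetical sketch (names invented, not from the talk): the first function leaks the provider’s storage model, while the second exposes the consumer’s domain model, leaving the implementation free to change.

```python
# Provider-model (leaky): raw storage rows escape through the interface,
# so every consumer must understand the row layout.
def get_account_rows_leaky(db_rows, pk):
    return [r for r in db_rows if r["pk"] == pk]

# Consumer-model: a simpler, domain-shaped answer - the row layout stays
# hidden, so the provider can restructure storage without breaking callers.
def get_balance(db_rows, account_id):
    rows = [r for r in db_rows if r["pk"] == account_id]
    return sum(r["delta"] for r in rows)

rows = [{"pk": 7, "delta": 100}, {"pk": 7, "delta": -30}, {"pk": 8, "delta": 5}]
assert get_balance(rows, 7) == 70  # consumer never sees the row layout
```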

Consensus is everything

Camille Fournier (Rent the Runway, @skamille)

Not a lot to say about this one as we’re still in the process of looking at our service breakout and on the first steps of that journey, though I’ve spoken to people in the past around consensus systems and it’s clearly an area I need to look into.

Some key comparisons between zookeeper and etcd, though as Camille highlighted, she hadn’t had enough time with Consul to really do an effective comparison with that too.  Certainly something for our radar.

Key takeaway (and I guess a natural one based on consensus algorithms and quorum) was that odd numbers rule – go from 3 to 5, not to 4, or you risk deadlocking your consensus (an even-sized cluster can split its votes while tolerating no more failures than the odd size below it).
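The quorum arithmetic behind that rule of thumb can be sketched (my own illustration, not from the talk):

```python
# A majority quorum of n nodes needs n // 2 + 1 votes, so the number of
# failures the cluster tolerates is n minus that quorum.  Going from 3 to 4
# buys nothing - you still only survive one failure - while an even split
# (2 vs 2) can leave the cluster unable to reach consensus at all.

def failures_tolerated(n):
    quorum = n // 2 + 1
    return n - quorum

assert failures_tolerated(3) == 1
assert failures_tolerated(4) == 1  # the 4th node adds cost, not tolerance
assert failures_tolerated(5) == 2
```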


A great and very valuable conference – discussion with peers added a whole host of value to the proceedings, and seeing someone use Terraform to tear down and bring up a whole region of machines (albeit small ones) in seconds was astounding, and certainly something I’ll take away with me as we start our journey at work into the cloud.

A lot of the content for me was a repetition of things I was already looking at or already aware of, though it certainly helped solidify in me that our approach and goals were the correct ones.  I shall definitely be recommending that one of my colleagues attend next year.

#DDDnorth 2 write up – October 2012 – Bradford

#dddNorth crowd scene, waiting for swag!

Stolen from Craig Murphy (@camurphy) as it’s the only pic I saw with me on it (baldy bugger, green t-shirt front right) – thanks Craig!

Another 5:45am alarm woke me on a cold morning to signal the start of another day’s travelling on a Saturday for a Developer Developer Developer event, this time with Ryan Tomlinson, Steve Higgs, Phil Hale and Dominic Brown from work.  I’ve been to a fair few of these now, and it still overwhelms me that so many people are willing to give up their Saturdays (speakers and delegates alike) and spend a day away from friends, family (and bed!) to gather with their peers and learn from each other.

Lions and tigers and hackers! Oh my!

Phil Winstanley, @plip

Phil highlighted that the threat landscape has changed and is still changing – we’re moving away from paper and coin as our means of transaction, and everything now exists in the online space; it’s virtual, and it’s instantaneous.  Identity has become a commodity, and we all now exist in the online space somewhere – Facebook are making the money they are because our identities, and those of our relationships, are rich with information about who we are and what we like.

He brought over some very good anecdotal evidence from Microsoft around the threat landscape and how it’s growing exponentially.  There are countries and terrorist organisations involved in this (more in the disruption/extraction space), but everyone is at risk – an estimated 30% of machines have some form of malware on them, and a lot of the time it’s dormant.

Groups like Anonymous are the ones folks should be most scared of – at least when a country hacks you there are some morals involved, whereas groups like Anonymous don’t really care about the fallout or whom and what they affect; they’re just trying to make a point.

The takeaway from this rather sobering talk for me was to read the Security Development Lifecycle – we all agreed as developers that although we attempt to write secure software, none of us was actually confident enough to say that we categorically do create secure software.

I’ve seen Phil give presentations before and really like his presentation style and this talk was no different – a cracking talk with far more useful information than I could distil in a write up.

Async C# 5.0 – patterns for real world use

Liam Westley, @westleyl

I’ve not done anything async before and although I understand the concepts, what I really lacked was some real world examples, so this talk was absolutely perfect for me.

Liam covered a number of patterns from the ‘Task-based Asynchronous Pattern’ white paper, in particular .WhenAll (for when all results are important) and .WhenAny (which covers a lot of other use cases like throttling, redundancy, interleaving and early bailout).  More importantly, he covered these with some cracking examples that made each use case very clear and easy to understand.

Do I fully understand how I’d apply async to operations in my workplace after this talk? No, though that wasn’t the aim of it (I need to spend more time with async/await in general to do that).

Do I have use cases for those patterns that he demoed and want to apply them?  Absolutely, and I can’t wait to play!

Fantastically delivered talk, well communicated, and has given me loads to play with – what more could you want from a talk?
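The talk was C#, but the WhenAny idea translates to most async runtimes – here’s a rough Python asyncio analogue of the redundancy/early-bailout pattern (mirror names and delays are invented for the sketch):

```python
import asyncio

# Fire the same request at several hypothetical mirrors, take the first
# answer, and cancel the rest - the Task.WhenAny 'redundancy' use case.

async def fetch_from(mirror, delay):
    await asyncio.sleep(delay)  # stand-in for a real network call
    return f"response from {mirror}"

async def first_response():
    tasks = [
        asyncio.create_task(fetch_from("mirror-a", 0.05)),
        asyncio.create_task(fetch_from("mirror-b", 0.01)),
    ]
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for task in pending:  # early bailout: cancel the slower requests
        task.cancel()
    return done.pop().result()

result = asyncio.run(first_response())
assert result == "response from mirror-b"  # fastest mirror wins
```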

BDD – Look Ma, No Frameworks

Gemma Cameron, @ruby_gem

I approached this talk with some scepticism – I’ve read a lot about BDD in the past, and I saw a talk by Gojko Adzic very recently at Lean Agile Scotland around ‘busting the myths’ in BDD; although the concepts are fine, I just haven’t found BDD compelling.  Gemma’s talk (although very well executed) didn’t convince me any further, but the more she talked, the more I realised that the important part in all of this is DISCUSSION (something I feel we do quite well at my workplace).  I guess we as a community (developers) aren’t always great at engaging the product owner/customer and fully understanding what they want, and it was primarily this point which was drilled home early in the talk.  Until you bring the stakeholders together early on and arrive at a common understanding and vocabulary, how can you possibly deliver the product they want?  I buy this 100%.

This is where the talk diverged for some, it seems – a perhaps misplaced comment that ‘frameworks are bad’ was (I feel) misinterpreted as ‘all frameworks are bad’, whereas really it felt to me like ‘frameworks aren’t the answer, they’re just a small part of the solution’ – it jumps back to the earlier point about discussion: you need to fully understand the problem before you can possibly look at technology/frameworks and the like.  I’m personally a big fan of frameworks when there is a use case for them (I like mocking frameworks for what they give me, for example), but I think this point perhaps muddied the waters for some.  She did mention the self-shunt pattern, which I’ll have to read more on to see if it could help us in our testing.

A very thought provoking talk, and I can imagine it will generate some discussion on Monday with work colleagues – in particular about engagement with the client (product owner/customer) to ensure we are capturing the requirements correctly – hopefully we’re doing everything we need to be doing here.

Web Sockets and SignalR

Chris Alcock, @calcock

I’m sure Chris won’t mind a plug for his Morning Brew – a fantastic daily aggregation of some of the biggest blog posts from the previous day.  This was my first opportunity to see Chris talk, and it’s odd how, after subscribing to the Morning Brew for years, you feel like you know someone (thankfully I got to chat to him at the end of the session and ask a performance related question).

I’ve played recently with SignalR in a personal project so had a little background to it already, though that wasn’t necessary for this talk.  Chris did a very good job of distilling websockets in terms of both ‘how’ and ‘what’, and covered examples of them in use at the HTTP level, which was very useful.  He then moved on to SignalR in both the Persistent Connection (low level) and Hub (high level) APIs.  It’s nice to see that the ASP.NET team are bringing SignalR under their banner and that it’s being officially supported as a product (version 1 anticipated later this year).

This was a great talk for anyone who hasn’t really had any experience of SignalR and wants to see just what it can do – like me, I’m sure that once you’ve seen it there will be a LOT of use cases you can think of in your current work where SignalR would give the users a far nicer experience.

Event Driven Architectures

Ian Cooper, @ICooper

The talk I was most looking forward to on the day, and Ian didn’t disappoint.  We don’t have many disparate systems (or indeed disparate service boundaries) within our software, but for those that do exist, we’re currently investigating messaging/queues/service buses etc. as a means of passing messages effectively between (and across) those boundaries.

Ian distilled Service Oriented Architecture (SOA) well and went on to cover different patterns within Event Driven Architectures (EDA) – although the content is complex, it was delivered as effectively as it could have been.  I got very nervous when he talked about the caching of objects within each system and the versioning of them, though I can entirely see the point of it, and after further discussion it felt like a worthy approach to making the messaging system more efficient/lean.

The further we at work move towards communication between systems/services, the more applicable the points in this talk will become – it has only helped validate the approach we were thinking of taking.

This talk wins my ‘talk of the day’ award* (please allow 28 days for delivery, terms and conditions apply) as it took a complex area of distributed architecture and distilled into 1 hour what I’ve spent months reading about!

And Ian – that’s the maddest beard I’ve ever seen on a speaker Winking smile


Brilliant brilliant day.  Lots of discussion in the car on the way home and a very fired up developer with lots of new things to play with, lots of new discussion for work, and lots of new ideas.  Isn’t this why we attend these events?

Massive thanks to Andrew Westgarth and all of the organisers of this, massive thanks to the speakers who gave up their time to come and distil this knowledge for us, and an utterly huge thanks to the sponsors who help make these events free for the community.

I’ll be at dunDDD in November, and I’m looking forward to more of the same there – will be there the Friday night with Ryan Tomlinson, Kev Walker and Andrew Pears from work – looking forward to attending my first geek dinner!

ASP.NET MVC4 – Using WebForms and Razor View Engines in the same project for mobile template support

NOTE: All content in this post refers to ASP.NET MVC 4 (Beta) and although it has a go live license, it has not gone RTM yet.  Although the process has been remarkably smooth, please work on a branch with this before considering it in your products!


We’ve been presented with an opportunity to create a mobile friendly experience for our Italian site.  Our Italian offering’s front end is an ASP.NET MVC 3 site using the WebForms view engine (we started the project before Razor was even a twinkling in Microsoft’s eye), and is pretty standard in terms of setup.

There are a number of different ways of making a site mobile friendly – Scott Hanselman has written a number of great articles on how he achieved it on his blog, and responsive design is very much a hot topic in web design at the moment (and that is a cracking book) – there are a lot of resources out there (both Microsoft stack and otherwise) for learning the concepts.

Our Italian site, although div based and significantly more semantically laid out than our UK site (sorry!), would still have been a considerable task to turn into a responsive design as a first pass.  Our mobile site *will not* need to have every page that the non-mobile site has though – the purpose of the site is different, and the functionality in the site will be also.

Along comes ASP.NET MVC 4 (albeit still in beta, but with a go live license) with its support for mobile.  I really should care about how it works under the covers (perhaps a follow up post), though for now: basically, if you have a view (Index.aspx), then placing a mobile equivalent alongside it (Index.mobile.aspx) allows you to provide a generic mobile version of that page.

Upgrade your MVC3 Project to MVC4

Basically, follow: http://www.asp.net/whitepapers/mvc4-release-notes#_Toc303253806

There were no problems in this step for us – we have a large solution, and there were a number of dependent projects that were based upon MVC3, but these were all easily upgraded following the steps at that URL.

Setting up your view engines

We previously had removed Razor as a view engine from the project to remove some of the checks that go on when attempting to resolve a page, so our Global.asax had the following:

// we're not currently using Razor, though it can slow down the request pipeline so removing it
ViewEngines.Engines.Add(new WebFormViewEngine());

and it now has:

ViewEngines.Engines.Add(new RazorViewEngine());
ViewEngines.Engines.Add(new WebFormViewEngine());

The order is important – if you want your mobile views to use Razor in a WebForms view engine project, then Razor must be the first view engine the framework looks to.  If however you want to stick with WebForms (or indeed you are only using Razor) then your settings above will be different/non-existent.

Creating the mobile content

We started by creating Razor layout pages in Views/Shared in exactly the same way that you would add a master page.  Open Views/Shared, right click, Add Item, and select an MVC4 Layout Page.  Call this _Mobile.cshtml, and set up the differing sections that you will require.

To start with, as a trial I thought I’d replace the homepage, so navigate to Views/Home, right click, and ‘Add View…’ – create ‘Index.mobile’ and select Razor as the view engine – select the _Mobile.cshtml page as the layout.

Ok, we now have a non-mobile (webforms view engine) and a mobile (razor view engine) page – how do we test?

Testing your mobile content

The asp.net website comes to help again.  They have a great article on working with mobile sites in ASP.NET MVC4 (which is indeed far better than the above, though doesn’t cover the ‘switching view engines’ aspect).

I installed the tools listed in that article, and loaded up the site in the various testing tools and was presented with the following:


That’s Chrome in the background rendering out the standard site, upgraded to MVC4 but still very much using the webforms view engine and master pages, and Opera Mobile Emulator (pretending to be a HTC Desire) in the foreground using Razor view engine and layout pages.


The rest, as they say, is just hard work Smile  We very much intend to make the mobile site responsive, and our CSS/HTML will be far more flexible around this, though with media queries (some example media queries) and the book above in hand, that will be the fun part.

The actual process of using both Razor and WebForms view engines in the same project was a breeze, and means that longer term, the move over to Razor for our core site should be far more straightforward once we’ve worked through any teething troubles around the work above.  Razor as a view engine is far more concise and (dare I say it!) prettier than WebForms and the gator tags, so I look forward to using it in anger on a larger project like this.

It may be longer term that there are pages on the site that lend themselves towards not having duplicate content in which case we will investigate making the core design more responsive in places, but for now, we have a workable solution to creating mobile content thanks to the mobile support in ASP.NET MVC4.


Hope that was useful.

Unit testing complex scenarios – one approach

This is another of those ‘as much for my benefit as for the community’ posts.  On a sizeable project at work we’ve hit a ‘catch up on tests’ phase.  We don’t employ TDD, though we obviously understand that testing is very important to the overall product (both for confidence on release, and confidence that changes to the code will break the build if functionality changes).  Our code coverage when we started this latest phase of work was terrible (somewhere around 20% functionality coverage from 924 tests) – after a couple of weeks of testing we’re up to fairly high coverage on our data access/repositories (>90%) and have significantly more tests (2,600 at last count).

We are following a UI –> Service –> Repository type pattern for our architecture which works very well for us – we’re using IoC, though perhaps only because of testability – the loose coupling benefits are obviously there.

We’re now at the point of testing our service implementations, and have significantly more to think about.  At the data access layer, external dependencies were literally only the database.  At service layer, we have other services as external dependencies, as well as repositories, so unit testing these required considerably more thought/discussion.  Thankfully I work with a very good team, so the approach we’ve taken here is very much a distillation of the outcomes from discussion with the team.

Couple of things about our testing:

Confidence is King

The reason we write unit tests is manifold, but if I were to try to sum it up, it’s confidence.

Confidence that any changes to code that alter functionality break the build.

Confidence that the code is working as expected.

Confidence that we have solidly documented (through tests) the intent of the functionality and that someone (more often than not another developer in our case) has gone through a codebase and has reviewed it enough to map the pathways through it so that they can effectively test it.

Confidence plays a huge part for us as we implement a Continuous Integration process, and the longer term aim is to move towards Continuous Delivery.  Without solid unit testing at its core, I’d find it difficult to maintain the confidence in the build necessary to reliably deploy once, twice or more per day.

Test Functionality, Pathways and Use Cases, Not Lines of Code

100% code coverage is a lofty ideal, though I’d argue that if that is your primary goal, you’re thinking about it wrong.  We have often achieved 100% coverage, but done so via the testing of pathways through the system rather than focussing on just the lines of code.  We use business exceptions and very much take the approach that if a method can’t do what it advertises, an exception is thrown.

Something simple like ‘ValidateUserCanDeposit’ can throw the following:

/// <exception cref="PaymentMoneyLaunderingLimitException">Thrown when the user is above their money laundering limit.</exception>
/// <exception cref="PaymentPaymentMethodChangingException">Thrown when the user is attempting to change their payment method.</exception>
/// <exception cref="PaymentPaymentMethodExpiredException">Thrown when the expiry date has already passed</exception>
/// <exception cref="PaymentPaymentMethodInvalidStartDateException">Thrown when the start date hasn't yet passed</exception>
/// <exception cref="PaymentPlayerSpendLimitException">Thrown when the user is above their spend limit.</exception>
/// <exception cref="PaymentPlayerSpendLimitNotFoundException">Thrown when we are unable to retrieve a spend limit for a user.</exception>
/// <exception cref="PaymentOverSiteDepositLimitException">Thrown when the user is over the sitewide deposit limit.</exception>

and these are calculated often by calls to external dependencies (in this case there are 4 calls away to external dependencies) – the business logic for ‘ValidateUserCanDeposit’ is:

  1. Is the user over the maximum site deposit limit?
  2. Validate the user has remaining spend based upon responsible gambling limits
    – paymentRepository.GetPlayerSpendLimit
    – paymentRepository.GetUserSpendOverTimePeriod
  3. Get the current payment method
    – paymentRepository.GetPaymentMethodCurrent
    – paymentRepository.GetCardPaymentMethodByPaymentMethodId
    – OR paymentRepository.GetPaypalPaymentMethodByPaymentMethodId
  4. If we’re changing payment method, ensure we’re not over the money laundering limit

So testing a pathway through this method, we can pass and fail at each of the lines listed above.  A pass is often denoted as silence (our code only gets noisy when something goes wrong), but each of those external dependencies themselves can throw potentially multiple exceptions.

We employ logging of our exceptions so again, we care that logging was called.

Testing Framework and Naming Tests

NUnit is our tool of choice for writing unit tests – its syntax is expressive, and it generally allows for very readable tests.  I’m a big fan of the test explaining the author’s intent – being able to read and understand unit tests is a skill for sure, though once you’ve read a unit test, having it actually do what the author intended it to is another validator.

With that in mind, we tend to take the ‘MethodUnderTest_TestState_ExpectedOutcome’ approach.  A few examples of our unit test names:

  • GetByUserId_ValidUserId_UserInCache_GetVolatileFieldsReturnsValidData_ReturnsValidUserObject
  • GetPlaymateAtPointInTime_GivenCorrectUserAndDate_ValidPlaymateShouldExist
  • GetCompetitionEntriesByCompetitionId_NoEntries_ShouldReturnEmptyCollection
  • GetTransactionByUserIdAndTransactionId_DbException_ShouldThrowCoreSystemSqlException

Knowing what the author intended is half the battle when coming to a test 3 months from now because it’s failing after some business logic update.

Mocking Framework

We use Moq as a mocking framework, and I’m a big fan of the power it brings to testing – yup, there are quite a number of hoops to jump through to effectively set up and verify your tests, though again, these add confidence to the final result.

One note about mocking in general, and any number of people have written on this in far more eloquent terms than I.  Never just mock enough data to pass the test.

If we have a repository method called ‘GetTransactionsByUserAndDate’, ensure that your mocked transactions list also includes transactions from other users, as well as transactions for the same user outside of the dates specified – getting a positive result when the only data present is data that should be returned is one thing; getting a positive result from a diverse mocked data set containing things that should not be returned adds real confidence that the code is doing specifically what it should.
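That ‘diverse mocked data’ advice can be sketched like so (a hypothetical stand-in for the repository method, not our actual code):

```python
from datetime import date

# The stubbed data deliberately includes rows the query must NOT return:
# a different user, and the right user outside the date range.

TRANSACTIONS = [
    {"user_id": 1, "date": date(2012, 6, 1), "amount": 10},   # match
    {"user_id": 2, "date": date(2012, 6, 1), "amount": 99},   # wrong user
    {"user_id": 1, "date": date(2011, 1, 1), "amount": 42},   # out of range
]

def get_transactions_by_user_and_date(rows, user_id, start, end):
    return [r for r in rows
            if r["user_id"] == user_id and start <= r["date"] <= end]

result = get_transactions_by_user_and_date(
    TRANSACTIONS, user_id=1, start=date(2012, 1, 1), end=date(2012, 12, 31))
assert [r["amount"] for r in result] == [10]  # decoys correctly excluded
```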

Verifying vs. Asserting

We try very much to maintain a single assert per test (and only break that when we feel it necessary) – it keeps the mindset on testing a very small area of functionality, and makes the test more malleable/less brittle.

Verifying on the other hand (a construct supported by Moq and other frameworks) is something we are more prolific with.

For example, if ‘paymentRepository.GetPlayerSpendLimit’ above throws an exception, I want to verify that ‘paymentRepository.GetUserSpendOverTimePeriod’ is not called – I also want to verify that we logged that exception.

The Assert from all of that is that the correct exception is thrown from the main method, but the verifies that are called as part of that test add confidence.

In our [TearDown] method we tend to place our ‘mock.Verify()’ calls, to ensure that after each test we verify everything that can be.

Enough waffle – where’s the code?

That one method above, ‘ValidateUserCanDeposit’, has ended up with 26 tests – each one models a pathway through the method.  There is only one success path through that method – every other test demonstrates an error path.  So for example:

[Test]
public void ValidateUserCanDeposit_GetPaymentMethodCurrent_ThrowsPaymentMethodNotFoundException_UserBalanceUnderMoneyLaunderingLimit_ShouldReturnPaymentMethod()
{
    var user = GetValidTestableUser(ValidUserId);
    user.Balance = 1m;

    // remaining spend
    paymentRepository.Setup(x => x.GetPlayerSpendLimit(ValidUserId)).Returns(new PlayerSpendLimitDto { Limit = 50000, Type = 'w' }).Verifiable();
    paymentRepository.Setup(x => x.GetUserSpendOverTimePeriod(ValidUserId, It.IsAny<DateTime>(), It.IsAny<DateTime>())).Returns(0).Verifiable();

    // current payment method
    paymentRepository.Setup(x => x.GetPaymentMethodCurrent(ValidUserId))
                     .Throws(new PaymentMethodNotFoundException("")).Verifiable();

    IPaymentMethod paymentMethod = paymentService.ValidateUserCanDeposit(user, PaymentProviderType.Card);

    Assert.That(paymentMethod.CanUpdatePaymentMethod, Is.True);
    paymentRepository.Verify(x => x.GetCardPaymentMethodByPaymentMethodId(ValidCardPaymentMethodId), Times.Never());
    LogVerifier.VerifyLogExceptionCalled(logger, Times.Once());
}

That may seem like a complex test, but I’ve got the following from it:

  • The author’s intent from the method signature: upon calling ValidateUserCanDeposit, a call within that to GetPaymentMethodCurrent has thrown a PaymentMethodNotFoundException; at that point the user’s balance is below the money laundering limit for the site, so the user should get a return indicating that they can update their payment method
  • That those methods I expect to be hit *are* hit (using Moq’s .Verifiable())
  • That those methods that should not be called aren’t (towards the end of the test, Times.Never() verifies this)
  • That we have logged the exception once and only once

Now that this test (plus the other 25) is in place, if a developer is stupid enough to bury an exception or remove a logging statement, the build will fail.

Is this a good approach to testing?

I guess this is where the question opens up to you guys reading this.  Is this a good approach to testing?  The tests don’t feel brittle.  They feel like they’re focussing on one area at a time.  They feel like they are sure about what is going on in the underlying code.


Ways to improve them?

How do you unit test in this sort of situation? What software do you use? What problems have you faced?  Keen to get as much information as possible and hopefully help inform each other.

I’d love to get feedback on this.  It feels like it’s working well for us, but that doesn’t necessarily mean it’s right/good.

2010 – A year in geek

I’ve found it incredibly cathartic to read a few others’ blog posts summarising not only the year that has gone, but their aims for the year ahead – this has been an incredibly busy year for me in geek terms, and I thought I’d write it up, as another hopefully cathartic exercise.

The year starts…

2010 started for me after only four months in a new job, having escaped an agency environment in August the previous year – I honestly didn’t know how bad I had it in my previous role until I started my current one.  I took quite a hefty pay cut to switch jobs, but the previous role (I should really say roles, as I was stupidly doing both the IT Manager and dev team lead jobs) had me, in its last 6 months, working comfortably 60 hour weeks – I was knackered, home life was suffering, I couldn’t switch off, I was stressed (and anyone who knows me knows I just don’t do stress).

My current role is pretty much idyllic for me – job description is Senior Developer, but we all know that hides a multitude of sins.  Basically, I get to specify technical direction, I get to do staff mentoring/staff support, I get to be involved in the community, but (best of all) I get to spend about 75% of my usable time developing.  Pig in shit I believe is the term they use Winking smile

Legacy Code

Oddly, the first real achievement this year involved minor improvements to our payment system (based upon legacy code – classic ASP – ewww!).  I write it here not because I’m proud of the technology, but because of the analytical approach we took, the little-and-often change process we had in place, and the overall effect of those changes – conservative estimates by our financial officer put us at just over 1% extra turnover.  Now that doesn’t sound a lot, until you see how much the company turns over – needless to say, they were very happy with the work!

Site Rewrite

This has been the big focus for me from around April, and it’s been huge – our existing site is a mix of a lot of classic ASP with a number of .NET projects dotted around – the technical debt in there is huge, and changes, be they new functionality or modifications to existing functionality, are incredibly costly.  The aim (and I’ve read any number of posts that say this is a bad idea) was to re-write the whole thing into something that was:

a) more maintainable

b) easier to deploy

c) of a far higher overall quality

d) minimised technical debt

e) easier to extend

With that in mind, the technologies that myself and the team have worked on this year have been wide ranging.


The move away from web forms and into MVC has been a revelation.  I now lament the occasional need to maintain our legacy code, as once you grok the separation of concerns involved in MVC2 (I heartily recommend both the Steve Sanderson book and the TekPub video series as learning resources), moving back to web forms (especially legacy) is a mare.  I’d say out of all the things covered this year, this is the biggest ‘win’ for me – I can see myself using this pattern (and ASP.NET MVC) for a long time to come as my primary means of delivery over the web.

Testing Software (Unit, Integration, n’all that jazz)

I daren’t call this test-driven development, as we tend to write our tests after we’ve got the functionality in place – our specifications and business rules in most areas of the rewrite haven’t carried over verbatim, so writing unit tests ahead of time was rarely practicable.  That said, the project is now up to 290+ unit/integration tests, and I suspect before launch that number will nearly double.

It’s very easy during code reviews for team members to validate the logic, outcomes, and goals in the unit tests up front, so that they form almost a code contract which then goes on to define behaviour and functionality within the code.  It also (assuming business knowledge of the area under test) lets people highlight potential missing tests or modifications to existing tests.
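To illustrate the 'tests as a code contract' idea, here's a sketch – the delivery-charge rule is entirely hypothetical, and in our project these would be NUnit [Test] methods rather than plain methods with asserts:

```csharp
using System;

// System under test – a hypothetical business rule, not lifted from our codebase.
public static class DeliveryCharge
{
    // Orders of £50 or more ship free; everything else costs £4.95.
    public static decimal For(decimal orderTotal) =>
        orderTotal >= 50m ? 0m : 4.95m;
}

public static class DeliveryChargeTests
{
    // Each test name states the business rule it pins down, so a reviewer
    // can validate the rule without reading the implementation.
    public static void Orders_of_fifty_pounds_or_more_ship_free()
    {
        if (DeliveryCharge.For(50m) != 0m)
            throw new Exception("free delivery rule broken");
    }

    public static void Orders_under_fifty_pounds_pay_the_standard_charge()
    {
        if (DeliveryCharge.For(49.99m) != 4.95m)
            throw new Exception("standard charge rule broken");
    }
}
```

A reviewer who knows the business can scan the test names alone and spot a missing rule (what about exactly £0 orders?) without ever opening the implementation.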

Learning-wise, blogs have been the most useful during the year for unit testing, though I would say a must-purchase is ‘The Art of Unit Testing’ by Roy Osherove.  It got me thinking about unit testing in a very different way and has led (I hope) to me simplifying unit tests but writing more of them, using mocking frameworks to deliver mocks/doubles, and generally being a big advocate of testing.

Design Patterns

MVC goes without saying, but this year has also seen me read a lot around software design and the patterns used therein.  I feel I now have a solid handle on a great deal more software design from an implementation point of view (the theory was never really that difficult, but turning that into an implementation…).  We’ve used the Service pattern extensively, along with Repository, the Factory pattern, and (I’d like to think) Unit of Work in a few places – they’ve all seen the light of day (necessarily so) in this project.
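As a rough sketch of how the Service and Repository patterns hang together – hypothetical types, nothing lifted from the actual project:

```csharp
using System.Collections.Generic;
using System.Linq;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// The repository hides the persistence mechanism behind a small interface,
// so a service (or a unit test) never cares whether it's classic ADO.NET,
// an ORM, or an in-memory list underneath.
public interface ICustomerRepository
{
    Customer GetById(int id);
    void Add(Customer customer);
}

public class InMemoryCustomerRepository : ICustomerRepository
{
    private readonly List<Customer> _store = new List<Customer>();

    public Customer GetById(int id) => _store.FirstOrDefault(c => c.Id == id);
    public void Add(Customer customer) => _store.Add(customer);
}

// The service composes repositories and holds the business logic.
public class CustomerService
{
    private readonly ICustomerRepository _repository;

    public CustomerService(ICustomerRepository repository) { _repository = repository; }

    public Customer Register(int id, string name)
    {
        var customer = new Customer { Id = id, Name = name };
        _repository.Add(customer);
        return customer;
    }
}
```

The payoff is in testing: swap the real repository for the in-memory one and the service's business logic is testable without a database in sight.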

There’s a fantastic post by Joel Spolsky about the Duct Tape Programmer which I’d urge everyone to read if they haven’t done so.  It’s about finding the balance between software design for software design’s sake (the pure view) versus getting the job done – there’s always a balancing act to be had, and hopefully I’ve stayed on the right line with regards to this.  It’s very easy when focussing on the design of the software to over-engineer or over-complicate something that should be (and is) a relatively straightforward task.

Uncle Bob must get a mention this year, as his SOLID principles have been a beacon – you don’t always adhere to them, you don’t always agree where they apply, but you can’t deny that as underlying principles of OOD they are a good foundation.
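By way of a small example, here's the Open/Closed principle (the O in SOLID) in a hypothetical pricing scenario – new rules arrive as new classes rather than as edits to a growing switch statement; none of these types come from the project:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public interface IDiscountRule
{
    decimal Apply(decimal price);
}

public class TenPercentOff : IDiscountRule
{
    public decimal Apply(decimal price) => price * 0.9m;
}

public class FlatFiver : IDiscountRule
{
    // Knock a fiver off, but never go below zero.
    public decimal Apply(decimal price) => Math.Max(0m, price - 5m);
}

public static class Pricing
{
    // Closed for modification, open for extension: this method never
    // changes when a new discount rule arrives.
    public static decimal Final(decimal price, IEnumerable<IDiscountRule> rules) =>
        rules.Aggregate(price, (current, rule) => rule.Apply(current));
}
```

It's exactly the sort of thing the principles don't *force* on you – a two-case switch would be fine – but once the third and fourth rules show up, you're glad of the seam.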

Business Exceptions

Two talks immediately spring to mind when I look at the approach we’ve taken with business exceptions, the first was delivered at DevWeek which I was lucky enough to attend in April (see the post here), the second was delivered by Phil Winstanley (@plip) at DDD Scotland this year.

We’re very much using exceptions as a means of indicating fail states in methods now, and I love it – coupled with our logging, it feels like we will rarely have unhandled exceptions (and when we do, they are logged), and the overall software quality feels far superior because of it.

I understand the concerns that have been raised around the performance of exceptions (the cost to raise them, etc.) and the desire not to use exceptions for program flow, though I think we’ve struck a happy balance – my (albeit rudimentary) testing earlier in the year suggested that their performance just wasn’t a concern for us.
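A sketch of the shape this takes for us – the types and the payment scenario are illustrative, not our actual code: a dedicated business exception type thrown for fail states, caught and logged at the application boundary:

```csharp
using System;

// A dedicated type so business-rule failures are distinguishable from
// genuine runtime faults (null refs, IO errors, etc.).
public class BusinessException : Exception
{
    public BusinessException(string message) : base(message) { }
}

public static class PaymentProcessor
{
    // The method signals its fail state with an exception rather than a
    // magic return value, so callers can't silently ignore it.
    public static void Charge(decimal amount)
    {
        if (amount <= 0m)
            throw new BusinessException("Charge amount must be positive.");
        // ... talk to the payment gateway here ...
    }
}

public static class CheckoutBoundary
{
    // At the boundary we catch, log, and report – nothing bubbles out unhandled.
    public static string TryCharge(decimal amount, Action<string> log)
    {
        try
        {
            PaymentProcessor.Charge(amount);
            return "ok";
        }
        catch (BusinessException ex)
        {
            log(ex.Message);
            return "failed: " + ex.Message;
        }
    }
}
```

The happy-path code stays clean of error-checking noise, and because only `BusinessException` is caught here, anything genuinely unexpected still surfaces loudly.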

Continuous Integration

Something that’s been on the back burner for too long now – only in the past week have I made any headway with it, but already it’s a love affair.  I suspect the quality of the information we get out of the system as we move forward will pay dividends, and as we begin to automate deployment/code coverage, and I get more heavily into MSBuild, this is something I don’t think I’ll want to give up on larger projects.


I now subscribe to approximately 160 blogs, which sounds like a lot, but thankfully not everyone posts as often as @ayende, so job’s a good ’un with regards to keeping up – I find 5–10 minutes at the end of the day lets me have a quick scan through the posts that have come in, discount the ones I’m not interested in, skim-read the ones I am, and star them (Google Reader stylee) ready for a more thorough read when I get to work the next day.  This may seem a large commitment, but remember I’ve come from a job where I was ‘working’ approximately 60 hours a week (not geeking, I hasten to add – just client liaison, product delivery, bug fixing, and sod all innovation).  I now find my working week is down to approximately 40 hours of work, plus between 5 and 15 hours per week on geek stuff depending on the week and what’s on – the extra time I get for self-development is just my investment in my career really, and I talk to so many other people on Twitter who do exactly the same.


Getting our own local Microsoft tech user group (@NEBytes, http://www.nebytes.net) has been fantastic this year – we’ve had some superb speakers, and I know that once a month I get to catch up with some cracking geeks and just talk random tech.  The guys who run it – Andrew Westgarth (@apwestgarth), Jon Noble (@jonoble), Ben Lee (@bibbleq) and Damian Foggon (@foggonda) – do a fantastic job, and I look forward to more of this in 2011.

I managed to attend DevWeek this year and wrote up a number of things from it – it was a fantastic week.  Thankfully work saw the benefit, so they’re sending me again in 2011; hopefully I’ll meet up with folks there and learn as much as I did this year.

Developer Developer Developer days.  These are superb!  Hopefully we can get one organised closer to home in 2011, but the two I attended this year (Scotland, and Reading earlier in the year) were packed full of useful stuff, and the organisers deserve real praise for them.

Geek Toys – The iPad

I couldn’t round off the year without humbly admitting that I was wrong about the iPad when it launched – I didn’t see the point at all, and was adamant it was going to flop.  Then in October I found myself the owner of one (erm… I actually paid for it too – I have no idea what was going on there!).

Well, ‘revelation’ doesn’t do it justice – it’s the ultimate geek tool!  Thankfully a lot of the books I buy are also available as ebooks, and I’ve found more and more that I’m moving away from print and reading geek books on my iPad – ePub format is best (for annotations and the like), though PDF works a treat too.  Aside from that, TweetDeck is a cracking app on the iPad, and it lets me stay in touch with geeks more regularly than I would otherwise have done.  Reeder is my final tool of choice, and the way it presents blogs you’ve not read yet is fantastic.

I’d suggest any geek who loves quick access to their blogs, their books, and TweetDeck (though naturally the iPad does a whole lot more) have a play with one and see if it could be the answer for you too – I’m hooked.

And what of 2011?

Well, I’m over the moon with the way 2010 has gone really – all I can ask is to maintain my geek mojo, my thirst for learning, and a cracking bunch of people to work with, and life will be grand :)


TeamCity – Install and Setup (Basics)

It’s been a while since I posted, and I thought the past few days warranted getting my thoughts down, as we’ve just set up our first foray into Continuous Integration/Build Automation with TeamCity.  We’re in the process of rewriting the corporate site from classic ASP/ASP.NET into an MVC2 front end with some solid (though not always SOLID) design behind it.  We’ve written a lot of unit tests (with many more to go), and thought it was about time we looked at the whole CI/build side of things.  I’d hasten to add, the following post will remain at a fairly basic level, as that’s where I’m at at the moment – hopefully something in here will be useful, though it’s as much about documenting the steps for the team I work with; whenever I write something like this down it always helps solidify it in the grey matter.

Why Continuous Integration/Build Automation?

The answers for us fit pretty much into the standard reasons behind CI – primarily ensuring quality, though easing the deployment burden was certainly a part of it.  CI completes the circle really – you’ve written your quality code, you’ve written your unit tests (and any other tests – integration, UI, etc.), so why not have an easy way to get all of that validated across your whole team, making sure the quality remains and that you don’t have the manual task of pushing the code out to your dev servers?

Continuous Integration helps with all of this, and a whole lot more, though the ‘more’ part is something that will come in time for us I think – we now have a working checkin build (I’ll detail the steps I went through) so that at least gives us ongoing feedback.

TeamCity was the immediate choice for us, as we don’t really qualify for a TFS setup and CruiseControl.NET seemed to have a higher learning curve (I may be misrepresenting it here, mind).

Before going through the detail of the install, a quick shout out to Paul Stack (@stack72), the fantastic Continuous Integration book from Manning, and the as-yet-unread MSBuild book from Microsoft – these, as well as blog posts from many others, have helped massively in getting this set up.

TeamCity 6 – Install

Generally, the defaults in the setup were fine.  I made sure all options were enabled with regards to the service etc. – I can’t see a use case where you wouldn’t want this, but it’s worth stating.


I changed the path for the build server configuration to a more general location – it initially suggested my own Windows account’s user area, and I was unsure (and couldn’t find easy documentation on) whether that would make the configuration available to others, so I defaulted to a different path.


With regards to the server port (for the TeamCity web administration tool), I changed the default too.  Although it’s recommended that the build server remains a single-purpose box, I felt uncomfortable going with the default port 80 just in case we ever chose to put a web server on there for any other purpose.


I also chose to stick with the default and ran the service under the SYSTEM account – it doesn’t seem to have affected anything adversely and I’d rather do that than have to create a dedicated account.

TeamCity – Initial Setup

Initially you are asked to create an administrator account – do so, though if you’re in a team, there is an option later to create user accounts for each person – far better to do that and keep the admin account separate.  In the free version you can have up to 20 users, so it’s ideal for small teams.

Create a Project

The first step in linking up your project to TeamCity is to create the project.


Here, you can give the project any name and description – it can (though doesn’t have to) match the project name in Visual Studio.


TeamCity from this point on holds your hand fairly effectively.


oh, ok – thanks :) <click>

The build configuration page has a lot of options; here are the pertinent ones (at least early doors – once you have more experience, which I don’t yet, the others will certainly come into play).

Name your build – I named ours ‘checkin build’ as I intend for it to happen after every checkin… does what it says on the tin kinda thing.

Build number format – I left this as the default ‘{0}’ – it may be prudent to tie it in later on with the Subversion version number, but for now, we want to get a working CI process.

Artifact Paths – I very much steered clear of these at the moment – it seems there’s a lot of power in them, though I haven’t explored them enough.

Fail Build If – I went with the defaults plus a couple of others – ‘build process exit code is not zero’, ‘at least one test failed’, ‘an error message is logged by the build runner’.

Other than that, I pretty much stuck with the defaults.

Version Control Settings


I deliberately selected checkout to the agent, as I suspect this’ll give me more scalability in future – from what I understand, the build server can have multiple build agents on other machines (kinda the distributed computing model?), and those agents can handle the load if there are very regular/large team checkins.  I think there are limitations on build agents in the free version, but again – if we use this solidly and need more, the commercial licence isn’t too badly priced.

I also chose a different checkout directory to the default, just because – no solid reason here other than I have a lot of space on the D: drive.

Our project is significant (24 VS projects at last count, a lot of them testing projects – 1 unit and 1 integration per main project).  I initially experimented with ‘clean all files before build’, but the overall build was taking approximately 8 minutes (delete all, checkout all, build, unit test).  I’m going to try not cleaning the files and doing a ‘revert’ instead, though at present I don’t have any experience of which is better – certainly cleaning all files worked well, but 8 minutes seemed a while.

Attach to a VCS Root

The important part – linking up your project to your source control (subversion in our case).


Click ‘Create and attach…’.  Most of the settings in here are defaults, but you will notice further down the page that it defaults to Subversion 1.5 – we’re using 1.6, so double-check your own setup.


I also experimented with enabling ‘revert’ on the agent, à la:


with the aim of bringing down the overall build time – I haven’t played enough to warrant feedback yet, though I suspect the revert will work better than a full clean and checkout.

Build Steps

The CI build will be broken into a number of steps, but firstly we need to get the core project building on the agent.  There will be a lot more to learn on this one, but for now, what worked well for us was the following:


Our solution file contains all the information we need to work out what needs building, and TeamCity supports it, so job’s a good ’un.  As I extend the base build, this method will still just work, as I’ll be modifying the .csproj files belonging to the solution anyway.

Build Step 2

This one was slightly more convoluted, but basically, giving relative paths to the DLLs that contain the unit tests is the way forward here.


Make sure you target the right framework version (I didn’t initially, though the error messages from TeamCity are pretty good in letting you figure it out).

Build Triggering

We want this all to trigger whenever we check in to our source control system (in our case, Subversion), so click on ‘build triggering’ and ‘add trigger’ – selecting ‘VCS Trigger’ will get you everything you need:


Are we there yet?

Well, just about – you will see the admin interface has a ‘run’ button against this configuration (top right of the browser); let’s do an initial run and see what the problems are (if any).  You can monitor the build by clicking on the ‘agents’ link at the top of the page and then clicking on the ‘running’ link under the current build.

Should you get the message:

… Microsoft.WebApplication.targets" was not found…

This basically happens because you don’t have Web Deployment Projects (or indeed VS2010) installed on the build server.  The path of least resistance is to copy the C:\Program Files\MSBuild folder over to the build machine’s Program Files folder (if x64, make sure you put it in the x86 one).  You should find the build just works after that.

Ok, Build is working – Tell me about it!

Notifications were the last thing I set up (make sure you’ve set up a user account for yourself before you do this – the admin account shouldn’t necessarily have notifications switched on).  Click on ‘My Settings & Tools’ at the top and then ‘Notification Rules’.

I’ve set up an email notifier (which will apparently tell me of the first failed build, but only the first after a successful one), and I’ve downloaded the Windows tray notifier (My Settings & Tools, General, right-hand side of the page), which is set up likewise.

Next Steps?

There are a lot of other things I want to get out of this, not just from a CI point of view.  I’ve deliberately (as @stack72 suggested) kept the initial setup ‘simple’ – getting a working setup was far more important than getting an all-encompassing setup that does everything I want from the off.  I can now see the guys doing their checkins and the tests passing, I’m far more aware if someone has broken the build (and let’s face it, we’ll all deliberately break it to see that little tray icon turn red), and I know there’s so much more that I can do.

Next priorities are:

  1. Learn MSBuild so that I can perform tasks more efficiently in the default build – e.g. I want to concatenate and minify all CSS on the site, minify our own JavaScript, etc.
  2. Setup deployment on the checkin build – I suspect this will use Web Deployment Projects (which themselves generate MSBuild files so are fully extensible) to get our checked in code across to our dev servers.
  3. Setup a nightly build that runs more tests.  As you can see above, we build and run unit tests for our checkin build – I want to run nightlies that perform both unit and integration tests – I want the nightly to deploy to dev also, but to promote the files necessary to our staging server (not publish them) so that we can at any point promote a nightly out to our staging and then (gulp) live servers.

I’d urge anyone working on projects where deployment is a regular (and pain-in-the-arse) task, or where there are a few of you and you’ve taken to unit testing and TDD (be that test-first or just good solid functionality coverage): my view now is that Continuous Integration is the tool you need.

It’s the new Santa Claus – it knows when you’ve been coding, it knows when you’re asleep, it knows if you’ve been hacking or not, so code good for goodness’ sake!

As per all of my other posts, the above is from a novice CI person – any feedback that anyone can give, any little nuggets of advice, any help at all – I’ll soak it up like a sponge – this has been a lot of fun, and there’s definitely a warm glow knowing it’s now in place, but there’s a long way to go – feedback *very* welcome!