CraftConf 2015 – did someone say microservices?

I’ve just returned (well, I’m sitting on a balcony overlooking the Danube enjoying the sunshine) from two days at CraftConf 2015 and thought I’d share my thoughts on my first attendance at this conference (it’s in its second year).  Firstly, let’s get the cost aspect out of the way – this conference is incredibly good value for money.  Flights, conference ticket and hotel came to less than the cost of most 2 day conference tickets in London, yet the speaker line-up is incredible and the content not diminished because of this economy – if you have difficulty getting business sign-off on conferences in the UK, you could do worse than look at this.  That said, a conference is all about the content, so let’s talk about the talks.

Themes – Microservices, microservices, microservices

Thankfully, more than one person did say that microservices for them was ‘SOA done properly’ – the talks I gravitated toward tended to be around scaling, performance, cloud, automation and telemetry, and each of these naturally seemed to incorporate elements of microservice discussion.  Difficult to escape, though I guess, based on the number of people in the room who haven’t yet adopted ‘small, single-purpose services within defined bounded contexts’ (ourselves included), it was a topic ripe for the picking.

Talks

I won’t cover all of the talks as there was a lot of ground covered over the two days – thankfully they were all recorded, so they will be available to stream from the ustream site (see the Craft Conf link above) once they’re all put together.

That said, there were some that stood out for me:

Building Reliable Distributed Data Systems

Jeremy Edberg (Netflix, @jedberg)

I’m a long time fan of netflix’s technology blog, so seeing them give a talk was awesome. I think this one sat up there as one of the best of the conference for me. A number of key points from the talk:

  • Risk in distributed systems – on releasing, teams often look at the risk to their own systems and risks in terms of time of day, but often overlooked is the risk to the overall ecosystem – our dependencies are often not insignificant, and awareness of these is key to effective releasing
  • A lot of patterns were discussed – bulkheading, backpressure, circuit breakers, and caching strategies that I really must read more around.
  • Queuing – the approach of queuing anything you’re writing to a datastore was discussed – you can monitor queue length and gain far better insight into your systems activity.
  • Automate ‘all the things’ – from configuration and application startup to code deployment and system deployment – making it easy and quick to get a repeatable system up and running is key.
  • ‘Build for 3’ – when building and thinking about scale, always build with 3 in mind – a lot of the problems that come from having 3 systems co-ordinate and interact well continue on and are applicable once you scale up.  Building for 2 doesn’t pose the same problems and so bypasses a number of the difficult points you’ll cover when trying to co-ordinate between 3 (or more).
  • Monitoring – an interesting sound bite: alert on failure, not on the absence of success.  I think in our current systems at work we’re mostly good at this and follow the pattern, though we can, as always, do better.

Everything will break!

Deserving of its own section, as this really has to be handed to netflix as an incredible way of validating their systems in live.  They have a suite of tools called the simian army which are purposely designed to introduce problems into their live systems.  The mantra is ‘You don’t know you’re ready unless you break it yourself, intentionally and repeatedly’ – they have a number of different monkeys within this suite, and some of them are run more regularly than others, but this is an astonishing way of ensuring that all of your services in a distributed architecture are designed around not being a single point of failure and around handling things like transient faults well.

It is seen as an acceptable operational risk to take out customer-affecting live services (and indeed he confirmed they had) if the end goal is to improve those services and add more resilience and tolerance to them.  Amazing!

Incident Reviews

Their approach to these fitted well with what I’d hope to achieve, so I thought I’d cover it:

It was all about asking the key questions (of humans):

  • What went wrong?
  • How could we have detected it sooner?
  • How could we have prevented it?
  • How can we prevent this class of problem in the future?
  • How can we improve our behaviour?

Really does fit well with the ‘blameless postmortem’ approach.

The New Software Development Game: Containers, microservices, and contract tests

Mary Poppendieck (poppendieck llc, @mpoppendieck)

A lot of interesting discussion in this keynote on day two, but some key points were around the interactions between dev and ops and the differing personality types between them.  The personality types were broadly broken down into two: safety focussed and promotion focussed.  The best approach is to harness both personalities within a team, and ensure that they interact.

Safety focussed

These people are about failure prevention – asking ‘is it safe?’ and if not, what is the safest way that we can deliver this?  Motivated by duty and obligation.  They find that setbacks cause them to redouble their efforts whereas praise causes a ‘leave it all alone’ approach.

Promotion focussed

‘All the things!’ – all about creating gains in the ‘let’s do it’ mindset. They will likely explore more options (including those new and untested).  Setbacks cause them to become disheartened, whereas praise focuses them and drives them.

As a ‘promotion focussed’ person primarily, I’ve oft looked over the fence at the safety focussed and lamented – though really I think understanding that our goals are the same but our approaches different is something I could learn from here.

From monolith to microservices – lessons from google and ebay

Randy Shoup (consulting cto, @randyshoup)

Some interesting content in this one – his discussion around the various large providers and their approaches:

Ebay

  • 5th complete rewrite
  • monolithic perl -> monolithic c++ -> java -> microservices

Twitter

  • 3rd generation today
  • monolithic rails -> js / rails / scala -> microservices

Amazon

  • Nth generation today
  • monolithic c++ -> java / scala -> microservices

All of these have moved from the monolithic application over to smaller, bounded context services that are independently deployable and managed.

He was one of the first (though not the last) to clarify that the ‘microservices’ buzzword was, for him, ‘SOA done properly’.  I get that microservices has its own set of connotations and implications, though I think it’s heartening to hear this as it’s a view I’ve held for a while now and it seems others see it the same way.

Some anti-patterns were covered as well.

  • The ‘mega service’
    • the overall area of responsibility is so broad that it’s difficult to reason about change
    • leads to more upstream/downstream dependencies
  • Shared persistence
    • breaks encapsulation, encourages backdoor interface violations
    • unhealthy and near invisible coupling of services
    • this was the initial eBay SOA effort (bad)
  • “Leaky abstraction” service
    • interface reflects the provider’s model of the interaction, not the consumer’s model
    • the consumer’s model is more aligned with the domain.  Simpler, more abstract
    • leaking the provider’s model in the interface constrains evolution of the implementation

Consensus is everything

Camille Fournier (Rent the runway, @skamille)

Not a lot to say about this one as we’re still in the process of looking at our service breakout and on the first steps of that journey, though I’ve spoken to people in the past around consensus systems and it’s clearly an area I need to look into.

Some key comparisons between zookeeper and etcd, though as Camille highlighted, she hadn’t had enough time with Consul to really do an effective comparison with that too.  Certainly something for our radar.

Key takeaway (and I guess a natural one based on consensus algorithms and quorum) was that odd numbers rule – go from 3 to 5, not to 4, or you risk deadlocking your consensus.

Summary

A great and very valuable conference – discussion with peers added a whole host of value to the proceedings, and seeing someone use terraform to tear down and bring up a whole region of machines (albeit small ones) in seconds was astounding and certainly something I’ll take away with me as we start our journey at work into the cloud.

A lot of the content for me was a repetition of things I was already looking at or already aware of, though it certainly helped solidify in me that our approach and goals were the correct ones.  I shall definitely be recommending that one of my colleagues attend next year.


Switching the client side build library in visual studio 2013 MVC template to gulp and bower

Why?

A lot of people use Mads Kristensen’s absolutely awesome Web Essentials plugin for Visual Studio – we use it for less compilation, and bundling of our less/js.  It does however fall down when you need to use it in a continuous integration context, so we find that we keep the compiled/bundled output in our repository.

Couple that with the fact that in the next release of visual studio, gulp/grunt/bower are becoming first-class citizens, supported out of the box.

Scott Hanselman’s point in that post is a valid one – nuget is a superb addition to the .net ecosystem, and compared to the dark days of ‘download a DLL from somewhere and hope’, it’s revolutionised .net development.  But there are other, arguably far better, and certainly far richer ecosystems out there for client side build, which on the one hand is absolutely awesome (npm is easy to build for and publish modules to), and on the other hand daunting (I counted at least 15 modules that would simply minify my css for me).  Thankfully, the community talks/blogs a lot about this, so finding commonly used packages is as easy as reading from a number of sources and seeing which one comes out on top.

Microsoft are to be applauded for taking this approach and opening up the pipeline in this way – their whole recent approach with open-sourcing the .net CLR, as well as the potential promise of a reliable .net on linux via vNext, means it’s a great time to be a .net dev.

All code for this example post is available at https://github.com/terrybrown/node-npm-gulp-bower-visual-studio

What is Gulp?

I won’t go into detail, as many other posts cover it well.  Essentially, it is a streaming build system written in node that allows people to create tasks and build up a pipeline of activities such as transforming less, copying files, validating javascript, testing, etc.  It is a more recent addition to the market (grunt, a tool with similar aims though a different approach, is another in the same arena).

What is Bower?

Essentially, a package manager for front end libraries (be they javascript, css, etc.) – think of it at a rudimentary level as nuget for client libraries.  There is a very good short video on egghead.io.

Holy wars solved early – Gulp vs Grunt

Clever people have written about this.  I personally prefer the streams approach and the code over configuration driven nature of gulp over the ‘temp file all the things’ and config based approach of grunt.

Getting Setup – local dev machine + visual studio

Your machine needs to be running node and gulp (gulp needs to be installed globally).

Node has just hit v0.12, which has a number of updates (not least the move to streams3 and away from the somewhat interesting streams2).

node --version

This will confirm which version of node you’re running.  You don’t need the latest version, though the update in 0.12 has been a long time coming.

Setting up gulp/bower

npm install gulp -g
gulp --version
npm install bower -g
bower --version

TRX – Task Runner Explorer: This will give you a custom task runner for gulp within visual studio.

NPM/NBower Package Intellisense: Who doesn’t like intellisense right?

Grunt Launcher: Not ideally named, but a great little add-on to give you right click support for gulp/bower and grunt.

You may also want to follow the steps in http://madskristensen.net/post/grunt-and-gulp-intellisense-in-visual-studio-2013 to get full intellisense.

Note: Switch off build in web essentials (it’s being used purely for intellisense)

File > New Project – and a tidy up

We want to hand over all JS and CSS handling to gulp.  This includes bundling and minification, as well as hinting/linting. We’ll start with the default MVC template from Visual Studio as the basis of our work.

Remove asp.net bundling/optimization

In the current template for MVC sites, Microsoft provide a handy bundling mechanism that, although fine for smaller sites, still has the same problems as above and doesn’t give you separate control over your ‘distribution’ JS/CSS.  We’ll remove:

Microsoft.AspNet.Web.Optimization (and dependencies WebGrease, Antlr, Newtonsoft.Json)

This will also involve a few changes to web.config and the codebase (see https://github.com/terrybrown/node-npm-gulp-bower-visual-studio/commit/5cfb58b8e57faa4c518a067fa473d740e43725a3)

Remove client side libraries (we’ll replace these later)

  • bootstrap 3 (bower: bootstrap)
  • jquery (bower: jquery)
  • jquery validation (bower: jquery-validation)
  • jquery unobtrusive validation (bower: jquery-validation-unobtrusive)
  • modernizr (bower: modernizr)
  • RespondJS (bower: respond)

Setting up Bower

bower init

This will lead you through a number of questions (accept defaults throughout for now, though you can read up on the options here)

You will end up with a bower.json file that will look something like:

[screenshot: the generated bower.json]
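
Roughly speaking – the exact content depends on the answers you give to bower init, so treat the values below as illustrative placeholders rather than output from the original project – it will contain something like:

{
  "name": "node-npm-gulp-bower-visual-studio",
  "version": "0.0.0",
  "description": "",
  "main": "",
  "license": "MIT",
  "private": true,
  "ignore": [
    "**/.*",
    "node_modules",
    "bower_components",
    "test",
    "tests"
  ]
}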

Re-installing javascript and css dependencies

Take all of the package references above that we removed (the bower versions) and run the following on the command line:

bower install bootstrap jquery jquery-validation jquery-validation-unobtrusive modernizr respond --save

Do NOT forget the ‘--save’ postfix at the end – this will ensure that your bower.json is updated with the local dependencies.

This will start the download and install, and you will end up with a new folder in your solution called ‘bower_components’ which contains all of the local dependencies.  Ensure you add this folder to your .gitignore (or source control ignore file of choice).

As a temporary step, switch to visual studio – add the ‘bower_components’ folder to your solution, and re-map all of your js/css files from the default template to the newly downloaded versions.

[screenshot: solution explorer with the bower_components folder and the re-mapped js/css references]

Setting up the build with Gulp

Firstly, we need to get this local solution ready to receive npm packages as dependencies (gulp and the other supplemental libraries we’ll be using are available via npm).

npm init

Again, accept all of the defaults really, or whatever you fancy in each field.

The examples from here down will be somewhat contrived – your own use case can dictate what you do at each step here, but for the purposes of example, what we want to achieve is:

  • Deliver all jquery and jquery validation libraries into a single request
  • Deliver bootstrap and respond as a single request
  • Create a basic more modularised structure for our CSS using less and then concatting/minifying as part of the build

In our real use cases at work, our needs are far more complex, but the above will serve as an example for this post.

Setting up a default ‘gulpfile.js’.

var gulp = require('gulp');

// define tasks here
gulp.task('default', function(){
  // run tasks here
  // set up watch handlers here
});

You can name and chain tasks in gulp really easily – each one can act independently or as part of an overall build process, and TIMTOWTDI (there is more than one way to do it – always) – what I’ll put forward here is the version that felt easiest to read/maintain/understand.

Deliver multiple vendor libraries into a single request

var gulp = require('gulp');
var del = require('del');
var concat = require('gulp-concat');

var outputLocation = 'dist';


gulp.task('clean', function () {
	del.sync([outputLocation + '/**']);
});

gulp.task('vendor-scripts', function () {
	var vendorSources = {
		jquery: ['bower_components/jquery/dist/jquery.min.js',
			'bower_components/jquery-validation/dist/jquery.validate.min.js',
			'bower_components/jquery-validation-unobtrusive/jquery.validate.unobtrusive.min.js']
	};

	// return the stream so gulp knows when this task has completed
	return gulp.src(vendorSources.jquery)
		.pipe(concat('jquery.bundle.min.js'))
		.pipe(gulp.dest(outputLocation + '/scripts/'));
});


gulp.task('default', ['clean', 'vendor-scripts'], function(){});

Ok, there are a number of things in here – key points:

  1. Read from the bottom up – if you issue a straight ‘gulp’ command on the command line, you will always run the ‘default’ task.  In this case, it doesn’t do anything itself (the empty function as the third param), but instead has a chained dependency – it’ll run ‘clean’ first, then (upon completion) run the ‘vendor-scripts’ task.
  2. ‘clean’ task uses the ‘del’ npm module to clean out the output folder we will be pushing the built scripts/css to.
  3. ‘vendor-scripts’ uses the ‘gulp-concat’ npm module to simply join an array of files together (in this case, the jquery + jquery validation files)

If you switch to a command prompt window and run ‘gulp’ on its own, you will see output similar to:

[screenshot: command line output from running gulp]

And in visual studio, you will now see a hidden ‘dist’ folder there with the output of what you have just generated (remember to update your .gitignore – you do not want to commit these).

Disabling Web Essentials

Less has been our tool of choice for our CSS for some time now, and web essentials really did/does rock as a VS plugin to aid your workflow on those (nice inbuilt bundling, compilation, etc.).  That said, now that we’re moving to a more customised build process, we need to switch the compilation side of it off.

Tools > Options > Web Essentials

Switch everything in ‘Javascript’ and ‘LESS’ to false.

Deliver minified and concatenated CSS from LESS

We contrived a number of .less files in order to create the proof of concept:

_mixins.less

@brand_light_grey_color: #EFEFEF;

.border-radius(@radius: 4px) {
	-moz-border-radius: @radius;
	-webkit-border-radius: @radius;
	border-radius: @radius;
}

layout.less

@import "_mixins.less";

body {
    padding-top: 50px;
    padding-bottom: 20px;
}

/* Set padding to keep content from hitting the edges */
.body-content {
    padding-left: 15px;
    padding-right: 15px;
}


/* Override the default bootstrap behavior where horizontal description lists 
   will truncate terms that are too long to fit in the left column 
*/
.dl-horizontal dt {
    white-space: normal;
}

div.rounded {
	.border-radius(4px);
}

forms.less

@import "_mixins.less";

/* Set width on the form input elements since they're 100% wide by default */
input,
select,
textarea {
    max-width: 280px;
}

Nothing complex, though it’ll let us at least build a workflow around them.

There are a couple of key tasks we want to perform here:

  1. Grab all less files and compile them over to css
  2. Compress that css
  3. Push them all into a single file in our dist folder

Thankfully, the ‘gulp-less’ plugin performs the first two tasks, and we have already achieved the other for our JS so it’s just a repeat of those steps.
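
The post doesn’t show the resulting tasks, so here is a minimal sketch of what the ‘css’ and ‘vendor-css’ tasks (referenced again in the watch setup below) might look like.  It assumes the ‘gulp-less’ plugin is installed via npm and reuses gulp, concat and outputLocation from the earlier snippet – the exact paths and output file names are illustrative rather than taken from the original:

var less = require('gulp-less');

// compile our own less files (skipping _ prefixed partials), compress and concat into a single stylesheet
gulp.task('css', function () {
	return gulp.src(['Content/**/*.less', '!Content/**/_*.less'])
		.pipe(less({ compress: true }))	// 'compress' is handed straight to less; a dedicated minifier plugin would work equally well
		.pipe(concat('site.min.css'))
		.pipe(gulp.dest(outputLocation + '/css/'));
});

// bundle the pre-built vendor css (bootstrap) into a single request, mirroring the vendor-scripts task
gulp.task('vendor-css', function () {
	return gulp.src(['bower_components/bootstrap/dist/css/bootstrap.min.css'])
		.pipe(concat('vendor.bundle.min.css'))
		.pipe(gulp.dest(outputLocation + '/css/'));
});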

Integration into Visual Studio and tying it all together

We now have a basic working build that we can add to as and when our process demands – node and the node package manager (npm) have a massive ecosystem of libraries to support all sorts of tasks (generally gulp- prefixed for gulp related build tasks), so you can start to build from this point forward.

Key thing now is tying this workflow into Visual Studio, and this is where the cool happens.  The Task Runner Explorer gives us a lot of extensibility points.

[screenshot: Task Runner Explorer showing the gulp tasks]

Each of these tasks/sub-tasks can be right clicked and run as you would do from the command line easily, but you also have a nice option to ‘bind’ certain actions in Visual Studio to steps within your gulp build.

E.g.

[screenshot: binding the ‘clean’ gulp task to the ‘Clean Solution’ action]

In this instance, we have bound our ‘clean’ gulp task to ‘Clean Solution’ within visual studio.

Tying it all together – watching the solution

Web essentials was awesome at monitoring your work in real time and updating bundled files (both less and js) into their respective outputs, but thankfully gulp comes to the rescue in the guise of ‘gulp-watch’ – this is a highly configurable module that allows you to perform actions on changes to files.

Thankfully, now that we have all of the other tasks, the watch workflow is simply a matter of matching up targets to watch, and tasks to run when things happen to those targets.

var watch = require('gulp-watch');

gulp.task('watch', function () {
	gulp.watch('bower_components/**/*', ['vendor-scripts', 'vendor-css']);
	gulp.watch('Content/**/*.less', ['css']);
});

gulp.task('default', ['clean', 'vendor-scripts', 'vendor-css', 'css', 'watch'], function(){});

Once we have that, we can go back to the task runner explorer, right click the ‘watch’ task, and set it to run on solution open.

We now have our solution in watch mode permanently and any changes to our less or the vendor scripts will trigger the appropriate tasks.

What’s next?

We’ve solved the problem (compiled css/js needing to be in our repo with web essentials), so the next steps really are incorporating this gulp build task into our CI server (TeamCity), though we’ll leave that for a follow up post.

Now that we have a whole set of automation going, we may as well re-introduce linting/hinting of our less and javascript too – some configuration will be needed here to ensure we’re happy with the outcomes, but fundamentally it’s the ‘right thing to do’.
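
As a flavour of what re-introducing linting might look like, here is a minimal sketch assuming the ‘gulp-jshint’ module is installed and that our own scripts live under a ‘Scripts’ folder – both assumptions rather than details from the original post:

var jshint = require('gulp-jshint');

// lint only our own javascript, not the vendor libraries pulled in via bower
gulp.task('lint', function () {
	return gulp.src(['Scripts/**/*.js'])
		.pipe(jshint())
		.pipe(jshint.reporter('default'));
});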

Testing our JS workflow is the next natural step, and there are plenty of gulp+other task runners to sit within this workflow that will let you validate your scripts either at build time or at save.


Yeoman hangs on windows – with a fix

Thought I’d quickly write this up as @kevinawalker and I had a mare with it yesterday on our windows boxes.  I suspect it’s because I’d attempted to set up yeoman in the past before there was a nice npm install pathway, coupled with the lack of dependencies on my machine (I installed an up to date ruby+python in between break and fix).

Symptoms

npm install -g yo

would appear to work and give solid feedback that everything appeared tickety boo.  All PATH variables certainly indicated it would work, and ‘yo’ was discoverable in the path.

That said, typing anything other than yo –version would hang and never return (requiring a ctrl+c to break out of it).

When oldschool debugging through it (console.log for the win), it turns out that cli.js (%APPDATA%/roaming/npm/node_modules/yo/cli.js) was hanging on line 76:

env.lookup()

I didn’t follow down the pathway, though when npm installing yo on the mac it went through no bother.

npm uninstall -g yo

then a re-install didn’t help, whether or not we cleared the npm caches in between.

Solution

I still don’t know which magical combination of the above really worked (it may be that updating ruby/python between the old install and new was key, I can’t be sure) but the following steps fixed it for both @kevinawalker and me:

  1. npm uninstall -g yo
  2. explorer window to %APPDATA%/roaming/npm-cache/
  3. delete ‘yo’ folder
  4. npm install -g yo

After that, all was well with the world.

Good luck, hope this helps someone!


Velocity Conf Europe 2013 – How to utterly inspire in three short days

The past 3 days have seen me attend VelocityConf Europe 2013 which (as the sub-title suggests) focuses on Web Performance and Operations.

Talks I attended can be seen here, though thankfully they seem to record all sessions, so if you missed it they’re available from here.

I had the chance to hangout with the @toptabletech guys (http://tech.toptable.co.uk/) (@ryantomlinson has just joined them and he moved to them from working with me), and they’re all top blokes – very clever, and clearly care about what they do.

tl;dr

Without a doubt one of the best conferences I’ve attended – the mix between operations talks (though often these were given a very devops slant) and web performance really did tick all of the boxes.  It feels very much like my learning time will be consumed by some of the approaches, tools and techniques I’ve seen covered over the past few days, and I remain utterly excited about putting some of this into practice.

It does make me question some of the cultural aspects within my own organisation – something I will endeavour to at least attempt to communicate effectively upon return – there are a lot of things we could be doing better (myself very much included).

Overall, not that my passion was lacking anyway, though I’m entirely re-fired up around the areas I’ve seen talked of – monitoring/metrics, continuous integration/deployment, testing all the things, and automation, with that constant backnote on the cultural.

I became acutely aware of just how narrow my scope of development was (.net developer/PC based), and time and again a lot of the tooling shown, while it likely worked ‘ok’ on windows, was better geared up to a mac os/linux background – the mac to PC ratio was scary, and it’s certainly something where I’m now going to experiment with a mac as a dev machine (VM’ing into windows for the .net stuff).

I’ll cover some of the details on some of the talks I attended, though obviously covering every talk from 3 days worth is going to see at least some repetition so apologies if I miss anything/anyone out.

The below is as much so that I have all of the pointers in one place to the stuff I want to look at, though hopefully others will find it useful.

Responsive Images – Yoav Weiss

Cracking start – Yoav highlighted that 72% of responsive websites serve the same resources to all form factors (we use picturefill).  I liked the look of http://sizersoze.org as a tool to highlight what you were doing at different breakpoints (in terms of savings to be made, etc.)

He highlighted mobify.js, which although a clever implementation, feels like an overwhelming hack to get around some of the limitations currently in play on browsers/http.

First mention of http://worldwidepagetest.com/ in this session too.

Be Mean to your code with Gauntlt – James Wickett

I moved from the more ops’y (TCP tuning/TLS perf etc.) talk into James’ security talk, and I wasn’t disappointed.

Gauntlt provides a means of automating a number of other attack tools and overall was the first thing where I thought ‘getting that into our build is essential’ – another talk where ‘it’s easier on a mac’ (probably the first, not the last).

Making Government Digital Services Fast – Paul Downey

Loved this talk – really nice to see how effectively these guys release and how the mindset shift was entirely around ‘what does the user want’.  Their ‘dark release’ rollout worked well, and it was one of the first talks (though again not the last) that highlighted how important instrumentation was – how do you know you’ve been successful (or otherwise) if you don’t have figures backing it up.

Stand Down Your Smartphone Testing Army – Mitun Zavery

I mention this not only because it was a good talk, though I really must have a look and play with http://www.deviceanywhere.com/.  Really nice little tool.

Testing all the way to production – Sam Adams

Loved the ‘continuous delivery’ from day 1 approach, and the mindset that each commit I make ‘I believe this code is safe to go into production’, though obviously again the monitoring metrics come in, and it’s the pipeline’s job to prove that statement wrong – strong enough pipeline builds confidence that you’ve caught ‘all the things’.

They do a lot of ‘in live’ testing, though their isolation model seemed to work really well – something I have to investigate.

Global Web Page Performance – James Smith

Although the demo didn’t go great for James, I’d used the site the day before as it was mentioned in one of the workshops and it’s a really nice abstraction over http://webpagetest.com – certainly useful.

HTTP Archive, BigQuery and you! – Ilya Grigorik

This was one of those ‘holy shit!’ demos – taking HTTP Archive data and making it accessible/queryable – see (and play with) http://www.igvita.com/2013/06/20/http-archive-bigquery-web-performance-answers/ – incredible.

Gimme more! Enabling user growth in a performant and efficient fashion

Some useful stats in this great talk – by 2017 there’ll be 5.2 billion mobile users, making more than 10 billion connections!  Mobile video will increase 16 fold between 2012 and 2017.

New Image Formats

Images make up 61% of page bytes – 65% of page bytes on mobile!  The encoding techniques we have in place are in some cases 15 years old.  WebP (less supported) and JPEG eXtended Range (JXR) look to be the next big thing in image compression, and although neither is heavily supported right now, if you have content negotiation/browser sniffing in place you could save considerable bandwidth.

Code Club – John Edwards

I love this – https://www.codeclub.org.uk/ – teaching children to code in a structured/supported way, and volunteering your own time to help.  I will be investigating this to see how best I can fit in – time is key I guess (support from employer etc.) but I really love the concept so I hope I can get involved in some way.  John Edwards did an amazing job of presenting it, and the video (http://www.youtube.com/watch?v=Ci3hY83rUwU) had me both chuckling and incredibly emotionally moved.  Great cause.

General Thoughts – Culture

A number of the talks focussed around the cultures within the organisations that we work in, and how the culture almost entirely underpins how and what you achieve and the direction of work.

One of the best talks of the conference for me was given by John Willis, entitled ‘Culture as a Strategic Weapon’, which focussed on some of the core tenets of successful devops (CAMS – Culture, Automation, Measurement, and Sharing).

It’s made me more determined to keep pressing on with both working within, and encouraging new directions for, my own organisational culture – as he said, ‘If you can’t change your culture, change your culture’ and the immortal words of ‘get the hell out of dodge’.  Working towards a better organisational culture feels like the right fight to be having, and this one talk has generated more inspiration for me than any other single talk at the conference.

When seeing how effective some of the guys I talked to were being with things like ‘30% time’ and how much other organisations invest in their staff, it very much feels like there are lessons I can bring home here.

General Thoughts – Tooling

There are so many cool tools – too many to name, though the links I’ve put above are a good starter – there are so many people working on tools to monitor, test and graph ‘all the things’ so that we get closer and closer to reliable, repeatable, understandable and maintainable releasing.

Closing Thoughts

I thankfully have a solid team of developers where I work who will be very keen to be involved in this.  We’re not bad, we do automate a lot of our build pipeline, though we don’t have enough monitoring/metrics in place. 

The conference has entirely re-invigorated me and as I sit here writing up, the thing exciting me most is ‘where do I start’ – I look forward to the playtime!

This was a great conference, and was great to be around likeminded, passionate people who were all about sharing how they got to where they are, where they want to be, and how they intend to get there.

Oh, and thanks to the facebook staff who took us to the pub on thursday night – I really enjoyed the talk with you guys and learned an awful lot!

 

Bring it on :)


We’re hiring (again!)

We’re looking for 1 developer to join a growing team in Sunderland who support and build software for tombola bingo – currently the UK’s largest online bingo site.  We’ve branched out into Italy, Spain and elsewhere, which has seen significant expansion to the team.  We are looking to place the developer within the web team.

There are 13 of us on the web team, and we are developers who love to be good at what we do – we’re passionate about delivering quality and are looking for likeminded people.

We don’t score too badly on the joel test or by others’ standards

  • We use subversion for source control
    We realise that some people would have us apologise for this, though we’re going to debate the use of dvcs soon and svn works (most of the time) for us
  • We have made our own custom build scripts with msbuild
    And although we don’t yet practice continuous deployment, we’re not that far off.  We’re also playing with psake as a replacement for msbuild. 
  • We don’t make daily builds, but we do have team city
    We use it actively for deployments/testing/automated smoke testing and are constantly improving, looking at code quality metrics etc.
  • We do have a bug database and we try to fix bugs before writing new code
    You know how it is… ;)
  • We employ scrum/kanban to manage our projects
    We do dailies, we work to (generally) two week sprints, we use scrumwise and trello to help with the overall picture and we’re big fans of the transparency and flexibility of agile.
  • We do regular code reviews and mentoring
    If you have a skills shortfall, chances are someone has it and will be willing to support
  • We talk a lot
    We all get stuck, we all need to bounce ideas off people, we all need opinions from time to time.  As a team, we talk to each other a lot to resolve difficulties or just get a perspective on potential solutions
  • We grok good software design principles
    We may not always apply all of the SOLID principles, and our code isn’t littered with “cool design pattern X”, but we know how and when to apply these and our codebase stands up well to scrutiny
  • We have people who have personal/play projects, and we love that
    Time is always hectic in the office, though we do our best to support anyone with an idea and we love that we have devs who are passionate about their career

the role

The role is primarily for C# asp.net MVC developers with the usual skillset: asp.net MVC 4 (or 5, 3 or 2!), decent HTML/javascript/CSS skills, decent SQL/NoSQL skills, and some understanding of some of the above bullets would make you stand out. 

With the expansion of the company, there are always opportunities to get into new technology too, and we’re always playing with new stuff – HTML5 for games is a hot topic at the moment (and all of the associated technologies), social media (ewww, we all hate that term, but you know what we mean!).

The only real caveat aside from the above is that you must be passionate about what you do – you will obviously want to make good software, and want to work with people who enjoy doing the same.

Salaries for the roles are competitive (and negotiable) and based upon experience.

get in touch for a chat

If any of the above sounds interesting, or you have any questions, please get in touch.  We’re pretty nice guys/girls and if that’s as far as it goes, at least we’ll have had a chance to meet you and you us :)

Contact either myself, Terry Brown (Project Lead, @terry_brown, 07968 765 139)

or Ian Walshaw (Operations Manager, @ian_walshaw1973, 07850 507 629)

No Agencies

Guys, we have a preferred supplier list (who will also be looking for the above), please don’t contact us if you’re not on it – it’ll only piss us off.


Ryan Tomlinson : ITeamLead, ISolutionsArchitect, IPassionateDeveloper

I thought I’d write up a post about a member of my team, Ryan Tomlinson.  I’ve had the pleasure to work with him for the past 2 years, since he joined our employer as a senior software developer.  Very quickly he joined the projects team and after a period working on other projects took up lead on the implementation of our Spanish website.  It was here that Ryan really started to show the skill set that he had, and he helped bring into tombola an awful lot of the project management practices and software architecture that are very actively followed today.  When I was awarded the web team lead role it was only a gnat’s hair’s breadth between us on which would have been best for the role, and in all honesty we both were ‘best’ for the role, just different.  He has since gone on to work on, and lead, multiple projects at tombola, always bringing his ‘best practice’ approach to each.

I’ve never written up a post about another developer I’ve worked with previously, though as Ryan now moves on to a team lead role at TopTable, I felt compelled to write this one.

We have spent the past two years with a relationship bordering on occasional violence because of the insults that we aim at each other, though fundamentally both know it is only achievable with the absolute utmost respect for the other – sarcasm has been a solid comedy mechanism between us, and I shall sorely miss it.

We have challenged each other during his time here on an almost daily basis, and both have grown better as developers, architects, and leads because of it.

He is without hesitation one of the best developers I have ever had the pleasure to work with – his insight, his motivation, his drive, and his experience have brought a massive amount of value to tombola over the past two years, and TopTable have gained an overwhelmingly solid team lead.

Really sorry to see you go mate.

You can see more at Ryan’s blog, twitter, and github


Creating a drop down list from an enum in ASP.NET MVC

Thought I’d share some work we’ve done in our MVC projects to ease the generation of drop down lists from enum types which makes life a hell of a lot easier for us when working with enums in views.

The basic premise focuses around the method below, which is represented all over the web (a lot of people seem to have come up with the same solution at around the same time).  Given an enum:

public enum UserType
{
	Visitor = 1,
	NonDepositor,
	DepositedOnce,
	DepositedTwice,
	Regular,
	LapsedRegular,
	LapsedNonDepositor
}

We can create a simple enum to select list converter with the following:

public static SelectList ToSelectList<TEnum>(this TEnum enumObj)
{
	var values = (from TEnum e in Enum.GetValues(typeof(TEnum))
					select new { ID = e, Name = e.ToString() }).ToList();

	return new SelectList(values, "Id", "Name", enumObj);
}

Caveat: I didn’t invent this, it’s a pattern that’s published in a lot of places (stack overflow and other peoples blogs).

Making it look pretty

This may well work fine for a lot of your use cases or indeed for simple admin/internal systems, but our use cases dictated we extend this a little.  First and foremost was getting friendly strings out of this for the display value (our users like Words Separated With Spaces – curious that).

You could easily go with a simple regex on the ‘ToString()’ part of that code – something like:

public static string PascalCaseToPrettyString(this string s)
{
	return Regex.Replace(s, @"(\B[A-Z]|[0-9]+)", " $1");
}

And your call in the ‘ToSelectList’ method above would just be ‘ToString().PascalCaseToPrettyString()’ (for info: the regex above will take all uppercase characters or collections of numbers that aren’t at a word boundary and put a space in front of them).  This would give us something like ‘Deposited Once’ as opposed to ‘DepositedOnce’

Again, this may well suit exactly what you want, but what if the description you want to show to the user really doesn’t match what you want as the enum value?  For this, we look to the [Description] attribute and would decorate up our enum as follows:

public enum UserType
{
	[Description("Visitor (Not logged in)")]
	Visitor = 1,
	[Description("Non-depositing player (Created account, no deposits)")]
	NonDepositor,
	[Description("Single depositing player")]
	DepositedOnce,
	[Description("Twice depositing player")]
	DepositedTwice,
	[Description("Regular depositing player (Has 3 or more deposits)")]
	Regular,
	[Description("Lapsed Regular (Not logged in for the past 12 weeks)")]
	LapsedRegular,
	[Description("Lapsed Non-Depositor (Not deposited, not logged in for the past 12 weeks)")]
	LapsedNonDepositor
}

In this case we can simply extend our ‘PascalCaseToPrettyString’ concept a little further with:

public static string GetDescriptionString(this Enum val)
{
	try
	{
		var attributes = (DescriptionAttribute[])val.GetType().GetField(val.ToString()).GetCustomAttributes(typeof(DescriptionAttribute), false);

		return attributes.Length > 0 ? attributes[0].Description : val.ToString().PascalCaseToPrettyString();
	}
	catch (Exception)
	{
		return val.ToString().PascalCaseToPrettyString();
	}
}

This will attempt to grab the DescriptionAttribute from the enum value if there is one.  It handles both situations (with and without a Description attribute) nicely, and falls back to at least something that looks nice to the user if a description attribute isn’t present.  Our ‘ToSelectList()’ method will then just update to call ‘GetDescriptionString()’ instead of ‘ToString()’ for the value (you will have to change the enum call like so):

public static SelectList ToSelectList<TEnum>(this TEnum enumObj)
{
	var values = (from TEnum e in Enum.GetValues(typeof(TEnum))
					select new { ID = e, Name = (e as Enum).GetDescriptionString() }).ToList();

	return new SelectList(values, "Id", "Name", enumObj);
}

And we’re left with:

[screenshot: the drop down list rendered with the friendly descriptions]

So far so good – what next?

The next steps are really edge cases, though it was useful to extend the helper in our use cases to deliver flexibility in all cases where we needed it.

Filtering

There are situations where you want to include only those options that are applicable based upon some other selection parameter or indeed some particular use case.  For this we can use a Func<TEnum, bool> delegate along the lines of:

public static SelectList ToSelectList<TEnum>(this TEnum enumObj, Func<TEnum, bool> predicate = null)
{
	IEnumerable<TEnum> values = (from TEnum e in Enum.GetValues(typeof(TEnum))
									select e);

	if (predicate != null)
		values = (from TEnum e in values
					where predicate(e)
					select e);

	var outputs = (from TEnum e in values
					select new { ID = e, Name = (e as Enum).GetDescriptionString() });

	return new SelectList(outputs, "Id", "Name", enumObj);
}

And in our views we can do something along the lines of:

<p>@Html.DropDownListFor(model => model.BankBalanceState, Model.BankBalanceState.ToSelectList( x => x != UserType.LapsedNonDepositor &&
				                                                                                    x != UserType.LapsedRegular))</p>

Adding ‘Please select’ as the first option

A simple one, though it saves you from having to jump through a few hoops if it’s important to have the ‘please select’ option at the top of the list.  This one requires a little more change to our helper method:

public static SelectList ToSelectList<TEnum>(this TEnum enumObj, Func<TEnum, bool> predicate = null, bool addPleaseSelect = false)
{
	IEnumerable<TEnum> values = (from TEnum e in Enum.GetValues(typeof(TEnum))
									select e);

	if (predicate != null)
		values = (from TEnum e in values
					where predicate(e)
					select e);

	var outputs = (from TEnum e in values
					select new SelectListItem { Value = e.ToString(), Text = (e as Enum).GetDescriptionString() });

	if (addPleaseSelect)
	{
		var pleaseSelect = new List<SelectListItem> { new SelectListItem { Text = "--- please select ---", Value = "" } };
		outputs = pleaseSelect.Concat(outputs).ToList();
	}

	return new SelectList(outputs, "Value", "Text", enumObj);
}

Which leaves us with:

[screenshot: the drop down list with ‘--- please select ---’ as the first option]

Shuffling the values

Another edge case though one that was useful to us in a number of situations was the shuffling of the values within the list.  We achieved this using a simple extension method:

public static ICollection<T> ShuffleList<T>(this ICollection<T> list)
{
	return list.OrderBy(x => Guid.NewGuid()).ToList();
}

And included it in the updated ToSelectList like so:

public static SelectList ToSelectList<TEnum>(this TEnum enumObj, Func<TEnum, bool> predicate = null, bool addPleaseSelect = false, bool shuffleList = false)
{
	IEnumerable<TEnum> values = (from TEnum e in Enum.GetValues(typeof(TEnum))
									select e);

	if (predicate != null)
		values = (from TEnum e in values
					where predicate(e)
					select e);

	if (shuffleList)
		values = values.ToList().ShuffleList();

	var outputs = (from TEnum e in values
					select new SelectListItem { Value = e.ToString(), Text = (e as Enum).GetDescriptionString() });

	if (addPleaseSelect)
	{
		var pleaseSelect = new List<SelectListItem> { new SelectListItem { Text = "--- please select ---", Value = "" } };
		outputs = pleaseSelect.Concat(outputs).ToList();
	}

	return new SelectList(outputs, "Value", "Text", enumObj);
}

Which is called from the view like so:

<p>@Html.DropDownListFor(model => model.BankBalanceState, Model.BankBalanceState.ToSelectList(shuffleList: true))</p>

Other extensions to this?

We’ve come up with a few more updates to this – one to force presentation via the enum numeric value (oddly in an enum, -1 is rendered after 1 and this isn’t always what you’d hope for).  We’ve also updated it for our multi-tenant websites to support localisation of enum values (though there’s enough work in this to provide an entirely separate blog post).  We’ve also added an optional parameter to ignore the current value of the enum (default to the first value in the select list rather than the selected enum) – again, an edge case, though I’m sure folks can see use cases themselves for this.

Hopefully that was useful – had been meaning to write it up for a while now (we’ve been using it in production now for over a year and it performs quite happily and there seem to be no bottlenecks/issues with it).

Grab the code

I’ve put the finished solution onto github if anyone wants to grab it and modify it themselves.  If anyone has suggestions on improvements feel free to send a pull request.


"it works here” – highlighting the mashup of agile methodologies that works for us

Some would say the following post demonstrates a failure of process and that we’re ‘not doing scrum right’.  Some would say that if we only did X differently or spent some time focussing on Y then this would all click into place for us. On some of those points I’d be likely to agree.  In all other aspects though, I’m writing this up as I really am a fan of the approach that we have adopted (mostly organically) when it comes to dealing with our projects and the agile approaches that we undertake.  I’ll give you a bit of background around our codebase and sites so that you can get a feel for how we’ve arrived at the processes we have.  It will become abundantly clear that I’m not a ScrumMaster or formally trained in any of those, nor is it solely led by me – I have a cracking team of devs who contribute and innovate the process as much as I do.

TL;DR

I’ll highlight that although we started out primarily using SCRUM as our methodology, the evolution of that process – picking, choosing and then honing the various agile practices that work well within our organisation – has led to a far more flexible project management methodology.

Our Codebase

Our codebase is not small – it’s an MVC4 front end to a pretty sizeable c# service/domain layer.  It’s multi-tenant for both our UK and Spain websites with a massive amount of shared/reused code (our Italian website shares a similar codebase but was implemented before we went with a multi-tenancy solution so lives separately).  The solution tips the scales at 89 projects, rebuild time from cold is approximately 45 seconds (SSD), and we practice continuous integration (with unit tests) and currently use team city to deploy.  From commit to release we can go live within 30 mins if needed, though with our change and approval process we tend to take longer than that (business signoff etc.)

Our sites

As Britain’s biggest bingo site (and indeed Spain’s), you can imagine that updates are frequent anyway, though we’re undergoing a project that sees us phase out the classic asp/asp.net on the UK site and deliver the MVC4 replacement.  This absolutely has to be done in phases and has to be done ‘feature at a time’ to ensure business continuity, whereby we have both sites interoperating with each other, new features via MVC4, old via classic/asp.net.

These regular updates really do require frequent releasing – time critical elements to the site, bigger promotions, etc. all need to get out of the door quickly and be controlled (either via feature toggles/time toggles or something equivalent), and they don’t constitute part of this phased shift over to MVC4, so we have multiple streams of work ongoing as well as delivery to both sites (Spain + UK) from the same codebase.  If this was a greenfield site there’d be no real problem (as we did with our Spanish solution) in just doing one big up front release at ‘go live’ – though that’s not an option when we’re talking about a site that, even when quiet, still has thousands of people actively using it.  Equally, we can’t afford not to release to our customers just because there is risk involved, and hence we’ve gone with at least daily releasing of the codebase.

What was the problem with SCRUM for us?

Took me a while to write that heading, and it’s still not right – there’s nothing wrong with SCRUM, nor indeed was there really anything wrong with SCRUM in our environment.  We have a few areas where SCRUM just doesn’t work perfectly:

The Product Owner (or lack thereof)

Getting a product owner in our environment isn’t always easy – we have people who care about what we release, though getting the level of involvement that really warrants the ‘product owner’ title is consistently difficult.  I suspect a lot of organisations find the biggest problem with SCRUM in this one area – someone to drive ownership of the deliverables, champion and represent the customer, and, more importantly, constantly review things like ROI, customer needs and the ongoing work, working with the project team on a daily basis to ensure they are working on the right features at the right time.

The “Sprint” cycle and unknown unknowns

You’d think that with this being brownfield the requirements would be known up front and that delivering (say) our registration process over to the new architecture would be a straightforward and plannable activity.  For some areas of the site, this has proven to be the case (and has indeed fit into the scrum ‘plan, develop, test, release, review’ pattern nicely).  Some on the other hand, including our most recent sprint, have proven that fixed time windows, the ‘thou shalt not change the deliverables in a sprint’ and ‘you shall attempt to estimate very well or you’re going to have headaches with the first two’ have proven overwhelmingly difficult to get around.

Hold on, this is all solvable!

Yes, I know there are many ways to solve the key problems with sprints, though after completion of a 4 week sprint that had initially been estimated as 2, I feel pretty good about the fact that we are employing agile principles and practices but not rigidly adhering to any one particular approach.

In this most recent sprint (delivery of an affiliates system whereby the legacy implementation was fragmented, adhoc and entirely non-systematic), where project resource changed 3 times even during that 4 week period (business priorities can and do change all the time) we’ve come to the understanding of what works for us and what doesn’t.

The New Process – “works for our organisation™”

I hasten to add that what works for us may not work for you – but that’s the overwhelmingly positive thing about agile in general.  Something works? Embrace it.  Something doesn’t work for whatever reason? Hone or abandon it.  You can pick and choose from the full gamut of agile practices, so long as you are embracing that core set of principles that underpins agile.

Planning

We do (as agile would have you do) just enough planning to understand the problem and start work.  Though we’ve stopped sweating the fact that it’s likely that in 4 days some new understanding of the product/deliverable may well surface that changes things – gaining that core/shared understanding early so that we can progress is key, but the minutiae isn’t.  It’s odd, as if you speak to any seasoned agile practitioner they’d tell you this is what you should have been doing anyway.  With our regular releasing it means that something that happens to customers on Monday within the released software could well become the new priority on Wednesday.  I think there are times when we attempted to get more detail into that up front understanding (with no great benefit) because of the risk of releasing something to a massive audience or indeed in just attempting to understand more than we needed to to ‘crack on’.

Taskboard

We use one, and it works a treat.  We don’t have a whiteboard up in the office, we use software tools – either Scrumwise or Trello depending upon the project and what the lead feels will work best for them.  We have big TVs throughout the organisation in the various meeting rooms/cafe and will eventually get to the point where the boards are easily shown on any of these.

Communication

We still don’t have product owners – the operations team involved in the project almost always become that role along with the development role they are undertaking.  That said, communication with the core areas of the business is significant and we involve them as often as is practicable (and certainly more than they have been in the past).

We still very much adopt dailies within the team as a means of sharing what we’ve each been working on and are going to be working on.  I’d encourage anyone adopting agile to bring in daily stand ups as a no-brainer.  When you have a project team who are all working on their own deliverables, that short daily discussion adds a massive amount of shared context and ownership to the work we’re all undertaking.

Non-fixed ‘sprints’

We do still initially plan out our deliverables in terms of ‘sprints’ but I guess the more correct way of defining them to be generically agile would be ‘timeboxes’.  We’ve given up trying to make them fixed though.  We do initially estimate for roughly 2 weeks.

We will do our best to understand and solve that one problem during the timebox – if some new understanding comes along that would make delivery of the product within that timebox unachievable, then the timebox rather than the deliverable changes.  We no longer worry about adding to the timebox while we’re still focussing on delivery of the product for that particular window.

Our most recent sprint is a classic example of this – our affiliates handling system has brought so much out of the woodwork in terms of ‘hidden details’, things that if we’d planned up front we’d have been planning for half of the sprint to attempt to find everything, and only after we actually shipped some of the product did we find out half of the issues that arose – none of this was a problem of under delivery or over commitment against the timebox but simply became ‘something else we had to deliver as part of our affiliates work’.  It sounds more adhoc, but it just feels more flexible.

Releasing

This is where our approach feels far more ‘kanban’ than ‘scrum’ to me.  Because we have to release regularly anyway (as highlighted further up, we need to keep releasing both new functionality and bug fixes to the live site), because working from the trunk keeps our CI process happy, and because we want to keep things shipping, it felt natural to release our functionality as often as possible.

We use feature toggles extensively (both in our new codebase and classic codebase) so that we can easily switch on/off functionality – we can activate new functionality for our own staff only so that those new features are thoroughly tested in live before our customers see them.

Any risk involved? Show the new feature to only a small percentage of our customers to ensure that it’s behaving as expected.

Getting our functionality out of the door every day though (even if it’s not switched on) brings a remarkable amount of confidence.  Knowing that we’re not pushing out something ‘big’ every 2 weeks but are pushing new functionality to our customers every day (and getting feedback on that quickly) really is the essence of agile I know, but I suspect had we blindly followed a single process we’d have released at the end of each timebox and not at the start of each day.

Monitoring/Reviewing

We’re not there yet with this; it’s an ever improving aspect of our work and an area where automation will play a big part in future.  That said, we have a massive amount of logging in place, as well as a custom set of business exceptions that make it very clear what has happened and why.  This is monitored every single day and at key flashpoints (releases being one of them) – a spike in Registration exceptions?  Dig in, see what it is, fix it if necessary and ship quickly.
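
To illustrate the shape of that (the exception types below are hypothetical stand-ins, not our real ones), the idea is that business failures are explicit, named and easily searchable in the logs:

using System;

public abstract class BusinessException : Exception
{
    protected BusinessException(string message) : base(message) { }
}

// One exception type per business area means a daily log query can count failures by type
public class RegistrationException : BusinessException
{
    public RegistrationException(string reason) : base("Registration failed: " + reason) { }
}

public class AffiliateException : BusinessException
{
    public AffiliateException(string reason) : base("Affiliate handling failed: " + reason) { }
}

Counting those by type each morning is what gives us the ‘22 affiliates exceptions yesterday’ style of insight mentioned below.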

Again, agile would have you ensure that you ‘review’ regularly, so this isn’t something we do that sits outside ‘scrum’ or agile – it’s just the specifics of how we are doing it.

Knowing on a Tuesday that we had 22 Affiliates exceptions on the Monday, none of which affected the user experience and all of which are resolvable, though 5 of those indicate a potential issue with a section of code… there’s something massively comforting about that.

As I’ve said to one of the guys at work – “Everyone ships bugs, it’s how you find out about and deal with them that counts”.  Version 1 of your software will always suck, but until you get it in the hands of real customers, who knows which aspects suck and which the users will love?  One developer’s ‘killer feature’ is another customer’s ‘meh’ (and indeed vice versa).

Moving forward – honing the process

The good thing about where we are now is that it’s only a start – and even so I could hardly wax more lyrical about it.  I’m over the moon with the way we manage projects, it works really well for us, and the cross pollination between scrum, kanban and other agile methodologies works a treat.

That said, we’ll never stop improving it, and we’ll never stop reviewing the process itself – if something isn’t contributing we’ll change it or remove it.  If someone comes up with a new way we could do X we’ll try it for a timebox or two and see if it can work.

Those who are more practiced in agile will tell you that we’re embracing what agile is about, I guess – that it’s about adopting what works for you and your organisation rather than rigidly adhering to any one particular methodology.  I’d now agree with them, and hopefully the above gives some indication of what works for us in tangible terms in a high impact, high release environment where priorities change quickly.


#DDDnorth 2 write up – October 2012 – Bradford

#dddNorth crowd scene, waiting for swag!

Stolen from Craig Murphy (@camurphy) as it’s the only pic I saw with me on it (baldy bugger, green t-shirt front right) – thanks Craig!

Another 5:45am alarm woke me on a cold morning to signal the start of another day’s travelling on a Saturday for a Developer Developer Developer event, this time with Ryan Tomlinson, Steve Higgs, Phil Hale and Dominic Brown from work.  I’ve been to a fair few of these now, and it still overwhelms me that so many people are willing to give up their Saturdays (speakers and delegates alike) and spend a day away from friends, family (and bed!) to gather with their peers and learn from each other.

Lions and tigers and hackers! Oh my!

Phil Winstanley, @plip

Phil highlighted that the threat landscape has changed and is still changing – we’re moving away from paper and coin as our means of transaction, and everything now exists in the online space; it’s virtual, and it’s instantaneous.  Identity has become a commodity, and we all now exist in the online space somewhere – Facebook are making the money they are because our identities, and those of our relationships, are rich with information about who we are and what we like.

He brought some very good anecdotal evidence from Microsoft around the threat landscape and how it’s growing exponentially.  There are countries and terrorist organisations involved in this (more in the disruption/extraction space), but everyone is at risk – an estimated 30% of machines have some form of malware on them, and a lot of the time it’s dormant.

Groups like Anonymous are the ones folks should be most scared of – at least when a country hacks you there are some morals involved, whereas groups like Anonymous don’t really care about the fallout or whom and what they affect; they’re just trying to make a point.

The takeaway from this rather sobering talk for me was to read up on the Security Development Lifecycle – we all agreed as developers that although we attempt to write secure software, none of us was actually confident enough to say that we categorically do create secure software.

I’ve seen Phil give presentations before and really like his presentation style and this talk was no different – a cracking talk with far more useful information than I could distil in a write up.

Async C# 5.0 – patterns for real world use

Liam Westley, @westleyl

I’ve not done anything async before and although I understand the concepts, what I really lacked was some real world examples, so this talk was absolutely perfect for me.

Liam covered a number of patterns from the ‘Task-based Asynchronous Pattern’ white paper, in particular the .WhenAll (all things are important) and .WhenAny (which covers a lot of other use cases like throttling, redundancy, interleaving and early bailout) patterns.  More importantly, he covered these with some cracking examples that made each use case very clear and easy to understand.
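
To make that concrete for anyone who wasn’t there (this is my own rough sketch rather than Liam’s code, and the URLs are placeholders), the two patterns look roughly like this:

using System;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

public class AsyncPatterns
{
    private static readonly HttpClient Client = new HttpClient();

    // .WhenAll – every result matters, so wait for the lot before carrying on
    public static async Task<string[]> FetchAll(params string[] urls)
    {
        return await Task.WhenAll(urls.Select(url => Client.GetStringAsync(url)));
    }

    // .WhenAny – redundancy/early bailout: ask two mirrors and take whichever answers first
    public static async Task<string> FirstResponseWins(string primaryUrl, string secondaryUrl)
    {
        var winner = await Task.WhenAny(Client.GetStringAsync(primaryUrl),
                                        Client.GetStringAsync(secondaryUrl));
        return await winner;
    }
}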

Do I fully understand how I’d apply async to operations in my workplace after this talk?  No, though that wasn’t the aim of it (I need to spend more time with async/await in general to do that).

Do I have use cases for those patterns that he demoed and want to apply them?  Absolutely, and I can’t wait to play!

Fantastically delivered talk, well communicated, and has given me loads to play with – what more could you want from a talk?

BDD – Look Ma, No Frameworks

Gemma Cameron, @ruby_gem

I approached this talk with some scepticism – I’ve read a lot about BDD in the past, I saw a talk by Gojko Adzic very recently at Lean Agile Scotland around ‘busting the myths’ in BDD, and although the concepts are fine, I just haven’t found BDD compelling.  Gemma’s talk (although very well executed) didn’t convince me any further, but the more she talked, the more I realised that the important part in all of this is DISCUSSION (something I feel we do quite well at my workplace).  I guess we as a community of developers aren’t always great at engaging the product owner/customer and fully understanding what they want, and it was primarily this point that was drilled home early in the talk.  Until you bring stakeholders together early on and arrive at a common understanding and vocabulary, how can you possibly deliver the product they want?  I buy this 100%.

This is where the talk diverged for some, it seems – a perhaps misplaced comment that ‘frameworks are bad’ was (I feel) misinterpreted as ‘all frameworks are bad’, whereas really it felt to me like ‘frameworks aren’t the answer, they’re just a small part of the solution’ – it jumps back to the earlier point about discussion: you need to fully understand the problem before you can possibly look at technology, frameworks and the like.  I’m personally a big fan of frameworks when there is a use case for them (I like mocking frameworks for what they give me, for example), but I think this point perhaps muddied the waters for some.  She did mention the self shunt pattern, which I’ll have to read more on to see if it could help us in our testing.
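
For anyone else unfamiliar, the gist of the self shunt pattern (as I understand it so far – the types below are invented purely for illustration) is that the test fixture itself stands in for the collaborator, so no mocking framework is needed:

using NUnit.Framework;

public interface IEmailSender
{
    void Send(string to, string body);
}

public class Registration
{
    private readonly IEmailSender _email;
    public Registration(IEmailSender email) { _email = email; }

    public void Register(string emailAddress)
    {
        // ...registration logic elided...
        _email.Send(emailAddress, "Welcome!");
    }
}

[TestFixture]
public class RegistrationTests : IEmailSender // the test class *is* the fake collaborator
{
    private string _sentTo;

    public void Send(string to, string body) { _sentTo = to; }

    [Test]
    public void Registering_sends_a_welcome_email()
    {
        new Registration(this).Register("someone@example.com");
        Assert.AreEqual("someone@example.com", _sentTo);
    }
}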

A very thought provoking talk, and I can imagine it will generate some discussion on Monday with work colleagues – in particular about engagement with the client (product owner/customer) to ensure we are capturing requirements correctly – hopefully we’re already doing everything we need to be doing here.

Web Sockets and SignalR

Chris Alcock, @calcock

I’m sure Chris won’t mind a plug for his Morning Brew – a fantastic daily aggregation of some of the biggest blog posts from the previous day.  This was the first opportunity I’ve had to see Chris talk, and it’s odd how, after subscribing to Morning Brew for years, you feel like you know someone (thankfully I got to chat to him at the end of the session and ask a performance related question).

I’ve played recently with SignalR in a personal project so had a little background to it already, though that wasn’t necessary for this talk.  Chris did a very good job of distilling websockets, both the ‘how’ and the ‘what’, and covered examples of them in use at the HTTP level, which was very useful.  He then moved on to SignalR, covering both the Persistent Connection (low level) and Hub (high level) APIs.  It’s nice to see that the ASP.NET team are bringing SignalR under their banner and that it’s being officially supported as a product (version 1 anticipated later this year).
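
For context, the Hub-level API really is pleasingly small – something like the sketch below (hub and method names are invented, and this reflects how the API is shaping up for version 1, so it may yet change):

using Microsoft.AspNet.SignalR;

public class NotificationsHub : Hub
{
    // Called from the JavaScript client; pushes the message straight back out
    // to every connected client via a dynamically-named client-side callback
    public void Send(string message)
    {
        Clients.All.showNotification(message);
    }
}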

This was a great talk for anyone who hasn’t really had any experience of signalR and wants to see just what it can do – like me, I’m sure that once you’ve seen it there will be a LOT of use cases you can think of in your current work where signalR would give the users a far nicer experience.

Event Driven Architectures

Ian Cooper, @ICooper

The talk I was most looking forward to on the day, and Ian didn’t disappoint.  We don’t have many disparate systems (or indeed disparate service boundaries) within our software, but for those that do exist, we’re currently investigating messaging/queues/service busses etc. as a means of passing messages effectively between (and across) those boundaries.

Ian distilled Service Oriented Architecture (SOA) well and went on to cover different patterns within Event Driven Architectures (EDA), and although the content is complex, it was delivered as effectively as it could have been.  I got very nervous when he talked about the caching of objects within each system and the versioning of them, though I can entirely see the point of it, and after further discussion it felt like a worthy approach to making the messaging system more efficient and lean.
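
My reading of the versioning point (and this is very much my own interpretation rather than Ian’s material) is that events stay lean by carrying an identifier and a version number, with each consumer holding its own cached copy of the data and refreshing it only when the version moves on – something like:

using System;

// Illustrative event contract – deliberately lean: no full object graph, just enough
// for a consumer to decide whether its cached copy is stale
public class ProductChanged
{
    public Guid ProductId { get; set; }
    public int Version { get; set; }
    public DateTime OccurredAtUtc { get; set; }
}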

The further we at work move towards communication between systems/services, the more applicable the points in this talk will become – it has only helped validate the approach we were thinking of taking.

This talk wins my ‘talk of the day’ award* (please allow 28 days for delivery, terms and conditions apply) as it took a complex area of distributed architecture and distilled into 1 hour what I’ve spent months reading about!

And Ian – that’s the maddest beard I’ve ever seen on a speaker ;)

Summary

Brilliant brilliant day.  Lots of discussion in the car on the way home and a very fired up developer with lots of new things to play with, lots of new discussion for work, and lots of new ideas.  Isn’t this why we attend these events?

Massive thanks to Andrew Westgarth and all of the organisers of this, massive thanks to the speakers who gave up their time to come and distil this knowledge for us, and an utterly huge thanks to the sponsors who help make these events free for the community.

I’ll be at dunDDD in November, and I’m looking forward to more of the same there – I’ll be there on the Friday night with Ryan Tomlinson, Kev Walker and Andrew Pears from work, and am looking forward to attending my first geek dinner!


ASP.NET MVC4 – Using WebForms and Razor View Engines in the same project for mobile template support

NOTE: All content in this post refers to ASP.NET MVC 4 (Beta) and although it has a go live license, it has not gone RTM yet.  Although the process has been remarkably smooth, please work on a branch with this before considering it in your products!

 

We’ve been presented with an opportunity to create a mobile friendly experience for our Italian site.  Our Italian front end is an ASP.NET MVC 3 site using the WebForms view engine (we started the project before Razor was even a twinkling in Microsoft’s eye), and is pretty standard in terms of setup.

There are a number of different ways of making a site mobile friendly – Scott Hanselman has written a number of great articles on his blog about how he achieved it, responsive design is very much a hot topic in web design at the moment (and that is a cracking book), and there are a lot of resources out there (both Microsoft stack and otherwise) for learning the concepts.

Our Italian site, although div based and significantly more semantically laid out than our UK site (sorry!), would still have been a considerable task to turn into a responsive design as a first pass.  Our mobile site *will not* need to have every page that the non-mobile site has though – the purpose of the site is different, and the functionality within it will be too.

Along comes ASP.NET MVC 4 (albeit still in beta, but with a go live license) and its support for mobile.  I really should dig into how it works under the covers (perhaps a follow up post), but for now the essence is this: if you have a view (Index.aspx), then placing a mobile equivalent alongside it (Index.mobile.aspx) allows you to provide a generic mobile version of that page.
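
The convention goes further than just ‘.mobile’ too – you can register additional display modes keyed on whatever you can sniff from the request.  The snippet below shows the shape of that API as it ended up in the final MVC 4 release (it shifted a little during the previews), with the iPhone condition purely as an example:

using System;
using System.Web.WebPages;

// In Application_Start: iPhone users get Index.iPhone.cshtml if present,
// falling back to Index.mobile.cshtml and then the standard view
DisplayModeProvider.Instance.Modes.Insert(0, new DefaultDisplayMode("iPhone")
{
    ContextCondition = context => context.GetOverriddenUserAgent()
        .IndexOf("iPhone", StringComparison.OrdinalIgnoreCase) >= 0
});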

Upgrade your MVC3 Project to MVC4

Basically, follow: http://www.asp.net/whitepapers/mvc4-release-notes#_Toc303253806

There were no problems in this step for us – we have a large solution, and there were a number of dependent projects that were based upon MVC3, but these were all easily upgraded following the steps at that URL.

Setting up your view engines

We previously had removed Razor as a view engine from the project to remove some of the checks that go on when attempting to resolve a page, so our Global.asax had the following:

// we're not currently using Razor, though it can slow down the request pipeline so removing it
ViewEngines.Engines.Clear();
ViewEngines.Engines.Add(new WebFormViewEngine());

and it now has:

// Razor first, so that *.mobile.cshtml views are found ahead of their WebForms equivalents
ViewEngines.Engines.Clear();
ViewEngines.Engines.Add(new RazorViewEngine());
ViewEngines.Engines.Add(new WebFormViewEngine());

The order is important – if you want your mobile views to use Razor in a WebForms view engine project, then Razor must be the first view engine the framework looks to.  If however you want to stick with WebForms (or indeed you are only using Razor) then your settings above will be different or not needed at all.

Creating the mobile content

We started by creating Razor layout pages in Views/Shared in exactly the same way that you would add a master page.  Open Views/Shared, right click, Add Item, and select an MVC 4 Layout Page.  Call this _Mobile.cshtml, and set up the differing sections that you will require.

To start with, as a trial I thought I’d replace the homepage, so navigate to Views/Home, right click, and ‘Add View…’ – create ‘Index.mobile’ and select Razor as the view engine – select the _Mobile.cshtml page as the layout.
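
For reference, the skeleton of those two files ends up being nothing more exciting than the following (trimmed right down for illustration):

@* Views/Shared/_Mobile.cshtml *@
<!DOCTYPE html>
<html>
<head>
    <meta name="viewport" content="width=device-width" />
    <title>@ViewBag.Title</title>
</head>
<body>
    @RenderBody()
</body>
</html>

@* Views/Home/Index.mobile.cshtml *@
@{
    Layout = "~/Views/Shared/_Mobile.cshtml";
    ViewBag.Title = "Home";
}
<h1>Our mobile homepage</h1>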

Ok, we now have a non-mobile (webforms view engine) and a mobile (razor view engine) page – how do we test?

Testing your mobile content

The ASP.NET website comes to the rescue again.  There is a great article on working with mobile sites in ASP.NET MVC 4 (which is far better than the above, though it doesn’t cover the ‘switching view engines’ aspect).

I installed the tools listed in that article, and loaded up the site in the various testing tools and was presented with the following:

[Screenshot: the desktop site and the mobile site rendering side by side]

That’s Chrome in the background rendering out the standard site, upgraded to MVC4 but still very much using the webforms view engine and master pages, and Opera Mobile Emulator (pretending to be a HTC Desire) in the foreground using Razor view engine and layout pages.

Conclusion

The rest, as they say, is just hard work :)  We very much intend to make the mobile site responsive, and our CSS/HTML will be far more flexible around this, though with media queries (there are plenty of example media queries out there) and the book above in hand, that will be the fun part.

The actual process of using both the Razor and WebForms view engines in the same project was a breeze, and it means that longer term the move over to Razor for our core site should be far more straightforward once we’ve worked through any teething troubles around the work above.  Razor as a view engine is far more concise and (dare I say it!) prettier than WebForms and its gator tags, so I look forward to using it in anger on a larger project like this.

Longer term there may be pages on the site that don’t lend themselves to having duplicate content, in which case we will investigate making the core design more responsive in places, but for now we have a workable solution to creating mobile content thanks to the mobile support in ASP.NET MVC 4.

 

Hope that was useful.
