IIS, Optimising Performance, 304 status codes, and one stupid browser…

Well, after last week’s DevWeek I thought I’d start my play in earnest, and experiment with various performance improvements that came out of Robert Boedigheimer’s (@boedie) talk.

First up, a play with expiry of content.  We host all of our ‘assets’ (images, css, javascript, and flash) from a content delivery network style setup.  We don’t currently use a CDN for anything other than our games, but the concept is the same – so long as the assets are hosted on a separate URL to the content, then the location of those assets isn’t an issue.

Setting Up Expiry for Assets in IIS 7

In IIS manager left click on the website, folder or indeed file that you wish to set expiry on.

[screenshot]

From the ‘IIS’ section in the main pane (make sure you’re on features view for this) double click on ‘Http Response Headers’.

[screenshot]

You will see in the right hand pane the option to ‘Set Common Headers…’

[screenshot]

This gives you the following dialog:

[screenshot]

 

You can see here, I’ve enabled ‘Expire Web content’ and am expiring it after 20 days.  You can set a fixed expiry time too, though I’ve never done this – I can imagine it’s more maintenance overhead to ensure you always keep content ‘cached’ at various points in time, though it’s there if your use case demands it.
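If you prefer config over clicking through the UI, the same dialog setting can be expressed in web.config – a sketch, assuming the assets are served as static content (adjust to your own site/folder):

```xml
<!-- equivalent of the 'Expire Web content: After 20 days' dialog setting -->
<configuration>
  <system.webServer>
    <staticContent>
      <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="20.00:00:00" />
    </staticContent>
  </system.webServer>
</configuration>
```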

 

Fiddlers 3(04)

Ok, so that’s all, yes? Well, yup, that’s it.  Fire up Fiddler and load up one of your assets – you should see something like the following:

[screenshot]

So, all is well on first load – we get a status code 200 and we can see that caching is enabled and has a max-age of 1728000 (20d * 24h * 60m * 60s), so we know that our next request should be cached and shouldn’t hit the server.

Hit Refresh…

[screenshot]

Erm… why are you still hitting my server, even if only to be told 304 (Not Modified) – the resource hasn’t changed… so why the round trip?

Turns out hitting refresh on any browser will indeed make that round trip – the refresh button is almost an override for the local browser cache and says ‘go and double check for me’ – I’ve tested in IE8/9, Firefox and Chrome and they all do this.
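Under the hood that round trip is a conditional GET: the browser re-sends the validators it received with the original 200, and the server answers 304 with an empty body. Something like this (the headers below are illustrative, not a capture from my server):

```
GET /assets/site.css HTTP/1.1
Host: assets.example.com
If-Modified-Since: Tue, 16 Mar 2010 10:00:00 GMT
If-None-Match: "8f2b0-4bc9"

HTTP/1.1 304 Not Modified
Cache-Control: max-age=1728000
```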

So how do I avoid the round trip for non modified resources?

Turns out you’re already doing it.  Instead of clicking refresh, click into the address bar and press return.  You will find the resource loads up again no problem, but there is now no round trip to the server and no 304 response.  Well, it loads up no problem in IE or Firefox, but there’s another pesky browser on the block…

Hmmm, Google Chrome? Why won’t you play ball?

Seems that pressing return in Chrome behaves in the same way as if you’d hit the ‘refresh’ button and still issues the request (and naturally gets the 304).

Should I worry?

Well, no – we have only been testing single resources here.  If I navigate to a page by typing in the URL and pressing return (first run) I get the usual 200 status codes, and the usual asset caching.  If I press return to ‘reload’ that page in Chrome, I get all of the round trips back and forth with the corresponding 304s.

But if you navigate to that page (after you’ve had your 200s) via either a bookmark or a Google search (essentially, via a hyperlink) then jobs a good un – it doesn’t issue the requests.

I’m not sure why Chrome behaves differently to the other browsers in this regard – I could understand if it didn’t have a ‘refresh’ button, but it has.

 

I write this up as it caused an hour or so’s pain while I played around with IIS caching, having recently switched to Chrome as my default browser.  It was only by chance, when Chrome wasn’t doing as it claimed it should, that I tried the assets in the other browsers and realised it was just a Chrome side effect that really only came into play when I was ‘debugging’.

Hope it’s useful to others.

Ahhh, LINQ – of course you’re case sensitive!

One of those ‘ahhh, bollocks’ moments this morning, so thought I’d write about it – a) so I’m not bitten by it again (writing about these things helps them sink in) and b) in case anyone else gets stuck and needs a quick google of it.

Linq to SQL

We use linq commonly in our data access (SQL Server 2005/2008) and all is well on a join like:

var results = from cd in context.Distribution
               join uc in context.UCodes on cd.Batch equals uc.Batch
               where uc.Stamp >= betweenStart && uc.Stamp <= betweenEnd

...

Fairly standard stuff – an inner join between two tables based upon a criterion.  We use Latin1_General_CI_AS as our collation, so no worries at all on those joins.

Linq to Objects

Now take those two collections out of the DB and into code (as we’ve had to do recently for a long running query), and that join above (on cd.Batch equals uc.Batch) gets buggered up.

Batch in the case above is a string, and someone forgot to sanitise it before entry to the DB (I use the royal someone, as it may have been me!), so a batch can be either ‘vfc’ or ‘VFC’ or ‘Vfc’ etc.

Move away from our cosy Latin1_General_CI_AS world and the above started to return a lot less data because of casing.

The fix is (as you would expect) easy:

var results = from cd in context.Distribution
              join uc in context.UCodes on cd.Batch.ToUpperInvariant() equals uc.Batch.ToUpperInvariant()
              where uc.Stamp >= betweenStart && uc.Stamp <= betweenEnd
...

I thought I’d have a quick look around in terms of case sensitivity and which conversion mechanism to use (ToUpper, ToLower, etc.), and the following article interested me:

http://msdn.microsoft.com/en-us/library/bb386042.aspx

With the following information:

Strings should be normalized to uppercase. A small group of characters, when they are converted to lowercase, cannot make a round trip. To make a round trip means to convert the characters from one locale to another locale that represents character data differently, and then to accurately retrieve the original characters from the converted characters.

I’ve always tended to .ToUpperInvariant() when I’ve done string comparisons anyway, but it’s interesting to see some reasoning behind it.
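For a join you can also skip normalising the keys altogether and pass a comparer – query syntax won’t take one, but the method-syntax Join overload accepts an IEqualityComparer<TKey>.  A minimal sketch (distribution and uCodes here stand in for the in-memory collections, and the anonymous result shape is just illustrative):

```csharp
// case-insensitive join without touching the key values –
// note this only helps linq to objects; linq to sql defers to the DB collation
var results = distribution.Join(
    uCodes,
    cd => cd.Batch,                      // outer key
    uc => uc.Batch,                      // inner key
    (cd, uc) => new { cd, uc },          // result selector
    StringComparer.OrdinalIgnoreCase);   // 'vfc', 'VFC' and 'Vfc' all match
```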

 

Anyway, it goes down as one of those gotchas that I thought I’d write up.

Google Instant Search – is this a bug?

I got the news through about Google Instant, and started playing straight away – I’m really liking it, and although people on the team find the changing search results slightly ‘jarring’, I love it.

Interestingly, I work for an online bingo retailer (tombola), and we were really quite proud last week to hit 4th when searching for ‘bingo’.  From everything we’ve heard, Google Instant doesn’t make a blind bit of difference to the search results, so “great” thinks I – though I’ll run a few tests just in case.

Appending to a search

  1. Using instant, type ‘bingo’ as your search – you should see tombola come up 4th (as of the time of this writing).
  2. Add a space as if you’re about to change the search – google instant correctly changes the search and different results are generated (we drop off the first page).
  3. Remove the space (you changed your mind didn’t you, you really wanted to see us!) – the original results set are returned back to their usual state.

Well done Google!

Prepending to a search

I admit, this use case is going to be used far less than the original above, but ‘bingo’ is one of those words – it can happen at the start or at the end of a search term.

So:

  1. Using instant, type ‘bingo’ as your search – you should see us come up 4th again.
  2. Click to the start of the search, and put a space in – the results change, and we drop off the page, and Gala Bingo comes out as the primary search.
  3. Remove the space…

Hold on, where did we go? The results haven’t changed? But my ‘search intention’ has changed!

Is it a bug?

Well, that remains to be seen.  The above ‘behaviour’ can also be demonstrated by typing ‘development’ and prepending versus appending.  Oddly, not all terms behave like this, so it’s not consistent.

Thoughts?

Chrome – are you sanitising my inputs without my permission?

I had to write this as I’m going mad, and I can’t really work out if it’s me, or if Chrome is indeed utterly fecking with my inputs.

I’m creating a form that takes (as a hidden variable) a string like this:

||eJxdUt1ugjAUvvcpml1tN5QjKpraROeSmQxnNl+gKyfKJgVLGbqnX4tW0CYk/X5oT79z6GanABaf
ICoFrIcQjaEs+RZQmkwfCu6N+gGJ/IA8WNHI69kHHM57g35BlWkuGfF8L6DYQSfHoMSOS+0IQ3Fx
mC9XbBAGAzKg+AJbPQO1XLDxuB9Foe8WxWe6tUmeAdvk2Ve+5+hxtk9ASTg9oTedUNyIrVfkldTq
xKJgSLEDrVypPdtpXZQTjOu69vT5VE/kXvVDsZXde/D9g+i6skTZve6YJixezOrut/qLT/FmW8ff
L1OKraP1J1wDC3zi+yMSIhJOguckQChu+E5wma2ckcCzeVxQKxe2kJnzWEuX6YRRKQVSuDQcag1w
LHIJ5h/Tz+u+Uy2Ugr3LfSoBzVO5zREXTaZIKEhSbeq2jmti9wHR59ebaRDa9DUc94f+zSJ2NBrt
prLUdI4Qq17A9R53rLnRTahtVzPLrEfx7Zz/A6p0zvw=
||

Double pipes at start and end are put there by me to denote where the carriage returns occur.  In particular, you can see there is a carriage return after the last character.

Browsers that work

When I render this out in a hidden field in Firefox (or indeed any browser other than Chrome), I get the following when viewing source:

<input name="PaReq" type="hidden" value="eJxdUt1ugjAUvvcpml1tN5QiKpraROeSmQxnNl+gKyfKJgVLGbqnX4tW0CYk/X5oT79z6GanABaf
ICoFrIcQjaEs+RZQmkwfCu6N+gGJ/IA8bNHI69kHHM57g35BlWkuGfF8L6DYQSfHoMSOS+0IQ3Fx
mC9XbBAGAzKg+AJbPQO1XLDxuB9Foe8WxWa6tUmeAdvk2Ve+5+hxtk9ASTg9oTedUNyIrVfkldTq
xKJgSLEDrVypPdtpXZQTjOu69vT5VE/kXvVDsZXde/D9g+i6skTrve6YJixezOrut/qLT/FmW8ff
L1OKraP1J1wDC3zi+yMSIhJOguFkQChu+E5wma2ckcCzeVxQKxe2kJnzWEuX6YRRKQVSuDQcag1w
LHIJ5h/Tz+u+Uy2ggr3LfSoBzVO5zREXTaZIKEhSbeq2jmti9wHR59ebaRDa9DUc94f+zSJ2NBrt
prLUdI4Qq17A9R53rLnRTahtVzPLrEfx7Zz/A6p0zvw=

" />

Notice in particular that the form field ends with the correct carriage returns.

When posting this to the third party provider (this is a 3D Secure transaction, letters have been changed to protect the wealthy!), jobs a good un, works no problem at all.

What happens in Chrome

When I view the same source in Google Chrome (5.0.375.99), I get the following:

<input name="PaReq" type="hidden" value="eJxdUl1vwiAUffdXkD1tLwVq/QyS1Pkwk9WZzT/A6I02U6qUrrpfP6hiW0mb3HPPAS7nXrbZaYDF
F8hSA+8hxBIoCrEFlKWzp6MIRv2QjgkhT4639Dr+hNM1tugXdJHlitOABCHDHno6AS13QhmfsCkh
T/Plig+icEAHDN9gwx9ALxd8MumPxxHxi+FraaEpcQC+yQ/f+V6g53ifglZweUfvJmW4JhutzEtl
9IWPwyHDHjR0qfd8Z8yxmGJcVVVgrqcGMg/KH4Yd7d+DHx/E1qVLFO3rzlnKk0Vctf/VX3JJNlsX
zxh2ikafCgM8JJSQEY0QjabhwH4M1/mWcQdXOadh4Py4oYY+ukJir3GSdqZcRqk1KOnd8KgRwPmY
K7B7bD/vcataKCT/UPtMAZpnapsjIWtPkdSQZsbW7RR3xx4NYq9vnWmQxvY1mvSHpLOoG42a61SW
2c5R6tgbuN/jj7U3+gl17apnmfcY7s75P2Hdzs8=">

Erm… Chrome – where did you put those carriage returns?

I’ve tried deliberately placing carriage returns on the hidden field, adding them to the variable, etc. and still, it removes them.

It’s almost like the value has had a .Trim() applied before being output?

This transaction fails (oddly enough, invalid paReq), and although I can’t prove it, my guess is that the carriage returns are significant in this.

Help!

Am I going mad here?  Am I missing something obvious?  Is this a bug or indeed a feature?

Update

This has now been confirmed by a few people – terrifying though that is. If whitespace is important to your form inputs (well, trailing whitespace), then the cry is ‘be careful!’.

Someone suggested a workaround on stackoverflow (http://stackoverflow.com/questions/3246351/bug-in-chrome-or-stupidity-in-user-sanitising-inputs-on-forms) which works a treat, and out of all solutions I can think of, is the most elegant.

Thanks for the feedback from all – it’s been a really useful exercise!

Dependent objects in SQL Server

Another of those ‘bloody hell, why did I not know this before’ moments, but one of the lads circulated this during the week as a means of checking dependencies on either stored procs or tables.

A simple

exec sp_depends [object name]

will return a set of results highlighting which stored procs, views, tables, user-defined functions or triggers are dependent upon that object.  The full MSDN documentation is available here.

So handy when there are schema changes in legacy code/schemas.
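Worth noting that sp_depends is deprecated – on SQL Server 2008 onwards the dependency DMVs are the recommended route instead.  A sketch (the object name is obviously a placeholder):

```sql
-- everything that references dbo.MyTable (modern replacement for sp_depends)
SELECT referencing_schema_name, referencing_entity_name
FROM sys.dm_sql_referencing_entities('dbo.MyTable', 'OBJECT');
```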

jQuery, Validation, and asp.net

Well, new job, new challenges, and finally my brain can switch off at the end of a day!

This past week or so I’ve been playing with a new registration process for a website, and decided, wherever possible, to depart from the path of least resistance (classic asp.net validation, telerik controls, asp.net ajax etc.) and focus on the user experience to be gained from jquery and its associated plugins.

I plumped for hand rolling my own accordion as I needed more flexibility than that available from the standard plugins.  The area I’ve been most enlightened with though is the jquery.validation plugin.  I love the flexibility in the tool, the customisation, and the improvements it can bring to a form.

a simple:

$('input:text:not(.skip_auto_validation),input:password,select:not(.skip_auto_validation)').blur(
    function() {
        validate_field(this);
    });

has allowed me to validate fields on loss of focus, and the method targets a number of elements on fail and highlights them.

Before Validation

[screenshot]

Blur on username – fail

[screenshot]

which incorporates a $.ajax call to an .ashx handler, and after the call has occurred, the user either gets a nice slide-down message or a nice indicator that everything is well.

Blur on username – success

[screenshot]

 

The function that handles the validation is:

/// 
/// field by field validation – we only want to validate fields that are
/// either already validated or have previously succeeded/failed validation
/// and now have a different value
/// 
function validate_field(field) {
    var prev_icon = $(field).prev('.icon_success,.icon_fail');
    if ($(field).val().length > 0 || prev_icon.length > 0) {
        if (!$(field).valid()) {
            prev_icon.remove();
            $(field).addClass('field_error').before(icon_fail).prev().prev().addClass('label_error').parent().parent().addClass('section_error');
        }
        else {
            prev_icon.remove();
            $(field).removeClass('field_error').before(icon_success).prev().prev().removeClass('label_error').parent().parent().removeClass('section_error');
        }
    }
};

Obviously there are some specific .parent(), .before() and .prev() calls that’ll only work in this page’s layout, but you get the idea.

Validating Server Side Fields

I had a mare with this initially, until I realised that it was the control’s UniqueID that I wanted.  After that, jobs a good un.

$('#aspnetForm').validate({
    errorElement: 'div',
    errorClass: "validate_error",
    // what rules do we have – remember this is page 1
    rules: {
        "<%=txt_UserName.UniqueID %>": {
            required_6_20: true,
            username_already_in_use: true,
            minlength: 6,
            maxlength: 20
        },
        "<%=txt_Password1.UniqueID %>": {
            required_6_20: true,
            minlength: 6,
            maxlength: 20
        },
        "<%=txt_Email1.UniqueID %>": {
            required: true,
            email_already_in_use: true,
            email: true
        },
        "<%=txt_Email2.UniqueID %>": {
            email: true,
            equalTo: "#<%=txt_Email1.ClientID %>"
        }
    },
    messages: {
        "<%=txt_UserName.UniqueID %>": {
            required_6_20: 'Your username must be between 6 and 20 characters and can only contain letters, numbers and – ! _ . punctuation characters',
            username_already_in_use: 'Your username is already in use – please select another',
            minlength: 'Your username is too short – please change it to be between 6 and 20 characters',
            maxlength: 'Your username is too long – please change it to be between 6 and 20 characters'
        },
        "<%=txt_Password1.UniqueID %>": {
            required_6_20: 'Your password must be between 6 and 20 characters and can only contain letters, numbers and – ! _ . punctuation characters',
            minlength: 'Your password is too short – please change it to be between 6 and 20 characters',
            maxlength: 'Your password is too long – please change it to be between 6 and 20 characters'
        },
        "<%=txt_Email1.UniqueID %>": {
            required: 'You must enter an email address',
            email: 'You must enter a valid email address',
            email_already_in_use: 'Your email is already in use – please enter another or click the link shown'
        },
        "<%=txt_Email2.UniqueID %>": {
            email: 'You must enter a valid email address',
            equalTo: '\'Email\' and \'Confirm email\' must match – please double check them'
        }
    }
});

So with .net server controls you have to specify the field name in quotes, using <%= field.UniqueID %>, to really get the rules to work.
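The reason UniqueID is the one you want comes down to what asp.net renders: UniqueID becomes the name attribute (which is what the validation plugin keys its rules on), while ClientID becomes the id.  Roughly like this (the ctl00… prefix here is illustrative):

```html
<input name="ctl00$ContentPlaceHolder1$txt_UserName"
       id="ctl00_ContentPlaceHolder1_txt_UserName" type="text" />
```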

Custom Rules

Creating custom validation rules is a doddle – just ensure they’re added before .validate() is called.

jQuery.validator.addMethod(
    "valid_postcode",
    function(value, element) {
        // uk postcode regex – all straightforward apart from that last bit – apparently
        // uk postcodes don't have the letters [CIKMOV] in the last 2
        var regex = /^[A-Z]{1,2}[0-9R][0-9A-Z]? ?[0-9][ABD-HJLNP-UW-Z]{2}$/i;
        return regex.test(value);
    },
    "This field is required"
);

This one validates against a UK postcode (the regex was actually published by the UK government – I couldn’t believe it!).

You then just say:

rules: {
    "<%= txtPostcode.UniqueID %>": {
        valid_postcode: true
    }
}
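As a quick sanity check of that regex outside the plugin (the sample postcodes below are just illustrative):

```javascript
// same pattern as in the valid_postcode rule above
var ukPostcode = /^[A-Z]{1,2}[0-9R][0-9A-Z]? ?[0-9][ABD-HJLNP-UW-Z]{2}$/i;

console.log(ukPostcode.test("SW1A 1AA")); // true
console.log(ukPostcode.test("m1 1ae"));   // true – the /i flag copes with lower case
console.log(ukPostcode.test("123 456"));  // false
```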

It’s really nice to see Microsoft including this validation library in the 2010 release of Visual Studio (I’m not sure if that means it’s going to replace the existing method of validation or not, but the fact that it’s gotten that level of support from Microsoft is ace).

Also, the CDN from Microsoft seems to include this as one of its hosted libraries, so definitely an indicator of good things for the plugin.

installutil, windows7, and Run as administrator…

Well, stupidity had the better of me for over an hour on this one!  Having not really worked on a windows service in a while, and certainly not in Windows 7, I was having a mare getting it to install.

I’d found any number of references out there facing the same issue as me:

An exception occurred during the Install phase.
System.InvalidOperationException: Cannot open Service Control Manager on computer '.'. This operation might require other privileges.
The inner exception System.ComponentModel.Win32Exception was thrown with the following error message: Access is denied.

and a more detailed review of the install log indicated it was falling over at “Creating EventLog source Time Server Service in log Application…”

I immediately thought enhanced security model, UAC, and suspected that the account that I was trying to run the service as (LocalSystem) was the culprit – I seemed to remember back in the day having to set registry entries against accounts for permission to write to the system logs.  Battled with this for a while to no avail, disabled logging in the hope that the error message would be more helpful, but it didn’t reveal much.

I then started to think back to UAC etc. and started up a command window as Administrator (shift+ctrl+enter when it’s highlighted in the start menu rather than just enter).

Hey presto, problem solved.  I can see I’ll need to investigate further to ensure that there are no other issues with regard to the tightened security in Win7 (and indeed Vista), but as of now, all seems to be working as expected.
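For reference, the whole fix boils down to running installutil from that elevated prompt – something like the following (the framework folder and service path are just examples, adjust to your own setup):

```
:: from a command prompt started via 'Run as administrator'
cd C:\Windows\Microsoft.NET\Framework\v2.0.50727
installutil.exe "C:\Services\TimeServerService.exe"
```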