Using T4 to generate enums from database lookup tables

I’m sure a fair few people will be working on projects like ours, where we have a database backend with referential integrity, including a number of lookup tables.  A lot of the time in this situation you also want to mirror the lookup values in your code (as enums in our case).  Most of the time, it’s relatively easy to just manually create both sets of entries as they will rarely change once created.  Or so we hope!

I quite fancied learning about T4, and the first example I could think of was this tie up between database lookup tables and code enums. 

I love the idea that the output from your T4 work is available at compile time, directly in your code, once you’ve created the template – syncing things between a database and your code base is an obvious first play.

So with that in mind, let’s crack on.

Initial Setup

I’ve created a simple console app and a simple DB with a couple of lookup tables – simple ‘int / string’ type values.  I installed T4 Toolbox to get extra code generation options within the ‘Add New…’ dialog, though it turns out my final solution didn’t actually require it – that said, the whole T4 Toolbox project looks very interesting, so I’ll keep an eye on that.

[screenshot: the Add New Item dialog]

This will generate a file ‘GenerateCommonEnums.tt’, and the base content of the file is:

[screenshot: the default content of GenerateCommonEnums.tt]

Add a reference to your DB

At this point, I would have loved to use LINQ to SQL to generate my enums, as it’s a friendly/syntactically nice way of getting at data within the database.

That said, this proved far more difficult than I’d have hoped – any number of people had commented on it, saying that if you ensure System.Core is referenced and you import System.Linq the job should be a good ’un.  It wasn’t in my case.

Thankfully, this wasn’t the end of the investigation.  I managed to find an example online that used a SqlConnection… old skool it was to be!

So what does the code look like…

The code I generated turned into the following, and I’m sure you’ll agree it ain’t that far away from the sort of code we’d write day in, day out.

<#@ template language="C#" hostspecific="True" debug="True" #>
<#@ output extension="cs" #>
<#@ assembly name="System.Data" #> 
<#@ import namespace="System.Data" #>
<#@ import namespace="System.Data.SqlClient" #>
<#
    SqlConnection sqlConn = new SqlConnection(@"Data Source=tombola009;Initial Catalog=TeamDev;Integrated Security=True");
    sqlConn.Open();
#>
namespace MyCompany.Models.Enums
{
	public enum TicketType
	{
		<#
		// read the lookup values and build up the enum members
		string sql = "SELECT Id, Name FROM LOOKUP_TABLE_1 ORDER BY Id";
        SqlCommand sqlComm = new SqlCommand(sql, sqlConn);

        IDataReader reader = sqlComm.ExecuteReader();

        System.Text.StringBuilder sb = new System.Text.StringBuilder();
        while (reader.Read())
        {
            sb.Append(TidyName(reader["Name"].ToString()) + " = " + reader["Id"] + "," + Environment.NewLine + "\t\t");
        }
        sb.Remove(sb.Length - 3, 3);

        reader.Close();
        sqlComm.Dispose();
		#>
<#= sb.ToString() #>
	}
	
	public enum TicketCategory
	{
		<#
		sql = "SELECT Id, Area, Name FROM LOOKUP_TABLE_2 ORDER BY Id";
        sqlComm = new SqlCommand(sql, sqlConn);

        reader = sqlComm.ExecuteReader();

        sb = new System.Text.StringBuilder();

        while (reader.Read())
        {
            sb.Append(TidyName(reader["Area"].ToString()) + "_" + TidyName(reader["Name"].ToString()) + " = " + reader["Id"] + "," + Environment.NewLine + "\t\t");
        }

        sb.Remove(sb.Length - 3, 3);

        reader.Close();

        sqlComm.Dispose();
        sqlConn.Close();
		#>
<#= sb.ToString() #>
	}
}

<#+
	
    public string TidyName(string name)
    {
        string tidyName = name;

		tidyName = tidyName.Replace("&", "And").Replace("/", "And").Replace("'", "").Replace("-", "").Replace(" ", "");
		
        return tidyName;
    }

#>

The ‘TidyName’ method was in there just to try to tidy up the obvious string issues that could crop up.  I could have regex-replaced anything that wasn’t a word character, though I think this gives me a bit more flexibility and allows customisable rules.
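For reference, a regex-based version might look something like the following – just a minimal sketch, and TidyNameRegex is a made-up name rather than anything from the template above:

public string TidyNameRegex(string name)
{
    // swap the obvious substitutions first, then strip anything that isn't a word character
    string tidied = name.Replace("&", "And").Replace("/", "And");
    return System.Text.RegularExpressions.Regex.Replace(tidied, @"\W", "");
}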

This basically generates me the following .cs file:

 
namespace MyCompany.Models.Enums
{
	public enum TicketType
	{
		Problem = 1,
		MAC = 2,

	}
	
	public enum TicketCategory
	{
		Website_Affiliates = 1,
		Website_Blog = 2,
		Website_CentrePanel = 3,
		Website_CSS = 4,
		Website_Deposit = 5,
		Website_Flash = 6,
		Website_GameRules = 7,
		Website_GameChecker = 8,
		Website_HeaderAndFooter = 9,
		Website_HelpContent = 10,
		Website_Images = 11,
		Website_LandingPage = 12,
		Website_MiscPage = 13,
		Website_Module = 14,
		Website_Multiple = 15,
		Website_MyAccount = 16,
		Website_myTombola = 17,
		Website_Newsletters = 18,
		Website_Playmantes = 19,
		Website_Refresh = 20,
		Website_Registrations = 21,
		Website_Reports = 22,
		Website_TermsAndConditions = 23,
		Website_WinnersPage = 24,
		Website_Other = 25,
	}
}
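And that’s really the payoff – the generated enums are just normal code, so anything using the lookup values gets compile-time checking.  A quick usage sketch (Describe is a made-up helper, purely for illustration):

using MyCompany.Models.Enums;

public static class TicketExample
{
    // hypothetical helper – just to show the generated enum being used like any other
    public static string Describe(int ticketTypeId)
    {
        if (ticketTypeId == (int)TicketType.MAC)
        {
            return "It's a MAC ticket";
        }

        return ((TicketType)ticketTypeId).ToString();
    }
}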

From that point on, if there are extra lookup values added, a simple click of the highlighted button below will re-run the templates and re-generate the CS files.

[screenshot: the ‘Transform All Templates’ button in Solution Explorer]

Next Steps

I’m utterly sure there must be an easy way to use LINQ to SQL to generate the code above and I’m just missing it, so that’s the next play area.  I’m going to be playing with the POCO stuff for EF4, so I think the above has given me a taster for it all.

As with all initial plays with this sort of thing, I’ve barely scratched the surface of what T4 is capable of, and I’ve had to rely upon a lot of existing documentation.  I’ll play with this far more over the coming weeks – I can’t believe I’ve not used it before!

MSBuild – the voyage of the noob

I’ve been determined to have a play with build automation/continuous integration for a while now, but have always found something more fun to play with (ORM, MVC, etc.).  I know, though, that as the team where I work moves forward, there needs to be some control and some vision on how all of our work should hang together.  With that in mind, this weekend I started to read up on MSBuild (yup, I know there are other build managers out there, but I thought I’d start with that as my learning platform and move on from there).

Why do I need to modify the default build?

Why does anyone, really, I suppose – but I like what we get from it.  As we move forward, I think the following will be useful to us:

  • automating unit test runs on successful builds
  • auto-deploying to our development server
  • minifying and concatenating JavaScript and CSS
  • ensuring coding style rules are followed (once I setup a set of company rules for us)
  • other things I haven’t imagined… there will be lots!

So where do I learn?

This was my first stumbling block.  There are a lot of resources on MSBuild, and trudging through them to find the one that was right for my learning style and approach was a nightmare.  I thought I’d start out with the task that was at the forefront of my mind (concatenating/minifying JS/CSS), but I just didn’t find any resources that were straightforward (my failing more than that of the resources available, I’m sure!)

I’ve grabbed a few useful ones in my delicious bookmarks, and in particular a significant thanks must go to the Hashimi brothers for their fantastic series on dnrTV, finishing up with some Stack Overflow discussion.

So what did I learn?

Firstly, a quick look at a post by Roy Osherove highlighted some of the tools available.  I found two other visual build tools: MS Build Sidekick and MSBuild Explorer, both of which I found very useful in actually *seeing* the build process, but after a watch through those dnrTV vids I thought I’d try something straightforward – concatenating CSS files into a single ‘deploy’ css.

Get into the .csproj file

Unload your project, right click on it, and select ‘Edit <projectname>.csproj’

[screenshot: the ‘Edit <projectname>.csproj’ context menu option]

MSBuild projects seem to be broken down quite succinctly into Targets, Tasks, Items, and Properties.  For my particular need, I needed to look at Items and Targets.

The schema in MSBuild is incredibly rich – you get intellisense for the most part, but because you can define your own schema elements, you are never going to get 100% intellisense.

You have a number of different ‘DependsOn’ properties (mostly defined in the Microsoft.CSharp.targets file), so you can create targets that hook onto some of these like so:

<PropertyGroup>
	<BuildDependsOn>
		ConcatenateCSS;
		$(BuildDependsOn);
	</BuildDependsOn>
</PropertyGroup>

This is telling the build process that I have a target called ‘ConcatenateCSS’ that should run before the other targets listed in ‘BuildDependsOn’ (roughly speaking!)

I then created that target with the following:

<Target Name="ConcatenateCSS">
	<ItemGroup>
		<InFiles Include="_assets\css\**\*.css" Exclude="_assets\css\site.css" />
	</ItemGroup>
	<Message Text="Concatenating CSS files into site.css" Importance="normal" />
	<ReadLinesFromFile File="%(InFiles.Identity)">
		<Output TaskParameter="Lines" ItemName="Lines" />
	</ReadLinesFromFile>
	<WriteLinesToFile File="_assets\css\site.css" Lines="@(Lines)" Overwrite="true" />
</Target>
Which, to me, looks bloody complex!  I had to find some help on this one, naturally.  But essentially, we have created a target called ‘ConcatenateCSS’ which is going to execute before the build.  We create an item (and this is where the intellisense falls over) called ‘InFiles’ inside an ItemGroup, and we tell it to include everything ending in .css under the _assets\css folder (it seems **\*.css is the wildcard for recursion too, though I may be wrong on this!), and we want to exclude _assets\css\site.css (more on this in a sec).

I then send a message (which will be seen in the output window during the build, so we know it’s happening), and then use a combination of ‘ReadLinesFromFile’ and ‘WriteLinesToFile’.  The %(InFiles.Identity) in ReadLinesFromFile essentially turns this into a foreach loop – Identity is one of the default item metadata values in MSBuild.  So, essentially: for each of the files we’ve identified, output the contents into the ‘Lines’ item.  We then write the whole lot back out to our file using @(Lines).
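If it helps, the same thing expressed as plain C# would look roughly like this – purely an analogy of what the batching boils down to, not something MSBuild actually runs (paths as per the example above):

using System.Collections.Generic;
using System.IO;

class ConcatenateCssAnalogy
{
    static void Main()
    {
        var lines = new List<string>();

        // the InFiles item: every .css under _assets\css, recursively, except site.css
        foreach (var file in Directory.GetFiles(@"_assets\css", "*.css", SearchOption.AllDirectories))
        {
            if (Path.GetFileName(file) == "site.css")
                continue;

            // ReadLinesFromFile, batched over %(InFiles.Identity)
            lines.AddRange(File.ReadAllLines(file));
        }

        // WriteLinesToFile with @(Lines)
        File.WriteAllLines(@"_assets\css\site.css", lines.ToArray());
    }
}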

Now, on each build, we generate a single css file (site.css) that our site can reference, but all edits go in via the individual, broken-up files.  Yes, there are more elegant ways to do this, and yes, I will likely do that in time, but I’ve made a start!

Where next?

I’d be lying if I said I could do the above without some solid examples and help, so the next steps for me are creating a solid understanding of the core concepts, playing with the tools, and looking to solve some of our core business issues as we move forward in order to take some of the human elements out of the build process.  Obviously I have to investigate continuous integration and see where that all fits in too, but I’m happy with the start I’ve made.

jQuery, Validation, and asp.net

Well, new job, new challenges, and finally my brain can switch off at the end of a day!

This past week or so I’ve been playing with a new registration process for a website, and decided, wherever possible, to depart from the path of least resistance (classic asp.net, validation, telerik controls, asp.net ajax, etc.) and instead to focus on the user experience that can be gained from using jQuery and any associated plugins.

I plumped for hand-rolling my own accordion as I needed more flexibility than is available from the standard plugins.  The area I’ve been most enlightened by, though, is the jquery.validation plugin.  I love the flexibility in the tool, the customisation, and the improvements it can bring to a form.

a simple:

$('input:text:not(.skip_auto_validation), input:password, select:not(.skip_auto_validation)').blur(function() {
    validate_field(this);
});

has allowed me to validate fields on loss of focus, and the method targets a number of elements on fail and highlights them.

Before Validation

[screenshot: the registration form before validation]

Blur on username – fail

[screenshot: the username field flagged as failed]

which incorporates a $.ajax call to an .ashx handler; after the call has occurred, the user either gets a nice slide-down message or a nice indicator that everything is well.

Blur on username – success

[screenshot: the username field marked as OK]
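For completeness, the server side of that $.ajax call is nothing clever – a minimal sketch of the sort of .ashx handler involved might look like the following (CheckUsername, the ‘username’ parameter and IsUsernameInUse are all made up for illustration, not the actual code):

// hypothetical CheckUsername.ashx – returns "true"/"false" for the username check
public class CheckUsername : System.Web.IHttpHandler
{
    public void ProcessRequest(System.Web.HttpContext context)
    {
        string username = context.Request["username"] ?? string.Empty;

        // would hit the membership store in reality – stubbed here
        bool inUse = IsUsernameInUse(username);

        context.Response.ContentType = "application/json";
        context.Response.Write(inUse ? "true" : "false");
    }

    public bool IsReusable { get { return true; } }

    private static bool IsUsernameInUse(string username)
    {
        // placeholder – the real version queries the database
        return false;
    }
}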

 

The function that handles the validation is:

///
/// field-by-field validation – we only want to validate fields that either
/// already have a value or have previously passed/failed validation
/// (and so already have a success/fail icon next to them)
///
function validate_field(field) {
    var prev_icon = $(field).prev('.icon_success,.icon_fail');
    if ($(field).val().length > 0 || prev_icon.length > 0) {
        if (!$(field).valid()) {
            // failed: swap the icon and flag the field, its label and the containing section
            prev_icon.remove();
            $(field).addClass('field_error').before(icon_fail).prev().prev().addClass('label_error').parent().parent().addClass('section_error');
        }
        else {
            // passed: swap the icon and clear any previous error styling
            prev_icon.remove();
            $(field).removeClass('field_error').before(icon_success).prev().prev().removeClass('label_error').parent().parent().removeClass('section_error');
        }
    }
};

Obviously there are some specific .parent(), .before(), .prev() etc. calls that’ll only work in this page’s layout, but you get the idea.

Validating Server Side Fields

I had a mare with this initially until I realised that it was the control’s UniqueID that I wanted.  After that, job’s a good ’un.

$('#aspnetForm').validate({
    errorElement: 'div',
    errorClass: "validate_error",
    // what rules do we have – remember this is page 1
    rules: {
        "<%=txt_UserName.UniqueID %>": {
            required_6_20: true,
            username_already_in_use: true,
            minlength: 6,
            maxlength: 20
        },
        "<%=txt_Password1.UniqueID %>": {
            required_6_20: true,
            minlength: 6,
            maxlength: 20
        },
        "<%=txt_Email1.UniqueID %>": {
            required: true,
            email_already_in_use: true,
            email: true
        },
        "<%=txt_Email2.UniqueID %>": {
            email: true,
            equalTo: "#<%=txt_Email1.ClientID %>"
        }
    },
    messages: {
        "<%=txt_UserName.UniqueID %>": {
            required_6_20: 'Your username must be between 6 and 20 characters and can only contain letters, numbers and - ! _ . punctuation characters',
            username_already_in_use: 'Your username is already in use – please select another',
            minlength: 'Your username is too short – please change it to be between 6 and 20 characters',
            maxlength: 'Your username is too long – please change it to be between 6 and 20 characters'
        },
        "<%=txt_Password1.UniqueID %>": {
            required_6_20: 'Your password must be between 6 and 20 characters and can only contain letters, numbers and - ! _ . punctuation characters',
            minlength: 'Your password is too short – please change it to be between 6 and 20 characters',
            maxlength: 'Your password is too long – please change it to be between 6 and 20 characters'
        },
        "<%=txt_Email1.UniqueID %>": {
            required: 'You must enter an email address',
            email: 'You must enter a valid email address',
            email_already_in_use: 'Your email is already in use – please enter another or click the link shown'
        },
        "<%=txt_Email2.UniqueID %>": {
            email: 'You must enter a valid email address',
            equalTo: "'Email' and 'Confirm email' must match – please double check them"
        }
    }
});

So with .net controls you have to specify the field in quotes, using <%= field.UniqueID %>, to get the rules to work – the plugin keys its rules and messages on the element’s name attribute, which is what UniqueID renders as (ClientID gives you the rendered id, which is why the equalTo selector above uses that instead).

Custom Rules

Creating custom validation rules is a doddle – just ensure they’re added before .validate() is called.

jQuery.validator.addMethod(
    "valid_postcode",
    function(value, element) {
        // uk postcode regex – all straightforward apart from that last bit – apparently
        // uk postcodes don't have the letters [CIKMOV] in the last two characters
        var regex = /^[A-Z]{1,2}[0-9R][0-9A-Z]? ?[0-9][ABD-HJLNP-UW-Z]{2}$/i;    // the space in the middle is optional
        return regex.test(value);
    },
    "This field is required"
);

This one validates against a UK postcode (a regex actually published by the UK government – I couldn’t believe it!).

You then just say:

rules: {
    "<%= txtPostcode.UniqueID %>": {
        valid_postcode: true
    }
}

It’s really nice to see Microsoft are including this validation library in the 2010 release of Visual Studio (I’m not sure if that means it’s going to replace the existing method of validation or not, but the fact that it’s gotten that level of support from Microsoft is ace).

Also, the CDN from Microsoft seems to include this as one of the libraries, so definitely an indicator of good things for the plugin.

NerdDinner, and initial thoughts on MVC

Although I’ve not yet finished it, I thought I’d start my wee reflection on MVC as learned through NerdDinner.

Obviously, the immediate thing that hits you is that you ain’t in Kansas any more – forget the asp.net postback model, it’s all change, and there is going to be some significant re-learning before I get anywhere near good, I think.

I do love the separation of concerns – the theory behind it is sound from a maintenance and extensibility point of view.  Keeping my model tucked away nicely, and using it to provide the meat that the controller feeds off, which in turn drives the view, makes perfect sense to me.  I need to work far more heavily on the URL routing before starting to design anything bigger, just to see how a richer navigation hierarchy will sit.

I love the way postbacks are handled (at least in the NerdDinner app) and AcceptVerbs() just makes sense to me.  I can see I’m going to have to read up a bit more on programming against an interface, as I haven’t covered much of this.  I wasn’t a big fan of the Repository pattern – I’d perhaps have gone down the facade route, or (when and if I understand it) perhaps IoC will help with this, though obviously this was just one example.

It’s my first successful play with Linq to SQL, and I’m liking the abstraction and the codified approach to it, though I’ll have to run some heavier tests through SQL Profiler to see how it works in terms of performance.

I’m going to have to look through the source code to find out just how all of the helper methods work rather than just use them – chucking Html.ActionLink() on the page is all well and good, but I want to know what it actually does in the code (easily enough done now that the MVC source code is available).
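From a quick poke around, it boils down to routing plus a bit of string building – something along these lines, hugely simplified and definitely not the real MVC source (MyActionLink is my own name for the sketch):

using System.Web;
using System.Web.Mvc;
using System.Web.Routing;

public static class LinkSketch
{
    // rough idea of what an ActionLink-style helper does under the covers
    public static string MyActionLink(this HtmlHelper html, string linkText, string actionName, string controllerName)
    {
        // routing turns the controller/action pair back into a URL...
        var values = new RouteValueDictionary(new { controller = controllerName, action = actionName });
        var pathData = RouteTable.Routes.GetVirtualPath(html.ViewContext.RequestContext, values);
        string url = (pathData == null) ? "#" : pathData.VirtualPath;

        // ...and the helper wraps it in an anchor tag
        return string.Format("<a href=\"{0}\">{1}</a>", url, HttpUtility.HtmlEncode(linkText));
    }
}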

I’m only just now getting to the integration/work with Ajax, which I think will be interesting – I shall keep the blog updated with stuff as I cover it.

The weight lifts, back to the fun stuff…

Well, the launch of the bulk of the client sites we were working on throughout May/June has left me with a week’s holiday – yay!  I’m more determined than ever to spend it learning – a busman’s holiday I guess, though I’d feel like I’d wasted the week if I sat playing on the consoles or just bumming around – I’m sure catching up on sleep will help too.

I’m really enjoying the learning that I can get from just an hour of reading through stackoverflow or the tweets of the people I follow; key over the next week is focussing this and getting some more technical contacts to learn from – there are some cracking recent ones including @spolsky, @elijahmanor, and @scottgu – and just getting time to read all about it will be bliss.

I finally asked my first question on stackoverflow the other week and thankfully it was well received.  Polymorphism in c# has always been a TIMTOWTDI concept for me, and although I’ve not had significant need for it in the past, I’ve always liked the flexibility interfaces give in implementation.  Thankfully the answers seemed to back that up, and gave some good concrete examples.  It’s so nice to know there is a community around like this that will happily offer support/views, and that can ultimately lead to a ‘best practice’ guide on issues.  Of course, we’ll ignore the ‘answering without reading the question’ and all the other minor issues SO has – on the whole, a cracking resource.

So, fingers crossed this will again be the start of regular posting – this next week is working through the MCTS training kit and just learning more, doing more – key areas I have to focus on really are build management (NANT), WPF (for MCTS and curiosity more than anything), Design Patterns (I’ve only really used the Facade in anger, but there is so much discussion at present about IoC and Dependency Injection that I have to have a read).

I want to be playing with Linq (the elements I’ve played with so far highlight how seriously powerful it is if used carefully) and other more ‘hands on’ elements, though I think getting that grounding right is first and foremost.

More to follow during this weeks ‘holiday’ :)

NamingContainer, where have you been all my life!

Well, it’s been where it’s always been – in the .net framework, all the way back to version 1.0 apparently!  I’m sure I must have written some awful code in the past to get around the fact that I didn’t know about this, and I really must spend more time getting down into the details of the framework, if only to find more nuggets like this.

Imagine you have a ListView and want to allow updates on each item, with perhaps a text field etc. in there, and you click a button inside that ListViewDataItem.  NamingContainer for the button will return that ListViewDataItem, so you can call FindControl just within that container… ace!

protected void btnWishList_Click(object sender, EventArgs e)
{
    Button _button = ((Button)sender);
    // NamingContainer here is the ListViewDataItem the button sits in,
    // so FindControl only searches within that row
    RadTextBox tb = (RadTextBox)_button.NamingContainer.FindControl("txtNotes");
    // do other processing
}

the above is formatting awfully at the moment, looking for a nice wordpress syntax plugin.

A wee step backward…

Well, maybe not.  Over the weekend I’ve been reviewing my approach to learning, and after reading considerably, I think Entity Framework isn’t the right path for me at the moment.  So many people are reporting issues with it, and the Julie Lerman book highlights a number of hoops you need to jump through on complex datasets that simply shouldn’t be there.

So it’s back to basics.  Linq, and then by extension Linq to Datasets and Linq to SQL (and eventually, when it matures, Linq to Entities) is the path I’ve chosen.  Linq to Datasets may well be considered legacy, though suite-e uses datasets in a few places to retrieve large sets of data (converting stored proc results to objects for 45,000 rows proved too time costly), and I think it’ll be handy to start at this grass-roots level.

I’m convinced by ORMs, and ideally would like to proceed with Entity Framework, but at least the above will give me a solid grounding that I can then proceed through to something like EF with limited pain.  VS2010/C#4.0 apparently will have some updates, so I’ll keep monitoring.

On a separate note, I’m getting heavily addicted to stackoverflow – I post where I can, but as a resource for learning it’s fantastic.  It still has a number of weaknesses (people not answering the question but posting fast to try to get points, downvotes for stupid reasons, etc.) but on the whole, it’s a cracking resource.

This month will see me probably post very little, we have 4 large client sites up for launch and 2 minor client updates, so I suspect the hours will be long and arduous, but we soldier on :)

Not a wasted weekend – Telerik & Machine Setup

Well, it’s been a productive weekend, though not on the MCTS front unfortunately.  Our core software package at work, suite-e, has for a while now had an out-of-date editor – it was a doddle to upgrade all of the other components we use from Telerik, but we’ve always avoided upgrading the editor as we use so many custom dialogs that were reliant on the old scripts from the editor we were using (the DLL is roughly 18 months old).

This weekend has seen me perform that upgrade – it’s been something that has bugged me for a while now, as it’s the last legacy component we have in there really, and it forms such a major part of the system (content management being what the whole thing is about!)

I’ve finally gotten my dev environment how I want it now at home too, installed all the utilities that make life easier when working, and have started the meandering path I plan to take on MCTS, starting with class design, polymorphism, and interfaces just to get my head back around these before I crack on.

I’ve finally gotten myself up and running with stackoverflow, and need to allocate some time each day to read and try to contribute where I’m able.  I’ll be updating my delicious bookmarks over the coming weeks too – I used to use Spurl, but I’m really liking the interface on delicious.

Where do we go from here…

Most people start their blogs with a ‘hello world, expect more from me’, so apologies for the war and peace tome that you’re about to suffer if you’re reading this!

Well, we find ourselves at a somewhat melancholic crossroads.  I was contacted by a recruitment agent from Nigel Wright earlier in the week suggesting that they had 2 jobs they thought I’d be perfect for – and they based that decision on a CV that was nearly 4 years old.

Initially, excitement kicked in and I had no idea where it came from – what was wrong with my current job?  It got me thinking and reading a bit.  I’ve been here for a little over 3 years now and have been a very active part of the team: I’ve architected and built our in-house CMS/E-Commerce/CRM solution that we can then use to build client websites (there’s another post in there somewhere as to why we didn’t go with an off-the-shelf solution!), I’ve gotten the organisation Microsoft Partner status (something I’d recommend any organisation do if they’re developing for the MS platform, btw), and I’ve brought in a fair few clients – everything is peachy, right?

Well, no.  What about me?  Selfish to say, but where is my career going in all of this, what am I doing, and how am I doing it?  I’ve been a web developer now for about 15 years in various guises, starting out with the good old perl/cgi/mysql combo, moving over to classic ASP, and then over the past 5-6 years focussing on c#/asp.net. I’ve done some very fulfilling work, and some that just ticks over, like any developer I guess. Since my computing degree I’ve always felt I’ve been at the softer side of development, never really delving, never getting too heavily into the software architecture simply for the sake of it.  I’ve hit a point where I can continue to do as I am, delivering quality code for sure, but really not being truly happy with it because I want that more thorough approach, or I can do something about it.

I’ve led development teams, I’ve implemented n-tier solutions using the facade pattern, I’ve done huge projects with tens of thousands of (more often than not necessary) lines of code, I’ve implemented rich SQL server schema, I’ve implemented interfaces, I’ve serialised, I’ve consumed, I’ve done an awful lot.

What’s Changed?

It was a chance tweet yesterday by Mike Taulty where he mentioned Rob the Geek, and you just know, anyone with a name like that, I have to check them out.  I got to Rob’s blog and in particular a post where he was lamenting some of the same issues that are facing me… I want to know more, I want to do more, I want to understand more – software development without learning just isn’t enough for me.

So what do I want?

I think first and foremost, I want to get deeper into the framework – there are any number of simple things that I just want to understand better: reflection, generics (not just List<Object>), lambdas… goodness, the list goes on – all fundamentals, and I know that if I sat for a short period on any of them I’d get them, and it’d stick – I’m certainly not an unintelligent guy (well…)

Key for me, really, is the MCTS – I think the framework exam will lead me into a nice and thorough understanding of the framework, so that I can then proceed to the ASP.NET 3.5 Application Development exam.

After that, my main thirst is to work with an ORM, and in particular Entity Framework.  I’ve read and understand the reasons some people are waiting till 2.0 to roll it out into production, but speaking as someone who’s written DAL/object layers for 4 years, that’s getting very old very fast, and I want to try something else that lets me focus on the good bits and worry less about the data.  After that (or probably in parallel), MVC looks pretty cool – again, just as potential starting points.

How do I go about all this then?

There are an awful lot of things I waste time on each week – online gaming, getting to work at 7:15am because I’m awake and I’ve got stuff to do, leaving at 7pm because I’m still awake and I’ve still got stuff to do… it all adds up.  I suspect I can allocate at least 25 hours a week to personal study (thankfully my wife is very supportive of me doing this, and the kids will, I think, reap the benefits of the happiness I get from it).

So that’s where we are.  And this blog, at least in the first instance, is intended to be a record of my path through all of the above – I’ll post about everything, even when I’m learning things that seem ridiculously simple.  I think keeping this record will help motivate me and keep me focussed – I don’t particularly care if anyone ever reads it; I’ve found even just this first post to be incredibly cathartic.

Expect regular updates, as if I don’t it means I’m not doing what I said I would :)