This blog contains reflections and thoughts on my work as a software engineer

Monday, December 14, 2009

Optimizing website performance: Using ASP.NET Webcontrols to combine your CSS and JS files

So – this month we’ve been going through a performance optimization based on the suggestions provided by YSlow. There is no absolute truth in performance, so YSlow’s suggestions should be put in context, but they are great for inspiration and many of them make a lot of sense.


One of the things we’ve pinpointed was the massive number of files needed to load our front page (50+), so we are working on sprites as a replacement for single images, and I’ve been working on a solution to combine our CSS and Javascript files into one single file. The number of physical files you need to download to view a webpage matters because requests get queued: the browser only performs two downloads in parallel, and every other request sits waiting in the queue, so the more requests you make the slower your website is going to be.

We needed a solution and my colleague proposed an ASP.NET Webcontrol which would act as a placeholder for our styles. After two days of work I came up with a control for CSS and Javascript files which can:

  • Combine any number of CSS/Javascript files into one combined file
  • Output either the combined file or the single files (debug mode)
  • Remove whitespace if needed
The control ended up like the following snippet:
   <cc1:StaticFileCollection runat="server" ID="cssCollection" StaticFileType="CSS" Outputfile="/css/FrontpageAspxCssCollection.css" TrimWhiteSpace="true">      
    <cc1:StaticFile ID="StaticFile1" runat="server" Url="/script/ext-2.0/resources/css/form.css"  />
    <cc1:StaticFile ID="StaticFile2" runat="server" Url="/script/ext-2.0/resources/css/combo.css"  />
    <cc1:StaticFile ID="StaticFile3" runat="server" Url="/css/global.css"  />
    <cc1:StaticFile ID="StaticFile4" runat="server" Url="/css/article.css"  />
    <cc1:StaticFile ID="StaticFile5" runat="server" Url="/css/boxes.css"  />
    <cc1:StaticFile ID="StaticFile6" runat="server" Url="/css/ext-overrides.css"  />       
   </cc1:StaticFileCollection>

What happens on PreRender is that every StaticFile is opened and its content placed in a StringBuilder. The result is written to the file “/css/FrontpageAspxCssCollection.css” if the size has changed (meaning one of the source files has been altered, e.g. during development). A reference to /css/FrontpageAspxCssCollection.css is written in a Literal control. Plain and simple – no magic attached. So if you place the StaticFileCollection above in the <head> section of your webpage, what you get back is this:

<link href="/css/FrontpageAspxCssCollection.css" rel="stylesheet" type="text/css">

…where FrontpageAspxCssCollection.css is all your StaticFiles combined into one single, physical file.

The code works for both Javascript and CSS files. Enjoy   :o)

    [ToolboxData("<{0}:StaticFile runat=\"server\"></{0}:StaticFile>")]
    public class StaticFile : WebControl
    {
        public string Url { get; set; }

        //A StaticFile never renders any output itself
        public override bool Visible
        {
            get { return false; }
            set { base.Visible = value; }
        }
    }

    [ToolboxData("<{0}:StaticFileCollection runat=\"server\"></{0}:StaticFileCollection>")]
    public class StaticFileCollection : PlaceHolder
    {
        private string _staticFileType;

        public string StaticFileType
        {
            get { return _staticFileType.ToLower(); }
            set { _staticFileType = value; }
        }

        public bool TrimWhiteSpace { get; set; }
        public bool Debug { get; set; }
        public string Outputfile { get; set; }

        protected override void OnPreRender(EventArgs e)
        {
            if (!CheckInput())
                return;

            if (Debug)
                DoNotRenderCombinedFile();
            else
                RenderCombinedFile();

            base.OnPreRender(e);
        }

        /// <summary>
        /// Render a reference to every file separately (debug mode)
        /// </summary>
        private void DoNotRenderCombinedFile()
        {
            var controls = new List<Control>();
            foreach (var control in Controls)
            {
                var c = control as StaticFile;
                if (c != null)
                    controls.Add(new LiteralControl(GetScriptReference(c.Url)));
            }
            controls.ForEach(x => Controls.Add(x));
        }

        /// <summary>
        /// Render all files combined into Outputfile
        /// </summary>
        private void RenderCombinedFile()
        {
            var combinedFileString = CollectFileContent();
            string outputFile = HttpContext.Current.Server.MapPath("/" + Outputfile);

            //Get the content of the current output file
            string currentContent = string.Empty;
            var currentFile = new FileInfo(outputFile);
            if (currentFile.Exists)
            {
                using (var sr = currentFile.OpenText())
                {
                    currentContent = sr.ReadToEnd();
                }
            }

            if (TrimWhiteSpace)
            {
                var r = new Regex("\\s+", RegexOptions.Multiline);
                combinedFileString = r.Replace(combinedFileString, " ");
            }

            //Only create a new file if the content has changed to maintain the
            //timestamp (avoids a download to the client on every hit)
            if (currentContent.Length != combinedFileString.Length)
            {
                using (var sw = new StreamWriter(outputFile))
                {
                    sw.Write(combinedFileString);
                }
            }

            Controls.Add(new LiteralControl(GetScriptReference(Outputfile)));
        }

        private string CollectFileContent()
        {
            var builder = new StringBuilder();
            foreach (var control in Controls)
            {
                FileInfo fi = GetFileInfoObject(control);
                using (var content = fi.OpenText())
                {
                    builder.Append(string.Format("/*** {0} start ***/", fi.Name));
                    builder.Append(content.ReadToEnd());
                    builder.Append(string.Format("/*** {0} end ***/", fi.Name));
                }
            }
            return builder.ToString();
        }

        private FileInfo GetFileInfoObject(object control)
        {
            var c = control as StaticFile;
            if (c == null)
                throw new ArgumentException("Only StaticFile controls can be children of a StaticFileCollection");

            var fi = new FileInfo(HttpContext.Current.Server.MapPath(c.Url));
            if (!fi.Exists)
                throw new ArgumentException(c.Url + " does not exist!");
            if (fi.Extension.ToLower() != "." + StaticFileType)
                throw new ArgumentException(string.Format("{0} is not of type {1}", c.Url, StaticFileType));
            return fi;
        }

        private string GetScriptReference(string url)
        {
            //Not so nice... But a strategy pattern impl. is way overkill
            string str = string.Format("<link type=\"text/css\" rel=\"stylesheet\" href=\"{0}\" />", url);
            if (StaticFileType.Equals("js"))
                str = string.Format("<script type=\"text/javascript\" src=\"{0}\"></script>", url);
            return str;
        }

        private bool CheckInput()
        {
            if (string.IsNullOrEmpty(Outputfile))
                throw new ArgumentNullException("Outputfile", "Outputfile must be set on a StaticFileCollection");
            if (StaticFileType != "js" && StaticFileType != "css")
                throw new ArgumentException("StaticFileType should be either JS or CSS");
            return true;
        }
    }

Tuesday, November 10, 2009

DDoBA is the new DDoS

I was in Copenhagen for the weekend and on Friday I attended an informal introduction to Windows Azure held by Microsoft in Hellerup. I was also lucky enough to be entertained by Scott Hanselman for almost two hours on Saturday, but that’s another story for another blogpost. My team and I were mostly interested in getting the big lines drawn on this Azure thing. It’s a cloud-based service something and what else is new… That’s basically the mindset I went into the meeting with. It was a nice introduction held by Architect Evangelist Rene Loehde and it was packed with information.

What I noted during the session was a new term coined by Rene: the “Denial of Business Attack”. We all know the classical DDoS attack paradigm: somebody flooding a website with requests can potentially make the servers go down and make the entire website disappear from the face of the Internet. It’s been around for years – nothing new here. But now you have to consider a completely different scenario in the years to come – follow me through this business scenario:

If your business is based on having a load of X 99% of the time and maybe 10 times X during peaks (e.g. a ticketing office selling tickets to U2 or something similar) you might be interested in a cloud-based service which scales infinitely based on your current load. Sounds nice – the theory being that you only pay for the users consuming resources and nothing else.

What is interesting is that you’re not safe at all from being DDoS attacked – now the threat isn’t having your website disappear from the Internet but the exact opposite: having your site available at all times allows persons in not-so-good faith to bombard your website, making you pay dearly in consumed resources. You agreed to pay for consumed resources in your contract with the cloud service host, so they’ll want their money for the millions of requests for sure… That’s an interesting paradigm shift, I think. If you’re not hosting your website in the cloud, you’re in danger of potential customers not being able to access your webshop. If you’re hosting your website in the cloud, you’re in danger of having millions of visitors consuming resources but without any sort of guarantee that you’ll earn any money to pay for the resources consumed… It just never stops, does it?  :o)

I find it a little amusing and proof of the fact that no matter what we do and how we do things on the Internet, there’s always a million ways for evil people to ruin it for everybody else. Any comments on that?

Wednesday, October 14, 2009

Definition of software quality #3

I have from time to time struggled hard to define the term "software quality". I have written about the subject before

...and on and off it has come back to us at work - we've discussed the subject for hours without reaching any conclusions we could all back up 100%. The discussions mainly arose whenever we had delivered incomplete features which proved to be - less than adequately tested...  :o)   At my current job we have no testers employed and our QA process is driven by developers, so we do our best and take responsibility for our actions, knowing that testing our own code is an antipattern. I had a small breakthrough last week - I think I've managed to figure out a metric for our business which is actually watertight - I thought it up during a meeting and asked the others for feedback. They bought it immediately and we've started gathering data for the first time... What are we measuring? First a little background story:

At work we handle a lot of asynchronous processing of online enrollments. An enrollment made by a user who wants to attend an arrangement or a course (summer camps, soccer schools, gymnastics every Thursday throughout the winter etc.) ends up in our backend system as a "job" which will be processed asynchronously. A job could be an enrollment, but it could also be propagating an address change to various databases and 3rd party systems. Another job could be sending emails to everybody attending a summer camp. We have a lot of business processes ending up as different kinds of jobs in our database.

Processing jobs successfully is highly critical for our business to function, so if a job fails the business needs to attend to it and fix the error. A likely cause of an error could be that a 3rd party web service didn't respond in a timely fashion, or that our support staff has blocked someone from attending more courses because the person in question didn't pay for the last course. That person is by design still able to make another enrollment, but the job created will fail because there is a business rule somewhere blocking further money transactions on that particular person. Enter a developer who makes the necessary phone calls to clarify whether the person should be unblocked or not. The developer can then restart the job if our support staff unblocks that person in our backend systems.

Why do I tell you all this? Because I figured out during a meeting last week that we could easily get a clear indication of our software quality by measuring the number of failed jobs during a period of time. We are interested in measuring the quality of code. What code? That really was the question... And how do we measure whether that code is OK? You can get some of the way by using tools to analyze code and by reviewing the codebase, but I have realized that the main interest of the business is to know that whenever they use our backend as part of their business process, everything "works". How do we know that it doesn't? If a job fails... The only people who care about TDD, IoC, mocking frameworks, Cyclomatic Complexity and code coverage are developers - the business just wants to know that when they click "Send Email" that email was actually sent to the recipients. The quality of the code itself really comes second as long as your users are able to do their job.

I had two other developers and my boss think about it for a few seconds and they bought it instantly. As of last week we've started saving an entry in a database table whenever a job fails us, so we can track over time how many failed jobs we have under the current load. Then we also know which parts of the codebase and which business processes need improvement if the same kind of job fails repeatedly under the same conditions. Sweeeeet........
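The mechanics are simple enough that a few lines of code capture the idea. This is only a sketch of the concept - the class and property names here are my own invention for illustration, not our actual schema:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical record of a failed job (illustrative names, not the real table)
public class FailedJobEntry
{
    public string JobType { get; set; }
    public DateTime FailedAt { get; set; }
    public string Reason { get; set; }
}

public class JobQualityMetrics
{
    // In the real system this would be a database table; a list keeps the sketch simple
    private readonly List<FailedJobEntry> _entries = new List<FailedJobEntry>();

    // Called by the job processor whenever a job fails
    public void RecordFailure(string jobType, string reason)
    {
        _entries.Add(new FailedJobEntry
        {
            JobType = jobType,
            FailedAt = DateTime.UtcNow,
            Reason = reason
        });
    }

    // The metric the business cares about: failed jobs per job type over a period,
    // which points at the business processes that need improvement
    public Dictionary<string, int> FailuresByType(DateTime from, DateTime to)
    {
        return _entries
            .Where(e => e.FailedAt >= from && e.FailedAt <= to)
            .GroupBy(e => e.JobType)
            .ToDictionary(g => g.Key, g => g.Count());
    }
}
```

A recurring spike for one job type under the same conditions is then a direct, business-readable signal of where the codebase is failing its users.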

If you (as a developer) are in doubt about the quality of your codebase ask yourself these questions:

  • Can you in any way, as the software developer with insight, measure when the software fails a user?
  • Can you in any way, as the software developer with insight, measure when the software fails a user - even when the user doesn't know that an error occurred?
  • Can you as a developer fix any problem in the codebase knowing all side effects of your changes?
  • Can you easily test and deploy a bugfix/patch to the production environment?

There are many ways to measure code quality, but start focusing on the user experience and the shortcuts users take because the software doesn't work as expected. Software quality should be measured by:

  1. Happy users being able to do what your software promises them they can do.
  2. Happy programmers feeling comfortable with the current state of the codebase

Until next time...

Saturday, October 10, 2009

Tracking down an IIS deadlock with WinDbg

We’ve had a few performance issues lately on the website I’m working on – while digging through code we’ve discovered a lot of technical debt which we’ve started paying more attention to. We also discovered that the IIS on our farm servers went down on a daily basis – approximately once or twice per day the IIS on each server would recycle the application pool for reasons unknown to us. There was an entry in the Event log telling us that the IIS had deadlocked and hence a reset was issued. Our monitoring didn't work as expected, so we never got anything but random users once or twice a month complaining that "The site is slow sometimes"...

We developed quite a lot of theories along the way, but based on gut feeling and what we could see from log files etc. we suspected either thread pool starvation, because of the way we made some synchronous long-running web service calls to e.g. clear the server’s CMS cache, or an infinite loop spinning up and eating all available resources before the IIS decided that it needed to recycle the application pool to come back to life. We had little else apart from a Performance Monitor telling us a few minutes in advance that a server was going down due to a massive amount of SQL user connections suddenly craving all resources.

Enter WinDbg – the coolest and geekiest debugging tool I have ever worked with. Until last week I had no knowledge of debugging using WinDbg apart from reading a few blogposts from time to time written by people who have made debugging their niche in the development community. I decided to take a dive into it because we needed some facts on the table instead of just creating theories. First of all I knew we had to collect some sort of dump of the process when it was in an unhealthy condition, so we started to collect memory dumps using ADPlus, and after a few hours of waiting the IIS on one server went down and generated a large 500+ MB dump named [something]_Process_Shutdown_[timestamps-something].dmp. I spent the next three days deep inside the dump, scratching my head and reading lots and lots of blogposts found on Google, peeking and poking and trying out commands which didn’t output anything useful. I wrote it off at first because I suspected my own missing competence for the lack of findings, until I came across a blogpost about false positive dumps… which was exactly what I had. A false positive means that the dump was generated when the process was in a different state than expected – something you can only find out by debugging the content of the dump. What I had wasn’t a dump from a deadlocked process, oh no – I had a dump from the process right after the IIS had decided to recycle the application pool, so every thread except the Finalizer had no ID, which means it had been shut down nicely and was basically just waiting for the garbage collector to put an end to its misery. That was also why there were no callstacks on the threads and nothing useful in the heap… Eureka!!! Nothing but a lot of useless bytes in a big file, basically.

A lot wiser but still with no data I started looking for a way to collect a dump while the IIS was in the deadlocked state. I managed to locate an MSDN article which described how to set the metadata property OrphanWorkerProcess on the IIS to “true”, so that instead of the IIS automatically recycling the application pool I could kick off a script invoking WinDbg through the command line, which could write a dump of the process to disk for later analysis. It took a while, but after a bit of trial and error I managed to get the script working. The script worked fine apart from the IIS not calling filename.cmd with a parameter whenever the worker process was orphaned, so whenever the IIS executed the script, the script invoking the debugger failed because there was no process ID named %1… It took only another two days to figure out what was going on, thank you very much Microsoft. MSDN is an invaluable resource for me, but I didn't have anything nice to say about the author of the article for about 10-12 minutes   :o)

As of tonight I managed to get a positive-positive dump, and it took only 15 minutes to figure out exactly what was going on. The last week’s trial and error, peeking and poking did prove to be a good investment – here’s what I did:

I downloaded the dump generated by WinDbg to my local machine and fired up WinDbg. I had earlier downloaded the .NET Framework version from the farm server, because even though you have version v2.0.50727 installed on your local machine there is a chance that it isn’t the exact same revision as the one installed on your server. In my case my local workstation’s copy of aspnet_isapi.dll is v2.0.50727.4016 while the server has v2.0.50727.3053. I figure it has something to do with the fact that I’m running Vista while the farm server is a Windows 2003. It hasn’t got any effect on your daily life, but it is essential while debugging a crash dump that you have the *exact* same assemblies and symbol files as the machine where the dump was generated – otherwise you might not be able to extract information out of the dump or, even worse, you might get corrupted or incorrect data which could very easily lead you in a completely wrong direction. So I zipped and downloaded the .NET Framework from the farm server to my local machine and used the command “.exepath+ [path]” to search for matching assemblies in [path] if no matches in the default locations were found.

I loaded the dump into WinDbg and thought back on our initial suspects. Our first suspicion was the infinite loop because it was the strongest theory we had. Debugging threading issues such as two threads deadlocking each other would also be harder, while an infinite loop would without a doubt reveal itself in a very long stacktrace on one of the active worker threads - given that I didn't have a false positive crash dump, that is(!). So I used !threads to get a list of threads – here’s the output:

0:028> !threads 
ThreadCount: 52
UnstartedThread: 0
BackgroundThread: 39
PendingThread: 0
DeadThread: 6
Hosted Runtime: no
PreEmptive GC Alloc Lock
ID OSID ThreadOBJ State GC Context Domain Count APT Exception
13 1 52c 000ebc28 1808220 Enabled 00000000:00000000 000db760 0 MTA (Threadpool Worker)
15 2 1008 000f7ce0 b220 Enabled 00000000:00000000 000db760 0 MTA (Finalizer)
16 3 164c 00110208 80a220 Enabled 00000000:00000000 000db760 0 MTA (Threadpool Completion Port)
17 4 11c0 00113840 1220 Enabled 00000000:00000000 000db760 0 Ukn
11 5 314 0012f818 880a220 Enabled 00000000:00000000 000db760 0 MTA (Threadpool Completion Port)
18 6 12a0 0015ff88 180b220 Enabled 00000000:00000000 000db760 0 MTA (Threadpool Worker)
19 7 1760 0015a5c8 200b020 Enabled 00000000:00000000 001147a0 0 MTA
20 8 11fc 00164c40 200b020 Enabled 00000000:00000000 001147a0 0 MTA
21 9 f80 00158dc8 200b020 Enabled 00000000:00000000 001147a0 0 MTA
22 a 14dc 001597a8 200b020 Enabled 00000000:00000000 001147a0 0 MTA
23 b 170 0012a638 200b020 Enabled 00000000:00000000 001147a0 0 MTA
24 c 152c 00170c58 200b020 Enabled 00000000:00000000 001147a0 0 MTA
25 d 1668 06b33c10 200b020 Enabled 00000000:00000000 001147a0 0 MTA
27 e e70 06b6fb40 200b220 Enabled 00000000:00000000 001147a0 0 MTA
28 f 5e4 06b5e420 180b220 Enabled 00000000:00000000 001147a0 1 MTA (Threadpool Worker) System.Data.SqlClient.SqlException (1ee19b2c) (nested exceptions)
29 10 1470 06b6a650 200b220 Enabled 00000000:00000000 001147a0 0 MTA
30 11 fa4 06b7dbe8 180b220 Enabled 00000000:00000000 000db760 0 MTA (Threadpool Worker)
10 12 148c 06b80e60 220 Enabled 00000000:00000000 000db760 0 Ukn
7 14 1640 06b8e1b0 220 Enabled 00000000:00000000 000db760 0 Ukn
37 16 fd4 08f340d8 180b220 Enabled 00000000:00000000 000db760 0 MTA (Threadpool Worker)
39 18 8bc 09097448 200b220 Enabled 00000000:00000000 001147a0 0 MTA
40 17 104c 090532b8 200b220 Enabled 00000000:00000000 001147a0 0 MTA
41 19 1238 090967e0 200b220 Enabled 00000000:00000000 001147a0 0 MTA
42 1a 13fc 090a8bd8 200b220 Enabled 00000000:00000000 001147a0 0 MTA
43 1b 1408 09055108 200b220 Enabled 00000000:00000000 001147a0 0 MTA
44 1c 1580 090559c0 200b220 Enabled 00000000:00000000 001147a0 0 MTA
45 1d 184 09056278 200b220 Enabled 00000000:00000000 001147a0 0 MTA
46 1e 13dc 09056b30 200b220 Enabled 00000000:00000000 001147a0 0 MTA
47 1f 12e8 090573e8 200b220 Enabled 00000000:00000000 001147a0 0 MTA
48 20 1458 06c04298 200b220 Enabled 00000000:00000000 001147a0 0 MTA
49 21 93c 06c04878 200b220 Enabled 00000000:00000000 001147a0 0 MTA
50 22 d5c 06c052a8 200b220 Enabled 00000000:00000000 001147a0 0 MTA
51 23 15ec 06c05cd8 200b220 Enabled 00000000:00000000 001147a0 0 MTA
52 24 29c 06c06738 200b220 Enabled 00000000:00000000 001147a0 0 MTA
53 25 1098 06c07198 200b220 Enabled 00000000:00000000 001147a0 0 MTA
54 26 13cc 06c07bf8 200b220 Enabled 00000000:00000000 001147a0 0 MTA
55 27 12f0 06c08658 200b220 Enabled 00000000:00000000 001147a0 0 MTA
56 28 ce4 06c090b8 200b220 Enabled 00000000:00000000 001147a0 0 MTA
57 29 e60 06c09b00 200b220 Enabled 00000000:00000000 001147a0 0 MTA
58 2a d98 06c0a4e0 200b220 Enabled 00000000:00000000 001147a0 0 MTA
6 15 1060 090a6980 220 Enabled 00000000:00000000 000db760 0 Ukn
4 2b 590 090ddae8 220 Enabled 00000000:00000000 000db760 0 Ukn
3 13 cd4 1712e4f8 220 Enabled 00000000:00000000 000db760 0 Ukn
XXXX 2c 0 17133d50 1801820 Enabled 00000000:00000000 000db760 0 Ukn (Threadpool Worker)
XXXX 2d 0 17104970 1801820 Enabled 00000000:00000000 000db760 0 Ukn (Threadpool Worker)
XXXX 2e 0 090ac858 1801820 Enabled 00000000:00000000 000db760 0 Ukn (Threadpool Worker)
69 2f 99c 090ad248 180b220 Enabled 00000000:00000000 000db760 0 MTA (Threadpool Worker)
XXXX 30 0 090b5dd0 1801820 Enabled 00000000:00000000 000db760 0 Ukn (Threadpool Worker)
70 31 fcc 090b70e8 180b220 Enabled 00000000:00000000 000db760 0 MTA (Threadpool Worker)
XXXX 32 0 090b83e0 1801820 Enabled 00000000:00000000 000db760 0 Ukn (Threadpool Worker)
XXXX 33 0 090bb380 1801820 Enabled 00000000:00000000 000db760 0 Ukn (Threadpool Worker)
75 35 1134 1718fb70 200b220 Enabled 00000000:00000000 001147a0 1 MTA

There was an immediate suspect – thread #28 appeared to have a nested SqlException in it… So I switched to the thread by executing the command “~28s”, and now I could get a callstack for the thread by using “!clrstack” with the following output:

…074fe5d4 0fc5c6f5 dgiCore.Log.ExceptionLogPublisher.Publish(System.Exception, System.Collections.Specialized.NameValueCollection, System.Collections.Specialized.NameValueCollection) 
074fe920 0fc5c6f5 dgiCore.Log.ExceptionLogPublisher.Publish(System.Exception, System.Collections.Specialized.NameValueCollection, System.Collections.Specialized.NameValueCollection)
074fec6c 0fc5c6f5 dgiCore.Log.ExceptionLogPublisher.Publish(System.Exception, System.Collections.Specialized.NameValueCollection, System.Collections.Specialized.NameValueCollection)
074fee20 0f97d9e2 DGI.Web.layouts.DGI.Loginbox.btnLogin_Click(System.Object, System.EventArgs)
074ff09c 01e1adf5 ASP.layouts_main_aspx.ProcessRequest(System.Web.HttpContext)
074ff180 66083e1c System.Web.HttpRuntime.ProcessRequestInternal(System.Web.HttpWorkerRequest)
074ff1b4 66083ac3 System.Web.HttpRuntime.ProcessRequestNoDemand(System.Web.HttpWorkerRequest)
074ff1c4 66082c5c System.Web.Hosting.ISAPIRuntime.ProcessRequest(IntPtr, Int32)
074ff3d8 79f68cde [ContextTransitionFrame: 074ff3d8]
074ff40c 79f68cde [GCFrame: 074ff40c]
074ff568 79f68cde [ComMethodFrame: 074ff568]

I was quite surprised to find out that a click on the Submit button on our login page proved to be the cause of an infinite loop… What parameters were the methods being called with? Enter the command “!clrstack -p”, and somewhere along the massive output of nested exceptions I found the call invoked by the user clicking the “Login” button:

074fed68 0d6a755b dgiCore.Security.Authenticate(System.String, System.String, System.String, dgiCore.Log.Exceptions.Fejl ByRef) 
username =
password =
applicationName =
fejl = 0x074fee24

Empty username and password? Normally there would be a reference to an address in memory… Just for the fun of it I tried to issue a login on the website under suspicion with an empty username and password, and what do you know? The website hung until the IIS decided to recycle the application pool and come back to life with a new process ID... Imagine my surprise, sitting late in the evening and just watching every little piece of the puzzle fall into place while the IIS on the server choked harder and harder before its inevitable death and immediate resurrection. It was in that exact split second I fell in deep, unconditional love with WinDbg.
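The actual fix isn't the point here, but conceptually the first line of defense is a server-side guard clause so that empty credentials never reach the authentication code path at all. A minimal sketch (the class and method names are mine, not the actual code):

```csharp
using System;

public static class LoginValidation
{
    // Hypothetical server-side guard: validate the input before calling the
    // authentication backend, so empty credentials can never trigger the
    // code path that looped. Client-side Javascript validation alone is not
    // enough, since requests can bypass the browser entirely.
    public static bool IsValidInput(string username, string password)
    {
        return !string.IsNullOrEmpty(username) && !string.IsNullOrEmpty(password);
    }
}
```

The login handler would simply return an error message when `IsValidInput` is false, instead of calling `Authenticate` with empty strings.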

I won’t go into detail about the error itself and how it was fixed (I won’t say too much, but part of it might include a bit of Javascript form validation), but I have had a major success using WinDbg (and Google) to track down and fix an error which by default only revealed itself through an entry in the server’s Event log. WinDbg is indeed a very useful tool in a developer’s toolbox, and now that I’ve got some experience with it I can only imagine how many hours I’ve wasted earlier trying to come up with grand unified theories about the behaviour of a piece of code where I would now just attach a debugger / create a dump and check for myself what’s actually going on. It took me four to five days effectively to get to know the mindset and commands necessary to make it useful, but boy... They will come back a hundred times over the next three decades of programming. Try it out for yourself  :o)

Wednesday, October 7, 2009

Stop whining!

This is a little out of the ordinary but what the heck... There has to be room for a little fun from time to time

You know Arnold Schwarzenegger? Of course you do... He has become famous for quotes like "I'll be back", "Aaaaargh" and various other sentences which primarily excel in containing words of no more than two syllables. There's one quote in particular I like (and so do my fellow coworkers). It is "Stop whining" from Kindergarten Cop - when spoken McBain style, nobody on our team can continue whining over bad code, cold coffee, stupid typos etc. but inevitably starts smiling, because whining is such an easy trap to fall into when the right thing to do is just to get the job done and get going.

How does this relate to coding? Well, tonight I wondered how to play media files in C#... I had never actually tried coding audio stuff in C#, so I figured that I wanted to find a WAV file of the infamous Arnold shouting "Stop whining"... And so I did, and it turned out to be a very small deal getting it played. It was actually two lines of code:

var myPlayer = new System.Media.SoundPlayer("myfile.wav");
myPlayer.Play();

Then I figured that I wanted to play it using a keyboard stroke - to make some kind of Arnold soundboard just using keyboard shortcuts - how would that be possible? Capturing keyboard strokes isn't hard at all - if your program is in focus - but I wanted a keyboard hook which would catch my keyboard strokes globally... I knew it had to be done by invoking unmanaged code, which isn't something I do on an everyday basis... so instead of doing all the WinAPI stuff myself (yes, I am a lazy bastard) I dug up this article on CodeProject which I could make use of. It basically encapsulates the WinAPI goodies required to globally listen to whatever the user is doing with his keyboard. The hardest part was actually getting raw WAV files with Arnold quotes - I managed to find some, but I had to save them in a different format as the .NET Sound API only allows PCM files. Enter Audacity, a load and a save later on each file and I was good to go. I needed to research a little to get the Windows Form to hide itself as a tray icon (yet another thing I'd never tried before) but it was also a no-brainer.

So as of tonight I have whipped up a small application which can store itself in the tray and listen to keyboard events. If you press S, T, O and P, Arnold will in his informal voice speak out "Stop whining!". Try the following for yourself: LACK and COP... The implementation is here (I won't cover the GlobalKeyboardHook as it is covered in the CodeProject article):

    public partial class ArnoldForm : Form
    {
        private GlobalKeyboardHook _gkh;
        private List<Keys> _pressedKeys = new List<Keys>();

        public ArnoldForm()
        {
            InitializeComponent();

            _gkh = new GlobalKeyboardHook();
            _gkh.KeyDown += gkh_KeyDown;
            _gkh.KeyUp += gkh_KeyUp;
        }

        private void gkh_KeyUp(object sender, KeyEventArgs e)
        {
            if (_pressedKeys.Contains(e.KeyCode))
                _pressedKeys.Remove(e.KeyCode);
        }

        private void gkh_KeyDown(object sender, KeyEventArgs e)
        {
            if (!_pressedKeys.Contains(e.KeyCode))
                _pressedKeys.Add(e.KeyCode);

            var arnold = new Arnold();
            arnold.Speak(_pressedKeys);
        }

        private void Form1_Resize(object sender, EventArgs e)
        {
            if (FormWindowState.Minimized == WindowState)
                Hide();
        }

        private void notifyIcon1_DoubleClick(object sender, EventArgs e)
        {
            Show();
            WindowState = FormWindowState.Normal;
        }
    }

    public class Arnold
    {
        public void Speak(List<Keys> keys)
        {
            string voiceFile = GetVoiceFile(keys);

            if (!string.IsNullOrEmpty(voiceFile))
            {
                var asm = Assembly.GetExecutingAssembly();
                var file = asm.GetManifestResourceStream(voiceFile);

                var myPlayer = new System.Media.SoundPlayer(file);
                myPlayer.Play();
            }
        }

        private static string GetVoiceFile(ICollection<Keys> keys)
        {
            //The embedded resource names were omitted in the original post
            if (keys.Contains(Keys.S) &&
                keys.Contains(Keys.T) &&
                keys.Contains(Keys.O) &&
                keys.Contains(Keys.P))
                return "";

            if (keys.Contains(Keys.C) &&
                keys.Contains(Keys.O) &&
                keys.Contains(Keys.P))
                return "";

            if (keys.Contains(Keys.L) &&
                keys.Contains(Keys.A) &&
                keys.Contains(Keys.C) &&
                keys.Contains(Keys.K))
                return "";

            return null;
        }
    }

Download the entire project here

It's not exactly rocket science but I had a few hours of fun building the thing. Comments are welcome - until next time...  :o)

Sunday, September 27, 2009

Throwing valuable time after wasted time

Ayende just wrote an interesting post about the cost of maintaining fragile code.

You and I - and I'm not pretending to be any better than anybody else on the subject - should fight the urge to keep unmaintainable code, and never fear a rewrite even though it seems like throwing valuable time after wasted time. It's just so freakin' hard holding on to that set of core values, because in the short run it might cost features not being implemented... Know the feeling?

Anyway: What I really wanted to say is that I'm looking forward to hearing Ayende himself at JAOO in Aarhus, Denmark. It's not every day you get to see one of the most influential people in the Open Source community in person, so I'm quite excited about it :o)

Monday, August 3, 2009

The most overlooked risk in software engineering


Five weeks of vacation have ended – while browsing through the posts written during those weeks on Coding Horror I stumbled across a quote by David Parnas from the post Nobody Hates Software More Than Software Developers - quite a catchy title, don’t you think?

Q: What is the most often-overlooked risk in software engineering?

A: Incompetent programmers. There are estimates that the number of programmers needed in the U.S. exceeds 200,000. This is entirely misleading. It is not a quantity problem; we have a quality problem. One bad programmer can easily create two new jobs a year. Hiring more bad programmers will just increase our perceived need for them. If we had more good programmers, and could easily identify them, we would need fewer, not more.

Click here to get a transcript of the entire interview

Tuesday, June 23, 2009


I have figured out a workaround to the problem regarding CruiseControl.NET having to run in console mode… There obviously wasn’t any elegant way to handle this, so I decided to at least try and hide the console window outputting CruiseControl debugging info. It turns out you can hide a console window and keep the process running: instead of firing up CruiseControl.NET directly, I created a small console app which fires up CruiseControl.NET in console mode and then uses the Windows API to hide the console window from the screen. I was pretty amazed that I pulled it off within an hour. So I ended up with this - all credits to Brendan Grant:

class Program
{
    [DllImport("user32.dll")]
    public static extern IntPtr FindWindow(string lpClassName, string lpWindowName);

    [DllImport("user32.dll")]
    static extern bool ShowWindow(IntPtr hWnd, int nCmdShow);

    static void Main(string[] args)
    {
        var fi = new FileInfo(string.Format("{0}/CruiseControl.NET/server/ccnet.exe", Environment.GetEnvironmentVariable("PROGRAMFILES")));
        if (!File.Exists(fi.FullName))
        {
            Console.WriteLine("ccnet.exe not found in " + fi.FullName);
            return;
        }

        var p = new Process { StartInfo = new ProcessStartInfo(fi.FullName) };
        p.StartInfo.WorkingDirectory = fi.DirectoryName;
        p.Start();

        SetConsoleWindowVisibility(false, fi.FullName);
    }

    public static void SetConsoleWindowVisibility(bool visible, string title)
    {
        // below is Brendan's code
        // Sometimes System.Windows.Forms.Application.ExecutablePath works for the caption depending on the system you are running under.
        var hWnd = FindWindow(null, title);

        if (hWnd != IntPtr.Zero)
        {
            if (!visible)
                ShowWindow(hWnd, 0); // 0 = SW_HIDE
            else
                ShowWindow(hWnd, 1); // 1 = SW_SHOWNORMAL
        }
    }
}

Now I’m exactly where I was a few days ago, except I don’t have a console window on my server… I still have to remember not to log off when remoting to my buildserver, and if something crashes I haven’t got CruiseControl running in a context which writes stuff into the Event Log etc. At best this is – well, pretty bad coding style actually... but again: it’ll do for now :o)

Friday, June 19, 2009

Testing Javascript in a Continuous Integration environment

Ever since I started doing TDD I've been looking for a solution for testing UI and Javascript which would enable me to develop UI with unittest-style feedback and integrate my UI testing into a Continuous Integration setup. A few months ago we tried using QUnit at work for testing clientside scripts which rely on a 3rd party vendor (Google Maps) to provide us with data for our application. I always wanted to find time to try and mainstream our experiences a bit, because we never got much past "let's-try-this-thing-out" and spent a few hours seeing what could and could not be accomplished.

So - I decided a few days ago to sit down and spend a few evenings setting up a Continuous Integration environment which would

  • A) Run clientside tests in a Continuous Integration server
  • B) Provide TDD-stylish feedback on both NUnittests and clientside tests

Disclaimer: If you don't have experience with Continuous Integration environments you might find this to go a bit over your head - I strongly recommend this article written by Martin Fowler, and if you really want to dig deep into the subject, "Continuous Integration - Improving Software Quality and Reducing Risk" is a must-read.

How it was done

I decided to use QUnit because it is the clientside testing library used to test jQuery itself. If it's good enough for those people I guess it's as good as it gets - I didn't want to dig up some arbitrary, half-baked library when QUnit was such an obvious choice. I won't go into details about the "what" and "how" of QUnit - what it does is provide you with feedback on a webpage which exercises your tests written in Javascript - like this:

[Screenshots: QUnit result pages - a green banner (all tests pass) on the left, a red banner (failures) on the right]

So - you write a test in a script and execute it in a browser and get a list of failing and completed tests. If everything is OK the top banner is green (left). If one or more tests fail the banner is red (right). This is controlled by a CSS class called "pass" or "fail". With that in mind, and with a little knowledge of Watin, I decided to write a unittest in Watin resembling the following:

[Test]
public void ExerciseQUnitTests()
{
    using (var ie = new IE("http://(website)/UITest/QUnitTests.aspx"))
    {
        Assert.AreEqual("pass", ie.Element("banner").ClassName, "QUnittest(s) have failed");
    }
}

...where QUnitTests.aspx should be the page exercising my QUnit tests. The test itself should check for a specific class in the top banner - if "fail" would be active one or more tests would have failed and the unittest should fail and cause a red build. There is at least one obvious gotcha to this approach: You only get to know that one or more clientside-tests have failed. You don't get to know which one and why it failed. Not very TDD'yish but it'll do for now.

Here's a list of the things I needed to do once I had my Watin-test written:

  • I created a new project on my private CruiseControl.NET server which would download the sourcecode from my VS project, compile it using MSBuild and execute the unittests in my testproject. I battled for an hour or so with Watin because it needs to have ApartmentState explicitly set. You won't have any problem when running your test in Visual Studio only, but whenever you try to run the Watin test outside Visual Studio you get - well, funny errors basically.

  • I pointed the website to the build output folder in Internet Information Server

  • Then I ran into another problem - QUnit apparently didn't seem to work very well with Internet Explorer 7 (or so I thought) - the website simply didn't output any testresults on my build server. It wasn't a 404 error or so - the page was just plain blank - so I had working tests on my laptop and failing tests on my buildserver. Without thinking much about it I upgraded to Internet Explorer 8 on my buildserver. Not much of a deal so I did it - just to find out that the webpage still didn't output any results in IE 8 either. After a while of "What the f*** is going on" I started thinking again and vaguely remembered a similar problem I had about half a year ago... The problem back then was that scripting in IE7 is disabled by default on Windows 2008 - which of course was the problem here as well. So I enabled scripting and finally got my ExerciseQUnitTests test to pass. Guess what happened when I forced a rebuild: GREEN LIGHTS EVERYWHERE. Yeeeehaaaaaaaa :o)

  • Last, but not least, I found out that CruiseControl.NET from now on will have to run in console mode in order to interact with the desktop - because it needs to fire up a web browser - oh dear... I need to look into that one, because I regard having console apps running in a server environment as heavy technical debt you need to work your way around somehow. But again: it'll do for now even though I find it a little awkward.
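About that ApartmentState gotcha from the first bullet: one way to make a Watin test run reliably outside Visual Studio is to force the test body onto an STA thread yourself. Here is a minimal sketch - the StaRunner helper is my own scaffolding, not part of Watin or NUnit:

```csharp
// Sketch: run a test body on an STA thread so WatiN's IE automation
// also works outside Visual Studio (e.g. under a CruiseControl.NET build).
// StaRunner is my own naming - it is not part of WatiN or NUnit.
using System;
using System.Threading;

public static class StaRunner
{
    public static void RunInSta(Action testBody)
    {
        Exception failure = null;

        var thread = new Thread(() =>
        {
            try { testBody(); }
            catch (Exception ex) { failure = ex; }
        });

        thread.SetApartmentState(ApartmentState.STA); // the part WatiN insists on
        thread.Start();
        thread.Join();

        if (failure != null)
            throw failure; // re-throw so the test runner sees the assertion
    }
}
```

Inside ExerciseQUnitTests you would then wrap the Watin code in StaRunner.RunInSta(() => { ... }). If you're on NUnit 2.5 or later, the [RequiresSTA] attribute on the test achieves the same thing declaratively.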

Conclusion: After a good night's work I got the A part solved - and after another two nights I cracked the B part open as well. I'm now able to write QUnit tests testing code in my custom Javascript files, exercise the tests locally in a browser, check everything in and let a buildserver exercise the tests for me - as if my clientside tests were first-class citizens in TDD. Sweeeeeet....  The sourcecode is available for download here - please provide feedback to this solution and share experiences on the subject with me in the comments :o)


Continuous Integration article by Martin Fowler

QUnit - unit testrunner for the jQuery project

CruiseControl.NET - an opensource Continuous Integration server for Windows

Watin - web test automating tool for .NET

Thursday, June 11, 2009

Testing software quality – open your mind

I recently attended a two-day course called “Softwaretest – when it’s best” in Copenhagen. The reason I found it worth going to was that we recently had a major breakdown in our production environment because we pushed a bugfix into production without properly testing what we were actually releasing - a newly refactored decoupling of our AspNetEmail component released without being properly tested… You can imagine what people think of the guys in the IT department when they either don’t get their mails or get them multiple times at random… Luckily we didn’t send anybody an email they shouldn’t have received, but we sustained major damage to our reputation within the organization because it’s not the first time we’ve committed crimes like these. So – we definitely need improvement in our testing phase. We’re a team of developers – 5, to be exact. We know our TDD drill and Continuous Integration is a first class citizen in our office, so testing our own software isn’t totally unknown to us. I just feel we need to fill in the gaps to get to the next level - ideally without having to build and maintain an extensive and bureaucratic testing process nobody buys into 110 percent.

So I went there with a colleague to learn about this testing stuff - and I feel I am a little wiser now. Here are the eye-openers I have decided to share with you:

  • Developers live and breathe to make things work. Testers live and breathe to make things break. It’s a state of mind you need to be aware of as a developer. You can’t sit down and develop – and then test your doings without having said to yourself “I’m not a developer right now – I’m now a tester and want to break stuff”. I tried it today and WHAM!!! I found 3 critical bugs in a piece of software I recently wrote on one of my hobby projects. Each of them could have been found if I sat down and reviewed the code - but I found them nevertheless by simply WANTING to find errors.
  • Evolution is better than revolution. Nobody likes to be hassled into something if they don't believe it provides value. Don't start out preaching about "the god-awesome Testing Maturity Model (TMM)" and your visions about reaching level 5 before the end of the year. Get everybody involved before making any decisions - something about individuals and interactions over processes and tools... You know the drill for sure but it's easy to forget.
  • If you don't understand the product you won't find the error! How often have you reviewed code and signed off on it, just to find out a month later that it had bugs in it - functional as well as nonfunctional? I have - more times than I'd like to think of... If you don't know what the code is supposed to do, you can only verify what it actually does. You'll never get to the point where you realize what's missing in the code if you don't know what the customer expects from it.
  • Test the deliverables, not the specifications! If you have a clear set of specifications and a testplan for those, you should of course use that testplan. But your focus must always be on what is being delivered. As a tester you should try to gain as much understanding of the domain you're testing as possible - because who knows if the testplan (if one exists) is adequate? If you find out during development that performance is critical for a specific part of the software, it is your responsibility to take this into account when testing, even if performance is not an issue in the specifications. If the customer perceives the software as buggy it IS buggy, regardless of whatever tests you've completed and signed during development and acceptance testing.
  • a number of other related goodies such as "How to write a test plan", equivalence numbers, conflict handling between testers and developers, the difference between Blackbox and Whitebox testing etc.

The vital point about testing is that testers have a different mindset. Their best days at work are when they find themselves "in the zone" registering bugs like maniacs, while developers think success and failure in terms of solving problems and implementing features. If you're a developer you will never become much better than poor (at best) in a testing phase if you don't acknowledge that you need to change some things in your head in order to improve your testing capabilities. That's probably the biggest eye-opener I've had since I wrote my first unittests and, after an hour or two of "Why am I writing so much code for nothing", suddenly got a failing test totally unexpectedly...

Tomorrow, before you contact your Product Owner because you have released something to your staging environment - say to yourself: “The next hour I'm a success ONLY if I can find bugs in what I just released”. Sit down, seek - and you will find. Promise :o)

Thursday, May 14, 2009

Release planning with only one SCRUM team servicing multiple Product Owners

In real life, SCRUM usually requires some adjustments to fit the organization in which you are working. One of the issues I have come across in my daily work as a ScrumMaster is having to work for multiple ProductOwners when the only resource available is a single team. SCRUM applied with the green band around your arm says that a ProductOwner has a dedicated team throughout the entire project. In real life – that’s hardly ever so. I have tried both ways – working on a dedicated team with a single PO, and at my current job we are serving no less than seven ProductOwners with a single team of 4 developers – and everybody is generally happy about it. How is that possible?

We have decided on a model in which we do SCRUM by the book. Whenever we break away from the beaten path we do it because it enhances transparency and maintains visibility, as opposed to just following the “rules” of SCRUM. We try to do things because they make sense to us – individuals and interactions over processes and tools, we say. Having seven ProductOwners isn’t exactly SCRUM by the book, but nevertheless it is the world we live in and things aren’t going to change on that particular issue. So how do you apply SCRUM under these conditions?

What we’ve tried to accomplish now is a more thorough planning process – we’ve introduced release planning, in the form of what we call Pre-sprint planning. It is a meeting held around the 10th of every month. Participants are the ScrumMaster (me), our IT Manager and a role we’ve invented called the ProductOwner Coordinator. The IT manager is in charge and decides which backlogs get a time slot in the forthcoming sprints. Every ProductOwner writes the IT manager an email prior to our Pre-sprint plannings and argues his or her case for having time in the forthcoming sprints. Some backlogs get time and some don’t. The very important thing here is that there is a single person who decides – call it dictatorship, but it’s the easiest for all parties involved.

At the meeting we collect the data needed to prioritize. These data include all requests from the ProductOwners, the content of our Team Backlog (the team’s list of technical debt issues, prioritized by business value and estimated just like a normal backlog) and the contents of all Must-Haves on every backlog, aggregated into a single Excel spreadsheet. We base our discussion on the data before us and the IT manager decides on a release plan for the next 3-4 months.

An example: Backlog A gets perhaps 50% of the next sprint. Backlogs B and C get 25% each in the next sprint. Backlog D gets 100% of the sprint after the next one. This release plan of course isn’t carved in stone – nothing is until Sprint Planning 1, where the team commits to a list of stories for the upcoming sprint. The release plan is published on a blog which every ProductOwner subscribes to – and if a ProductOwner disagrees with our IT manager’s decisions after reading the updated release plan, because he thinks there is more business value in getting his backlog stories implemented, he is required to go to our IT manager, argue his case and make a new request. A month later we meet again for another round of Pre-sprint planning where the current release plan is discussed, requests are taken into consideration and an updated release plan is published on our blog.

The advantages by having a long-term goal are numerous:

  • Visibility is evident. The entire business knows what’s going on and who gets precious timeslots next.
  • The team knows in advance which backlogs to estimate stories from. No more estimation meetings on backlogs which never make it to Sprint Planning.
  • Our many ProductOwners have a single person to go to in order to get stories implemented by the team.

The main advantage is that it is clear to everybody who’s in charge. This person must be empowered to prioritize between multiple requests and wishes. This person must not be an active participant in the process of implementing stories – no ProductOwner, ScrumMaster or Team member should fill this role. Our IT manager is the right choice in our organization. Which person is right for the job depends heavily on the culture and structure of the organization.

We’ve tried this approach for the first time at our last Pre-sprint planning, because we needed a clear picture of which backlogs we could focus on during the summer, as our vacations are very widely spread this year. If nobody disagrees with it – well – we call in for estimation meetings on the backlogs we know are in the pipeline, and we will be much better prepared for Sprint Planning 1 than we’ve ever been, even with the limited resources available to us during vacation season :o)

Wednesday, May 13, 2009

Excel can’t count?

I’m currently working on a data-extraction gig – basically we have 4 different data providers persisting information about attendees to a summit in July with some 25,000 participants – and I’m in the process of creating a small, shortlived console app which generates an Excel file with information about each participant, so we are able to print identical badges used to identify participants at the gates during the summit.

I encountered one of the scariest things I’ve seen in months while working today on the application. First things first: One of the datastores for participants is an Excel spreadsheet – one line per guest that is. Easy enough, I found some code using the OleDB Jet-engine:

string connectionString = string.Format("Provider=Microsoft.Jet.OLEDB.4.0;Data Source={0};Extended Properties=\"Excel 8.0;HDR=YES;\"", _xlsFile);
DbProviderFactory factory = DbProviderFactories.GetFactory("System.Data.OleDb");

using (DbConnection connection = factory.CreateConnection())
{
    connection.ConnectionString = connectionString;
    connection.Open();

    using (DbCommand command = connection.CreateCommand())
    {
        command.CommandText = "SELECT * FROM [Ark1$]";

        using (DbDataReader dr = command.ExecuteReader())
        {
            while (dr.Read())
            {
                //do stuff
            }
        }
    }
}

During an ongoing test phase (yes, we actually DO this agile stuff here) I noticed that one of the columns in my output spreadsheet didn’t match the expected values. Quite an error actually, because one of the things people at the summit care a lot about is that the food they’ve paid dearly for in advance ends up as a crossed checkbox in the right place on their ID card. Otherwise they will not be allowed access to the dining area… You can imagine what people think of the IT guy in charge if things like this don’t work like a charm.

So I started debugging. First I suspected that I had simply counted wrong when extracting values – namely the output from these columns:

[screenshot: the source spreadsheet columns]

As far as I’m concerned M is the 13th letter and N the 14th. However – debugging this stuff showed me the following output for the first row:

[debugger screenshot: row #1]

and row #2:

[debugger screenshot: row #2]

The birth date matches (as with names and other stuff) but notice the 13th and 14th columns, which don’t match the data in the spreadsheet at all… For one, every cell has a value, so how can row #2 suddenly be empty in the debugger???

I suspected I was working on another file than the one I thought – triplechecking that by inserting my own name and seeing it come up in the debugger proved that theory wrong.

I’ll personally buy a beer for the first person who comes up with a bulletproof explanation of this mystery (other than my own, which of course involves a major malfunction in Excel 2003 / .NET / OleDB / pick your own). What’s going on here? I’m not even close to being overworked (don’t tell my boss) and the code is simple… I tried converting the source spreadsheet to CSV to check the values in a plain textfile, and they are what they should be according to the source spreadsheet – so that’s the approach I’m taking now. Instead of parsing the Excel file directly I’m converting it to CSV and using that as my datasource instead. Not very elegant, but it makes me feel a lot safer.
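For the record, the CSV fallback doesn’t take much code. A minimal sketch - the file name, the semicolon separator and the column positions are my assumptions, and note that a naive Split breaks on quoted fields that contain the separator:

```csharp
// Sketch: read the participants from a CSV export instead of going
// through the Jet OleDB provider. File name, separator and column
// positions are assumptions - check them against your actual export.
using System;
using System.IO;

class CsvImport
{
    static void Main()
    {
        string[] lines = File.ReadAllLines("participants.csv"); // hypothetical export

        for (int i = 1; i < lines.Length; i++) // skip the header row
        {
            string[] cells = lines[i].Split(';');

            // do stuff - e.g. cells[12] and cells[13] would be the two
            // columns that came out wrong through OleDB
            Console.WriteLine(string.Join(" | ", cells));
        }
    }
}
```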

Tuesday, March 10, 2009

Why developers love a big whiteboard

If you ever wondered why developers' love for whiteboards is so passionate, you get the answer here:

I like the last comment best:

"Tools can certainly help to organise information more efficiently. But I would challenge any tool to do all of that! I'm not against tools. Not at all. But I think they should supplement the whiteboard, not replace it. Tools should be used for things they can do that a whiteboard can't. For instance, keeping track of longer lasting information, doing calculations, searching, etc."

Question, just out of curiosity - it might even be a little stupid but here goes: Did you ever work somewhere (as a developer) where whiteboards were regarded as You Ain't Gonna Need It?

Regards K.

Monday, March 2, 2009

Code reviews done "right"

This is the second post in a series of posts related to 10 things I don't believe in.

"I don't believe in code reviews" - hmm... Did I really write that? How can reviewing code be bad? It's not, actually. Code reviews can be beneficial if they are done right, because having someone review code you have written is one of the top 3 things you can do as a software developer to improve your skills. It's just the way things work - you can be a winner in life, in love and in any Xbox game by not repeating your mistakes more than once - and you will be a pitiful loser with no friends and no money if you have one success and spend the rest of your life doing whatever made you a success in the past over and over again.

When I think "code review" I think:

  • Get a timeslot in everyone's calendar and book a meeting for, say, half an hour or an hour
  • Once at the meeting: Get everyone lined up in front of a projector
  • Pick some code from the project you're working on
  • Have the person who wrote it to walk you through
  • Feedback

95% of all the formal code reviews I have ever attended followed that prescription - and it never really worked out the way it was intended for the following reasons:

  • Get a timeslot in everyone's calendar and book a meeting for, say, half an hour or an hour
    • Developers hate meetings. They take time from their coding, quite literally. Developers don't like to be interrupted unless they feel an urgency to attend the meeting - and code reviews aren't perceived as something you'll die of if you don't get your weekly dose.
  • Get everyone lined up in front of a projector
    • The more formal the approach, the merrier. There might even be some sort of agenda: "Class X: 10.00 A.M, Class Y and Z, 10.20 A.M. T-SQL changes 10.40 to 11.00 A.M"... Let's see some code, right?
  • Pick some code from the project you're working on
    • Is everyone working on the same project? If yes that's fine - proceed. If no, you'll lose every single developer who hasn't been actively committing to the code you've picked. People just don't feel committed to concentrate on understanding Javascript if they regard themselves as being primarily Database guys. And the clientside guys and gals don't have much input to a discussion about classes not implementing IDisposable in a correct manner.
  • Have the person who wrote it walk you through
    • Being up on the stage in the spotlight takes some balls for anyone who isn't used to being looked at - you're in a very vulnerable position up there. Also, having to walk through some crappy code you wrote in front of perhaps some senior developers - who you know have at least a dozen better ways to implement the variation of the combined Visitor / Strategy pattern you've been working on - can be frightening.
  • Feedback
    • If basic trust and respect between the parties attending the code review aren't there, code reviews can actually do more damage to team morale than anything else, because creative people (like developers) tend to block their minds when being criticized - they put so much of themselves into their work. Even if a developer has the arguments to prove his way of doing things, he might not be able to communicate well because of the spotlight and the crowd of people staring at him.
    • If feedback in code reviews isn't backed by e.g. code conventions or architectural documentation in terms of "this is how we do things here when we deal with datetime formatting", it will mostly be based on personal bias towards a solution you like better - not necessarily the right solution for the problem at hand. That's the way it works in real life if there are no code quality measures beyond the team that the code should live up to.

Only very mature teams can handle a code review which includes a projector, I think. And why, why, why make things so complicated? Why call everyone in and disturb everyone because "code reviews are important"? When I want my code reviewed I do this:

  • Write some code
  • Ask any one of my colleagues to look it through
    • He or she comes over to my desk and we sit down and go through things together
    • We might refactor while pair programming
  • Put my taskcard on the "To be verified" column on our sprint board

This informal approach has a number of advantages: It doesn't feel like a code review. You don't have to attend meetings which you feel eat away at your coding time. Nobody is put up on a stage and asked to speak loud and clear because there are 6 people sweating and a projector humming in the background. Feedback happens informally and instantly - and you feel safer walking through the code because it is fresh in your mind (you just wrote it, remember?). Feedback is also more valuable because it happens 1-on-1 in an atmosphere of "how do I solve this problem the right way". You're in problem-solving mode while sitting at your desk - chances are that you're not if you're being interrupted on-stage while trying to remember why class X implements interface Foo instead of the more general interface Bar.

Martin Fowler once wrote that if it hurts, you should do it more often. If code reviews don't feel right you're doing them wrong - and you should probably begin by loosening the tie and trying a less formal approach, because informal approaches always work best when dealing with developers.

Until next time...

Regards K.

Monday, February 9, 2009

How to write quality code without code conventions

The last blogpost I wrote caused quite a few hits on DZone. Since a healthy conflict is the best way to learn new stuff, I'll try to explain my thoughts on each of the items in my blogpost. I'll dedicate this post to #1: I don't believe in code conventions.

I'm a firm believer in agile values and principles. I think that individuals and interactions are more valuable than processes and tools. Code conventions can be useful in certain environments - for instance: I live in Denmark and we speak Danish. Hence a lot of the code I see every day is a mix between English verbs and Danish nouns. If you are building an external API (a webservice for instance) for clients around the world, you'll effectively cut off 99.9% of your potential customers if you write that API in a mix of Danish and English. Take for instance a method named in a typical Danish / English mix: "GetKonto" - how would you as a non-Scandinavian know that "GetKonto" translates very well to "GetAccount", the pure English name of that method? You wouldn't - and that's why code conventions should be an issue whenever you are having conversations with unknown consumers of e.g. a webservice you're in charge of.
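To make the naming point concrete, here is a contrived sketch - all type and member names are mine, not from any real codebase ("Konto" is Danish for "account", "Saldo" for "balance", "kunde" for "customer"):

```csharp
// Contrived sketch of the Danish/English naming mix described above.
// All names are hypothetical illustration, not a real API.
public class Konto { public decimal Saldo; }
public class Account { public decimal Balance; }

public class BankApi
{
    // What the mixed-language codebase exposes to the world -
    // opaque to any non-Scandinavian consumer:
    public Konto GetKonto(int kundeId)
    {
        return new Konto { Saldo = 100m };
    }

    // The same operation in pure English - readable to everyone:
    public Account GetAccount(int customerId)
    {
        return new Account { Balance = 100m };
    }
}
```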

The two questions one must ask before adhering to a set of conventions are:

  • Will they provide direct business value - will your customers care?
  • What is an adequate penalty for not following the conventions?
You see - I don't believe in a set of rules that you can break without having to at least consider the risks involved. Code conventions work if there is an immediate fine or penalty for breaking them - e.g. a red build, or a QA team which, amongst other things, is employed to review code and discard it if it doesn't meet certain standards such as a set of conventions. If there isn't any code police around to enforce the laws and standards, there really isn't any reason for the developer to adhere to them - the code works, all tests are green and the customer is happy, right? If nobody has been authorized to vote down and discard code that doesn't meet written standards, these standards will not be followed consistently and are by default doomed to fail.

I don't subscribe to anarchy if that's what you think by now - not at all. How should code be written if there are no set of rules at all? Well - for starters:
  • Your code should be readable to the human eye. If you're unsure whether your code is, get someone to read it and say out loud what the code does. If that someone takes more than 10 seconds to interpret a line of code, you've got a red Human Readability test.
  • Document your code using unittests.
  • Write meaningful comments - WHY does the code behave like this as opposed to WHAT is the code doing right now.
  • As a rule of thumb: Don't use one-liners which span several lines. There are exceptions of course - such as LINQ expressions - but your eye gets "tired" mentally decomposing the true intent of a one-liner.
  • For the sake and sanity of everybody: Give your parameters and attributes meaningful names. Prefix them or postfix them as you like, but keep them meaningful. The only exception to this rule should be counters named i in e.g. a for-loop, but that's about it.
...with the Human Readability test being the far most important one.
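To make the one-liner rule concrete, here is a contrived example (mine, not from a real codebase) of the same LINQ query written as a dense one-liner and decomposed into steps that pass the Human Readability test:

```csharp
// Contrived example: the same query at two readability levels.
using System;
using System.Collections.Generic;
using System.Linq;

class Order
{
    public decimal Total { get; set; }
    public bool IsShipped { get; set; }
}

class ReadabilityDemo
{
    static void Main()
    {
        var orders = new List<Order>
        {
            new Order { Total = 250m, IsShipped = true },
            new Order { Total = 80m,  IsShipped = true },
            new Order { Total = 400m, IsShipped = false },
        };

        // Hard on the eye: everything crammed into one expression
        var dense = orders.Where(o => o.IsShipped && o.Total > 100m).OrderByDescending(o => o.Total).Select(o => o.Total).ToList();

        // Same intent, decomposed so each step can be read aloud
        var shippedLargeOrders = orders
            .Where(o => o.IsShipped)
            .Where(o => o.Total > 100m)
            .OrderByDescending(o => o.Total);

        var totals = shippedLargeOrders
            .Select(o => o.Total)
            .ToList();

        Console.WriteLine(dense.Count == totals.Count); // prints True
    }
}
```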

As I stated before: I believe in humans and interactions and value them higher than processes and tools. If you are having doubts about your code: Speak up. If you see code which you are not able to read: State your case. Explain to your team that you're unable to maintain code that you can't read. If you're having trouble reading well written code because you e.g. don't know the syntax - do not make the mistake of refactoring it towards something you like better. I fall into that hole once in a while myself if I don't watch out, but I try hard to avoid it. Learn the full syntax of your language if that's your problem - you don't have to use it if you don't like it, because there are no conventions, remember? Develop a coding style which makes your code readable to humans - make your intentions clear in your code rather than following a prescribed set of rules. That's the only code convention you'll ever need to follow.

This will be the first in a series of posts which digs a little deeper with the "10 things I don't believe in" posted earlier. Until next time...

Regards K.

torsdag den 5. februar 2009

10 things I don't believe in


  1. I don't believe in code conventions. Code and the context where the code exists (projects, folder structure, file naming etc.) should be self-explaining and pay respect to e.g. the Single Responsibility Principle.
  2. I don't believe in code reviews. Code reviews only work if you have code conventions - which I don't believe in. Nobody feels ownership of the code being reviewed except the one who actually coded it.
  3. I don't believe in automated frameworks testing your UI. The tests are a pain to write. They execute slowly. The reliability is mediocre at best. And they are fragile and tend to break a lot, because changes in your UI which will break a test (e.g. renaming the ID of a DIV tag) are only caught at runtime.
  4. I don't believe in MSTest. I miss the Explicit attribute. I miss RowTest. And I miss being able to qualify tests as either unit tests or integration tests using attributes. It works and does the job - but it saddens me that MSTest is outperformed by miles by NUnit and/or MbUnit.
  5. I don't believe in code coverage. Code coverage doesn't tell you anything about the quality of your code. You can have near 100% coverage of a class or a namespace while both your tests and your code are unreadable to anyone but a compiler - and the programmer who wrote them.
  6. I don't believe in databases. Why use a full-blown database if your needs can be met by another storage mechanism - say, a flat CSV text file or an XML document?
  7. I don't believe in Microsoft. I don't believe anybody can read about, form a qualified opinion on, and use every framework Microsoft spits out on a daily basis and STILL get some work done. If there's a term called "framework fatigue", I'm having it right now.
  8. I don't believe in full-time architects. There's a reason that architects in the Roman Empire were 50% architects planning to build stuff and 50% craftsmen actually building stuff - if you don't get your hands dirty on a regular basis you lose touch and feel for the things you're making plans for in your architecture.
  9. I don't believe in working through the night. You can't maintain a healthy mind in a healthy body if you work like a maniac because you're under pressure. You will pay dearly in terms of technical debt if you just code away even though your body tells you it's time to take a break and get some sleep.
  10. I don't believe in Done. You're never Done - only when you stop breathing are you done, but until then there's always more you can do.
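Point 6 can be made concrete with a minimal sketch - the class and method names are mine, purely illustrative: a tiny log store that appends to a flat CSV file instead of talking to a database.

```csharp
using System;
using System.IO;

// Minimal sketch of point 6: a flat CSV file as the storage mechanism
// for simple needs - no database required. Type and member names are
// illustrative, not a prescription.
class CsvLogStore
{
    private readonly string _path;

    public CsvLogStore(string path)
    {
        _path = path;
    }

    public void Append(string level, string message)
    {
        // Double embedded quotes so the line stays valid CSV.
        var safeMessage = message.Replace("\"", "\"\"");
        var line = string.Format("{0:o};{1};\"{2}\"", DateTime.UtcNow, level, safeMessage);
        File.AppendAllText(_path, line + Environment.NewLine);
    }

    public string[] ReadAll()
    {
        return File.Exists(_path) ? File.ReadAllLines(_path) : new string[0];
    }
}
```

For an exception log with modest volume this is all the "database" you need - and the day you need querying or concurrent writers is the day a real database starts paying for itself.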

What don't you believe in?

torsdag den 29. januar 2009

TDD and mocks

I just found a nice post on TDD and mocks/stubs written by Gabriel Schencker - if you want a thorough introduction to the subject of mocking, his post is a good place to start. Check it out :o)

/ Regards K.

mandag den 26. januar 2009

SubSonic - day #2

Did you ever see this error message: "Can't find the SubSonicService section of the application config file" when trying to get your autogenerated SubSonic DAL to work across multiple projects? Well, I did - and I took the liberty to blog about my bad experiences after trying SubSonic for the first time. Today I decided to tinker a little with the source code of the SubSonic project to see if I could get my ideal configuration scheme set up. My idea was to have a separate configuration file (subsonic.config) placed in the project together with the autogenerated code. Configuration related to the database belongs in the DAL layer, I believe, so that's where I would like to place it physically. I would then add the configuration file to the projects needing data access using the "Add as link" feature in Visual Studio. This way I would have only one configuration file for my data access, which could be shared across multiple projects regardless of their nature (console apps, web apps etc.).

I solved it within an hour or two - this is how it was done:

I downloaded the code and quickly discovered that it basically traverses the application directory for either web.config or app.config. Afterwards the ConfigurationManager is used to read the various sections into wrapper objects inheriting from System.Configuration.ConfigurationSection - nothing new here.
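That wrapper pattern can be sketched like this - a minimal, hypothetical section class of my own naming, not the real SubSonicSection (which carries provider collections and a lot more):

```csharp
using System.Configuration;

// Minimal sketch of the ConfigurationSection wrapper pattern.
// Class and section names are hypothetical.
public class DemoServiceSection : ConfigurationSection
{
    [ConfigurationProperty("defaultProvider", IsRequired = true)]
    public string DefaultProvider
    {
        get { return (string)this["defaultProvider"]; }
        set { this["defaultProvider"] = value; }
    }
}
```

Once the section is registered under &lt;configSections&gt;, reading it is a single call: var section = (DemoServiceSection)ConfigurationManager.GetSection("DemoService");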

I decided to attack the DataService class, specifically the static method LoadProviders(). It creates a SubSonicSection object which wraps the SubSonicService section of your app.config or web.config. This is how the code looks:


public static void LoadProviders()
{
    // Avoid claiming lock if providers are already loaded
    if(defaultProvider == null)
    {
        lock(_lock)
        {
            // Do this again to make sure DefaultProvider is still null
            if(defaultProvider == null)
            {
                //we allow for passing in a configuration section
                //check to see if one's been passed in
                if(section == null)
                    section = ConfigSectionSettings ?? (SubSonicSection)ConfigurationManager.GetSection(ConfigurationSectionName.SUB_SONIC_SERVICE);

                //if it's still null, throw an exception
                if(section == null)
                    throw new ConfigurationErrorsException("Can't find the SubSonicService section of the application config file");

                // ...provider initialization continues here...
            }
        }
    }
}



I modified the method to create a SubSonicSection based on the contents of subsonic.config if it had failed creating a section based on web/app.config:


public static void LoadProviders()
{
    // Avoid claiming lock if providers are already loaded
    if(defaultProvider == null)
    {
        lock(_lock)
        {
            // Do this again to make sure DefaultProvider is still null
            if(defaultProvider == null)
            {
                //we allow for passing in a configuration section
                //check to see if one's been passed in
                if(section == null)
                    section = ConfigSectionSettings ?? (SubSonicSection)ConfigurationManager.GetSection(ConfigurationSectionName.SUB_SONIC_SERVICE);

                if(section == null)
                {
                    string configPath = Path.Combine(AppDomain.CurrentDomain.SetupInformation.PrivateBinPath, "subsonic.config");

                    if(!File.Exists(configPath))
                        throw new ConfigurationErrorsException(string.Format("Unable to read configuration file {0}", configPath));

                    var execfg = new ExeConfigurationFileMap();
                    execfg.ExeConfigFilename = configPath;
                    var cfg = ConfigurationManager.OpenMappedExeConfiguration(execfg, ConfigurationUserLevel.None);
                    section = (SubSonicSection)cfg.GetSection(ConfigurationSectionName.SUB_SONIC_SERVICE);
                }

                //if it's still null, throw an exception
                if(section == null)
                    throw new ConfigurationErrorsException("Can't find the SubSonicService section of the application config file");

                // ...provider initialization continues here...
            }
        }
    }
}


I then created a subsonic.config resembling the following:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <configSections>
    <section name="SubSonicService" type="SubSonic.SubSonicSection, SubSonic" requirePermission="false"/>
  </configSections>
  <connectionStrings>
    <add name="ExceptionLog" connectionString="Data Source=localhost\SQLExpress; Database=ExceptionLog; Integrated Security=true;"/>
  </connectionStrings>
  <SubSonicService defaultProvider="ExceptionLog">
    <providers>
      <add name="ExceptionLog" type="SubSonic.SqlDataProvider, SubSonic" connectionStringName="ExceptionLog" generatedNamespace="DataAccess"/>
    </providers>
  </SubSonicService>
</configuration>

...and placed it in my ErrorLoggerAPI project, which contains all the autogenerated SubSonic DAL code, in the folder DataAccess:


The next step was to add subsonic.config using Add As Link in my web project - like this:


Very important: change the property "Copy to Output Directory" from "Do not copy" to "Copy if newer" on your linked copy of subsonic.config - otherwise it won't be copied into your build folder when you build your project, and the code above will fail.
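For reference, the linked file and copy setting end up in the web project's .csproj roughly like this (the relative path is from this example setup and may differ in your layout):

```xml
<!-- "Add As Link" plus "Copy if newer" as MSBuild sees them.
     The Include path is illustrative - adjust to your own structure. -->
<ItemGroup>
  <None Include="..\ErrorLoggerAPI\subsonic.config">
    <Link>subsonic.config</Link>
    <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
  </None>
</ItemGroup>
```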

That's it! I now have complete separation between my web project and the corresponding storage working behind the scenes. The only dependency is the linked configuration file named subsonic.config. When changes occur in my configuration (e.g. a database migration or relocation of the database server) I only have to alter subsonic.config, rebuild the code in ErrorLoggerAPI and distribute the new subsonic.config to whatever application is using it.

Last, but not least: since the configuration settings are now stored in a file called subsonic.config, you will have to create a batch file to execute the generation of your files. I haven't found a better way to do this, so I created a file called deploy.bat and placed it in ErrorLoggerAPI:


The SubSonic documentation (run "sonic.exe help" from the command line) dictates that if you haven't got a web.config or an app.config, you will have to provide the server and database name as arguments when calling sonic.exe. You can still point to a configuration file where the remaining SubSonic configuration can be placed.

The final deploy.bat looks like this:

"%programfiles%\SubSonic\SubSonic 2.1 Final\SubCommander\sonic.exe" generate /out DataAccess /db ExceptionLog /server "localhost\SQLExpress" /config subsonic.config

Until next time... :o)

Regards K.