This blog contains reflections and thoughts on my work as a software engineer

Tuesday, June 23, 2009

Follow-up…

I have figured out a workaround to the problem of CruiseControl.NET having to run in console mode… There obviously wasn't any elegant way to handle this, so I decided to at least try to hide the console window outputting CruiseControl debugging info; it turns out you can hide a console window and keep the process running. Instead of firing up CruiseControl.NET directly, I created a small console app which fires up CruiseControl.NET in console mode and then uses the Windows API to hide the console window from the screen. I was pretty amazed that I pulled it off within an hour. So I ended up with this - all credits to Brendan Grant:

using System;
using System.Diagnostics;
using System.IO;
using System.Runtime.InteropServices;
using System.Threading;

class Program
{
    [DllImport("user32.dll")]
    public static extern IntPtr FindWindow(string lpClassName, string lpWindowName);

    [DllImport("user32.dll")]
    static extern bool ShowWindow(IntPtr hWnd, int nCmdShow);

    static void Main(string[] args)
    {
        var fi = new FileInfo(string.Format("{0}/CruiseControl.NET/server/ccnet.exe", Environment.GetEnvironmentVariable("PROGRAMFILES")));
        if (!File.Exists(fi.FullName))
        {
            Console.WriteLine("ccnet.exe not found in " + fi.FullName);
            Console.Read();
            Environment.Exit(0);
        }

        var p = new Process { StartInfo = new ProcessStartInfo(fi.FullName) };
        p.StartInfo.WorkingDirectory = fi.DirectoryName;
        p.Start();

        // Give the console window time to appear before trying to find it.
        Thread.Sleep(3000);

        // The console window's caption is the full path of the executable.
        SetConsoleWindowVisibility(false, fi.FullName);
    }

    public static void SetConsoleWindowVisibility(bool visible, string title)
    {
        // Below is Brendan's code.
        // Sometimes System.Windows.Forms.Application.ExecutablePath works for the caption depending on the system you are running under.
        var hWnd = FindWindow(null, title);

        if (hWnd != IntPtr.Zero)
        {
            if (!visible)
                // Hide the window
                ShowWindow(hWnd, 0); // 0 = SW_HIDE
            else
                // Show window again
                ShowWindow(hWnd, 1); // 1 = SW_SHOWNORMAL
        }
    }
}


Now I'm exactly where I was a few days ago, except I don't have a console window on my server… I still have to remember not to log off when remoting to my build server, and if something crashes I haven't got any context running CruiseControl that writes to the Event Log etc. At best this is – well, pretty bad coding style actually... but again: It'll do for now  :o)
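
One thing I could do to soften the "no Event Log context" problem is to let the launcher itself write to the Event Log. Here is a minimal sketch of that idea (the "CCNetLauncher" source name is just something I made up, and registering an event source requires admin rights the first time):

using System;
using System.Diagnostics;

static class LauncherLog
{
    // Hypothetical event source name for the launcher.
    const string Source = "CCNetLauncher";

    // Starts the given process and records success or failure in the
    // Windows Event Log, so a crash leaves a trace even though the
    // console window is hidden.
    public static void StartLogged(Process p)
    {
        // One-time registration; requires administrative privileges.
        if (!EventLog.SourceExists(Source))
            EventLog.CreateEventSource(Source, "Application");

        try
        {
            p.Start();
            EventLog.WriteEntry(Source, "CruiseControl.NET started.", EventLogEntryType.Information);
        }
        catch (Exception ex)
        {
            EventLog.WriteEntry(Source, "Failed to start CruiseControl.NET: " + ex, EventLogEntryType.Error);
            throw;
        }
    }
}

Calling LauncherLog.StartLogged(p) instead of p.Start() in the launcher above would at least leave a trace of startup failures.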

Friday, June 19, 2009

Testing JavaScript in a Continuous Integration environment

Ever since I started doing TDD I've been looking for a solution for testing UI and JavaScript which would enable me to develop UI with unit-test-style feedback and integrate my UI testing into a Continuous Integration setup. A few months ago we tried using QUnit at work for testing client-side scripts which rely on a 3rd-party vendor (Google Maps) to provide us with data for our application. I always wanted to find time to consolidate our experiences a bit, because we never got much past "let's-try-this-thing-out" and spending a few hours seeing what could and could not be accomplished.

So - I decided a few days ago to sit down and spend a few evenings setting up a Continuous Integration environment which would:

  • A) Run client-side tests on a Continuous Integration server
  • B) Provide TDD-style feedback on both NUnit tests and client-side tests

Disclaimer: If you don't have experience with Continuous Integration environments you might find that this post goes a bit over your head - I strongly recommend this article written by Martin Fowler, and if you really want to dig deep into the subject, "Continuous Integration - Improving Software Quality and Reducing Risk" is a must-read.

How it was done

I decided to use QUnit because it is the client-side testing library used to test jQuery itself. If it's good enough for those people, I guess it's as good as it gets - I didn't want to dig up some arbitrary, half-baked library when QUnit was such an obvious choice. I won't go into details about the "what" and "how" of QUnit - in short, it gives you feedback on a webpage which exercises your tests written in JavaScript - like this:

[Screenshots: QUnit test runner output - green banner when all tests pass (left), red banner when one or more tests fail (right)]

So - you write a test in a script and execute it in a browser, and you get a list of failing and completed tests. If everything is OK the top banner is green (left); if one or more tests fail the banner is red (right). This is controlled by the CSS classes "pass" and "fail". With that in mind, and with a little knowledge of WatiN, I decided to write a unit test in WatiN, because it would allow me to write a test resembling the following:

[Test]       
public void ExerciseQUnitTests()
{
    using (var ie = new IE("http://(website)/UITest/QUnitTests.aspx"))
    {
        Assert.AreEqual("pass", ie.Element("banner").ClassName, "QUnittest(s) have failed");
    }
}

...where QUnitTests.aspx is the page exercising my QUnit tests. The test itself checks for a specific class on the top banner - if the "fail" class is present, one or more tests have failed, so the unit test should fail and cause a red build. There is at least one obvious gotcha to this approach: you only get to know that one or more client-side tests have failed. You don't get to know which one or why it failed. Not very TDD'ish, but it'll do for now.
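
To soften that gotcha a little, the WatiN test could also dump the text of everything carrying the "fail" class to the console before asserting. A rough sketch, assuming the QUnit page marks failed results with class="fail" just like it does the banner:

using System;
using WatiN.Core;

public static class QUnitFailureReporter
{
    // Writes the text of every element marked with the "fail" class,
    // skipping the banner itself, so the build log shows which
    // client-side tests broke.
    public static void ReportFailures(IE ie)
    {
        foreach (Element element in ie.Elements)
        {
            if (element.ClassName == "fail" && element.Id != "banner")
                Console.WriteLine("Failed: " + element.Text);
        }
    }
}

Calling QUnitFailureReporter.ReportFailures(ie) right before the assert would at least get the names of the broken tests into the build log.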

Here's a list of the things I needed to do once I had my WatiN test written:

  • I created a new project on my private CruiseControl.NET server which would download the source code from my VS project, compile it using MSBuild and execute the unit tests in my test project. I battled for an hour or so with WatiN because it needs to have ApartmentState explicitly set (see the sketch after this list). You won't have any problems running your test inside Visual Studio, but whenever you try to run the WatiN test outside Visual Studio you get - well, funny errors basically.

  • I pointed the website to the build output folder in Internet Information Services.

  • Then I ran into another problem - QUnit apparently didn't work very well with Internet Explorer 7 (or so I thought) - the website simply didn't output any test results on my build server. It wasn't a 404 error or anything - the page was just plain blank - so I had working tests on my laptop and failing tests on my build server. Without thinking much about it I upgraded to Internet Explorer 8 on my build server. Not much of a deal, so I did it - just to find out that the webpage still didn't output any results in IE 8 either. After a while of "What the f*** is going on" I started thinking again and vaguely remembered a similar problem I had about half a year ago... The problem back then was that scripting is disabled by default in Internet Explorer on Windows Server 2008 - which of course was the problem here as well. So I enabled scripting and finally got my ExerciseQUnitTests test to pass. Guess what happened when I forced a rebuild: GREEN LIGHTS EVERYWHERE. Yeeeehaaaaaaaa  :o)

  • Last, but not least, I found out that CruiseControl.NET will from now on have to run in console mode in order to interact with the desktop - because it needs to fire up a web browser - oh dear... I need to look into that one, because I consider having console apps running in a server environment heavy technical debt you need to work your way around somehow. But again: It'll do for now, even though I find it a little awkward.
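
About that ApartmentState battle: WatiN drives Internet Explorer over COM, which requires the calling thread to live in a single-threaded apartment (STA). If I remember correctly, NUnit can also be told to use an STA thread via its config file, but here is a sketch of one way to guarantee it in code regardless of test runner (the URL is a placeholder, as in the test above):

using System;
using System.Threading;
using NUnit.Framework;
using WatiN.Core;

[TestFixture]
public class QUnitStaTests
{
    [Test]
    public void ExerciseQUnitTestsOnStaThread()
    {
        Exception failure = null;

        // Run the browser work on a thread whose apartment state
        // we control explicitly, instead of relying on the runner.
        var thread = new Thread(() =>
        {
            try
            {
                using (var ie = new IE("http://(website)/UITest/QUnitTests.aspx"))
                {
                    Assert.AreEqual("pass", ie.Element("banner").ClassName, "QUnit test(s) have failed");
                }
            }
            catch (Exception ex)
            {
                failure = ex; // surface assertion failures to the runner
            }
        });
        thread.SetApartmentState(ApartmentState.STA);
        thread.Start();
        thread.Join();

        if (failure != null)
            throw failure;
    }
}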

Conclusion: After a good night's work I got the A part solved - and after another two nights I cracked the B part open as well. I'm now able to write QUnit tests testing code in my custom JavaScript files, exercise the tests locally in a browser, check everything in, and let a build server exercise the tests for me, just as if my client-side tests were first-class citizens in TDD. Sweeeeeet....  The source code is available for download here - please provide feedback on this solution and share your experiences on the subject with me in the comments   :o)

Links:

  • Continuous Integration - article by Martin Fowler
  • QUnit - unit test runner for the jQuery project
  • CruiseControl.NET - an open source Continuous Integration server for Windows
  • WatiN - a web test automation tool for .NET

Thursday, June 11, 2009

Testing software quality – open your mind

I recently attended a two-day course called “Softwaretest – when it’s best” in Copenhagen. The reason I found it worth going to was that we recently had a major breakdown in our production environment because we pushed a bugfix into production without properly testing what we were actually releasing - a refactored decoupling of our AspNetEmail component was released without being properly tested… You can imagine what people think of the guys in the IT department when they either don’t get their mails or get them multiple times at random… Luckily we didn’t send anybody an email they shouldn’t have received, but we sustained major damage to our reputation within the organization, because it’s not the first time we’ve committed crimes like these. So – we definitely need improvement in our testing phase. We’re a team of developers – 5, to be exact. We know our TDD drill, and Continuous Integration is a first-class citizen in our office, so testing our own software isn’t totally unknown to us. I just feel we need to fill in the gaps to get to the next level - ideally without having to build and maintain an extensive and bureaucratic testing process nobody buys into 110 percent.

So I went there with a colleague to learn about this testing stuff - and I feel I am a little wiser now. Here are the eye-openers I have decided to share with you:

  • Developers live and breathe to make things work. Testers live and breathe to make things break. It’s a state of mind you need to be aware of as a developer. You can’t sit down and develop - and then test your own work - without first having said to yourself “I’m not a developer right now - I’m a tester and I want to break stuff”. I tried it today and WHAM!!! I found 3 critical bugs in a piece of software I recently wrote for one of my hobby projects. Each of them could have been found if I had sat down and reviewed the code - but I found them nevertheless by simply WANTING to find errors.
  • Evolution is better than revolution. Nobody likes to be hassled into something they don't believe provides value. Don't start out preaching about "the god-awesome Testing Maturity Model (TMM)" and your visions of reaching level 5 before the end of the year. Get everybody involved before making any decisions - something about individuals and interactions over processes and tools... You know the drill for sure, but it's easy to forget.
  • If you don't understand the product, you won't find the errors! How often have you reviewed code and signed off on it, just to find out a month later that it had bugs in it - functional as well as non-functional? I have - more times than I'd actually like to think of... If you don't know what the code is supposed to do, you can only verify what it actually does. You'll never get to the point where you realize what's missing from the code if you don't know what the customer expects from it.
  • Test the deliverables, not the specifications! If you have a clear set of specifications and a test plan for those, you should of course use that test plan. But your focus must always be on what is being delivered. As a tester you should try to gain as much understanding of the domain you're testing as possible - because who knows if the test plan (if one exists) is adequate? If you find out during development that performance is critical for a specific part of the software, it is your responsibility to take this into account when testing, even if performance is not an issue in the specifications. If the customer perceives the software as buggy, it IS buggy, regardless of whatever tests you've completed and signed off during development and acceptance testing.
  • ...plus a number of other related goodies such as "How to write a test plan", equivalence classes, conflict handling between testers and developers, the difference between black-box and white-box testing etc.

The truly vital issue in testing is that testers have a different mindset. Their best days at work are when they find themselves "in the zone", registering bugs like maniacs, while developers think of success and failure in terms of solving problems and implementing features. If you’re a developer, you will never become much better than poor (at best) in a testing phase if you don't acknowledge that you need to change some things in your head in order to improve your testing capabilities. That's probably the biggest eye-opener I've had since I wrote my first unit tests and, after an hour or two of "Why am I writing so much code for nothing", suddenly and totally unexpectedly got a failing test...

Tomorrow, before you contact your Product Owner because you have released something to your staging environment - say to yourself: “The next hour I'm a success ONLY if I can find bugs in what I just released”. Sit down, seek - and you will find. Promise :o)