This blog contains reflections and thoughts on my work as a software engineer


Saturday, November 22, 2008

To TDD or not to TDD

I sat down today and reflected a little on Scrum and TDD. The two are alike in many ways, and I have strong feelings for both of them because I think they work for me. They are "right" in a way.

Lots of blogs have lists of how to become better at Scrum - the Nokia test, for instance. I haven't, however, seen a list of how to become better at TDD. In the community, TDD is more of a "write your tests first and voilà - you're now a TDD developer"... Ehm, no. That couldn't be farther from the truth. So I sat down, wrote for an hour, and came up with the following under the headline: how do you become better at "to TDD"?

  • Get the basics right. Spend at least an hour or two reading about TDD. How do you do it? What is the AAA pattern? (See the sketch after this list.) Test-driven development means that you write your test first. If you don't know why that is crucial to your success as a TDD developer, you should read some more before going on a TDD death march.
  • Want it. If you want to TDD, you want to write your test first. No more "I'll write the code and test it afterwards" - if you think like that, you don't want to TDD yet.
  • Know you will fail when you begin. You will definitely suck at it when you first attempt to write tests. You will forget to write your test first - many times. Your code will look different and not "feel" right. Even the smallest problems will be hard to solve because you have to think about writing tests, tests, tests. It feels like the TDD approach keeps you from thinking about the three lines of code that would solve your problem and let you close the damn support ticket... Relax. This is perfectly normal. Ask for help and say that you can't come up with a decent test for this specific problem. If a test is hard to write, it is often because the code itself is flawed and has design issues that make it hard to test - and you are not able to recognize those code smells at first. Know you will fail - you also fell the first time you rode a bicycle, didn't you?
  • Do it even when it's hard. It is easy to fall into the mental trap of "this code is 3 years old and untested, so writing tests on this crap is overkill for this bugfix". No! You are not being paid to always jump over the fence at the lowest point - and claiming that a code block is too hard to test without having asked for help, or without having considered what it would actually take to make it testable, ranges somewhere between stupidity and ignorance.
  • Do it where it matters the most. If you have a piece of code that is central to your application or is called upon everywhere (e.g. that little string utility thingy shared across all your VS projects) and it is untested - test it at all costs.
  • Don't do it everywhere. If you have a piece of code that is used for generating reports once or twice a month for one person in the company, and a test would require 4 hours of boilerplate work - don't do it, just make the code work.
  • Get help. When you fail or things won't go your way - ask for help. It is the second-most crucial thing next to wanting it: ask for help if you're stuck. Get people who know how to do things to push you in the right direction. Read books and subscribe to blogs to get more info on how to TDD.
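
For reference, here is a minimal sketch of the AAA (Arrange-Act-Assert) structure mentioned in the first bullet - the Calculator class is hypothetical, just a stand-in for your own code:

[Test]
public void Add_TwoNumbers_ReturnsSum()
{
    // Arrange: set up the object under test.
    Calculator calc = new Calculator();

    // Act: perform the single action being tested.
    int result = calc.Add(2, 3);

    // Assert: verify the expected outcome.
    Assert.AreEqual(5, result);
}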

I believe you have to want to TDD the same way you have to want to Scrum, because both Scrum and TDD are basically mindsets you must adopt in order to make them work for you. If you want to lose weight, you have to want to eat less or want to exercise more. Your mother or girlfriend can't make you lose weight, nor can any of your teammates or colleagues make you want to go TDD on your code. They can and will encourage you, just like your family and girlfriend will encourage you to lose weight, but it won't work permanently if you don't want it.

When do you know that you're on the right track? You know you want it. Ask yourself that question, and if you know deep inside that you still want to TDD, everything is A-OK. If not, focus on getting your motivation back, or decide not to TDD at all. That is perfectly OK as well - why do something you don't want to do? You can become a very competent coder without ever writing a single unit test. Just be honest with yourself, and don't go TDD on your work if you don't believe in it.

To TDD or not to TDD - that really is the question :o)

/ K.

Thursday, July 17, 2008

Antipatterns exception testing

During my first encounters with Test-Driven Development (TDD) I adopted the ExpectedException attribute and used it heavily when coding - that is, I focused a lot on writing a test for every exception thrown in the code.

I have come to the conclusion over the last few weeks that my test cases didn't provide very much value when testing heavily for exceptions. A few points worth making here:


  • Tests written for error conditions on general exceptions (ArgumentNullException, etc.) are not important. You should still throw ArgumentNullException and friends where appropriate, but I don't write tests for them anymore. Focus your testing on things you can assert instead - a test that hits an unexpected exception fails anyway, so even without an explicit test for the error condition, the test will still fail you if the error occurs. Never ever aim for 100% code coverage - if you do, you are walking a death march.
  • Business-rule exceptions are, on the other hand, very important to write tests for. If you have custom exception classes that you throw when, say, a business rule fails to validate, you should write a test that succeeds only if the exception is thrown (see the example after this list). This provides business value: you know that your business logic works because you get your expected exception thrown.
  • If you throw exceptions in your code WITHOUT explaining what went wrong, I will hunt you down and shoot you in the legs. Not very pedagogical, indeed, but exceptions are only useful with a meaningful description of what went wrong. Otherwise you are giving the consumer of the exception no data to help figure out what happened.
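
As an illustration of the business-rule case (OrderService and CreditLimitExceededException are hypothetical names, not from a real codebase), a test that succeeds only if the expected exception is thrown could look like this:

[Test]
[ExpectedException(typeof(CreditLimitExceededException))]
public void PlaceOrder_ExceedingCreditLimit_ThrowsBusinessException()
{
    OrderService service = new OrderService();

    // The test passes only if this call throws CreditLimitExceededException;
    // any other outcome (no exception, or a different type) fails the test.
    service.PlaceOrder(42, 1000000m);
}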
Which best practices on this issue do you recommend to others?

Wednesday, July 9, 2008

Rhino + VS debugger == WTF?

We have been using the MVP pattern in my recent project to increase testability and decouple the UI from our business logic. To enhance our testing environment we have begun using Rhino Mocks, and it has been an overall satisfying experience, even though I am still having a hard time "getting it right" when writing tests.

I want to share with you one of the "What The F***" experiences I had just a few minutes ago. One of our mocking tests failed. Well - insert a few breakpoints and fire up the debugger. And then it happened: every time I ran my test, the error appeared on a different line in my source code... Most of the time I got a NullReferenceException, but at other times the Rhino framework threw mysterious error messages at me...

I was a little scared to see a unit test behave like this, and for twenty minutes I really had no clue what was happening, but my colleague was able to solve the puzzle once I asked around for a little help. In my MVP presenter I had set up a mock of my view with expectations that various properties would be called during OnLoad. They were expected to be called exactly once, so when you start using the debugger's QuickWatch on those properties, you "use up" the quota of expected calls. QuickWatching a property on a mock with Rhino-style expectations actually fulfills the expectation you just set up... And when the code afterwards executes the call you intended the expectation to cover, the test fails, because the property was only set up to be accessed once.
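
To make the trap concrete, here is a minimal sketch of the situation (IView and its Title property are hypothetical - substitute your own view interface):

MockRepository mocks = new MockRepository();
IView view = (IView)mocks.CreateMock(typeof(IView));
Expect.Call(view.Title).Return("Customers"); // expected to be read exactly once
mocks.ReplayAll();

// Pausing here and QuickWatching view.Title in the debugger "spends"
// the single expected call. When the presenter later reads view.Title
// for real, Rhino sees an unexpected extra call and the test blows up
// on a seemingly random line.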

Lesson learned today: when a test that uses mocking fails on you - clear all breakpoints and try again. It actually solved the mystery, and I was able to fix the initial cause of the error within a few minutes.

Tuesday, April 22, 2008

Hello Rhino World

I have discovered something tonight - the art of testing through mocking. Yes, I admit it: I never actually gave much thought to mocking being remotely interesting, because a mock is simply a replacement for something you intend to test against, e.g. a database access layer or external devices connected to your computer. I do not like to think of myself as religious, but nevertheless I never gave mocking a try because it just didn't feel right (!). So much for trying to see things from the other side - until now, because:

We have had quite a few exhausting weeks at work. Our source control is coming up to speed and a CI build environment is slowly emerging. It builds everything, so we are no longer working disconnected from each other, and nobody can break the build for days without getting instant notice. I like it sooo much... We have hired one of my former colleagues, who is now an independent consultant, to teach us about agile development and TDD. Last Wednesday we talked a bit about mocking, and I piped up a little about the paradox of testing against a simulated version of the real world. However, I decided that I would give a mocking framework a try one of these days, and tonight I tried it out with a simple scenario:

I want to create a Person object and store it in a database. The person should expose to me whether or not he has been saved. Separation of Concerns, I know, I know... I'm just mocking a scenario here... (applause sign on). And - tada - I want to test both a real object and a mocked one and retrieve a positive response whenever I ask Mr. Per Son: have you been saved?

I downloaded Rhino Mocks because it was recommended to me and it seems to be some sort of de facto standard these days - mainly because it is strongly typed, which doesn't seem to be the case with other .NET mocking frameworks. I created the following unit tests for my real-life Person object:

[Test]
public void CreateDomainObject()
{
    Person p = new Person();
    Assert.IsNotNull(p, "Person is null");
}

[Test]
public void RealPerson_NotPersisted()
{
    Person p = new Person();
    Assert.IsFalse(p.IsPersisted);
}

[Test]
public void RealPerson_Persisted()
{
    Person p = new Person();
    p.Save();
    Assert.IsTrue(p.IsPersisted);
}

I then created a Person class to simulate this behaviour:

public class Person
{
    private bool persisted;

    public Person()
    {
        persisted = false;
    }

    public void Save()
    {
        persisted = true;
    }

    public bool IsPersisted
    {
        get { return persisted; }
    }
}


...and now things get interesting. A mock of the Person is introduced in the following test:

[Test]
public void Mock_Persisted_Property()
{
    IPerson persistedPerson = GetMockedPerson();
    Assert.IsTrue(persistedPerson.Persisted);
}

private IPerson GetMockedPerson()
{
    MockRepository mocks = new MockRepository();
    IPerson person = (IPerson)mocks.CreateMock(typeof(IPerson));
    Expect.Call(person.Persisted).Return(true);
    mocks.ReplayAll();
    return person;
}


The MockRepository makes it possible to create an instance from an IPerson - that is, an interface describing the methods and properties of the object you intend to mock. The domain object does not actually have to implement the interface, so if you do not want to have your domain model "interfaced" for various reasons, you do not have to. The IPerson I created is simply:

public interface IPerson
{
    bool Persisted { get; }
}

I do not want to do anything with Save(), so I leave it out of the interface. Maybe that is not best practice - I'm a newbie at this, so you are obliged to guide me in the right direction if this has proven to be an anti-pattern.

In the Expect.Call thingy I decide that a call to the Persisted property on my mocked person should return true. The mocks.ReplayAll() stops recording expectations on my Person object, and now I am ready to have my expectations validated in my tests.
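
For completeness, the repository can also verify afterwards that every recorded expectation was actually met - a small sketch of the full record/replay/verify cycle, reusing the IPerson interface from above:

[Test]
public void Mock_Persisted_Property_WithVerification()
{
    MockRepository mocks = new MockRepository();
    IPerson person = (IPerson)mocks.CreateMock(typeof(IPerson));
    Expect.Call(person.Persisted).Return(true);
    mocks.ReplayAll();

    Assert.IsTrue(person.Persisted);

    // Fails the test if any recorded expectation was never fulfilled.
    mocks.VerifyAll();
}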

This makes it possible for me to run the Mock_Persisted_Property() test successfully - and retrieve a positive value when asking if the person has ever been saved, even though the Save method was never called. This is truly great, because I have had numerous encounters with third-party vendors, external hardware, and processes that have not been testable for me until now, either because testing them would result in extremely slow integration tests, or because a successful test-suite run would be extremely tightly coupled to a certain state of the hardware being tested (i.e. the hardware device would always have to be on and responding to requests).
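
To sketch what that could look like (IDevice is a hypothetical interface, not something from this project), the same record/replay approach lets you test device-dependent logic without the device present:

public interface IDevice
{
    bool IsOnline { get; }
    string ReadSerialNumber();
}

[Test]
public void Device_CanBeSimulated_WithoutHardware()
{
    MockRepository mocks = new MockRepository();
    IDevice device = (IDevice)mocks.CreateMock(typeof(IDevice));
    Expect.Call(device.IsOnline).Return(true);
    Expect.Call(device.ReadSerialNumber()).Return("SN-1234");
    mocks.ReplayAll();

    // No real hardware is touched - the mock plays back the recorded answers.
    Assert.IsTrue(device.IsOnline);
    Assert.AreEqual("SN-1234", device.ReadSerialNumber());
}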

I like this a lot. I have only scratched the surface of the documentation, but I will definitely go deeper into this during the next weeks and months. I have seen the light - or at least seen the possibilities instead of only focusing on the limitations of mocking ;o)

Regards Kristian

P.S.: The entire source (Visual Studio 2008 solution) can be downloaded from this location

Monday, January 28, 2008

Test-Driven Development and the quality issue

There has been quite a lot of buzz lately regarding Test-Driven Development (TDD) - I have seen a number of posts where people ask whether the maturity of the tools on the market makes old-school TDD obsolete. I have also seen posts where Ben Hughes asks whether TDD really ensures quality.

That made me think about the term "quality" - what is software quality seen from a customer's point of view, and how can we convince customers that the software we produce is better than that of our competitors?

I don't really think TDD tells you very much about software quality seen from a customer's point of view. I mean - do you really care how Microsoft develops Windows XP and Windows Vista? I couldn't care less - and neither do our customers, as long as our sh** works every time. Even when our customers have a negative experience because they are unable to fulfill whatever task they set out to do, because we as software engineers have failed to deliver - do they start asking questions? Not really - they start looking at the products of our strategic competitors for a replacement for the crap we have convinced them to install on their hard drive.

The principles and paradigms on which a software product has been built can and should never be a quality metric for our customers. They will never care anyway. TDD is just one (very useful) tool out of many when we develop software. In terms of quality, it should only be an in-house metric whether or not our software has a certain degree of code coverage, for instance... We must not be misled as developers into believing that anyone but us really cares what tools are in our toolbox!