This blog contains reflections and thoughts on my work as a software engineer

Wednesday, 26 February 2014

Windows Server 2012: How to run everything as Administrator

I’ve recently ported a few servers from Windows Server 2008 to Windows Server 2012 and learned a few things here and there – especially regarding the security model of Windows Server 2012.

Most of you know about User Account Control (disclaimer: I have a hard time being emotionally detached towards UAC), a.k.a. “Let’s figure out a way to make people grind their teeth and then we’ll enter the business of dentistry and get filthy rich”. This so-called “feature” has been facelifted once more in Windows 2012, which means that it is almost impossible to make Windows 2012 accept your administrative rights as… well, administrative rights. Why did you guys have to invent TWO kinds of administrators – one which can do almost everything, and the “Run as administrator” administrator?

Anyway… terms and limitations which one does not have the influence to alter, one must either work around (and rant whenever possible) or walk away from (while ranting). The third option – while not pretty – is to simply slap the server hard until it stops whining and accepts its place as second in command. The best way to claim your natural position as Leader of the Pack (a.k.a. Server Administrator) is by using PowerShell. UAC can in fact be turned off so that every program you fire up runs with elevated privileges – for the sake of the sanity of my fellow programming colleagues around the world, here’s how to do it (I’ve got to make Gist work one of these days…).
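For reference, the gist boils down to a single registry tweak – a sketch, assuming the well-known EnableLUA value under the system policies key (run from an elevated PowerShell prompt; the change only takes effect after a reboot):

```powershell
# Turn off UAC by setting EnableLUA to 0 (0 = disabled, 1 = enabled)
Set-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" `
                 -Name "EnableLUA" -Value 0

# Reboot for the change to take effect
Restart-Computer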

I won’t take credit for writing the lines myself – I just put it all together in a Gist.

Regards K.

P.S. – There might be side effects: the blog post above mentions that the Metro Store might not be available when UAC is disabled.

Wednesday, 13 November 2013

Fixing broken links in CRM 2011 standard reports

I have recently battled my way through the inner workings of the CRM 2011 reporting facilities. We got ourselves into trouble when we rolled out a dedicated Reporting Services server and migrated all our reports – including the ones in CRM – to this new server. Along the way the reports were placed in another folder on the new server, and hence we had broken links within the standard reports.

The strategy to start with was to see if we could somehow run a repair on the SSRS Connector for CRM, which according to this post would re-publish the reports – we would hopefully nail a few other issues along the way, which are outside the scope of this blog post. No luck there – we never succeeded in running a repair, due to security issues, missing files in the installation directory and other fun stuff. After a full day of teeth-grinding, frowning and the occasional swearing and fist-throwing, we changed strategy and decided to hack the standard report itself to fix the broken links – we figured that a single broken report was easier to handle than a full-blown crash of the entire reporting engine.

It turned out that editing the standard report was pretty straightforward – the report containing the broken links (the “Campaign Performance Detail Report”) is a giant you-know-what of XML. Search for <Action> and you’ll find something like this:

[Screenshot: an <Action> element in the report XML]

Paddling through the XML you’ll eventually find an action which is a DrillThrough:

[Screenshot: a Drillthrough action containing the ReportName element]

Change the value of ReportName to the correct path on your Reporting Server, save and overwrite the existing report in your CRM installation – and you’re done.
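For reference, the element you’re looking for looks roughly like this (a sketch – the folder and report name below are placeholders for whatever your new Reporting Services server uses, not the actual values):

```xml
<Action>
  <Drillthrough>
    <!-- Update this path to match the new folder on the Reporting Services server -->
    <ReportName>/YourNewFolder/Campaign Activity Status</ReportName>
  </Drillthrough>
</Action>
```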

All you have to do now is put your edited copy of the standard report into source control, because it is almost destined to happen that some future rollup of CRM will alter or overwrite the existing reports in your CRM installation.

Wednesday, 24 April 2013

I've recently been part of quite a complex integration project. Lots of fun building it - not so fun maintaining it. The project is a ticket ordering solution where people buy and pay for a ticket to an event in system #1 (a.k.a. The TOC) and are then redirected to our site (a.k.a. The Checkin Site) where they subscribe to various events (running, bicycling, kayaking etc.). You should not be able to enter the Checkin Site without having paid for a ticket in The TOC. One of the non-technical requirements was that a user should really not notice that the two systems operated on different domains, so we put quite a lot of effort into data flow and encryption to ensure that data and login information would silently flow from The TOC to The Checkin Site, making the shift between the two almost transparent to the user.

A number of pain points were discovered along the way. This is a list of discoveries which I hope someone (such as myself in a distant future, when I've forgotten all about the pains and headaches of the last six months…) will find useful when building integration stuff. Here goes:

  1. Thou shalt persist external data as-is
The solution I implemented parsed a customer's order into a DTO and persisted it alongside other information about the order (timestamps etc.). Not persisting the external data in the format we received it, before parsing occurred, proved to be a wrong decision. Everything is stored relationally, so being able to query a customer's data from The TOC using SQL - rather than relying on a mix of webservice calls and console applications - would have been... well, you can imagine the outbursts and occasional teeth-grinding.
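The pattern itself is trivial - a sketch in JavaScript (the record shape and names are invented): keep the raw payload verbatim next to whatever you manage to parse out of it, so you can always re-parse or query it later.

```javascript
// Persist the external order exactly as received, alongside the parsed fields.
// If parsing fails, the raw payload is still kept - nothing is lost.
function buildOrderRecord(rawJson, receivedAt) {
  let parsed = null;
  try {
    parsed = JSON.parse(rawJson);
  } catch (e) {
    // Broken payload: store it anyway and flag it for manual inspection.
  }
  return {
    rawPayload: rawJson,        // the untouched external data
    receivedAt: receivedAt,     // when we got it
    parseFailed: parsed === null,
    customerId: parsed ? parsed.customerId : null,
    tickets: parsed ? parsed.tickets : []
  };
}
```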

  2. Thou shalt agree on required fields
Is a person's last name required in your system in order to create a person? What about email? Gender? It proved to be a less-than-Apple-like user experience once a user entered our checkin, because our two domains (ticket ordering/payment versus event reservation) had different perceptions of data validation. Had we only sat down and talked for a while about the input fields in the two systems, we would probably have discovered that a person's birthdate is essential data in our system (due to age validation and other stuff) but less than important when ordering and paying for a ticket. Even though the user had entered their personal data in The TOC and that data was transferred to us, the user would still have to fill in the blanks, such as age and gender, once they entered the Checkin Site. It would have been a much smoother experience from the user's point of view to enter all information in one workflow and simply be presented with the data they had entered later on.
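Agreeing could have been as simple as writing down one shared list of required fields that both systems validate against - a sketch (the field names are invented):

```javascript
// One shared contract for what a valid person record must contain,
// agreed on by both the ticket-ordering and the event-reservation domain.
const requiredPersonFields = ["firstName", "lastName", "email", "birthdate", "gender"];

// Returns the names of the required fields that are missing or empty.
function missingFields(person) {
  return requiredPersonFields.filter(
    (field) => person[field] === undefined || person[field] === ""
  );
}
```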

  3. Thou shalt agree early on end-to-end integration test phases
When planning, we didn't take into account that The TOC was still undergoing heavy development, even though we had agreed on a testing/bugfixing phase one month prior to release. The result was numerous test cases (such as "Order two adult and one child ticket, buy additional products X and Y in a quantity of 3, go to the Checkin Site, subscribe to a running event. Validate your receipt on screen") which were outdated before they could even be submitted to our testers, because the guidelines for ordering tickets in The TOC didn't match what was currently running in The TOC's testing environment. Basically all tests came back without the tester ever having made it to the Checkin Site, because the tester wasn't able to order and pay for a ticket using the guidelines provided... Which in turn left developers sitting on their hands, ready to fix bugs, while no bugs were reported. We ended up cancelling the setup, took all the quantitative tests and gave them to a dedicated resource sitting next to our development team, to lower the communication barrier between the tester and the developers whenever flaws in the guidelines were discovered.

  4. Thou shalt be able to subscribe to events
Only a few days after we released, we found out that quite a few people didn't realize there was a second step involved - for whatever reason, a number of people never clicked the 200x300 pixel orange "Click here to enter the Check-in site" button on the receipt from The TOC. Because TOC notification services don't exist, the checkin system wasn't notified when new tickets were ordered, so the only option for us at the Checkin Site was to implement a pull-based console application asking "Give me all customers who have ordered something during the last 24 hours" and checking whether all customers who had updated their ticket order during the last 24 hours were known to us. They might never have hit the checkin button, or they could have added an upsell product (such as breakfast Thursday) after their initial order and subsequent checkin had taken place. Especially updates to existing orders proved to be a challenge until the synchronization thingie started taking those scenarios into account. We scheduled the job to run every 24 hours - we could have used a smaller interval between runs, but I would much have preferred a message-based solution where we could have subscribed to notifications as the primary source of information. I doubt we could have avoided some sort of pull mechanism for doing a full sync once in a while, but it is a tedious and slow way to make data flow your way if you don't get to know anything about your customers unless you repeatedly ask.
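The core of such a pull job is a diff between what the external system reports and what you already know - a sketch (the record shapes are invented):

```javascript
// Given the orders the external system reports as changed within the window,
// return the ones we either don't know about or that changed since we last saw them.
function findOrdersToProcess(recentOrders, knownOrders) {
  return recentOrders.filter((order) => {
    const known = knownOrders[order.orderId];
    return known === undefined || known.lastModified !== order.lastModified;
  });
}
```

In practice you also want consecutive windows to overlap slightly, so a record changed right around a job run isn't missed by both runs.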

  5. Thou shalt reconcile data early and often
We encountered problems along the way with The TOC's payment gateway, which caused a lot of customer support because users' transactions timed out. This in turn caused a lot of manual handling of customer data in The TOC's backend, which in turn meant that a customer would in some cases be created twice, with tickets attached to both customer records in the ticket system. Our synchronization worked by fetching all orders for a given customer - we didn't (and don't) have anything to merge two customers' data. This in turn caused some customers' synchronizations to fail, because a given ticket type (an adult ticket) is required in order to create an event subscription. And this in turn causes problems when validating that we actually have all data in our systems, because the numbers no longer match… We should have planned early in the process for data being out of sync, and implemented patterns for dealing with customer records not matching what we expected (or agreed on). One of the pitfalls was that the developers at The TOC's company assured us that no customer could submit an order without ordering an adult ticket. True - but once the problems with their payment provider kicked in, customer data was handled manually in the backend systems, which didn't take our special business rules into account… Voila, data wasn't what we expected, even though we were all in good faith and worked well together to win the race. At the very least we should have had a plan for reconciliation, e.g. of all adult tickets ordered versus what we had registered in our own backend, and it should have been possible from day 1 to merge two customers' data into one order. The moral regarding data from external systems: trust nobody. Expect the unexpected. When (not if) a sync job fails on a given record - can you gracefully recover? Who gets notified, and how?
Settle early how you handle ongoing support when the unexpected happens and somebody needs to take a dive into the bowels of the system to figure out what is going on.
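A reconciliation check doesn't have to be fancy to be useful - comparing one well-chosen number per customer would have caught the mismatches early. A sketch (the record shapes are invented):

```javascript
// Compare adult ticket counts per customer between the external system and ours.
// Returns the customers where the numbers don't match, for manual follow-up.
function reconcileAdultTickets(tocCustomers, ourCustomers) {
  const ourCounts = {};
  for (const c of ourCustomers) {
    ourCounts[c.customerId] = c.adultTickets;
  }
  return tocCustomers
    .filter((toc) => (ourCounts[toc.customerId] || 0) !== toc.adultTickets)
    .map((toc) => ({
      customerId: toc.customerId,
      inToc: toc.adultTickets,
      inOurs: ourCounts[toc.customerId] || 0
    }));
}
```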

 Well... there's probably more to come but I can't think of more "lessons learned" right now. Until next time...

Regards K.

Monday, 15 October 2012

Announcing T.REST – a testing framework for REST resources

I’ve finally done it – made my debut in the OSS community. As of today my company and I have released a bunch of refactored helper classes from a former project as a CodePlex project. The baby has been named T.REST and is a testing framework suited for regression-testing REST resources. I developed it as part of a migration from homegrown backend systems to CRM 2011, and it has evolved to be interesting enough that people outside my company have been curious about our REST testing abilities – so the decision for my company to release our testing framework as an OSS project was really a no-brainer.

The case is simple: imagine that you depend on a REST resource serving e.g. locations to your Google map. You really want to be sure that the look and feel of the service does not change, because the client-side JavaScript consuming the service is extremely hard to write tests against, and you know for sure that you probably won’t be reading the newsletter email with the paragraph “By the way, service XYZ will introduce breaking changes in the following release” buried somewhere in the middle of lots of other boring stuff.

The project will probably evolve over time but the essence of the framework can be expressed in a few lines of code:

[TestInitialize]
public void TestInitialize()
{
    RessourceFactory.Init(Assert.Fail, Assert.AreEqual, Assert.AreNotEqual, Assert.AreEqual, Assert.AreNotEqual);
}

[TestMethod]
public void JqueryUIDemo()
{
    var expected = new Dictionary<string, Type>
    {
        {"latitude", typeof(decimal)},
        {"longitude", typeof(decimal)},
        {"title", typeof(string)},
        {"content", typeof(string)}
    };

    var res = RessourceFactory.Create(new RestConfiguration
    {
        Url = "/svn/trunk/demos/json/demo.json",
        Host = "jquery-ui-map.googlecode.com",
        ExpectedObjectSignature = expected,
        ExpectArrayResult = true
    });
    res.ValidateSignature();
}


These lines of code will assert the following:

  • That the REST resource returns an object with properties
  • That the number of properties matches the number of properties in the expected result
  • That no properties are found which were not specified in the expected result
  • That the types of the properties returned by the REST resource match your expected result



T.REST is released under the MIT license, so feel free to use it in any way you desire.



There are a number of quirks in the original implementation which I have tried to refactor out in this initial release, but if you give it a try and stumble upon something, please use the CodePlex issue tracker. If time allows, I would like to write some more code examples covering the small quirks and how-tos of the framework – so keep an eye on my blog and on the documentation on the CodePlex project for updates.



Regards K.

Monday, 10 October 2011

Google announces Dart - a programming language for the web

I was at GOTO Aarhus today, and Google had announced that they would present a new programming language of theirs to the public for the first time. So what did they come up with this time?

Dart is “a programming language for the web”. It has been developed by Lars Bak (the guy who created V8, the JavaScript engine in Google Chrome). Taking one step up the ladder, it is a paradigm shift which enables developers to write compiled code designed from the start to run in a browser. Scripting languages have ruled the web world for eons, but the inherent disadvantages of runtime interpretation and the DOM itself have driven Google towards the conclusion that web development needs to be taken to the next level. Tooling and frameworks for e.g. JavaScript (such as jQuery), CSS and HTML have evolved around making things easier for developers, but they have been constrained by the nature of the web, i.e. the request/response paradigm of the HTTP protocol. With the emerging HTML5 standards the rules of the game will fundamentally change, and I believe Google – again – have been quick to embrace that fact and ask somebody to drive this development forward in a direction pointed out by Google. I can’t remember speed being a key point of interest in any web browser until Google released a beta of Google Chrome which made Internet Explorer look really bad. Google gave people the impression that they could get a much richer browsing experience just by using another browser – and guess what? People just love it when they get more for free.

Dart runs in a Dart VM which can be integrated into the browser – that calls for insanely fast websites and web applications when combined with e.g. HTML5 offline capabilities. Dart can also be compiled to JavaScript with a tool called DartC, so it can run in browsers which do not support a native Dart VM – whether to support one is a commercial decision, not a technical one, I’m pretty sure. Lars answered a question regarding the possibility of running JavaScript directly from Dart, and his response was crystal clear: it was not an option, and it wouldn’t become one. “Everything starts falling apart”, he said, if you allow developers to hack around shortcomings in a language – and the nature of Dart isn’t scripting anyway, so the Dart team have made a clean cut there. That is for the good, I think, and another indicator that Google regards Dart as a programming paradigm which could rule out JavaScript as the tool for a given problem in a lot of cases. There’s plenty of room for both languages, but due to the heavy attention on the mobile browsing experience in the community today, I would expect mobile browsers to be the first to adopt VMs. The constraints on a mobile platform – CPU power, memory shortage, network latency etc. – call for VMs which are able to host and run compiled, not interpreted, code. Mobile platforms are all the rage because smartphones and tablets in various forms are about to take over from laptops and desktops as the main Internet browsing platform, so new tools, languages and a large community will surely emerge during the next few years.

Dart is still a work in progress, and Lars emphasized that a lot, so I don’t believe we will see a large community evolving in the near future – but the fact that some 15 Google developers stood up at the end of the keynote, so people could see their faces and catch one of them with questions during a break, proves that this isn’t just some prototype gadget Google have given birth to. It’ll be exciting to see the reactions from the other browser vendors such as Mozilla and Microsoft. Will they go in another direction and try to market their own solution to the problems identified by Google? I personally believe it won’t be long until Microsoft releases some sort of VM-like prototype to the Microsoft community, just like they did with Internet Explorer 8, which had a brand new JavaScript engine as a response to Google’s V8… On the other hand they might stick with optimizing JavaScript performance, but no matter what, they’ve got to come up with something. It’ll be fun to see what that will be and how the community evolves around Dart.

Resources:

GOTO Conference

Dartlang.org

Google Code blog – Dart announcement

 

Regards K.

Thursday, 6 October 2011

Debugging dynamically loaded Javascript files with IE Developer Toolbar

I’m currently stuck using the IE Developer Toolbar because my current project involves Microsoft xRM. Neat platform, but it is not cross-platform (yet – they’ve got something coming in the next release), so we’re using Internet Explorer for the time being.

The JavaScript setup in our solution involves dynamically loading some custom JavaScript files, but apparently the IE debugger refuses to acknowledge your dynamically loaded JS files. That sucks – really, it does, because if you don’t know that you can just search for your dynamically loaded JavaScript using the search bar in the top right corner, you’re in a world of s***. Then you’re left with good ol’ alert boxes and console logging… I don’t know about you, but I’ve been there, done that, and it’s not an option for me.

I looked around, and the solution is simply to search for your JavaScript content once the page (and your dynamically loaded files) have been fetched from the server. Search for something in the file you want to debug – in my case I’m trying to find the namespace “NC.Gruppe”:

[Screenshot: searching for “NC.Gruppe” in the IE Developer Toolbar]

Now we’re talking… The file isn’t available in the list of loaded JavaScript files, but you’re able to set breakpoints anyway – if you know what you’re supposed to do, or know your way around Google.

Thanks to Заметки for starting to write in English.

Wednesday, 31 August 2011

How to make System.Data.SQLite.dll work on a 64 bit Windows server

I have a pet project in which I’m using SQLite for persisting day-to-day gasoline prices from various companies. No one ever thought of that one, right? If this were 2001, I would have at least 20 employees and millions in venture capital already… Luckily this is 2011, and I’m not wasting anybody’s time and money on this one.

Anyway – I’m using SQLite as the persistence mechanism, and I had a great deal of trouble making it work. I am, for various reasons, currently working on a 32-bit Windows 7 laptop, and my production server is (luckily) a 64-bit Windows 2008 server. Everything worked fine on my laptop, but once I deployed my solution to the server I got various errors, all revolving around the message “Unable to load dll SQLite.Interop.dll”.

I thought at first that I just needed to adjust my Visual Studio project settings so all projects in my solution would build as 32 bit. That should work because as they say on MSDN: “If you have 100% type safe managed code then you really can just copy it to the 64-bit platform and run it successfully under the 64-bit CLR”

Long story short: I tried every possible combination of platform targeting, I tried deploying my code with both the 32-bit and the 64-bit System.Data.SQLite.dll – I tried just about everything, but never made it work. And I really couldn’t figure out why, because it ought to work but didn’t.

After digging for a while, I realized that SQLite for .NET is simply a wrapper on top of the original C++ implementation… A few clicks verifying what had to be missing on the server, and five minutes later I had installed the 32-bit Visual C++ package and everything started working.

The moral here is: I had a rock-solid idea that SQLite.Net wasn’t a wrapper around native code but SQLite written in pure .NET – and I never confirmed it by looking it up. I’ve done it before and I’ll probably do it again, but it is always a good idea to spend a few minutes learning about the architecture of the tools you’re about to embrace as part of your toolbox. Had I learned from the beginning that there was a C++ assembly hidden somewhere, I wouldn’t have spent an entire evening grinding my teeth at the computer… Lesson learned, this time for sure.

Wednesday, 17 August 2011

Using Jint to unittest your Javascript in C#

I recently stumbled across Jint and found it interesting enough that I have spent a few hours getting to know the product. 99 times out of 100 I don’t have the time or the energy to dig deeper into new products, but the timing was right, so off I went.

What is Jint? It is an open-source implementation of a JavaScript interpreter. The project defines itself in the following terms: “Jint is a script engine based on the Javascript language… Jint aims at providing every JavaScript functionalities to .NET applications.” Does this mean that I can take a piece of JavaScript and execute it in a .NET console application? Yes it does – and it works out to be a much more frictionless experience than you might expect. I tried integrating QUnit with CruiseControl.NET a while back to test JavaScript in a managed environment, and even though I got it 95% working, it really didn’t feel like a comfortable way to go. Let’s see some code (the example is from the project’s website):

var script = @"
    function square(x) {
        return x * x;
    };

    return square(number);
";

var result = new JintEngine()
    .SetParameter("number", 3)
    .Run(script);

Assert.AreEqual(9, result);


Really? Yes, indeed… I decided to try it out on one of our own internal JavaScript API methods and came up with this:



[TestMethod]
public void Basic_GetRestHost_ValueReturned()
{
    string expectedValue = "http://restservices.localhost";

    var jint = new JintEngine();
    var returnVal = jint.Run(File.ReadAllText("dgiapi.js") + "return $dgi.getRestHost();");

    Assert.AreEqual(expectedValue, returnVal);
}


The test passes, four lines of code, not too much ceremony along the way… It took a while to figure out that calls to Run aren’t chained – I thought I could preload our API in a base class and use a second Run call to invoke $dgi.getRestHost(), but I never made it work. It might not be best practice – it probably isn’t – but I haven’t dug deeper there yet.



Conclusion: Jint really looks promising. One of the major showstoppers in testing JavaScript for me has always been the lack of integration with build servers and Continuous Integration, but Jint seems to close the gap here. I can definitely see some uses in our business, where we spend more and more of our time writing JavaScript instead of server-side .NET code – especially because we’re currently migrating from a home-grown internal business application to a new, shiny installation of Microsoft xRM, in which we will inevitably end up with JavaScript to extend the standard user interface (hide buttons, load data into dropdowns etc.). It is business critical that these scripts work as expected, so it would be nice to be able to unit test at least parts of them in a Continuous Integration environment. I’ll look forward to a PoC of Jint under these circumstances.



The project is a work in progress, and I submitted a bug yesterday – it was fixed this morning in a dev branch. Thumbs up.

Tuesday, 26 July 2011

How to qualify an Enterprise-y software solution from a tech guy’s perspective

At some point in your career you will inevitably come across one of those major projects which will replace what you’ve got with something better, more reliable, more suited for exactly your business needs, easier to maintain and a whole lot more… You know the drill. Let’s, for the fun of it, call the legacy system bound for replacement OldSystem and the new system ShinySystem.

If you’re really lucky, someone at some point will ask you to qualify a number of vendors’ ShinySystems against each other. Maybe the decision has already been made and you find yourself stuck in an arranged marriage, but at the very least you still need to get to know the new ShinySystem at a higher level. What should you be looking out for? I’ve had the pleasure of working with both legacy systems and shiny SharePoint solutions during the last three years, so I’ve faced these types of questions before. I remember that I used to be terrified of answering questions such as “What do you think of this ShinySystem?”, because frankly I didn’t know what to look for from a technical point of view. During the last two or three years I’ve summed up the gazillions of mistakes I’ve made along the way, and here it is – the ultimate “What you should ask your vendor once the guys from Sales have left the room”.

Disclaimer: I’ve tried to keep the list technology-agnostic, but keep in mind that I’m a .NET guy… Also bear in mind that this list is intended for qualifying larger Enterprise-y systems. If you intend to use it for qualifying a .NET control you’re about to buy for 99 dollars, you are way off target. I assume that you’re looking at something you will be hosting yourself, on servers which are not cloud-based and which you have full control over, because cloud-based services are a whole other ballpark. Last but not least – you’re a tech guy, so I’ll be focusing on tech stuff and not issues such as licensing. Enough rambling – here goes:

Security model

I’ve come to the conclusion that in just about any system there is a concept of security. It can be disregarded – e.g. by placing an internal website on the local intranet and telling nobody it exists except the five developers who need access to it – but the concept of security still exists and has to be taken into account when designing the system. Excluding someone from doing something is what security is all about. The way security is handled tells you whether the system was designed with security in mind. If it wasn’t, you should ask yourself: what else wasn’t taken into account? I would start bothering the developers of ShinySystem with questions like the ones below:

  • How are users and roles created and maintained in ShinySystem?
    • Users, roles, permissions and securables are completely different things. If you think “read” and “write” are roles, please stop reading until you’ve learned the distinction.
    • User authentication and user authorization are completely different things. If you don’t know what I’m talking about, please stop reading until you’ve learned the distinction.
  • Is there a concept of a user group?
  • How are permissions maintained?
  • Is it possible to define your own set of permissions?
  • Is it possible to break inheritance?
  • Please explain how auditing works in ShinySystem

That should quickly lead you towards areas of the system which nobody else in your organization knows exist – but they have to live with the consequences if ShinySystem doesn’t live up to business expectations regarding security. If you have a business case regarding security which OldSystem doesn’t solve according to business needs, you should analyze why it doesn’t work in OldSystem, solve it on a whiteboard using ShinySystem, and maybe build a proof of concept / prototype if you’re still uncertain whether it will work. I regard a feasible security architecture as a key factor when qualifying software products – simply because an insufficient architecture affects close to 100% of the users of the system, and users expect things like security to “just work”. They will blame you, not the vendor, if security issues come to cause friction in their daily work, so watch out on this one.
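To make the distinction from the questions above concrete, here’s a toy model (a sketch in JavaScript, not any particular product’s API): users are assigned roles, roles are granted permissions on securables, and authorization is a walk along that chain – “read” and “write” live at the permission level, never at the role level.

```javascript
// Toy security model: user -> roles -> permissions on securables.
const rolePermissions = {
  editor:  { invoices: ["read", "write"] },
  auditor: { invoices: ["read"] }
};

// Authorization: does any of the user's roles grant this permission on this securable?
function isAuthorized(userRoles, securable, permission) {
  return userRoles.some((role) => {
    const grants = rolePermissions[role] || {};
    return (grants[securable] || []).includes(permission);
  });
}
```

Authentication (who is this user?) would sit entirely outside this sketch, which is exactly the point of keeping the two apart.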

Custom code

You’re looking at any given ShinySystem with a business case ready which should be possible to solve in ShinySystem. At some point even the sales guys will have a hard time keeping up appearances, because it isn’t possible to model your entire business using drag & drop in that shiny visual modelling tool in ShinySystem’s main window. This is where you will probably have to code something on your own. Most major vendors design their core applications with extensibility in mind – Firefox extensions have been around for eons by now – so you should expect any Enterprise-y ShinySystem candidate to provide some way of letting you write your own code in terms of a “custom workflow”, “plugin”, “webpart”, “custom control” and so on. Given that you CAN write custom code and extend functionality – start wondering:

  • Which programming languages can I use?
  • What is the development cycle when writing custom code on your development machine?
    • One answer could be “Write code, build, deploy to test website, reset webserver, test change”
  • It is very likely that there is an API for interacting with ShinySystem
    • What does the ShinySystem API look like?
    • Are there any undocumented features in the API?
    • Which programming languages can you use against it?
    • What does an API error message look like – can I tell what’s going on just by looking at it?
  • How do I upload my changes to production?
  • If my custom code fails, how will it affect the stability of ShinySystem?
  • How do I debug an error in a custom component?
    • During development?
    • Once an error shows up in production?

Storage

Applications come and go. Data lives forever. There are basically two distinct ways of replacing OldSystem with ShinySystem – those where you move the OldSystem data (or parts of it) to another, shinier location, and those where you don’t. And again: if you decide, or are asked, to move parts of your data storage to another location, there are two types of migration projects: those where you are able to buy tooling and expertise, and those where you’re on your own. If you are migrating from SQL Server 2005 to Oracle, you have migration tooling available to you. But what if you are migrating from, say, Sitecore CMS written in .NET to Drupal, which is written in PHP and uses a completely different database technology? You are on your own on this one. Maybe you’re lucky enough to find tools and guidelines which can help you, but it’s up to you to find and use them. In either case you will end up in a migration project writing mapping code which extracts data from OldSystem and inserts it into ShinySystem. This discussion quickly descends into low-level technical details, but remember that you want to know where your data lives in ShinySystem and what it looks like – especially now that cloud-based services are all the rage and data flows around. Do you know on which servers your Google emails are stored? Are those servers located in Europe or the United States? Do you care? If those emails are crucial to your business with regard to storage management policies, you HAVE to care – that’s my point.
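That mapping code usually ends up as a pile of small, boring functions like this one – a sketch (both record shapes and the date formats are invented for illustration):

```javascript
// Map a customer record from OldSystem's shape to what ShinySystem expects.
function mapCustomer(oldRecord) {
  return {
    fullName: (oldRecord.first_name + " " + oldRecord.last_name).trim(),
    email: (oldRecord.email || "").toLowerCase(),
    // OldSystem stored dates as dd-mm-yyyy; ShinySystem wants ISO 8601 (yyyy-mm-dd).
    createdOn: oldRecord.created.split("-").reverse().join("-")
  };
}
```

Writing hundreds of these is the boring part; the hard part is deciding what to do when a source field is empty, malformed, or means something subtly different in the new system.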

Side-note: Developers are usually in full control of an entire system, but keep in mind that some Enterprise-y systems don’t allow you to meddle with the physical data storage. A dedicated Microsoft Support SWAT team will hunt you down and bleed you to death with a blunt knife if you so much as add a new column to a table in the database behind Microsoft xRM. In xRM you’re allowed access through webservices, but the underlying data storage could be flat text files for all you know. It is of course a database, but Microsoft decided early on to take full control of the databases behind Sharepoint and xRM.

Keep in mind that vendors should have an answer ready on these topics:

  • Where is my data once I click “Save” on a new user / a new item etc. in ShinySystem?
    • The answer here is of great interest if ShinySystem is located in the cloud
  • How do I perform a backup / restore of my new datastore?
  • In regard to tooling:
    • How do I monitor, profile and optimize the current use of data?
    • How do I upgrade the database schema?
    • How do I deploy database schema changes to production?

Scalability

Scalability is a very loaded word – what is scalability after all? Does your system scale well if ShinySystem responds in a timely fashion? Wikipedia offers one feasible definition: “Scalability is the ability of a system, network, or process, to handle growing amounts of work in a graceful manner or its ability to be enlarged to accommodate that growth”. The article also suggests possible scalability dimensions such as administrative, functional, geographic and load scalability. One might add that a system should be able to scale down as well – if you’re doomed to maintain 10 servers because that’s the way you installed it, but figure out after 6 months that you only need 2 and are unable to reconfigure your installation because “that’s the way it is” – then your system doesn’t scale very well in my opinion.

I’m from a web-based world, and in just about every case I’ve experienced with users complaining about long response times, the bottleneck in the end came down to problems extracting data from the database. The database might be flooded with requests – SQL statements were poorly written, or there might be an infinite loop on a webpage with a database call in it which effectively prevented the database from responding to other requests in a timely fashion. If ShinySystem is a web-based solution you want to dig into how ShinySystem communicates with the datastore, that’s for sure.
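The classic culprit is hitting the database once per item in a loop instead of once per page view. A contrived Javascript sketch – the `db` object here is a made-up stand-in for a real database client, so the round-trips are countable:

```javascript
// Stand-in "database client" that counts round-trips so the
// difference between the two approaches is measurable.
var db = {
  calls: 0,
  queryOne: function (id) {
    this.calls++;
    return { id: id, name: "item " + id };
  },
  queryMany: function (ids) {
    this.calls++;
    return ids.map(function (id) { return { id: id, name: "item " + id }; });
  }
};

var ids = [1, 2, 3, 4, 5];

// Anti-pattern: one round-trip per item – N queries flood the database.
var slow = ids.map(function (id) { return db.queryOne(id); });
console.log(db.calls); // 5

// Better: one round-trip fetching everything the page needs.
db.calls = 0;
var fast = db.queryMany(ids);
console.log(db.calls); // 1
```

Same result, one-fifth of the load – and with real network latency in between, the difference is what users feel.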

  • How would you suggest we scale up ShinySystem if the number of users exceeds what we’re expecting?
  • Can parts of ShinySystem be run on multiple servers?
    • Large websites might run on N webservers with one single database server, but ShinySystem might consist of both the webserver AND the underlying database.
  • How would you scale down ShinySystem?
  • Is caching a baked-in feature of ShinySystem?
    • How does it work?
    • How do you administer and tweak caching features?
  • How does ShinySystem communicate with its datastore – synchronously or asynchronously?
  • How would you throttle or prioritize data requests from various parts of ShinySystem?
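When you ask about “baked-in caching”, it helps to remember that caching in its simplest form is just memoization – remembering a result per key so repeated requests skip the expensive work. A bare-bones Javascript sketch of the idea (not any vendor’s actual mechanism):

```javascript
// Bare-bones cache: remember results per key so repeated
// requests skip the expensive lookup entirely.
function memoize(fn) {
  var cache = {};
  return function (key) {
    if (!(key in cache)) {
      cache[key] = fn(key);
    }
    return cache[key];
  };
}

var lookups = 0;
var expensiveLookup = function (id) {
  lookups++; // pretend this is a slow database or webservice call
  return "data for " + id;
};
var cachedLookup = memoize(expensiveLookup);

cachedLookup(42);
cachedLookup(42); // second call is served from the cache
console.log(lookups); // 1
```

The hard vendor questions are everything this sketch leaves out: invalidation, expiry, memory limits and what happens across multiple servers.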

There is of course literature available on the subject – I’m currently reading “Scalable Internet Architectures” by Theo Schlossnagle.

Upgrading

You’ve built your ShinySystem, the users are happy, you’re happy and have learned your way around the quirks and limitations of ShinySystem 4.0, which is the version you’re running. Then a new version 4.5 is ready for sale and management obviously wants to upgrade because reporting and statistics are better in version 4.5. Now you’ve got a headache, because management of course expects everything to keep running as usual with all the new stuff made available to them. Who can blame them for such an assumption?

How you and your vendor have prepared yourself for such a situation (and upgrades will be part of your life from time to time once you start playing with standard Enterprise-y software) is of great interest. Remember: Every time you decide to extend standard functionality you take sole responsibility for it to work after an upgrade of the underlying system. It can’t be emphasized enough. Often vendors will make a great effort to make it easy for developers to extend their standard software – by providing templates, base classes, fully documented APIs wrapped in a multitude of languages and so on. It looks nice but remember that whatever you do – if you extend standard functionality it’s you and not some ShinySystem key account manager who will be called upon once you perform an upgrade and things start to fall apart.

You should really be looking out for systems that are hard to model towards your business needs without writing custom code. If you’re facing problems where you’re trying to configure ShinySystem to fit business needs 100% and 9 out of 10 times it turns out you have to write custom plugins to make things happen, you’re bound for trouble. In my experience, the size of an upgrade project grows exponentially with the number of plugins you need to test on a new version.

It is sometimes also a question of mind-fiddling with your managers and project leaders. Give them a choice: deliver 80% of a given task by tomorrow using standard functionality already on your testing environment – or deliver 100% of the business requirements in about three weeks, give or take a week, because those last 20% require you to write custom code, test it, test everything else and still risk breaking all the things you didn’t take into account because of obscure dependencies and you-know-the-drill… Most (or at least some) sane project managers will go with the 80%. Sometimes the core business value is hidden in the last 20% and you have to take your time writing and testing code – but then you do it knowing that it is vital to the business, and that the money spent on giving a custom extension your very best shot is money well spent.

Developer community

“Can I find solutions to my ShinySystem problems on Google?” It really comes down to that. I make a Google search no less than 20 times a day, every day, at work when I’m doing programming chores. If you decide to go with any Enterprise-y system you really, really want to go with one that has a decent developer community – meaning people who use the system you’re about to get married to and provide developer solutions to developer problems. It can’t be emphasized enough – solutions to your problems will take longer to develop, will lack quality and probably won’t follow best practices if you have to educate yourself in ShinySystem because no one out there can help you. Developer communities embrace both opensource products and products like Microsoft Sharepoint – there are surprisingly many people out there who love Microsoft Sharepoint to a degree where they build webparts every day and provide them to you for free. Dedicated bloggers (also non-MVPs) are consistently adding new posts about problems and “how-to-avoid-strange-messages-like-XYZ-if-you-want-to-ZXY” experiences – things you are likely to experience for yourself once you make a final decision to go with ShinySystem. I don’t value the developer support you might get from ShinySystem as highly as the support you get from real-world users with no relation to the company that builds and sells ShinySystem – simply because, when it comes down to it, ShinySystem’s own developers will never advise you to use another product which they may know of and which might solve your problems in a much better fashion than ShinySystem is able to.

  • Have people before me used ShinySystem with success?
  • If I type “ShinySystem blog” and “ShinySystem best practice” into Google, what do I find?
  • Are there any companies that offer developer and user education in ShinySystem?
  • To what extent do people with wives and families love ShinySystem so much that they can’t resist developing ShinySystem plugins in their spare time for free?

If you decide to go with any ShinySystem, even a major one with an active community surrounding it – spend a few hours writing blogposts about your own discoveries along the way. All those posts available to you have been written by someone for you to use for free, right? If you like what others have written and use their work to get things working at your current gig you should at least consider sharing your experiences with ShinySystem on a blog or similar – anything goes as long as Google indexes it once in a while.

Epilogue

There are a ton of questions regarding debugging, logging, development environments, licensing issues, testing abilities, how to automate builds and deploys, surveillance and monitoring of production systems and so on which I won’t cover simply because I’ve been writing for hours now and need to stop at some point and get some work done…

My final statement for now will be this one: There are always tradeoffs – it is impossible to have your cake and eat it too so you need to focus on what’s important for you as a developer.

  • Is it an extensive range of tooling?
  • Is it the programming language available to you?
  • Will you be running screaming away if you’re told you can’t add indexes to the database because database optimization is being handled for you behind the scenes by ShinySystem?
  • Do you need an extensive developer community you can turn to or are you best off by inspecting code and figuring things out for yourself?

If you’ve come all the way to the bottom of this post (that would be around here, I guess) you probably have a few ideas about other areas of concern or bulletpoints missing in the areas covered by this post. Do write them in the comments section – I’d be thrilled. Until next time…

fredag den 1. april 2011

How to place favicon.ico outside the root folder of your website

I’ve always hated being forced to have favicon.ico (the little icon in your browser’s address bar) in the root folder of the websites I’ve been working on. It’s just plain ugly having content files in the root of anything, because basically: it doesn’t belong there and makes any web project look bad… Until now I never thought this could be customized, but today I googled and found a neat solution: just place the following tag in your HEAD-section:

<link rel="SHORTCUT ICON" href="/content/favicon.ico"/>

This tells your browser to go get the icon from /content/favicon.ico instead of checking the root of your website. According to Wikipedia this solution is also cross-browser compatible. I just loooooove discovering little bits and pieces that make annoying files disappear…

Regards K.

mandag den 31. januar 2011

HTML5 followup

Just wanted to follow up on my last post: Mads Kristensen is on Hanselminutes #251 talking about HTML5. Check it out :-)

Regards K.

fredag den 21. januar 2011

HTML5 - what is it and what’s in it for me?

The wife’s out and the kids are asleep, nothing interesting is on TV, so I started googling for “HTML5 explained” and such. I’m interested in the subject because I feel obliged to stay on top of the new stuff coming at us, and I’ve wanted to dig a few feet into the matter when I had the time. What I found was simply so interesting that I wanted to share it with you, because it’s a whole new world, baby…

“HTML5 is a response to the observation that the HTML and XHTML in common use on the World Wide Web is a mixture of features introduced by various specifications, along with those introduced by software products such as web browsers, those established by common practice, together with many syntax errors in existing web documents”. Wikipedia tells us that HTML5 is trying to solve the problems we have today with an outdated HTML 4.01 specification, where we rely heavily on Javascript, AJAX and a bunch of industry standards such as jQuery to animate, play videos, validate data and so on. Take a look at the available types of input in HTML 4.01 and the input types available in HTML5. Geolocation is also a first-class citizen in HTML5, which enables any HTML5-capable browser to e.g. load a Google Map and mark your location on it by using a few algorithms and whatever wireless network Google is able to locate you by.

While looking into the matter I found quite a few blogposts by people concerned that the term “HTML5” is more of a marketing phrase than a distinct set of related technologies. Jeffrey Zeldman (of web standards fame) has an interesting blogpost on the subject. He advocates that we market HTML5 as “HTML5 and related technologies” or “HTML5 and other new technologies”. The reason is that HTML5 is still so vague that people don’t understand it (I don’t either – don’t shoot me, I’m just the piano player) – which leaves plenty of room for misunderstanding core concepts and discussions running in circles. I like “HTML5 and related technologies”, since the HTML5 specification concentrates a lot on core HTML concepts (parsing) and less on everything else. There’s nothing about CSS in the spec, for instance – that’s another story told by another spec.

What will the spec end up looking like? It is still under active development, and every release has a profound disclaimer telling the world to expect the current APIs and elements to be subject to change in the coming years, until the standard has settled and stabilized. I found a fascinating post about the development of the first HTML standard and how it became what we work with (and sometimes curse at) every day. Get this: “But none of this answers the original question: why do we have an <img> element? Why not an <icon> element? Or an <include> element? Why not a hyperlink with an include attribute, or some combination of rel values? Why an <img> element? Quite simply, because Marc Andreessen shipped one, and shipping code wins”. I don’t expect things to have changed much since then (sarcasm intended), so we will probably see some of the big players on the market implementing things their way – and once their solution reaches critical mass, that’s how things make it into standardized glory.

What can you do as a developer to prepare yourself for the inevitable? You don’t have to worry for the next few years, but the time will come when you will need to take a deep breath and unlearn your current way of doing things when developing for the web. The reason is that once you start to use HTML5 there’s probably some element which solves a given problem for you. Don’t format and validate your dates and times with Javascript anymore – use the appropriate input element instead. Don’t hand-roll URL validation either – use the input type “url” instead. Order a new copy of “Who Moved My Cheese”, unlearn what you know and move on. The time is right when the HTML5 spec settles a bit.
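For reference, a handful of the new input types look like this – and they degrade gracefully to plain text inputs in browsers that don’t know them yet (field names here are just examples):

```html
<!-- HTML5 input types: the browser supplies validation and pickers for free. -->
<input type="date" name="birthday">
<input type="url" name="homepage">
<input type="email" name="mail">
<input type="number" name="quantity" min="1" max="10">
```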

Becoming a Javascript jedi is another safe bet, and you should start today. HTML5 (or HTML 4.01 for that matter) won’t do you much good if you don’t know your way around a scripting language for the web. The applications you run locally on your machine today are likely to move much closer to the web. Microsoft has released a limited set of Office features in Office Web Apps for Sharepoint 2010, which enables users to open, edit and save Office documents in a web browser without ever having installed Office locally on their machines. Things are moving towards web-based solutions, and I wouldn’t be surprised if Office Web Apps has more features 10 years from now than an Office installation on your local hard drive. Who says you need much more than a browser and an uplink to your desktop running in the cloud 7-10 years from now anyway? Which technologies are likely to be used when we get to that point? Spot on – HTML5 with Javascript (or a similar scripting engine) as the glue between data and user interactions.

Feel free to comment on your thoughts on HTML5 – I’d love some feedback on my thoughts in this matter.

onsdag den 12. januar 2011

Sharepoint 2010 - List item on custom content type not updated using Edit dialogue

Sharepoint 2010 can be a tricky bastard sometimes… One of the small, annoying things I’ve discovered being married to Sharepoint 2010: yesterday, when I added two new columns in Schema.xml to an existing custom document content type, I couldn’t update these two columns in the List Item Edit dialogue. The other columns in my content type (metadata fields) worked fine, but my new expiration date and checkbox columns wouldn’t change their values. Nothing showed up in the log files – what the hey?? I wasn’t really surprised though. Behaviour like this isn’t uncommon during Sharepoint development, so I googled the problem and found out that I had to alter the column name and everything would work out fine… Which, to my pleasant surprise, it did.

To make things work I ended up changing the StaticName property on my Field in Schema.xml, hit deploy, and things have gone smoothly from there. I’m writing this since I had a bit of trouble finding a solution to the problem myself. Spread the word :o)

mandag den 8. november 2010

Visual Studio likes to break my lines

I have always been irritated by Visual Studio automatically breaking lines whenever I completed a statement. For some reason I have lived with this for eons, but today it suddenly got to me and I (mentally, for the sake of my co-workers) cried “DAMN YOU!!!” and started searching the Tools –> Options menu. After a mere two minutes I found this:

(screenshot of the relevant formatting setting under Tools –> Options)

I could probably have spared quite a few hours of my life re-assembling lines of code torn to pieces by Visual Studio after completing a line of code. Lesson learned: don’t grow accustomed to irritation in your development environment for long periods of time, because the solution could be no more than two minutes away.

torsdag den 17. juni 2010

Norwegian Developer Conference – Day 2

I’m impressed… I’ve attended two different .NET debugging sessions today with Ingo Rammer. I kid you not – he codes faster than Ayende himself. I don’t know the average amount of characters he was able to punch in during a given timeframe, but it was probably twice my own, and I’m not exactly a slow typist. It was impressive – just like his debugging sessions. I got to know a load of features I had no idea existed in both Visual Studio and WinDbg. How do you debug a Windows Service which fails during startup? I know now… It involves using the geekiest tools available to you from Microsoft, but it CAN be done.

I also attended two sessions with Jon Skeet. Today he shared with us his thoughts and wishes for C# 5 and how he would like the language to evolve. I can’t say I agree on much of it but he had some good thoughts about reducing boilerplate code when creating new instances of objects in various situations. But having polymorphic method overload in interfaces? I think not…

I’ve been a fan of DDD for quite some time (not ever succeeding with it on a project, but even though you suck at football you can still be a fan, right?). I went to see Eric Evans give a talk reflecting upon his experiences and learnings after he wrote his famous DDD book. Not the best performance, but it was interesting to hear about e.g. domain events and their importance for a successful DDD experience if you take them into account and use them wisely. And I now know a lot more about the .NET Service Bus and how it is the most important invention since Windows NT back in 1993. Don’t take my word for it – I believe Juval Löwy’s word weighs a lot more than mine. He believes in it too, so you should definitely go for him instead of me :o)

I’m off again – there’s a geek party planned at about 9pm, which is some 20 minutes from now. We’re leaving tomorrow, so this is probably the last post for this week. I’ll recap the conference in a few days – see you then :o)

onsdag den 16. juni 2010

Norwegian Developer Conference – Day 1

I’m attending NDC 2010 and wow… The fatigue is settling in by now after a nice dinner at a pizza shop. I dare you: have any of you ever paid 180 Norwegian kroner ~ 28 US dollars for a pizza without beverages? I knew that Norway was bloody expensive, but I honestly didn’t see that one coming.

I attended seven sessions today Wednesday, including the keynote kicking off the conference. I think I gave three or four green cards and two yellows… No red cards yet. Sadly the two F# sessions this morning were cancelled, so I watched Kevlin Henney first, talking about architecture, and then Steve Strong walking us through the new features in the .NET 4 System.Threading namespace. Quite interesting – I have a feeling that the usage of System.Threading is limited considering the day-to-day problems we face back at work, but it’s good to know about the new stuff Microsoft has made available to us.

After that it was time for probably the best session for me today: Scott Allen doing a talk about modern Javascript. I know so little about the strengths and possibilities of Javascript, so it was packed with information about features and core concepts that I had no clue existed. As a C# programmer, hearing things like “functions as constructors” and “replacing the ‘this’ keyword with another object” rocked the boat quite a bit… I have decided to get to know Javascript better, because my resistance towards dynamic languages comes partly from ignorance and partly from fear of the unknown, I guess. The only lasting solution to that problem is to take a deep dive into it – I was impressed, even though I’ll have to spend some time getting familiar with the concepts.
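For fellow C# programmers who haven’t seen those two tricks, here is roughly what they look like in plain Javascript (my own minimal examples, not Scott Allen’s):

```javascript
// Functions as constructors: any function invoked with `new` produces an object.
function Person(name) {
  this.name = name;
}
Person.prototype.greet = function () {
  return "Hello, " + this.name;
};

var p = new Person("Kristian");
console.log(p.greet()); // "Hello, Kristian"

// Replacing `this`: call/apply let you point `this` at any object you like –
// something a C# instance method simply can't do.
var impostor = { name: "Someone else" };
console.log(Person.prototype.greet.call(impostor)); // "Hello, Someone else"
```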

The session by Jon Skeet about Noda Time was interesting. I don’t think I share his passion for dates and times, and I’ll probably sleep like a baby tonight despite that… I for one don’t really get a kick out of a bug found in the New Zealand way of handling timezone restrictions, but he managed to convince me that programming dates and times into an application is just as complicated as anything else. Earlier I almost felt a bit ashamed of not being able to figure out “simple” things with dates and times, but the thing is: dates and times in software development are hard, period. Jon says so, I’ve felt it on my own body – talk to the hand!

Until tomorrow…

tirsdag den 15. juni 2010

Norwegian Developer Conference 2010

I’m currently sitting in my hotel room watching Brazil score an insane 1-0 goal against North Korea – I’ve spent the last 15 minutes planning tomorrow, where I’ll be attending NDC 2010. I’ve never been to the conference before, but a colleague of mine was here last year and I was a little envious when I learned which speakers attended the conference last year… This year it seems to be more focused on upcoming Microsoft-related technologies such as Windows Azure, F# and a bunch of C# stuff, but there’s also plenty of room for e.g. MonoRail and lots of other stuff built outside of Seattle.

I have decided to plan a bit ahead. The other conferences I’ve attended I went to without much planning, and afterwards I had a sensation that I should have been elsewhere instead of attending sessions which I ought to have known couldn’t teach me very much. What’s the idea of attending a TDD 101 session when you’ve passed that stage years ago? So tomorrow I’ll be switching tracks a bit. One of the worst things about such a conference is all the stuff you DON’T get to see live… There are podcasts and webcasts afterwards, but it doesn’t beat the sensation of being there yourself, of course. I’ll be following two sessions on F#, delving into Javascript with Scott Allen, and I’m looking forward to seeing Jon Skeet present a ported version of a Java date-time framework… I decided to go to his session for three reasons: Jon Skeet is there, I never questioned the System.DateTime type in .NET, and Jon Skeet is there… I haven’t got the faintest clue how another framework could attack issues regarding date-time stuff, so it’ll probably be either an eye-opener or time wasted, I guess.

…and now Brazil is leading 2 against 0… I’ll bet that if North Korea against all odds gets away with one or even three points the national TV-station will broadcast the North Korean goals in a loop 24-7 the next five years or so   :o)

onsdag den 28. april 2010

MSTest results on CruiseControl using .NET 4

We’ve decided to upgrade our Visual Studio 2008 solutions to VS2010, and I had a few issues updating our buildserver – one of the most annoying being that the XSLT rendering the .NET 4 build output didn’t include our test results. My XSLT skills are – well, mediocre on a very good day – but I finally figured out that the namespace in (CCNET installation folder)\webdashboard\xsl\MSTest9Report.xslt was wrong: the namespace was http://microsoft.com/schemas/VisualStudio/TeamTest/2006 and had to be changed to http://microsoft.com/schemas/VisualStudio/TeamTest/2010. Then my dear test results were back to normal again. It caused me a bit of a headache, because the XML was well-formed and the XPath was correct, so it was really weird for an XSLT n00b like me. It reminded me of the time when I used to debug Javascript by inserting “alert(‘123’)” into the code to see which if-clause got hit this time… What a great way to spend a few days at work that was :o)
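For reference, the namespace lives in the stylesheet declaration at the top of the XSLT file – roughly like this trimmed sketch (the prefix name and other attributes may differ in your copy of the file):

```xml
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:vs="http://microsoft.com/schemas/VisualStudio/TeamTest/2010">
  <!-- was: xmlns:vs="http://microsoft.com/schemas/VisualStudio/TeamTest/2006" -->
</xsl:stylesheet>
```

The lesson: XPath expressions silently match nothing when the namespace URI is wrong, which is why everything looked correct but produced no output.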

After changing the namespace everything is A-OK, even though it annoys me a bit having to install Visual Studio 2010 on our buildserver in order to execute our MSTests. I haven’t figured out a way to simply reference the Microsoft.VisualStudio.QualityTools.UnitTestFramework.dll assembly – it just won’t work, for a number of reasons such as odd files missing, MSBuild errors and other various issues I’ve encountered while trying to hack my way through. Has anyone ever made MSTests run on a buildserver with no Visual Studio installed? Please let me know and you’ll be my friend for the day.

tirsdag den 13. april 2010

Refactoring

The software development community contains a huge amount of literature with advice on how to “do” things. Do’s and don’ts are littered across the Internet – truth be told, I’ve also given birth to a few posts myself on various topics from time to time. Browsing the stuff one has written in the past can be quite an eye-opener in terms of “I-must-have-been-drunk-writing-that-piece-of-insanity-and-publishing-it-to-an-audience”… Any blogger with a decent record of blogposts across time probably has similar emotions towards their own posts – otherwise you’re doing something wrong, I believe. You’re not learning anything, and you’re definitely not making enough mistakes on a daily basis. Only if you (such as myself) are doing regular f***ups on production environments and are the proud owner of www.iwanttoshootmyselfwithaslingshotforhavingreleasedthat.com – only then are you in a position to discover new things about coding and yourself, and only then will you be able to reflect quietly upon the fact that truth is relative. What you believe today might not be what you believe after another day at work tomorrow.

So – with that in mind, it’s important for me to elaborate from time to time on what I believe are rock-solid facts right now. Not tomorrow, because I’ll probably screw up two or three times tonight during release and spend a few hours firefighting something which could have been avoided had I decided otherwise somewhere along the line. Then in a few years I’ll be able to look back on this post and think “Did I write that? Man, what a n00b…”.

For now I want to tell you a bit about my thoughts on refactoring, how I think it should be done, and the pitfalls I’ve fallen into while refactoring. Martin Fowler has written an almost mythological piece on refactoring, and his Refactoring website is a good place to visit because there are always new things to learn. Here goes:

1: As always: learn the basics, in this case about OO. If you don’t know the very basics such as “high coupling is bad” and don’t know when your code could benefit from an interface, you won’t be able to tell bad code from good code. Refactoring isn’t supposed to shape the code into something you alone like – that’s simply a waste of time. The overall goal is to improve overall readability and lift the quality of the code in terms that can be measured by tools such as FxCop and NDepend.

2: Refactoring is something you always do. Renaming a variable is refactoring – you don’t have to extract duplicate code into a method for what you’re doing to qualify as “refactoring”. That’s also why agile tells us that “refactoring” tasks on the Sprint board are an antipattern. You should be refactoring 50% of the time while coding, otherwise you’re doing something wrong.

3: Iteration, selection and statements – every line of code you’ve ever written has either been an iteration over something, a predicate or a statement. You can be aggressive refactoring iterations and statements, because that’s often not very dangerous. Iterations and statements are changed when we rename and change visibility of methods, put duplicate code in base classes, replace magic numbers, consider recursion etc. That’s often nothing to be afraid of. Changing behaviour, however, is a whole other story. You consider changing behaviour when you look at a monster of if-elseif-elseif-elseif…[snip]…else something and decide to introduce e.g. a Strategy Pattern. You also might go for refactoring that hideous substring-replace hell someone before you introduced to parse HTML strings, because you know your Regular Expressions to your fingertips… I urge you to go for it, but take your time writing those unit tests. The reason you’re refactoring selections is that they are too complex for the average programmer to understand – which suggests that you probably don’t fully understand what the code is doing either. How much of a moron are you if you actually think you can rewrite 100 lines of code you don’t understand into something smaller, prettier and still with the same behaviour intact? You can’t, unless you really know what you’re doing (which you don’t) – and it is just a pain to have to explain to people on the floor why invoices are suddenly being sent twice to the wrong address because you missed something during your rewrite.
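The if-elseif monster versus a strategy table, sketched in Javascript terms for brevity (the C# version with interfaces follows the same shape; the shipping rules here are invented for illustration):

```javascript
// Before: a growing if-elseif chain deciding how to calculate shipping.
function shippingBefore(country, weight) {
  if (country === "DK") { return 40 + weight * 2; }
  else if (country === "NO") { return 60 + weight * 3; }
  else { return 100 + weight * 5; }
}

// After: each branch becomes a named strategy. Adding a country no longer
// means editing (and re-testing) one ever-growing function.
var shippingStrategies = {
  DK: function (weight) { return 40 + weight * 2; },
  NO: function (weight) { return 60 + weight * 3; }
};

function shippingAfter(country, weight) {
  var strategy = shippingStrategies[country] ||
    function (weight) { return 100 + weight * 5; }; // default strategy
  return strategy(weight);
}

console.log(shippingBefore("DK", 10) === shippingAfter("DK", 10)); // true
```

The point of the unit tests is exactly that last line: proving old and new behaviour agree before you delete the monster.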

4: If you’re a C# programmer and have a pile of public methods you think nobody is using anymore, you can do three things: delete, check in and pray. Or you could leave the methods as-is. Neither option is very good – you should deprecate the methods instead and see if warnings start to pop up in other solutions. The [Obsolete] attribute in C# is a tremendously efficient way of figuring out if public properties / methods / classes are being referenced anywhere outside your code. It’s perfectly safe, and you can always go back and delete whatever you marked as obsolete once you have made certain that no warnings appear around your codebase.
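The same idea can be pulled off in languages without an [Obsolete] attribute – here is a hypothetical Javascript wrapper (my own sketch, not a standard API) that keeps the old function working but records every call so you can find remaining users before deleting it:

```javascript
// Hypothetical deprecation helper: keep the old function alive,
// but log every call so surviving callers reveal themselves.
var deprecationLog = [];

function deprecate(fn, message) {
  return function () {
    deprecationLog.push(message); // in real life: console.warn or a logger
    return fn.apply(this, arguments);
  };
}

var oldSum = function (a, b) { return a + b; };
var sum = deprecate(oldSum, "sum() is obsolete – use something newer");

sum(2, 3); // still works...
console.log(deprecationLog.length); // ...but the call was recorded: 1
```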

5: Use tools. You’re not smart enough anyway. I know myself well enough not to trust myself at (almost) anything regarding code. Tools don’t lie – you ask, they respond. Ask WinGrep “I want to find .cs and .aspx files which contain ‘SomeNamespace.IWant.ToFind’” and it will find them for you. If you’re busting your brain trying to figure out which assemblies a piece of code could be used in, you should be using a tool to help you. The human brain is by far the most unreliable computer there is. You and all of the human race suck at being 100% accurate – that’s why man has built computers with software for you to use, damn it… I’ve only scratched the surface of NDepend, but it has already helped me by clarifying some assumptions I had about our current codebase at work.

6: Don’t forget code readability. It’s not something which will improve your cyclomatic complexity level, but it’s so important to be able to read your code. Really read it – like a book. Your goal while programming should be to become the new Stephen King – in code. Or if a big guy with a beard from Hell is what turns you on, you could always go for Martin Fowler. I’ll leave the details up to you. Beautiful code is easy to read and reveals intent. A fair amount of the refactorings suggested on Fowler’s www.refactoring.com do indeed push for simple things such as renaming, because readability is vastly underestimated as a code quality metric.

7: Know when to quit. You can keep on refactoring the same code forever, because code isn’t perfect and can always be improved. Your choices and plans of attack change over time because you change and mature as a person (hopefully). You can easily refactor the same piece of code over and over again over the years without improving it very much, if you don’t look out. Ask yourself if it’s worth the effort. If it’s not, ask yourself if you could learn something new by refactoring this piece of code. If not, ask someone if you should go for it. If that someone doesn’t nod his or her head – focus your energy elsewhere.

Until later…

Visual Studio 2010 released

VS2010 has arrived – check out this blogpost from The Hanselman  :o)