Just another Software Engineer

This blog contains reflections and thoughts on my work as a software engineer

Wednesday, February 26, 2014

Windows Server 2012: How to run everything as Administrator

I’ve recently ported a few servers from Windows 2008 to Windows 2012 and learned a few things along the way – especially regarding the security model of Windows 2012.

Most of you know about User Account Control (disclaimer: I have a hard time being emotionally detached towards UAC), a.k.a. “Let’s figure out a way to make people grind their teeth, and then we’ll enter the business of dentistry and get filthy rich” (no pun intended). This so-called “feature” has been facelifted once more in Windows 2012, which means it is almost impossible to make Windows 2012 accept your administrative rights as… well, administrative rights. Why did you guys have to invent TWO kinds of administrators – one that can do almost everything, and the “Run as administrator” administrator…??

Anyway… terms and limitations which one does not have the power to alter, one must either work around (and rant whenever possible) or walk away from (while ranting). The third option – while not pretty – is to simply slap the server hard until it stops whining and accepts its place as second in command. The best way to claim your natural position as Leader of the Pack (a.k.a. Server Administrator) is by using PowerShell. UAC can in fact be turned off so that every program you fire up runs with elevated privileges – for the sake of the sanity of my fellow programming colleagues around the world, here’s how to do it (I’ve got to make Gist embedding work one of these days…)
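Since the Gist embed refuses to show up, here is the essence of what it did, as far as I can reconstruct it: flipping the well-known EnableLUA registry value, which turns UAC off machine-wide. The original lines were PowerShell; below is a minimal C# sketch of the same registry change (run it elevated, and note that a reboot is required before it takes effect):

using Microsoft.Win32;

class DisableUac
{
    static void Main()
    {
        // EnableLUA = 0 disables UAC for the whole machine, so every program
        // an administrator starts runs with elevated privileges.
        // A reboot is required before the change takes effect.
        Registry.SetValue(
            @"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System",
            "EnableLUA",
            0,
            RegistryValueKind.DWord);
    }
}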

I won’t take credit for writing the lines myself – I just put it all together in a Gist.

Regards K.

P.S. – There might be side effects: the blog post above mentions that the Metro Store might not be available when UAC is disabled.

Wednesday, November 13, 2013

Fixing broken links in CRM 2011 standard reports

I have recently battled my way through the inner workings of the CRM 2011 reporting facilities. We got ourselves into trouble when we rolled out a dedicated Reporting Services server and migrated all our reports – including the ones in CRM – to this new server. Along the way the reports were placed in another folder on the new server, and hence we had broken links within the standard reports.

The strategy to start with was to see if we could somehow run a repair on the SSRS Connector for CRM, which according to this post would re-publish the reports – and hopefully nail a few other issues along the way which are outside the scope of this blog post. No luck there – we never actually succeeded in running a successful repair, due to security issues, missing files in the installation directory and other fun stuff. After a full day of teeth-grinding, frowning and the occasional swearing and fist-throwing we changed strategy and decided to hack the standard report itself to fix the broken links – we figured that a single broken report was easier to live with than a full-blown crash of the entire reporting engine.

It turned out that editing the standard report was pretty straightforward – the report containing the broken links (the “Campaign Performance Detail Report”) is a giant you-know-what of XML. Go search for <Action> and you’ll find something like this:

[Screenshot: the report XML, showing an <Action> element]

Paddling through the XML you’ll eventually find an action which is a DrillThrough:

[Screenshot: a Drillthrough action containing a ReportName element]

Change the value of ReportName to the correct path on your Reporting Server, save and overwrite the existing report in your CRM installation – and you’re done.
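If you’d rather script the fix than hand-edit a giant XML file, a few lines of C# will do the trick. A minimal sketch – the RDL namespace below is an assumption (copy the actual one from your report’s root element, as it varies with the schema version), and /CRM_Reports is a hypothetical folder standing in for the real path on your new server:

using System.IO;
using System.Xml.Linq;

class FixDrillthroughLinks
{
    static void Main()
    {
        // Assumed RDL namespace - copy the real one from your report's root element.
        XNamespace rdl = "http://schemas.microsoft.com/sqlserver/reporting/2008/01/reportdefinition";

        var doc = XDocument.Load("Campaign Performance Detail Report.rdl");

        // Every drill-through action carries its target in a ReportName element.
        foreach (var reportName in doc.Descendants(rdl + "Drillthrough").Elements(rdl + "ReportName"))
        {
            // "/CRM_Reports" is a hypothetical folder - use the path on your new SSRS server.
            reportName.Value = "/CRM_Reports/" + Path.GetFileName(reportName.Value);
        }

        doc.Save("Campaign Performance Detail Report.rdl");
    }
}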

All you have to do now is put your edited copy of the standard report into source control, because some future rollup of CRM is almost destined to alter or overwrite the existing reports in your CRM installation.

Wednesday, April 24, 2013

I've recently been part of quite a complex integration project. Lots of fun building it - not so fun maintaining it. The project is a ticket ordering solution where people buy and pay for a ticket to an event in system #1 (a.k.a. The TOC) and are then redirected to our site (a.k.a. The Checkin Site) where they subscribe to various events (running, bicycling, kayaking etc.). You should not be able to enter the Checkin Site without having paid for a ticket in The TOC. One of the non-technical requirements was that a user should not really notice that the two systems operated on different domains, so we put quite a lot of effort into the data flow and encryption to ensure that data and login information would silently flow from The TOC to The Checkin Site, making the shift between the two almost transparent to the user.

A number of pain points were discovered along the way. This is a list of discoveries which I hope someone (such as myself in a distant future, when I've forgotten all about the pains and headaches of the last 6 months...) will find useful when building integration stuff. Here goes:

  1. Thou shalt persist external data as-is
The solution I implemented parsed a customer's order into a DTO and persisted it alongside other information about the order (timestamps etc.). It proved to be a wrong decision not to also persist the external data in the format we received it in, before any parsing occurred. We store everything relationally, and being able to query a customer's TOC data using SQL, rather than relying on a mix of web service calls and console applications, would have been - well... you can imagine the outbursts and occasional teeth-grinding.
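A minimal sketch of the idea - the names (TicketOrder, OrderParser) are made up for illustration; the point is the RawPayload field sitting next to the parsed data:

using System;

public class TicketOrder
{
    public string CustomerEmail { get; set; }
    public DateTime ReceivedAt { get; set; }

    // The external system's response, stored verbatim - queryable and re-parsable forever.
    public string RawPayload { get; set; }
}

public class OrderParser
{
    public TicketOrder Parse(string rawResponse)
    {
        var order = MapToDto(rawResponse);   // whatever parsing you already do...
        order.RawPayload = rawResponse;      // ...but never throw the original away
        order.ReceivedAt = DateTime.UtcNow;
        return order;
    }

    private TicketOrder MapToDto(string rawResponse)
    {
        // Deserialization details elided - they're not the point of the sketch.
        return new TicketOrder();
    }
}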

  2. Thou shalt agree on required fields
Is a person's last name required in your system in order to create a person? What about email? Gender? It proved to be a less-than-Apple-like user experience once a user entered our check-in, because our two domains (ticket ordering / payment versus event reservation) had different perceptions of data validation. Had we only sat down and talked for a while about the input fields in the two systems, we would probably have discovered that a person's birthdate is essential data in our system (due to age validation and other stuff) but less than important when ordering and paying for a ticket. So even though the user had entered their personal data in The TOC, and those data were transferred to us, they still had to fill in blanks such as age and gender once they entered the Checkin Site. It would have been a much smoother experience from the user's point of view to enter all information in one workflow and simply be presented later with the data they had already entered.

  3. Thou shalt agree early on end-to-end integration test phases
When planning, we didn't take into account that The TOC was still undergoing heavy development, even though we had agreed on a testing / bugfixing phase one month prior to release. The result was numerous test cases (such as "Order two adult and one child ticket, buy additional products X and Y in a quantity of 3, go to the Checkin Site, subscribe to a running event, validate your receipt on screen") which were outdated before they could be handed to our testers, because the guidelines for ordering tickets in The TOC didn't match what was currently running in The TOC's testing environment. Basically all tests came back without the tester ever making it to the Checkin Site, because the testers weren't able to order and pay for a ticket using the guidelines provided... which left developers sitting on their hands, ready to fix bugs, while no bugs were being reported. We ended up cancelling the setup, took all the quantitative tests and gave them to a dedicated resource sitting next to our development team, to lower the barrier of communication between the tester and the developers whenever flaws in the guidelines were discovered.

  4. Thou shalt be able to subscribe to events
Only a few days after we released, we found out that quite a few people didn't realize there was a second step involved - for whatever reason, a number of people never clicked the 200x300 pixel orange "Click here to enter the Check-in site" button on the receipt from The TOC. Because TOC notification services don't exist, the check-in system wasn't notified when new tickets were ordered, so the only option for us at the Checkin Site was to implement a pull-based console application asking "Give me all customers who have ordered something during the last 24 hours" and checking that every customer who had updated their ticket order during the last 24 hours was known to us. They might never have hit the check-in button, or they might have added an upsell product (such as breakfast Thursday) after their initial order and subsequent check-in had taken place. Especially updates to existing orders proved to be a challenge, until the synchronization thingie started taking those scenarios into account. We scheduled the job to run every 24 hours - we could have used a smaller interval between runs, but I would much have preferred a message-based solution where we subscribed to notifications as the primary source instead. I doubt we would ever have had a setup without some sort of pull mechanism for doing a full sync once in a while, but pulling is a tedious and slow way to make data flow your way when you learn nothing about your customers unless you repeatedly ask.
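For what it's worth, the pull job boils down to something like this sketch (ITocClient is a hypothetical stand-in for the TOC's web services, and TicketOrder is the DTO from the sketch above):

using System;
using System.Collections.Generic;

public interface ITocClient
{
    // Hypothetical wrapper around the TOC's web services.
    IEnumerable<TicketOrder> GetOrdersUpdatedSince(DateTime since);
}

public class OrderPullJob
{
    private readonly ITocClient _toc;

    public OrderPullJob(ITocClient toc)
    {
        _toc = toc;
    }

    public void Run()
    {
        // Ask for everything that changed in the last 24 hours: brand new orders
        // (customers who never hit the check-in button) as well as updates to
        // existing orders (upsell products added after check-in took place).
        foreach (var order in _toc.GetOrdersUpdatedSince(DateTime.UtcNow.AddHours(-24)))
        {
            // Create the customer if unknown to us, otherwise apply the update.
        }
    }
}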

  5. Thou shalt reconcile data early and often
We encountered problems along the way with The TOC's payment gateway, which caused a lot of customer support because users' transactions timed out. This in turn caused a lot of manual handling of customers' data in The TOC's backend, which in turn meant that a customer would in some cases be created twice, with tickets attached to both customer records in the ticket system. Our synchronization worked by fetching all orders for a given customer - we didn't (and don't) have anything to merge two customers' data. This caused some customers' synchronizations to fail, because a given ticket type (an adult ticket) is required in order to create an event subscription - which in turn causes problems in validating that we actually have all data in our systems, because the numbers no longer match.

We should have planned early in the process for data being out of sync, and implemented patterns for dealing with customer records not matching what we expected (or agreed on). One of the pitfalls was that the developers at The TOC's company assured us that no customer could submit an order without ordering an adult ticket. True - but once the problems with their payment provider kicked in, customers' data were handled manually in the backend systems, which didn't take our special business rules into account... Voila, data weren't what we expected, even though we were all in good faith and worked well together to win the race. At the very least we should, for our part, have had a plan for reconciling e.g. all adult tickets ordered against what we had registered in our own backend, and it should have been possible from day 1 to merge two customers' data into one order. The moral regarding data from external systems: trust nobody. Expect the unexpected. When (not if) a sync job fails on a given record - can you gracefully recover? Who gets notified, and how? Settle early how you handle ongoing support when the unexpected happens and somebody needs to take a dive into the bowels of the system to figure out what is going on.
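A sketch of the kind of day-1 reconciliation I mean - compare a simple invariant (adult tickets sold versus adult tickets registered) and make noise the moment the numbers drift; how the two counts are obtained is left out:

using System;

public class AdultTicketReconciliation
{
    public void Check(int soldInToc, int registeredInCheckin)
    {
        // The agreed invariant: every order contains an adult ticket, so the
        // two counts should match. When they don't, never fail silently -
        // someone must be notified and own the mismatch.
        if (soldInToc != registeredInCheckin)
        {
            NotifyOps(string.Format(
                "Adult ticket mismatch: TOC reports {0}, Checkin has {1}",
                soldInToc, registeredInCheckin));
        }
    }

    private void NotifyOps(string message)
    {
        // Mail, log, dashboard - whatever gets a human to look at it.
        Console.WriteLine(message);
    }
}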

 Well... there's probably more to come but I can't think of more "lessons learned" right now. Until next time...

Regards K.

Monday, October 15, 2012

Announcing T.REST – a testing framework for REST resources

I’ve finally done it – made my debut in the OSS community. As of today my company has released a bunch of refactored helper classes from a former project as a CodePlex project. The baby has been named T.REST and is a testing framework suited for regression-testing REST resources. I developed it as part of a migration process from homegrown backend systems to CRM 2011, and it has evolved to be interesting enough that people outside my company have been curious about our testing abilities regarding REST – so the decision for my company to release our testing framework as an OSS project was really a no-brainer.

The case is simple: imagine that you depend on a REST resource serving e.g. locations to your Google map. You really want to be sure that the look and feel of the service does not change, because the client-side JavaScript consuming the service is extremely hard to write tests against – and you know for sure that you probably won’t be reading the newsletter email with the paragraph “By the way, service XYZ will introduce breaking changes in the following release” buried somewhere in the middle of lots of other boring stuff.

The project will probably evolve over time but the essence of the framework can be expressed in a few lines of code:

[TestInitialize]
public void TestInitialize()
{
    // Wire the framework up to your test runner's assertion methods.
    RessourceFactory.Init(Assert.Fail, Assert.AreEqual, Assert.AreNotEqual, Assert.AreEqual, Assert.AreNotEqual);
}

[TestMethod]
public void JqueryUIDemo()
{
    // The property names and types we expect the resource to return.
    var expected = new Dictionary<string, Type>
    {
        { "latitude", typeof(decimal) },
        { "longitude", typeof(decimal) },
        { "title", typeof(string) },
        { "content", typeof(string) }
    };

    var res = RessourceFactory.Create(new RestConfiguration
    {
        Url = "/svn/trunk/demos/json/demo.json",
        Host = "jquery-ui-map.googlecode.com",
        ExpectedObjectSignature = expected,
        ExpectArrayResult = true   // the resource serves an array of objects
    });

    res.ValidateSignature();
}


These lines of code assert the following:

  • That the REST resource returns an object with properties
  • That the number of properties matches the number of properties in the expected result
  • That no properties were found which were not specified in the expected result
  • That the types of the properties returned by the REST resource match your expected result

T.REST is released under the MIT license, so feel free to use it in any way you desire.

There are a number of quirks in the original implementation which I have tried to refactor out in this initial release, but if you give it a try and stumble upon something, please use the CodePlex issue tracker. If time allows, I would like to write some more code examples covering the small quirks and how-tos of the framework, so keep an eye on my blog and on the documentation on the CodePlex project for updates.

Regards K.

Monday, October 10, 2011

Google announces Dart - a programming language for the web

I was at GOTO Aarhus today, where Google had announced they would present a new programming language to the public for the first time. So what did they come up with this time?

Dart is “a programming language for the web”. It has been developed by Lars Bak (the guy behind V8, the JavaScript engine in Google Chrome). Taking one step up the ladder, it is a paradigm shift which enables developers to write compiled code designed from the start to run in a browser. Scripting languages have ruled the web world for eons, but the inherent disadvantages of runtime interpretation and of the DOM itself have driven Google to the conclusion that web development needs to be taken to the next level. Tooling and frameworks for e.g. JavaScript (such as jQuery), CSS and HTML have evolved around making things easier for developers, but they have been constrained by the nature of the web, i.e. the request/response paradigm of the HTTP protocol etc. With the emerging HTML5 standards the rules of the game will fundamentally change, and I believe Google – again – have been quick to embrace that fact and put somebody in charge of driving this development forward in a direction pointed out by Google. I can’t remember speed being a key point of interest in any web browser until Google released a beta of Google Chrome which made Internet Explorer look really bad. Google gave people the impression that they could get a much richer browsing experience just by using another browser, and guess what? People just love it when they get more for free.

Dart runs in a Dart VM which can be integrated into the browser – that calls for insanely fast websites / web applications when combined with e.g. HTML5 offline capabilities. Dart can also be compiled to JavaScript with a tool called DartC, so it can run in browsers which do not support a native Dart VM – a commercial decision rather than a technical one, I’m pretty sure. Lars answered a question regarding the possibility of running JavaScript directly from Dart, and his response was crystal clear: it was not an option and it wouldn’t become one. “Everything starts falling apart” if you allow developers to hack around shortcomings in a language, he said – and the nature of Dart isn’t scripting anyway, so the Dart team have made a clean cut there. That is for the best, I think, and another indicator that Google regards Dart as a programming paradigm which could rule out JavaScript as the tool for a given problem in a lot of cases. There’s plenty of room for both languages, but given the heavy attention on the mobile browsing experience in the community today, I would expect mobile browsers to be the first to adopt VMs. The constraints on a mobile platform – CPU size, memory shortage, network latency etc. – call for VMs which are able to host and run compiled, not interpreted, code. Mobile platforms are all the rage because smartphones and tablets in various forms are about to take over from laptops and desktops as the main Internet browsing platform, so new tools, new languages and a large community will surely emerge during the next few years.

Dart is still a work in progress – Lars emphasized that a lot – so I don’t believe we will see a large community evolving in the near future. But the fact that some 15 developers from Google stood up at the end of the keynote, so people could see their faces and catch one of them with questions during a break, proves that this isn’t just some prototype gadget Google has given birth to. It’ll be exciting to see the reactions from the other browser vendors such as Mozilla and Microsoft. Will they go in another direction and try to market their own solution to the same problems identified by Google? I personally believe it won’t be long before Microsoft releases some sort of VM-like prototype to the Microsoft community, just like they did with Internet Explorer 8 and its brand-new JavaScript engine as a response to Google’s V8… On the other hand they might stick with optimizing JavaScript performance, but no matter what, they have to come up with something. It’ll be fun to see what that turns out to be, and how the community evolves around Dart.

Resources:

GOTO Conference

Dart language.org

Google Code blog – Dart announcement

Regards K.

Thursday, October 6, 2011

Debugging dynamically loaded Javascript files with IE Developer Toolbar

I’m currently stuck with IE Developer Toolbar because my current project involves Microsoft xRM. Neat platform, but it is not cross-platform (yet – they’ve got something coming in the next release), so we’re using Internet Explorer for the time being.

The JavaScript setup in our solution includes dynamically loading some custom JavaScript files, but apparently the IE debugger refuses to acknowledge your dynamically loaded JS files. That sucks – really, it does – because if you don’t know that you can simply search for your dynamically loaded JavaScript using the search bar in the top right corner, you’re in a world of s***, left behind with good ol’ alert boxes and console logging… I don’t know about you, but I’ve been there, done that, and it’s not an option for me.

I looked around, and the solution is simply to search for your JavaScript content once the page (and your dynamically loaded files) have been fetched from the server. Search for something in the file you want to debug – in my case I’m trying to find the namespace “NC.Gruppe”:

[Screenshot: a search for “NC.Gruppe” in IE Developer Toolbar, with a hit in the dynamically loaded file]

Now we’re talking… The file isn’t available in the list of loaded JavaScript files, but you’re able to set breakpoints anyway – if you know that that’s what you’re supposed to do, or know your way around Google.

Thanks to Заметки for starting to write in English.

Wednesday, August 31, 2011

How to make System.Data.SQLite.dll work on a 64 bit Windows server

I have a pet project in which I’m using SQLite for persisting day-to-day gasoline prices from various companies. No-one ever thought about that one, right? If this was 2001 I would have at least 20 employees and millions of venture capital already… Luckily this is 2011 and I’m not wasting anybody’s time and money on this one.

Anyway – I’m using SQLite as the persistence mechanism, and I had a great deal of trouble making it work. I am, for various reasons, currently working on a 32-bit Windows 7 laptop, and my production server is (luckily) a 64-bit Windows 2008 server. Everything worked fine on my laptop, but once I deployed my solution to the server I got various errors, all revolving around the message “Unable to load DLL SQLite.Interop.dll”.

I thought at first that I just needed to adjust my Visual Studio project settings so all the projects in my solution would build as 32-bit. That should work, because as they say on MSDN: “If you have 100% type safe managed code then you really can just copy it to the 64-bit platform and run it successfully under the 64-bit CLR”.

Short story: I tried every possible combination of platform targeting, I tried deploying my code with both the 32-bit and the 64-bit System.Data.SQLite.dll, I tried just about anything – but never made anything work, and I really couldn’t figure out why, because it ought to work but didn’t.
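In hindsight, the first thing to check is which flavor of native code your process will actually demand. A trivial sketch (Environment.Is64BitProcess and Is64BitOperatingSystem exist from .NET 4.0 onwards):

using System;

class BitnessCheck
{
    static void Main()
    {
        // An AnyCPU build runs as a 64-bit process on a 64-bit server, so it
        // will look for the 64-bit SQLite.Interop.dll - and the matching
        // native Visual C++ runtime must be installed for it to load.
        Console.WriteLine("64-bit OS:      {0}", Environment.Is64BitOperatingSystem);
        Console.WriteLine("64-bit process: {0}", Environment.Is64BitProcess);
    }
}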

After digging for a while I realized that SQLite for .NET is simply a wrapper on top of the original native implementation… A few clicks verifying what had to be missing on the server, and 5 minutes later I had installed the 32-bit Visual C++ runtime package and everything started working.

The moral here: I had a rock-solid conviction that SQLite.Net wasn’t a wrapper around native code but SQLite written in pure .NET – and I never confirmed it by looking it up. I’ve done it before and I’ll probably end up there again, but it is always a good idea to spend a few minutes learning about the architecture of the tools you’re about to embrace as part of your toolbox. Had I known from the beginning that there was a native assembly hidden somewhere, I wouldn’t have spent an entire evening grinding my teeth at the computer… Lesson learned this time, for sure.