All posts by Sandro Todesco

About Sandro Todesco

I write software for a living, and this blog is where I try to put into words what I encounter while programming. You will find mainly information about the stuff I'm working on: C#, Objective-C, Microsoft .NET, iOS, patterns, unit testing, and agile development.

Turn on Snapshot Isolation from the Beginning

My team released a major new version of a system to production. Significant changes were made on all levels. Despite extensive tests and emulation of realistic load scenarios, we started encountering deadlocks on our SQL Server DB after go-live.
After a short investigation I realized that enabling Snapshot Isolation on the DB should significantly reduce the likelihood of deadlocks. This article explains it quite well:

“The term “snapshot” reflects the fact that all queries in the transaction see the same version, or snapshot, of the database, based on the state of the database at the moment in time when the transaction begins. No locks are acquired on the underlying data rows or data pages in a snapshot transaction, which permits other transactions to execute without being blocked by a prior uncompleted transaction. Transactions that modify data do not block transactions that read data, and transactions that read data do not block transactions that write data, as they normally would under the default READ COMMITTED isolation level in SQL Server. This non-blocking behavior also significantly reduces the likelihood of deadlocks for complex transactions.”

Enabling this feature does carry risks: the application's concurrency control and the extra load on tempdb (which holds the row versions) have to be considered. Given the years of development already invested in the system, project management explored other options, and considerable time was spent on alternative solutions. To make things more difficult, we were not able to reproduce the scenarios causing the deadlocks in a test or development environment, and multiple releases of new versions failed to reduce the errors in production.

Finally we convinced management that the risk was manageable and turned on Snapshot Isolation. It solved the issue immediately and no side effects ever arose.

After this experience I made it a rule to enable Snapshot Isolation on SQL Server from the beginning of any new project. Why this isn't turned on by default is a mystery to me. If you develop on a different RDBMS, check your options (most systems have a similar setting). This way you continuously test against the Snapshot Isolation level while you develop. If you have to enable it late in a project, make sure you run regression tests.
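Turning the feature on is a one-off configuration step. A minimal T-SQL sketch (the database name is illustrative):

```sql
-- Allow transactions to request SNAPSHOT isolation explicitly.
ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Optionally make the default READ COMMITTED level use row versioning too.
-- Note: this statement needs exclusive access to the database.
ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON;
```

The second option is what removes the reader/writer blocking for ordinary queries without changing any application code.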

To Kill a Mobile App

I started developing mobile apps in 2009. My first 'professional' app (my definition of professional here is simple: I actually got paid for the work…) has recently reached its end of life. Not surprising for an app that went live in 2010. It had many downloads and fulfilled its mission. After 5 years it was decided not to keep the data in the back-end up to date anymore.
When I received an email requesting to 'take the app offline' I started to think about this for the first time: taking the app off the app store (in this case Apple's App Store) prevents new customers from downloading and installing the app. However, there is no way of 'wiping' the app off users' phones and tablets! The app will still be on thousands of devices, and doing nothing (not updating the data) will leave users with stale data.
My first thought was releasing a new version of the app with a single screen indicating the app is no longer supported. No doubt Apple would have objections to releasing an app like that, and of course there is no guarantee people actually update their apps! We ended up tweaking the back-end data delivered via a service to show a pseudo entry containing a similar message. Less than ideal: the message was not prominent, and users only saw it once they actually wanted to look at some data. We were also not able to turn off any of the app's functionality.
The solution is straightforward: if your app relies on a service to receive data, implement a 'kill switch'. Design the API so that it allows the app to display a message and disable functions, and make sure this is part of your initial design. Trying to do this retrospectively can be difficult. In my case the app hadn't been updated for years, the IDE and targeted OS version were outdated, and it would have been very costly to update the app just for this.
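A minimal sketch of such a kill switch in C# (all names are made up): the service returns a status object alongside its data, and the app checks it on every start.

```csharp
using System;

// Hypothetical status payload the back-end returns with its data.
public class AppStatus
{
    public bool Enabled { get; set; } = true;  // false => retire the app
    public string Message { get; set; } = "";  // shown prominently to the user
}

public static class KillSwitchDemo
{
    // In a real app this status would be deserialized from the API response.
    public static string HandleStatus(AppStatus status)
    {
        if (!status.Enabled)
        {
            // Disable all functionality and show only the message screen.
            return "RETIRED: " + status.Message;
        }
        return "RUNNING";
    }

    public static void Main()
    {
        Console.WriteLine(HandleStatus(new AppStatus()));
        Console.WriteLine(HandleStatus(new AppStatus
        {
            Enabled = false,
            Message = "This app is no longer supported."
        }));
    }
}
```

The important design decision is that the switch lives in the API contract from day one, so retiring the app later requires no app update at all.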

Thinking outside of the .NET box

In recent months I have been working mainly on iPhone projects in my spare time, and recently I was lucky enough to cut some iOS code for work too! Definitely time to blog about my encounter with Objective-C, Xcode, Cocoa and all the other wonderful things that come out of 1 Infinite Loop in Cupertino…

My first encounters with the iOS environment about 2 years ago left me with a feeling of "wow, this is old style!". Header files, pointers, no automatic memory management… It took several attempts to actually get into it. I quite like reading a book about a programming topic, which is the reason I picked up a book by Aaron Hillegass about Cocoa and OS X programming. It is definitely a good idea to get a basic understanding of Objective-C and Cocoa before delving into iOS.

Some of the differences are striking, and as a .NET/C#/Java programmer (…that's how I would describe myself…) you are definitely forced to think outside the box. The dynamic typing is a shock at first. It took me right back to my roots as a Perl/JavaScript coder. After a while you start enjoying the freedom. It is actually quite a good balance between warnings on the compiler side and freedom in the runtime.

Cocoa tends to use helper classes and delegates instead of inheritance to build frameworks. This is described as the KITT versus RoboCop approach in Aaron Hillegass's book. You have to read it yourself to appreciate the analogy (did I mention it is a good read?). That is something I have definitely taken over into my day-to-day .NET programming: it detaches your design from the class hierarchy.
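A minimal C# sketch of the same idea (all names are illustrative): instead of subclassing a component to customize it, the component asks a delegate object for the custom behaviour.

```csharp
using System;

// The "delegate" contract: behaviour is supplied from outside,
// not by subclassing the component.
public interface IDownloadDelegate
{
    void DidFinish(string file);
}

// The component stays closed; customization goes through the delegate.
public class Downloader
{
    private readonly IDownloadDelegate handler;

    public Downloader(IDownloadDelegate handler) => this.handler = handler;

    public void Download(string file)
    {
        // ... the actual transfer would happen here ...
        handler.DidFinish(file);
    }
}

public class Logger : IDownloadDelegate
{
    public string Last { get; private set; } = "";
    public void DidFinish(string file) => Last = "finished " + file;
}

public static class DelegateDemo
{
    public static void Main()
    {
        var logger = new Logger();
        new Downloader(logger).Download("report.pdf");
        Console.WriteLine(logger.Last); // finished report.pdf
    }
}
```

Any class can adopt the delegate interface, regardless of where it sits in the inheritance hierarchy.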

Some other concepts, like categories and key-value coding, can be found in .NET in a much more refined version. It is however refreshing to see how simply you can achieve similar goals.

Memory management is definitely the hardest part of the iOS journey for me. Having to allocate and free memory had not crossed my mind for a long time. Cocoa's retain/release model makes it bearable, however.

You are rewarded with a remarkably productive environment. The power you gain from being "close to the machine" has surprised me. I have long defended the claim that today's Java and C# JIT compilers build executables that are as performant as natively compiled ones. Although I have not measured my observations, I believe that on small devices the difference still matters.

If you want (or have to) develop for iOS you have choices these days. However, in my experience it is hard to beat the results you get from using Xcode, Objective-C and Cocoa. Give it a try.

IT books that stand the test of time

Recently I was trying to sort out which IT books I want to keep and which ones are ready to be thrown out. My main focus is on software development, so there is a vast collection of books on Java, C#, Perl, Objective-C and many more or less obscure programming languages, plus the occasional certification cram book, books on IDEs, project management, etc… Of course I found that the majority of these books do not age well! Pascal on Mac OS (pardon me, System) 7, CGI programming in Perl, Java 1.0 in a Nutshell and many more are really of historic interest rather than books to pick up and read on a regular basis.

However, a few of my books really stood the test of time. They influenced me, are a good read and are truly worth recommending to anyone interested in IT:

  • The Pragmatic Programmer

    (Andrew Hunt, David Thomas) 0-201-61622-X

  • Design Patterns

    (Gamma, Helm, Johnson, Vlissides) 0-201-63361-2

  • The Mythical Man Month

    (Frederick P. Brooks) 0-201-83595-9

  • Gödel, Escher, Bach

    (Douglas R. Hofstadter) 0-465-02656-7

  • Death March

    (Edward Yourdon) 0-13-143635-X

  • Software Craftsmanship: The New Imperative

    (Pete McBreen) 0-201-73386-2

Post a comment if you have any books that you find worth being added to this!

Ignore missing XML comments

When Visual Studio is set to generate XML comment files, it will print a warning for each public member that has no comment. Sometimes you want to ignore some of these warnings (say you have a bunch of pretty much self-explanatory enum entries, or worse, a generator that creates code without comments). What I have seen many times is that someone decides it's not worth fixing 100 warnings and disables the creation of XML comment files for the whole project.

You don't have to do that: simply use a #pragma statement to tell the compiler to stop issuing the warning for a certain type or region:

#pragma warning disable 1591

When the section finishes and the warnings should be produced again use the following:

#pragma warning restore 1591

Of course you can do this with any other warning. The example uses warning CS1591 (Missing XML comment for publicly visible type or member), but you can apply this to any warning; just drop the CS prefix.
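Put together, a minimal sketch (the enum is illustrative): the documented class still gets checked, while the self-explanatory enum is excluded.

```csharp
/// <summary>A documented public class.</summary>
public static class Program
{
    /// <summary>Entry point.</summary>
    public static void Main()
    {
        System.Console.WriteLine(Direction.North);
    }
}

// The enum entries are self-explanatory, so suppress CS1591 just here.
#pragma warning disable 1591
public enum Direction { North, East, South, West }
#pragma warning restore 1591
```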

How does automatic versioning in .NET work?

Visual Studio and .NET have a standard way of handling the versioning of assemblies. This is a good thing. You specify the version in the AssemblyVersionAttribute of your AssemblyInfo, the toolchain takes care of it, and it creates an .exe or .dll file with an embedded version number. .NET will even number your builds automatically! All you have to do is set the AssemblyVersionAttribute to "1.0.*" instead of setting all four values explicitly.

But how does the auto numbering actually work? It took me a while to find out, so I thought I would post it here for future reference. The version follows the pattern "<MajorVersion>.<MinorVersion>.<BuildNumber>.<Revision>", and the compiler generates the build number and the revision number: the build number is the number of days since January 1st, 2000, and the revision number is the number of seconds since midnight (local time) divided by two.
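The scheme can be sketched in a few lines of C# (a reconstruction of the rule above, not the compiler's actual code):

```csharp
using System;

public static class AutoVersion
{
    // build    = days since 2000-01-01
    // revision = seconds since local midnight, divided by two
    public static (int Build, int Revision) Compute(DateTime now)
    {
        int build = (int)(now.Date - new DateTime(2000, 1, 1)).TotalDays;
        int revision = (int)(now.TimeOfDay.TotalSeconds / 2);
        return (build, revision);
    }

    public static void Main()
    {
        // Ten seconds past midnight on January 2nd, 2000:
        var (build, revision) = Compute(new DateTime(2000, 1, 2, 0, 0, 10));
        Console.WriteLine($"{build}.{revision}"); // 1.5
    }
}
```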

Thinking about this I have some issues with the way how .NET handles versioning:

Calling the third number a build number is quite misleading. You would expect the auto numbering mechanism to come up with a unique build number, but it's only unique if you do one build a day! Guess what: that's the way Microsoft's processes work: daily builds. Not surprising if you build software of the size of Windows or Office… However, in today's enterprise software development we often have faster build cycles. Agile development and XP (Extreme Programming) advocate continuous integration, which means every check-in triggers a build. Effectively you need both numbers (BuildNumber and Revision) to get a unique build identifier. This leads us to the next issue:

This scheme lets you define only a two-level version. I think that's not good enough; it is quite common to have a three-level version. Let's define the three levels:

Major version: a release you would charge a customer to upgrade to. The application might look completely different; you have substantial improvements and major new functions.

Minor version: in general you do not charge customers for the upgrade. A minor version has some new functions and some enhancements, but users will not instantly see that it is a new version.

Revision: a service upgrade that fixes some errors. No new functions are added.

My conclusion (and I'm aware this will remain just a wish and never be realized :-) ):

Dear Microsoft: make the version a five-level number. This would allow your current auto versioning to continue (and work in an agile environment) but leave room for a flexible release policy.

How to build a time machine

If you had a time machine, I’m sure you would use it. I have good news for you: I can show you how to build one! At least a software based one…

I think the greatest thing about software development is that we create things in our heads. We live in an imaginary world, constantly building castles and cathedrals out of nothing but thoughts. No wonder we can build a machine to travel through time in this world!

It is actually very simple but highly useful. Here is a simple implementation in C#/.NET:

using System;

namespace TodescoTechnologies.Util
{
    /// <summary>
    /// An artificial clock. You can change the time
    /// for test cases without changing the system time.
    /// </summary>
    public static class DateTimeProvider
    {
        private static DateTime? mockDateTime = null;

        /// <summary>
        /// Set by test cases. This time is returned
        /// instead of "Now" if set.
        /// </summary>
        public static DateTime? MockDateTime
        {
            set { mockDateTime = value; }
        }

        /// <summary>
        /// If no MockDateTime is defined this will return
        /// the system time; otherwise it returns MockDateTime.
        /// </summary>
        public static DateTime Now
        {
            get
            {
                if (mockDateTime == null)
                {
                    return DateTime.Now;
                }
                return mockDateTime.Value;
            }
        }
    }
}
All you have to do is use the DateTimeProvider.Now property instead of DateTime.Now. This allows you to construct unit tests that happen in the future, or mimic the change of a year or a daylight saving time switch, all without having to change the system time on your development machine or server! You can even fast-forward through time and see the state of your system if you have a transaction every ten minutes for a year…

Of course there is room for improvement in my time machine:

  • You can make sure nobody sets a MockDateTime in your released software: put the MockDateTime property in a conditional directive (you know: #if DEBUG … #endif) and you have a guarantee that in your released software the time is always accurate. Of course you can use a TEST solution configuration if you have defined such a thing.
  • Instead of having a static MockDateTime this could be an offset.
  • You could implement the accelerated Time with the help of a Timer.
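A minimal sketch of the offset variant mentioned above (the names are illustrative): instead of freezing time at a fixed value, shift the real clock by a configurable amount.

```csharp
using System;

public static class OffsetDateTimeProvider
{
    private static TimeSpan offset = TimeSpan.Zero;

    // Shift the clock, e.g. one year into the future.
    public static void SetOffset(TimeSpan value) => offset = value;

    // Time keeps moving, just at a different point in the calendar.
    public static DateTime Now => DateTime.Now + offset;
}

public static class OffsetDemo
{
    public static void Main()
    {
        OffsetDateTimeProvider.SetOffset(TimeSpan.FromDays(365));
        // The provider now reports a time roughly one year ahead.
        Console.WriteLine(OffsetDateTimeProvider.Now > DateTime.Now);
    }
}
```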

However, I have found a statically set time for unit tests and demos sufficient. It has been very useful in many of the projects I have worked on over the years, and all it takes is this short piece of code!

Use GUIDs as primary keys in your database design

GUIDs are unique identifiers. When you create one, it is guaranteed there will never be another one that is the same. Not once. Nowhere. (At least that's the theory; in reality GUIDs are simply really – and I mean really, really – large random numbers that are extremely unlikely to be repeated.) GUIDs are sometimes called UUIDs (GUID is actually the Microsoft name; Java uses UUID, for example), and SQL Server calls them "uniqueidentifier". All of these mean the same thing: a 16-byte (128-bit) number. For a more detailed definition of GUIDs read this:

They have a big advantage: they don't need central coordination to be created. Wherever I create one, it is guaranteed to be unique. This is great if you work with databases: it means you can create unique records without being connected to the database, and it means you can merge databases without a problem!

Therefore I use GUIDs (uniqueidentifiers) as primary keys when I design a database.

Funnily enough, I don't see many people doing it, and I almost always have to convince colleagues to use them. The main arguments against are:

  1. They are big and therefore slow
  2. You can’t read them

Before I go into more detail I have to mention: the following examples assume a Microsoft environment (SQL Server, .NET Framework).

GUIDs are bigger than your integer primary key (normally 4 times bigger: 16 bytes instead of 4 bytes). The fact that they are bigger is usually not an issue in itself; most databases these days support huge sizes, and a bigger ID won't make the difference. A lot of people think, though, that your database will be slower because you have bigger primary keys. That is usually the much bigger concern and therefore needs clarification:

We have to separate the question of speed into two discussions: query statements and insert statements. In query statements there is almost no difference between using GUIDs and ints. If you think about it, it is logical: you have a sorted index and you do a tree search over it. All you do is compare a few numbers, and comparing a 4-byte or a 16-byte number makes (almost) no difference, so it will not make your query much slower.

Inserting is a bit more tricky: inserting a record with a GUID primary key takes considerably longer than its integer counterpart! This has a simple reason: with an auto-incrementing integer primary key you get a nice little side effect: your record is inserted at the end of the index (as your primary key was incremented). A GUID (due to its random nature) will be inserted somewhere in the middle of the index, and finding this place and moving the data around (resulting in page splits) takes time. When I say it takes time, I'm talking about milliseconds. If you insert single records, that makes no difference at all; if you plan to insert huge amounts of data on a regular basis (thousands of records in one go), it might become a performance issue. But there is a solution! Using NEWSEQUENTIALID, SQL Server generates GUIDs with an incrementing value. With this approach you get very good performance, comparable to using integer primary keys.
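A minimal T-SQL sketch (the table and column names are illustrative). Note that NEWSEQUENTIALID() can only be used as a column default, not called directly:

```sql
CREATE TABLE Orders
(
    OrderId uniqueidentifier NOT NULL
        DEFAULT NEWSEQUENTIALID()
        PRIMARY KEY,
    CreatedAt datetime2 NOT NULL DEFAULT SYSUTCDATETIME()
);

-- The key is generated server-side, in increasing order,
-- so new rows land at the end of the clustered index.
INSERT INTO Orders (CreatedAt) DEFAULT VALUES;
```

The trade-off: sequential GUIDs are generated by the server, so you give up the ability to create the key offline; use them only for tables with heavy insert volume.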

Whenever I hear someone claiming this is faster than that, I say: we don't know until we run a test! Luckily, someone has done exactly that for me:

The second argument is that GUIDs are not readable and not easy to remember. I must say this is a feature, not a bug! There is something wrong if you want users to remember primary keys; we are in the 21st century… If you need an ID (let's say an order ID, so customers can talk to a sales rep), then generate one separately, but do not use it as your primary key.

Extreme Advice 2: Provide an Interface for every base class

I heard this one while working for a consulting company. The idea is that you write your base class, create an interface out of it (this is quite easy with today's refactoring tools) and then use the interface wherever possible instead of the base class.

This might look like overhead to some of your colleagues, but it will pay off in the end! I must say I don't follow it in day-to-day coding (there I do it when necessary), but it's definitely worth doing in frameworks. It will come in very handy for people who need to provide a different implementation, or if you need to implement the interface on a class that already inherits from somewhere else. I ran into this problem many times, and it is really hard to solve late in a framework's life.
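A minimal C# sketch of the pattern (all names are illustrative): the framework accepts the interface, so a class that already inherits from elsewhere can still plug in.

```csharp
using System;

// The interface extracted from the base class.
public interface IExporter
{
    string Export(string data);
}

// The convenient default: inherit and override as needed.
public class ExporterBase : IExporter
{
    public virtual string Export(string data) => "base:" + data;
}

// A consumer that is already locked into another hierarchy
// can still participate, because it only needs the interface.
public class LegacyComponent { }

public class LegacyExporter : LegacyComponent, IExporter
{
    public string Export(string data) => "legacy:" + data;
}

public static class ExtractDemo
{
    // The framework API takes IExporter, not ExporterBase.
    public static string Run(IExporter exporter) => exporter.Export("x");

    public static void Main()
    {
        Console.WriteLine(Run(new ExporterBase()));   // base:x
        Console.WriteLine(Run(new LegacyExporter())); // legacy:x
    }
}
```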

Extreme Advice 1: Make all Methods virtual!

"Make all methods virtual!" This was actually a sign on the door of the Head of Engineering in a company I used to work for. I thought at the time this was pretty extreme advice. It was actually targeted at the C++ programmers in our group, but it can be applied to C# as well. The background was that this guy used to be a Java programmer, and in Java you have the opposite situation: every method is virtual by default! You have to say explicitly that a method shall be final (non-virtual).

By now I have come to the conclusion that this is very good advice, especially if you are working on a framework. I recently had various situations where we had to change our framework simply because we needed to override a method. That is a big thing, because you have to redistribute the framework, run tests, etc.

The only downside of making everything virtual is slightly diminished performance, as every call has to go through a dispatch mechanism that leaves room for overriding. There is an excellent discussion about this on Artima with Anders Hejlsberg. I would definitely argue that it is worth following this "extreme advice".
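A minimal C# illustration (the names are made up): because Format is virtual, a framework user can change the behaviour without touching the framework itself.

```csharp
using System;

public class Exporter
{
    // virtual: users of the framework may override this later.
    public virtual string Format(string value) => value.ToUpper();
}

public class QuotedExporter : Exporter
{
    // Reuses the base behaviour and wraps the result in quotes.
    public override string Format(string value)
        => "\"" + base.Format(value) + "\"";
}

public static class VirtualDemo
{
    public static void Main()
    {
        // Even through a base-class reference, the override is called.
        Exporter e = new QuotedExporter();
        Console.WriteLine(e.Format("id"));
    }
}
```

Had Format not been virtual, the only way to change the behaviour would have been to modify and redistribute the Exporter class itself.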