It’s amazing how dangerous some seemingly technical problems can be when IT solves them in isolation. One such issue is the question of whether to merge similar code from different projects into a shared base. It looks as if it sits completely in the IT domain, but it essentially lies on the critical business path, and the answer depends very much on the market in which the business operates and the marketing plans of the company.

A few of us met up two weeks ago to continue the discussion from the AltNetUK 2009 conference on domain-driven design beyond the basic patterns and building blocks, or “the second half of the blue book” as Ian Cooper called it. Stuart Campbell raised the question of merging: should two similar projects be refactored and the similar parts extracted into a shared kernel? The discussion that followed made it clear to me how dangerous it is to consider this question without looking at the big picture from the business side (or, again using the terms from the second half of the blue book, without considering what really is the core domain and what makes the system worth writing).

This is again where the question ‘Why?’ should be raised before we discuss ‘How?’ and ‘What?’, something that in my opinion should happen much more often on software projects. Merging common parts of similar projects and extracting a shared component makes a lot of sense on the face of it: it can reduce maintenance costs and make future changes to both projects easier to synchronize. But in some situations it can actually be counter-productive.

If the end client, or the company developing software for its own operations, is in a fairly stable, low-margin market where everyone delivers the same old service and the competitive advantage comes from delivering it cheaper, then consolidating and merging makes sense. Losing a few bits of functionality or being slower to deliver individual projects doesn’t really matter much, because the market and the products should be fairly stable.

On the other hand, if the company is in an emerging, innovative market with high margins, competing on functionality, then the overhead of maintaining two separate projects might be negligible compared to the potential gains of delivering faster and being more flexible with features. For a shared kernel to work, it has to be accompanied by serious change-management procedures and a lot more integration testing. This introduces significant overhead on implementing new features in the shared part, and it makes the two previously independent projects reliant on the quality and delivery timelines of that shared part. Also, in a dynamic market, opportunities change and lots of code gets deprecated quickly; it is easier to throw code away if it does not belong to a “core” library. Very often these “shared”, “util”, “api” or “core” libraries are essentially a limbo where undead code lurks because nobody has the time or the power to drop it.
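To make that coupling concrete, here is a minimal, hypothetical sketch (the package, class and version names are purely illustrative, not taken from the projects discussed above): once a piece of logic that used to live as near-identical copies in two projects moves into a shared kernel, both projects build against the same released artifact, so a change requested by one project means a new release, an upgrade, and a round of integration testing in the other.

```java
// Hypothetical shared kernel extracted from two previously independent projects.
// All names here are illustrative only.
package com.example.sharedkernel;

import java.math.BigDecimal;

/**
 * Discount rules that used to exist as near-identical copies in Project A and
 * Project B. Once extracted, both projects depend on the same released artifact
 * (say, shared-kernel-1.4.0), so a change requested by one project forces a new
 * release, an upgrade and re-testing in the other.
 */
public final class DiscountPolicy {

    private static final BigDecimal THRESHOLD = new BigDecimal("1000");
    private static final BigDecimal RATE = new BigDecimal("0.05");

    /** Flat 5% discount on orders above the threshold. */
    public BigDecimal discountFor(BigDecimal orderTotal) {
        if (orderTotal.compareTo(THRESHOLD) > 0) {
            return orderTotal.multiply(RATE);
        }
        return BigDecimal.ZERO;
    }
}
```

Whether that coordination cost is worth paying is exactly the business question, not a purely technical one.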

The point I’m trying to make is that there is no universally right or wrong answer to the question of merging. In particular, cheaper maintenance is not always the competitive advantage of the business or its main goal, even though it might seem like a good idea. Sometimes more expensive maintenance with more flexible delivery makes a lot more sense from a business perspective, especially for companies that claim to be innovators. Unfortunately, if this question is considered by IT in isolation, IT might end up shooting the business in the foot with the best of intentions.