We had a fantastic discussion on delivering value with software yesterday at Alt.NET UK, centred around two proposed topics: “From concept to cash” and “Building software that matters”. Starting with questions about how we get to cash quicker, how we make sure that we’re building the most valuable software possible and how we measure progress, this open space discussion touched upon some very interesting ideas for improving software processes.
Increasing visibility
A common topic in the discussion was increasing visibility, both of business value to developers and of developer progress to the business. We discussed different metrics to increase the visibility of progress, but this turned out to be quite a tricky subject. One interesting metric mentioned was cycle time: how long it takes to get a story from the backlog into production. This would require stories to have a fairly normalised size, though. The conclusion for me is that metrics make it very easy to slide towards efficiency rather than effectiveness (with the example of a hamster in a cage, which might spin the wheel very efficiently while going nowhere). This led us to the point that teams should be aware of where the value comes from and define what success looks like before a project starts. Ensuring that everyone knows the answers to these questions would significantly increase value visibility and help people prioritise better.
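As a rough sketch of what measuring cycle time could look like (my own illustration with made-up dates, not something we worked through at the session), assuming we record when a story enters the backlog and when it reaches production:

```python
from datetime import date

# Hypothetical story records: (story, entered backlog, deployed to production)
stories = [
    ("pay by card", date(2009, 5, 4), date(2009, 5, 18)),
    ("order history", date(2009, 5, 6), date(2009, 5, 27)),
    ("e-mail receipts", date(2009, 5, 11), date(2009, 5, 20)),
]

# Cycle time = days from entering the backlog to reaching production
cycle_times = [(deployed - entered).days for _, entered, deployed in stories]

# The average is only meaningful if stories are of a roughly normalised size
print("average cycle time: %.1f days" % (sum(cycle_times) / len(cycle_times)))
```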
Some interesting examples of what companies do to increase visibility were estimating technical debt in monetary terms (how much it will cost to fix), and assigning monetary value to stories (how much benefit a story will bring) alongside implementation estimates (how much it will cost to make). Although everyone agreed that getting these figures right would be a challenge, even rough estimates can help immensely in setting priorities and ensuring that we’re building the thing of greatest value.
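To make this concrete, here is a minimal sketch (my own illustration, with entirely made-up figures) of ranking stories by estimated benefit minus estimated implementation cost:

```python
# Hypothetical stories: (name, estimated benefit, estimated cost to build)
stories = [
    ("one-click checkout", 120_000, 30_000),
    ("custom report builder", 40_000, 55_000),
    ("saved shopping baskets", 60_000, 20_000),
]

# Even rough monetary estimates give a first-cut priority order
for name, benefit, cost in sorted(stories, key=lambda s: s[1] - s[2], reverse=True):
    print(f"{name}: estimated net value {benefit - cost}")
```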
Another challenge mentioned was actually measuring how much value features bring once they are in production, and whether they justify the cost of maintenance. We all agreed that it is very hard to measure the effects of individual features, but A-B testing was suggested as a way to measure this. In A-B testing, part of the system runs with a feature and part runs without it, and the results are compared to determine the effectiveness of that particular feature. This approach does involve additional management overhead and risk, though.
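A minimal sketch of how such a split could work (my own illustration; a real A-B test needs careful sampling and statistical analysis, and the outcomes here are simulated):

```python
import random
import zlib

def assign_variant(user_id: str) -> str:
    # Deterministic split so the same user always sees the same variant
    return "with_feature" if zlib.crc32(user_id.encode()) % 2 == 0 else "without_feature"

# Simulate an outcome log; in production this would come from real usage data
outcomes = []
for i in range(10_000):
    variant = assign_variant(f"user-{i}")
    conversion_rate = 0.12 if variant == "with_feature" else 0.10  # made-up rates
    outcomes.append((variant, random.random() < conversion_rate))

# Compare conversion between the group with the feature and the group without
for variant in ("with_feature", "without_feature"):
    group = [converted for v, converted in outcomes if v == variant]
    print(f"{variant}: {sum(group) / len(group):.2%} conversion")
```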
Waste
Unsurprisingly, the discussion often touched upon lean software development principles and looking at what causes waste in the process. Two good books mentioned on this subject were Implementing Lean Software Development: From Concept to Cash by Mary and Tom Poppendieck and Agile Management for Software Engineering by David Anderson. Anderson is also apparently working on a new book related to this subject. Additional suggested resources for lean software development included the Lean Conference 2009 videos, David Anderson’s blog, Karl Scotland’s blog, David Joyce’s blog and the Agile in Action blog.
A common theme in the discussion on waste was building too much software, where people did not know whether they really needed it. As a possible solution to this problem, we talked about the DDD core domain idea: companies should focus on identifying the very small core of the system that makes it worth building, something that brings direct competitive advantage, and source the rest from external providers or buy off-the-shelf solutions. The core domain is explained in more detail in Domain-Driven Design: Tackling Complexity in the Heart of Software by Eric Evans.
Another source of waste was building “nice to have” features that were suggested by the business but did not really bring any value. As a possible solution to this, we talked about the Goals-Features-Requirements breakdown (a variant of which is Goals-Stories-Acceptance tests), where each feature introduced into the system has to be connected directly to a stated project goal, and each requirement has to be connected directly to a feature. If the person suggesting a requirement cannot name the feature it belongs to (or, likewise, someone suggesting a feature cannot justify it by naming a goal), this gives us grounds to push back and reject scope creep. This idea comes from The Art of Project Management by Scott Berkun.
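A minimal sketch of what such a traceability check could look like (my own illustration of the idea, not taken from Berkun’s book):

```python
# Hypothetical breakdown: features name the goal they serve,
# requirements name the feature they belong to
goals = {"increase repeat purchases"}
features = {"saved shopping baskets": "increase repeat purchases"}
requirements = {"basket persists between sessions": "saved shopping baskets"}

def justified(requirement: str) -> bool:
    """A requirement is justified only if it traces to a feature, and that feature to a goal."""
    feature = requirements.get(requirement)
    return feature in features and features[feature] in goals

# Anything failing this check is grounds to push back on scope creep
print(justified("basket persists between sessions"))  # True
print(justified("animated basket icon"))              # False: traces to no feature
```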
The development counterpart of this is just-in-case code, something that developers decide to put in because they think it will be needed. Mary Poppendieck called this the biggest source of waste in the Lean Software Development book. As possible solutions, we discussed specification workshops and agile acceptance testing, explained in my book Bridging the Communication Gap. In answer to the problem of ‘we know that we’ll need it soon’, we discussed the fine line between making the system flexible enough to accept a future change and actually building in flexibility. The former involves isolating a piece of code that we know is going to change; the latter involves building a very flexible framework from the start. The conclusion was that these complex flexible frameworks very often don’t pay off, as requirements change and the requested flexibility never actually makes it into the product.
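To illustrate the distinction with a small sketch (my own hypothetical example, in Python rather than .NET): isolating the code we expect to change can be as simple as hiding it behind a narrow seam, without building a configurable framework around it:

```python
from typing import Protocol

class TaxCalculator(Protocol):
    """The one piece we expect to change is isolated behind this seam."""
    def tax_for(self, net_amount: float) -> float: ...

class FlatRateVat:
    def tax_for(self, net_amount: float) -> float:
        return net_amount * 0.15  # example rate; replace this class when the rules change

def invoice_total(net_amount: float, calculator: TaxCalculator) -> float:
    # The rest of the system depends only on the interface, not on the rule itself
    return net_amount + calculator.tax_for(net_amount)

print(invoice_total(100.0, FlatRateVat()))  # 115.0
```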
The fourth related cause of waste was working on a problem that people don’t know enough about and identifying that too late. Specification workshops are a solution for that as well.
UI design and redesign were also perceived as a common source of waste, perhaps because all the people in the room were developers. Someone suggested Yahoo’s UI design pattern library as a good source of information to make this process quicker and easier.
Prioritisation
The last major theme of this discussion was prioritisation: how do we actually decide what to build first and how do we deliver value quickest? Value visibility certainly helps, and assigning monetary value to features, together with their cost of production, can help with prioritisation. There was a concern, though, that under such a scheme only low-risk features would ever get implemented, but we concluded that this decision would probably be pushed back to the business, who would then be able to decide how much risk they want to take.
We also discussed the difference between iterative and incremental development and how that helps with prioritisation – rather than asking customers to prioritise between a car door and an engine, for example, we could ask them to prioritise between a car with a poor door system but a great engine and a car with great doors and a poor engine, delivering a car that they could use in any case. This is explained in a lot more detail by Jeff Patton (also see Alistair Cockburn’s response and my write-up of Patton’s keynote at XPDay 07).
Having a map of goals to stories and tests makes it easier to prioritise as well, because the business should be able to prioritise the goals and we can then derive the priorities of the lower levels from that.
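As a small sketch of that derivation (my own illustration): if the business ranks the goals, the stories mapped to each goal can simply inherit its rank as a first-cut ordering:

```python
# Business-prioritised goals (1 = highest priority)
goal_priority = {"increase repeat purchases": 1, "reduce support costs": 2}

# Each story is mapped to the goal it serves
story_goal = {
    "saved shopping baskets": "increase repeat purchases",
    "self-service password reset": "reduce support costs",
    "order history page": "increase repeat purchases",
}

# Stories inherit the priority of their goal
for story in sorted(story_goal, key=lambda s: goal_priority[story_goal[s]]):
    print(goal_priority[story_goal[story]], story)
```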