Feb 24, 2014
A more polished version of this article is in my book Fifty Quick Ideas To Improve Your User Stories
Detailed estimation works against the whole idea of flexible scope, but many companies I have worked with fall into a common trap when they insist on work duration estimates. The typical story is that someone wants to know the rough delivery date for a large piece of work. Scope gets broken down into small, detailed stories, which are then discussed and estimated, and the estimates are added up. Speaking at Oredev in 2011, Dan North said “We are terrified of uncertainty - we would rather be wrong than uncertain”, and this is where the problem starts. A nice precise number feels good; it feels as if someone is in control. The premise of this process is deeply flawed, because every estimate comes with a margin of error, which is rarely considered. Just adding things up ignores the compound effect of those errors, so the end result is precise, but horribly wrong. There are several popular error reduction techniques, such as estimating with confidence intervals and estimating based on statistical averages, but in many contexts this is actually not the right problem to solve. Long-term estimates give a false impression of precision and promote long-term commitment to scope, which eliminates the biggest benefit businesses can get from agile delivery - flexible planning.
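To see why adding up point estimates produces a precise but misleading total, here is a minimal simulation sketch. All the numbers are made up for illustration: twenty stories estimated at five days each, with actual effort drawn from a skewed error model, since overruns in practice tend to be larger than underruns.

```python
import random

random.seed(42)

# Hypothetical numbers: 20 stories, each estimated at 5 days.
ESTIMATES = [5] * 20
POINT_TOTAL = sum(ESTIMATES)  # the "precise" total: 100 days

def simulate_actual(estimate):
    # Assumed error model (not real data): actual effort is lognormally
    # distributed around the estimate, so overruns run larger than underruns.
    return estimate * random.lognormvariate(0, 0.4)

# Simulate many possible deliveries and compare against the point total.
totals = [sum(simulate_actual(e) for e in ESTIMATES) for _ in range(10_000)]
overruns = sum(1 for t in totals if t > POINT_TOTAL)
print(f"Point estimate: {POINT_TOTAL} days")
print(f"Average simulated total: {sum(totals) / len(totals):.1f} days")
print(f"Share of runs exceeding the point estimate: {overruns / len(totals):.0%}")
```

With any asymmetric error model like this one, the majority of simulated deliveries come in over the neat sum of the individual estimates - the precise number is systematically optimistic.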
Instead of estimating, try an experiment and start with a budget for a bigger piece of work, in terms of both operational cost and time. This budget can then become a design constraint for the delivery team, similar to scalability or performance. Essentially, instead of asking “how long will it take?”, more useful questions to ask are “when do you need this by?” and “how much can you afford to pay for it?”
The delivery team then needs to come up with a solution that fits those constraints. This, of course, requires transparency and trust between the people delivering software and the people paying for it, so it is much easier to do for in-house software than for third-party delivery.
Setting the budget, instead of estimating, eliminates the need to add up smaller estimates, because the final number is already known. This in turn eliminates the need to break down the larger milestone into lots of small stories and analyse them upfront. This prevents wasting time on unnecessary analysis and avoids commitment on scope - instead establishing a commitment to deliver business value.
Another important benefit is that this approach sets additional design constraints, which enable the delivery team to come up with solutions that fit the business case. Based on a budget, it will be clear if things have to be improvised or gold-plated.
A key aspect of getting this right is to clearly communicate that the budget does not need to be spent completely - ideally, the delivery team should come up with a solution that is faster and cheaper than the full budget allows.
The best way to decide on a budget, in both time and money, is to look at the expected business benefit and question its value to stakeholders. The financial budget can then be set as a percentage of perceived value - establishing a clear return-on-investment model. One good example of this is a bank I worked with, where software automation directly reduced the operational costs of financial transactions. The more transaction classes the software could automatically process, the less the bank had to pay people to handle exceptions, and they had reasonably good estimates of how many full-time salary equivalents each larger chunk of planned software would save.
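The arithmetic behind such a model is simple enough to sketch. Every figure below is a made-up assumption for illustration, not data from the bank in question: the point is only that once you can express value in money, the budget falls out as a share of it.

```python
# Hypothetical figures for a bank-style automation case: automating a
# transaction class reduces the number of people needed for manual
# exception handling, and the delivery budget is capped as a share
# of that expected value.
FTE_ANNUAL_COST = 60_000   # assumed yearly cost of one full-time equivalent
FTES_SAVED = 4             # full-time equivalents the automation replaces
YEARS_OF_BENEFIT = 3       # horizon over which the savings are counted
BUDGET_SHARE = 0.25        # invest at most a quarter of the expected value

expected_value = FTE_ANNUAL_COST * FTES_SAVED * YEARS_OF_BENEFIT
budget = int(expected_value * BUDGET_SHARE)
print(f"Expected value: {expected_value:,}")  # 720,000
print(f"Delivery budget: {budget:,}")         # 180,000
```

The share itself is a business decision - a quarter of expected value here is arbitrary - but the key property is that the budget is derived from value, not from adding up task estimates.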
Most businesses that I’ve worked with, unfortunately, are not in a position to come up with a good financial estimate for the value. People will give many different reasons for this, such as long return-on-investment cycles, too many factors to isolate any single one, and an unpredictable market. The sad truth is that very few companies actually try to come up with a good model for quantifying expected outcomes at all. The rising popularity of lean startup models is creating a positive impact in this area, but as an industry we’re still far from doing this maturely.
In a sense, not being able to estimate value can help with the argument against detailed delivery estimates, as the people asking for a precise number of delivery days cannot themselves provide a precise number for something far more important. But this can also be a power-play minefield, and insisting on quantifying value can alienate business stakeholders. Making a business stakeholder feel stupid in public because they can’t quantify value isn’t the best possible step to take to build trust between delivery and business.
When there is no good value model, I try the following two approaches:
People are often much more comfortable talking about extremes than precise values. Ask about extremes, for example “What is the least amount of money this has to earn to make any reasonable impact? How much would make everyone say that this was worth it?” This often helps to open a good discussion. Even orders of magnitude are a good starting point for the discussion. I’ve been in several workshops where stakeholders decided that the project wasn’t realistic at all after they quantified the order of magnitude of the extremes. This also works for time constraints. For example, interesting questions to ask are “What is the latest we can launch this so that you still get some value out of it?” and “How soon could you start using it if it was there already?” If the extremes are reasonably close, you can set the target somewhere in the middle. If they are far apart, then you can aim for the low number first, and once that value is achieved, re-plan for higher impact. I’m sure that anyone with basic knowledge of statistics is now balking at my unscientific suggestions, but remember the context here: companies without a good value model and with no prior success at even aligning expectations.
If the discussion about extremes leads to a dead end, then there is no shared understanding among the stakeholders about the potential value. This often means that things are too uncertain. In such cases, I propose that the next step should really be about reducing that uncertainty. Instead of deciding on the entire budget, plan incrementally. First decide on a budget for learning - this can lead to prototypes, low-fi interface testing with users, half-manual processes and skeleton apps, or even business people going back to the drawing board to come up with a shared model of value. The learning project scope is often much easier to slice and narrow down, because everyone knows that this will not be the final solution. Once the results of the learning project reduce uncertainty about the larger target, you can decide to incrementally invest in the next step. With less confidence, take smaller steps. With more confidence, take huge leaps.
A common concern among stakeholders who are doing this for the first time is the risk of spending the budget but not getting the value. If the value model is relatively linear - meaning that small deliverables can start providing value quickly and more functionality would bring value incrementally - then you can establish smaller milestones and monitoring. For example, after the first 10% or 25% of the budget is spent, review how much value business stakeholders actually got out of it, and adjust the plan. If the value model is not suitable for that - for example, when any positive outcome requires a huge investment and there is a lot of uncertainty about it - then the learning budget approach can be a nice start.