Lisa Crispin’s talk on defect management techniques for agile teams stirred some emotions at StarEast in early May. The idea that a team might not necessarily need a tool to track defects was, it seems, pure heresy. Luckily there were no calls to burn the witch, but many people at the conference were projecting a mix of confusion and sadness akin to a child just told that you’re taking away his favourite puppy.
Though Lisa presented cases where such tools might be useful, I'll take a more radical approach: defect tracking tools are a placebo. They give people the warm and cosy feeling that they have done something, like throwing a digital coin into a wishing fountain and hoping for better software.
Several people insisted that defect tracking has a point, but nobody could give me a single use case that couldn't be handled another way, more easily and more productively.

One popular argument was that a defect tracking tool acts as a kind of phoenix radar, letting teams see when a defect reappears; automated tests do that much better. Another argument was that tracking bugs ensures they get resolved. No it doesn't. Tracking bugs ensures that they pile up in a database, waiting to be forgotten. A test proves that a bug is resolved far better than a task with an obscure identifier buried in an even more obscure system of menus. Some people claimed that defect tracking tools let them assign tasks and plan; a low-fi board with sticky notes does that much better. Larger or distributed teams might benefit from a digital planning system, but there are many planning tools more productive and less bureaucratic than a typical defect logger. A very popular argument was that bug trackers allow people to produce useful reports, as if pushing usefulness one more level of indirection were a convincing argument. Bug trends might be useful for tracking the effects of process changes, but you don't need bureaucratic software for that: you can produce trend reports by quickly looking over cards as they get done.
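To make the phoenix-radar point concrete, here is a minimal sketch of a regression test pinning a previously fixed bug. The function and the bug scenario are entirely hypothetical, invented for illustration: a rounding bug in an order-total calculation. If the bug ever reappears, the test fails immediately, which is a far more reliable radar than an entry in a defect database.

```python
# Hypothetical example: a percentage-discount rounding bug in an
# order-total function was reported, fixed, and pinned with this test.

def order_total(prices, discount=0.0):
    """Sum item prices, apply a fractional discount, round to cents."""
    return round(sum(prices) * (1 - discount), 2)

def test_discount_rounding_regression():
    # Documents the behaviour the (invented) bug report asked for:
    # three items at 0.10 with a 10% discount must total 0.27.
    assert order_total([0.10, 0.10, 0.10], discount=0.10) == 0.27

test_discount_rounding_regression()
```

The test runs on every build, so "has this bug come back?" is answered continuously and automatically, with no tracker query needed.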
Comprehensive statistics of past bugs are no more useful for software quality than a chophouse's financial accounts are for a steak sandwich. But there is safety in numbers, and they are easy to produce. Douglas Hubbard explains this as the Measurement Inversion:
The economic value of measuring a variable is usually inversely proportional to how much measurement attention it usually gets.
A frequent excuse for bug reports is that management needs them to know the current quality status. Bug measures signal quality the way humidity signals nice weather: there might be zero chance of rain, but I still won't enjoy it if it's -10 outside. Alan Weiss explained this nicely in Million Dollar Consulting:
Quality, I patiently explain, is not the absence of something in management’s eyes, that is, defects, but the presence of something in the consumer’s eyes, that is, value.
Instead of reporting things that are easy to measure but have low value, why not spend a bit more time actually defining what quality is and report that? One approach to producing useful reports, again taking a note from Hubbard, is to work with management to help them formulate what kind of decisions they want to make based on those reports and how much those decisions are worth to them. That is the information value of the reports, whatever they end up being. Then look at the sources of uncertainty for those decisions and investigate what you can measure to reduce that uncertainty.
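Hubbard's idea can be sketched in a few lines. All the figures below are invented for illustration: the point is that the value of any measurement is capped by the expected cost of deciding wrongly without it (the expected opportunity loss), so you measure the variables with the highest opportunity loss first, not the ones that are easiest to count.

```python
# Toy sketch of Hubbard-style information value (all numbers are
# hypothetical). Management must decide whether to invest in extra
# hardening before a release.

def expected_opportunity_loss(p_wrong, loss_if_wrong):
    """Expected cost of sticking with the default decision when the
    uncertain variable turns out the other way. This is the ceiling
    on what any measurement of that variable can be worth."""
    return p_wrong * loss_if_wrong

# Current belief: 20% chance the release fails under load, and a
# failure would cost roughly 500,000 in lost revenue.
eol = expected_opportunity_loss(p_wrong=0.20, loss_if_wrong=500_000)
print(eol)  # 100000.0 -- the most a load measurement could be worth
```

A bug-count report that informs no decision has an information value of zero, however precise it is; a rough load measurement that could avert a 500,000 loss is worth paying for.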
Don't cling to defect tracking tools as if they were a safety blanket. Define what quality means in your context (for example, the number of user registrations, the capacity to process transactions, or the accuracy of report figures), then measure and track those things and the associated risks. And then visualise that! Have a look at what the team at Finn.no did: they put a face on quality by pulling the profiles, photos and comments of people on Twitter who write about their service. The presence of bugs is irrelevant if the customers are happy. The absence of bugs is irrelevant if the customers complain.
I'm Gojko Adzic, author of Impact Mapping and Specification by Example. My latest book is Fifty Quick Ideas to Improve Your Tests.