Bug statistics are a waste of time

Lisa Crispin’s talk on defect management techniques for agile teams stirred some emotions at StarEast in early May. The idea that a team might not necessarily need a tool to track defects was, it seems, pure heresy. Luckily there were no calls to burn the witch, but many people at the conference were projecting a mix of confusion and sadness akin to a child just told that you’re taking away his favourite puppy.

Though Lisa presented cases when such tools might be useful, I’ll take a more radical approach: defect tracking tools are a placebo, giving people a warm and cosy feeling that they have done something, like throwing a digital coin into a fountain of wishes while hoping for better software.

Several people insisted that defect tracking has a point, but no one could give me a single use case for it that couldn’t be served in another, easier and more productive way. One popular argument was that a defect tracking tool is some kind of phoenix radar, enabling teams to see if a defect reappears; automated tests do that much better. Another argument was that tracking bugs ensures that they are resolved. No it doesn’t. Tracking bugs ensures that they pile up in a database waiting to be forgotten. A test ensures that a bug is resolved far better than a task with an obscure identifier in an even more obscure system of menus. Some people claimed that defect tracking tools enable them to assign tasks and plan. A low-fi board with sticky notes does that much better. Larger or distributed teams might benefit from a digital planning system, but there are many more productive and less bureaucratic planning tools than a typical defect logger. A very popular argument was that bug trackers allow people to produce useful reports, as if pushing usefulness one more level of indirection were a convincing argument. Bug trends might be useful for tracking the effects of process changes, but you don’t really need bureaucratic software for that; you can produce trend reports by quickly looking over cards as they get done.
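To make the phoenix-radar point concrete, here is a minimal sketch of what pinning a fixed bug with an automated test might look like (Python; the bug and all the names are made up for illustration, not from any real project):

    import unittest

    def parse_amount(text):
        # Hypothetical fix for a hypothetical bug: the parser used to crash
        # on inputs with a leading currency symbol or thousands separators.
        return float(text.lstrip("£$€").replace(",", ""))

    class CurrencyParsingRegressionTest(unittest.TestCase):
        def test_leading_currency_symbol_is_handled(self):
            # If this bug ever rises from the ashes, the build goes red
            # immediately; nobody has to remember to re-check an old ticket.
            self.assertEqual(parse_amount("£1,250.00"), 1250.0)

    if __name__ == "__main__":
        unittest.main()

The test is the tracking: it documents the expected behaviour, proves the fix, and stands guard against the phoenix from then on.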

Comprehensive statistics of past bugs are no more useful for software quality than the financial accounts of a chophouse are for a steak sandwich. But there is safety in numbers, and they are easy to produce. Douglas Hubbard calls this the Measurement Inversion:

The economic value of measuring a variable is usually inversely proportional to how much measurement attention it usually gets.

A frequent excuse for bug reports is that management needs them to know the current quality status. Bug measures signal quality in the same way humidity signals nice weather. There might be zero chance of rain, but that still doesn’t mean that I’ll enjoy it if it’s -10 outside. Alan Weiss explained that nicely in Million Dollar Consulting:

Quality, I patiently explain, is not the absence of something in management’s eyes, that is, defects, but the presence of something in the consumer’s eyes, that is, value.

Instead of reporting things that are easy to measure but have low value, why not spend a bit more time actually defining what quality is and report that? One approach to producing useful reports, again taking a cue from Hubbard, is to work with management to help them formulate what kind of decisions they want to make based on those reports and how much those decisions are worth to them. That is the information value of the reports, whatever they end up being. Then look at the sources of uncertainty for those decisions and investigate what you can measure to reduce that uncertainty.
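To make that concrete with deliberately made-up numbers: suppose the reports feed a go/no-go release decision, a wrong “go” would cost £50,000 in emergency fixes and lost customers, and the current information leaves a 20% chance of getting it wrong. The value of perfect information about that decision is then at most 0.2 × £50,000 = £10,000. That figure is a ceiling on what any measurement feeding the report can be worth, and a guide to where reducing uncertainty actually pays.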

Don’t cling to defect tracking tools as if they were a safety blanket. Define what quality means in your context (for example, the number of user registrations, the capacity to process transactions, or the accuracy of report figures) and measure and track those things and the associated risks. And then visualise that! Have a look at what the team at Finn.no did: they put a face on quality by pulling the profiles, photos and comments of people who write about their service on Twitter. The presence of bugs is irrelevant if the customers are happy. The absence of bugs is irrelevant if the customers complain.


45 thoughts on “Bug statistics are a waste of time”

  1. @Rakesh, this isn’t stupid. You cannot ensure quality with bug tracking. All bug tracking does is give you a running trend and snapshot in time of the Discovery/Fix/Closure rate on your project. It does give you that “warm and fuzzy feeling” regarding the state of the product at that time. It is a safety net for management.

    But as Gojko has pointed out you have to determine what that “state” really means to you and the company. Defect Tracking tools do have their place and benefit in my opinion. They are a way of ‘helping’ to keep track of what Defects/Issues are reported against the product. This is important from the standpoint of historical data and data mining.

    It is a centralized repository of information only. How you use it is another story. And that is where I think Gojko is going. Do some situations require a big behemoth tool or process to record and track defect resolution? No, they don’t. I can agree to that. It is the data itself that is important, and how it is used.

    Now regarding Defect Statistics (or Metrics), I believe they do have some validity and usefulness. Any statistic can be corrupted and misused, but if everyone on the project buys into the usage and definition of what the statistic is to be, then you have some common understanding. A point of reference to discuss.

    Now I’m sure Gojko (and Lisa Crispin [who I know]) would argue that defect metrics are useless. That is their opinion, and I’m giving mine.

    Look at it this way… People are by nature analytical; we want to know how much (cost or savings), how many, how big or small, etc. We naturally ‘measure’ things and ourselves. It is a way of placing worth/value on a thing. Any type of ‘rate’ is a measure (Defects, Tests, Functions, User Stories, Tasks, etc.).

    Now how you handle those measures, either by electronic means or not, is up to you.

    As far as tools go, I think they do have their place. I see them as a way to centralize the information and get it into a common format/structure (and place) so that everyone is talking the same ‘language’ regarding that information. One person’s “fixed” may not be the same as someone else’s when not using a tool or process. I want to remove the ‘human’ equation (and emotion too) so that we can focus on the problems at hand.

    Now that tool could be Excel, Word, Sticky Notes or a high-end tool. How you use the tool is up to you.

    Can you agree to that Gojko?

  2. “Some people claimed that defect tracking tools enable them to assign tasks and plan. A low-fi board with sticky notes does that much better.”

    Perhaps you have a really large board and a lot of sticky notes? Or perhaps you have very few bugs to track?

    The title of your post talks about “Bug statistics”, but the text talks about bug tracking tools. In my shop those two concepts are not the same. Are they in yours?

    Does your view “defect tracking tools are placebo” apply only for Agile teams? Or do you believe it’s a waste of time to track defects using anything other than a corkboard and cards – even in non-Agile environments?

  3. Well, I didn’t reach this conclusion (although I probably agree), but I finally did figure out a few months back that ‘quality’ is basically another word for ‘value.’

    I pretty much agree with your statement, and when I ask interviewees to define ‘quality,’ the last thing I want to hear is Active Defects. So I don’t want to counter your argument, but I would like to find ways to explain that this is the case.

    In waterfall organizations, the schedule drives the product. So, if an organization wants to measure something, active defects are a pretty simple number to grab. Does it translate to value? Probably not, but really, what does? That is a bit flippant, but in a 6-month (or 2-year, pick a time) software release cycle, how do we measure the value of the work being done? Of course, agile practices relieve the burden of measuring long cycles, but old paradigms are hard to break, even in software.

    I wonder if a second factor is the idea that as a tester, how do I prove my own value? I tend to produce a lot of testing code, but don’t file a bunch of bugs. If an employee’s value is based on individual accomplishment, defects provide management with a great number to measure employees against one another.

    I would certainly like to work in an organization where we fundamentally decide that quality equals value, and then ask how to measure the value of the system under development. But I am truly uncertain how to drive that change.

  4. Oh, and rakesh — the goal is to improve quality by finding appropriate quality measures and taking actions to optimize those measures. A defect obviously impacts quality, but there are probably better tools which translate into real quality and real action.

  5. Well said! I find that too many companies start with the metrics they decide to gather, then push teams to gather them. They need to start with their goals around product quality, and decide what metrics or other ways of tracking will help them know if they are making progress towards their goals.

    So many people cling to the notion that a “defect” found during coding is a bug and should be logged. I think this comes from the idea that testing is a separate activity from coding. Testing is just as much an integral part of software development as writing code. Identifying misbehaviors or missed behaviors is a necessary part of coding.

    When I started reading this I was curious because you weren’t in my defect tracking session, then I remembered people tweeting about a defect tracking discussion with you at breakfast. I’m sorry I missed that! And interested to hear that my talk actually generated some conversation around the topic – that is awesome to hear.

  6. I agree with the blog post title: “Bug statistics are a waste of time”
    But I don’t agree that there is no productive use of a defect tracking system.
    I won’t discuss the problems of automating verification of non-functional problems, but instead translate a comment I made on a Swedish post on a similar issue (Henrik Andersson, http://testzonen.se/?p=527):

    I would not like to work without a bug system, e.g. for these reasons:

    * openness is good, many can see reports, and sometimes help
    * more people than testers and developers can report bugs and enhancements
    * you can use old bug reports to come up with new test ideas
    * you can use old bug reports for regression testing
    * you can find old reports, and don’t have to report them again
    * when you encounter strange behavior, you can get help from old reports
    * support department can search, and get help when helping customers, e.g. work-arounds
    * you can analyze the data and understand more about your development (NOTE! Not metrics, rather analysis with knowledge about details)
    * you can make references to bugs when reporting status
    * I’m afraid I would forget an aspect of the bug (or side-effects) when verifying the fix
    * new testers can look at bugs to learn the product and its complexity
    * you can write how you fixed a bug, so developers can get help with similar bugs in the future
    * you can document why you chose to not fix a bug
    * no risk that the post-it note will blow away, so a bug is forgotten

    All of these things can of course be accomplished in other ways, but a well-managed database with excellent search capabilities combines many of these in an efficient way.
    Managing without a bug system probably works best if there is one tester and one developer, with perfect memory, no support department, and preferably one release.

  7. @Jim

    I agree we’re analytical by nature, but my argument is that we should pick what we measure more carefully: not just because it’s there and easy to do, but because it contributes to the goal. If the goal is to show the quality of software but the measures only show the lack of one aspect of it, then they are misleading and not really useful to track.

  8. @Joe

    I think defect logging and tracking is a placebo in general. I’ve seen too many projects where defect tracking tools become a repository of hundreds of issues, many of them labeled critical although they’ve been unresolved for months. Bugs should be killed, not tracked.

  9. @Rikard

    Tests serve most of those purposes better than a bug tracking tool. You’re right about usability and similar tests – they cannot be automated – but you can still have tests around them that can be manually executed and that inform exploratory testing sessions.

    Bug tracking tools can be used to record tests against bugs, and many teams use them to specify informal “how to reproduce” steps, but there are many better ways out there to capture and publish tests, manual or automated.

  10. I’ve worked with clients who have very detailed defect statistics and trends spanning several releases over several years. That gives them a measurement of defect density, e.g. 3 defects per KLOC.

    Turn around and ask the various development teams how many lines of code they expect to “churn” in an upcoming release. The teams hallucinate a number, which is used in conjunction with the defect density to determine how the test people will be allocated and how long the test cycle should be.

    It’s bullshit.

    If I spend some serious time refactoring and restructuring some of that code and thereby significantly reduce its size, how is that measured? If I do that while at the same time covering all of the code with automated microtests and possibly integration tests, isn’t the density calculation rendered obsolete?

    I worked on one mission-critical system for over 4 years, and in that time our defect tracking system was two file folders in the team area. When the QA people found a bug, they would print a screenshot and write any pertinent notes on it about the problem. They would bring that printed sheet to us and we would talk about it for a couple of minutes. Occasionally it would be serious enough to warrant dropping current work and fixing the problem. Most of the time, however, the report would go into the red file folder.

    When one of the 10 developers had a free moment, they would grab a report from the folder (it may have been sorted by priority – I can’t remember), and work on it. When they had fixed the problem, proven with an automated test, they would put the report into the other folder, a blue one I believe. Every day, the folders were checked – the developers would look in the red folder, the QA people in the blue one. This system worked wonderfully, and there was never any need to track the defects electronically. That was also partly owing to a commitment by the team to never let the red folder become too full, i.e. if it filled up we did a lean Stop & Fix.

    Actually, I kinda lied. We did track them electronically… as automated tests. :)

  11. So is the issue the use of the statistics/metrics or the database itself?

    I see the point about knowing what you want to measure in order to determine whether you are achieving your goals. In other words, be more judicious with the metrics you track. Otherwise you are wasting time tracking irrelevant info.

    However, I would say that bug tracking databases can have value.

    I have worked in larger companies with bug tracking databases and smaller ones without them. I see the pros and cons as follows:
    Pros:
    1. The defect tracking database serves to CYA. If you work for a company that is audited, this serves as a record of issues reported. It also documents sign-off/formal verification of the fix by the QA group.
    2. Defects are centrally located and less inclined to be lost, although one might argue that working in quality assurance, we should be organized enough not to lose bug reports.
    3. A bug database does make reviewing open defects much easier, such as when management wants to triage them. If each developer has his/her own bug list, it is hard for management to view what is open for prioritization. You can ask every developer to send a copy of their open reports, but it is much easier to have them in a database.

    Cons:
    1. I think the indirect communication between QA and dev which occurs through defect databases can sometimes be less efficient. Often it is easier to email or print a report and then discuss the issue with a developer in person. In my experience, environments with a heavy reliance on defect tracking databases may have less QA/dev contact. In my opinion, more direct contact between QA and development leads to more efficient communication and more learning.
    Possible Solution: Note that this con could be minimized by simply having the tester and developer follow-up on the database entry for a more direct discussion of the issue.
    2. A number of the companies I have worked at have had databases with poor search capabilities; searching on large text fields was disabled. You thus lose the ability to see whether other testers have already reported the same defect, which would be a benefit if it were enabled.

  12. @Sue,

    How about rather than using a defect database to CYA, we write code so clean that no database is necessary? Why don’t we use known good engineering practices to reduce our field-reported defect rate over one year to something that can be counted on one hand? How about solving the problem rather than treating the symptom?

    Oh yeah… I blogged about that too: http://practicalagility.blogspot.com/2011/01/normalization-of-deviance-in-software.html

    Dave…

  13. Sorry for the off topic comment, but I noticed you referenced Hubbard’s book. Just wondering what you think of this book; it’s on my list of books to consider purchasing. Would you recommend it? Is there another book on the subject that you think is superior? Thanks.

  14. Good post and I agree with it as a “goal state”.

    However, real-life agile/lean implementations lack the maturity level that this post assumes. It is still rather rare that companies (that I’ve coached) are doing (A)TDD/BDD (or even any kind of automated testing), and therefore they still need some inconvenient way to know if there are bugs.

    Yes, bug tracking is suboptimal (there are better ways) and yes, it does not ensure quality built-in (as you still need to fix it).

    I have one benefit of “bug tracking” to suggest for more mature implementations of agile (1): we use it for “fix tracking” as a learning tool (2): we track the fixes we implemented to our automated regression tests due to bugs in production, in order to remind ourselves “what kind of things we should think about when ATDDing next time”. Do you see that as valuable, and if so, what would be “a better way” / what is your implementation for this (if not a bug tracking tool)?

    (1) our current way: http://huitale.blogspot.com/2010/03/huitale-way-our-value-stream-map.html (search for “stop the line” for bug fixing).
    (2) http://huitale.blogspot.com/2008/11/i-release-my-software-everyday.html (old but still contains the learning..)

  15. A bug database by itself doesn’t improve quality, and the metrics involved rarely do either. However, I’ve used bug database analysis to drive general improvement and thus improve quality. For example, after 2-3 iterations it’s already possible to see what the typical bugs found are, who made them, whether they matter to the client, and what bugs we missed pre-release. Thus I was able to pinpoint certain issues that affect quality and work out solutions.

    To Sue C: I worked for a company where the communication issue was solved a little better. Twice per week there were meetings where devs and testers discussed the bugs found, marked priorities together and decided who should fix what. Then, if there was a problem with a bug, people knew whom to turn to about it.

    To Dave: Bugs rarely appear because a programmer can’t code. My experience shows that bugs appear because of the interpretation of the information from which the code is written. The second most common issue I’ve noticed is so-called middle-bugs: bugs that appear due to the interaction of components. Sadly, I haven’t seen the possibility of “solving the problem instead of treating the symptoms” in either case, and I can’t say I haven’t tried.
    A bug is something that bugs someone who matters – so how can you create software if you don’t know every single person who matters and what might bug them? I would really like to see some of your “so clean” code that requires no bug database.

  16. Perhaps I’ve missed it, but I don’t see anyone mentioning the most fundamental reason for tracking bugs: to record the fact that someone (your client) reported an issue, and then to show how that issue was resolved.

    The automated testing, QA and all the rest are a separate issue.

  17. “Bugs should be killed, not tracked.”

    I guess I don’t understand why one precludes the other.

    Yes, bugs should be killed.

    Perhaps in some shops you can kill all the bugs without ever tracking any. But, I’ve never worked in such a shop.

  18. @Gojko,

    I can see your point, and a lot of the time there is “Metric Mania” on projects, which causes confusion and overkill. KISS (Keep It Simple, Stupid) is my motto; get the basic meaningful metrics in place, get people to understand what they mean and how they will be used, and keep it about the software and not the people (keep management’s hands out of it).

    But you also combine this with other measures to determine progress and stability. If you’ve tested the crap out of your software and the defect discovery rate is trending down, then you do have some level of stability in the code. Stability of code is a factor in determining readiness for release. Defect metrics are one piece of the puzzle when releasing software. They do have their place, but you need to be cognizant of what you collect and how you use it.

    That is what I believe you were saying in your other statement. Be smart about what you use and how. Correct?

    Sure, defect metrics are not perfect, but we use metrics for other things (like burn-down rate, velocity, etc.) that can be just as unreliable if not understood and used correctly. And there are tools (Rally, VersionOne, etc.) that are used for those things. Should we pitch them too because they are repositories and reporting engines?

  19. @Josh

    Just wondering what you think of this book; it’s on my list of books to consider purchasing. Would you recommend it?

    I liked it; it has some good ideas and widened the way I look at things. At points it is too academic for my taste, but it is worth reading.

  20. @Jim

    Stability of code is a technical constraint. If code is unstable, it isn’t likely to be acceptable to the customers. On the other hand, Twitter is famous for being unstable but is still a great product. So I agree that stability of code is important, but not nearly as important as figuring out what the customers/users/business sponsors really want and ensuring that it is delivered. Most teams I get engaged to work with have no understanding of quality beyond technical stuff like code stability and the number of bugs.

  21. @Joe

    Perhaps in some shops you can kill all the bugs without ever tracking any. But, I’ve never worked in such a shop.

    In many places, the fact that bugs are tracked delays killing them and distracts from what really needs to be killed. Try flying without that safety net and you’ll find that the critical stuff gets resolved immediately, and that the less important things aren’t such a big deal, so they don’t really need to be tracked.

  22. More bugs in a module indicates there are systemic problems in that module. How is that not useful? This is a very practical, often-experienced plus of bug tracking. If you are saying bugs should never get past testing, then that’s more of a religious stance.

  23. “In many places, the fact that bugs are tracked delays killing them and distracts from what really needs to be killed. Try flying without that safety net and you’ll find that the critical stuff gets resolved immediately, and that the less important things aren’t such a big deal, so they don’t really need to be tracked.”

    I guess your personal experience differs from mine.

    Flying without a net might be appropriate in some (very small?) shops. In others, it would be reckless, and perhaps even criminally negligent.

  24. “One popular argument was that a defect tracking tool is some kind of phoenix radar, enabling teams to see if a defect reappears; automated tests do that much better”

    I don’t quite get this, or possibly I don’t agree with it. In my opinion, automation is just a tiny part of the whole QA process. How would you know what to automate in regression tests if you don’t know what issues have been uncovered so far in manual testing? Possibly you wouldn’t call it bug tracking but issue tracking; whatever the name, we need a way to record the steps that caused the issue so that we can automate it.


  26. “I’ve seen too many projects where defect tracking tools become a repository of hundreds of issues, many of them labeled critical although they’ve been unresolved for months”

    To me this spells bad communication between Test and Dev, if defects are logged and not reviewed I would agree that the Defect Tracker is a waste, it shows no one is doing Bug Triage or checking them out.

    I’ve always been ambivalent about tracking, having worked in bigger shops with lots of heads-down coding and testing, and in Agile/Scrum shops where defects became stand-up items and tasks. Trackers have their place; metrics I don’t really feel a need for, and I have never been asked to provide them.


  28. I totally agree that maintaining long, complicated bug statistics for every one of 1001 bugs that you will never fix is a complete waste of time. It is the perfect example of failure demand… you have people and processes that are incentivised to maintain a large backlog of bugs, when the goal should be not to have bugs in the first place. Mixed incentives generally generate waste and noise.

    However, as many comments here already point out, there are certain practical benefits to tracking bugs in a tool, particularly as a communication channel if you have offsite or distributed development teams, or simply as a way of documenting what needs to be fixed in an accessible way.

    So I have a counter-proposal: why not set a limit on the number of bugs you are willing to deal with? I suggest 150 as a reasonable number for very large software systems (it could be much smaller for smaller teams/products). The highest-priority x items are tracked; the rest are ignored. As the high-priority issues are resolved, the relative priority of the next items increases, and the bar goes up. I talk more about this in my blog post here if you are interested: http://blog.williamgill.de/2011/05/19/tracking-software-bugs-keep-the-important-ones-drop-the-rest/

    It’s a kind of compromise that I think retains some of the practical benefits of tracking bugs without the noise and waste of tracking absolutely everything.
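    As a rough sketch of what I mean (Python; the Bug record and its priorities are hypothetical illustrations, not a real implementation):

        import heapq
        from dataclasses import dataclass, field

        @dataclass(order=True)
        class Bug:
            priority: int                # higher number = more important
            title: str = field(compare=False)

        class CappedBugList:
            def __init__(self, cap=150):
                self.cap = cap
                self._heap = []          # min-heap: least important bug on top

            def report(self, bug):
                # Track the bug only if it makes the top `cap`; otherwise drop it.
                if len(self._heap) < self.cap:
                    heapq.heappush(self._heap, bug)
                elif bug.priority > self._heap[0].priority:
                    heapq.heapreplace(self._heap, bug)  # evict the least important

            def next_to_fix(self):
                # As high-priority items get fixed and removed, the bar rises:
                # new reports must beat whatever is left to get tracked.
                return max(self._heap, default=None)

    With the cap enforced at report time, triage happens up front instead of a backlog piling up for later.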

  29. I could see this working with short release cycles (i.e. one aspect of agile programming). The optimal situation for this would be daily builds coupled with automated regression/smoke tests.

    If you are just reporting the issues you’ve found that day, and the developers are working with QA to close each of those defects each day, why worry about yesterday when you know you’ll have a new set of defects to work on towards raising quality the next day?

    On the other hand, if your team doesn’t have short release cycles, this could be “criminally negligent” as issues would fall through the cracks.

    There is certainly a threshold here before this technique could be classified as useful and, conversely, before tracking bugs would become useless.

  30. Gojko,

    A great post on an interesting subject. I certainly agree on the issue of bug statistics. A bug is a description of some behaviour which someone might find undesirable, and as such bugs bear little or no relation to each other that would merit quantitative analysis.

    As I wrote on Rob Lambert’s blog last year, I think that bugs in a bug tracker can be boiled down to 3 essential roles:
    1. A reminder to do a piece of work on a prioritised basis
    2. A packaged demonstration of a behaviour that may have negative consequences for a stakeholder
    3. A token under which progress and decisions on a subject can be documented

    Examining each in turn:
    1. A reminder to do some work is only necessary if we are not going to tackle that work immediately. If you are going to tackle the problem straight away, no reminder is needed. I try to take this approach on issues found in active development work, limiting raising an issue to cases where it cannot be addressed immediately and there is a risk that it might fall out of the iteration.
    2. Again, if the problem is going to be tackled immediately and can be demonstrated either manually or by an automated test, why the need for a packaged demonstration kit? If the work is to be deferred then a bug may be required to document the problem. However, we need to bear in mind the information that can be lost between having the bug in front of you and trying to recreate it from a set of instructions. I’ve seen bug tickets marked as ‘seems to have gone away’, only for the problem to reappear later because the full problem was not documented in the ticket.
    3. If there is not a simple fix, or the fix involved is high risk, then we may want to track the decisions made on that issue. It is sometimes useful to raise a bug when the first attempt at a resolution has failed, as that implies a deeper problem or misunderstanding somewhere. Even in this case, however, the primary means of communication remains face-to-face, with the bug acting as a decision repository, a role for which a wiki or test-based documentation could be an effective replacement.

    In all of these situations there are alternative approaches which could perform the same role. At our organisation we do still use a bug database; however, we treat each behaviour documented in it on its own merit. We try to restrict the items in it to things identified outside the scope of the current stories, which therefore need to be prioritised separately. The idea of using bug counts to obtain information on the quality of the product, or worse, as arbitrary targets for release decisions, is anathema to me.

  31. Gojko,

    I completely agree with you. I also don’t see the benefit of keeping bug reports from the beginning of the project, or of turning them into a report because management likes it.

    I’d only like to add one point: I see value in tracking bugs for one (and each) sprint only. And, again, I’m not talking about showing a report or making a graph or anything like that. But I like the team to see what kind of problems are occurring during the sprint and to figure out how to improve in the next one. Maybe improve the team’s skill in a certain technology, maybe improve communication, or something else. After that the tracking should be discarded, and a new one starts in the next sprint.

    Choosing how to document that (or even whether to track bugs) should be a team decision only. If they decide to use a whiteboard, fine. If they want to use a tool, fine.

    I see that as a way to improve the overall abilities of the team, not a way of punishing them or making management happy with a fancy report.

    And, again, that is team choice. Not a management choice.

    Thanks

  32. I like defect reports, they are like a dog training technique for developers – rubbing their nose in their own excrement if you will…

    It makes for a much better working environment as everyone knows where they stand. That, and the defect klaxon that sounds when we find a defect… I’ve found a plugin that automates the sounding of the klaxon on the raising of a severity 7 or above defect; just waiting for my boss to sign it off!

  33. Great point with the statement that “value” and bugs are not inverses! You can have a valueless component with almost no bugs, and a very valuable component with tons of bugs. In fact, in my experience, as a piece of software gets more valuable, it gets used more and more — which means that more bugs are filed against it. Conversely, if nobody is filing bugs against your software, chances are nobody’s using it. :-)

    Measuring value is always a better idea, but exactly how to do this will probably be domain-specific in a lot of cases.

  34. I think automated acceptance tests are a better way. In our organization developers are hounded by the testing police based on defect reports. So, just based on the bad experience we have here, defect reports waste your time.

  35. Catching up with my reading, I just encountered this blog post.

    This is also something I feel quite passionate about. I first talked about this back in 2004… wrote about it in 2005… and I was flamed endlessly for it… So much so that I got bored of carrying a metaphorical fire extinguisher with me everywhere I went… I tried to communicate the message more gently in a blog post in 2007 called ‘the secret backlog’. That seemed to be accepted more openly, although not without controversy.

    I then wrote about this suggestion that we should not have bug tracking systems anymore in Lisa and Janet’s Agile Testing book (page 426), in a section called “the hidden backlog”.

    I really like your extension of this notion with the idea that bug-tracking systems are a placebo… And I like the emphasis on what quality means in context.

    There are teams for which dropping their bug tracking system would be a problem. Weaning teams off the classic bug tracking system means gradually fulfilling its purpose with alternative approaches that are integrated into the overall development process (backlogs, stories, acceptance tests). By taking full advantage of these other faculties, the team will eventually realise that it retains its bug-tracking tool for one reason and one reason only… “just because…”

    http://andypalmer.com/2010/03/thats-just-the-way-we-do-it-here/

  36. Hi Gojko (and all participating),

    My colleague recommended your blog, and I can see why; it’s brilliant!

    However, this post, for example, leaves some questions unanswered. Perhaps someone has addressed them in the comments, which I didn’t read through and through out of sheer laziness. Here’s what bothered me the most…

    Firstly: how about reporting the journey from bad quality to good? I know good vs. bad is relative, but what I’m after are the challenges that were encountered during the development process and how they were tackled. When interviewing people applying for jobs/gigs, I often ask about their shortcomings instead of their successes; the answer reveals so much more. Unfortunately, all too often (scrum or otherwise) teams have presented their “baby” as the best thing that ever happened to this world, but once the demo or other delivery-gate show is over and the “baby” is put into its wild living environment, it’s a steaming pile of shit. Those who have managed to tell the story from tough times to success are victors by default, whereas those who don’t manage to do that have to be turned away with their pile of shit. Why is that?

    Secondly: how about the emotional connection to what you’ve made, your creation, your “baby”? I know there are many who think that testers should be as close to developers as possible. I know you think so, and so do many commenters here. I do too, to some extent. But when we get too close to what we are supposed to inspect, emotions come into play in a harmful way. We might want to push it forward even though it doesn’t deserve to be pushed forward. We might say that the ugly baby is beautiful because it’s ours. I’ve seen numerous brilliant testers fall into that trap and become politicians. What’s your insight on this matter?

    Kind regards,

    Sami

  37. Just came across this article.

    From my own experience, a software feature can change after some issue is found or some suggestion is made. If we don’t have an issue tracking system, how can I know the reason why someone made a change? And if we don’t know that, won’t people just make changes that break something else, with no way to check?

  38. The idea that defect stats are irrelevant is quite funny. I wonder how these people get slots to speak at these conferences anyway. IMO defect stats are good: they are one part of a comprehensive measurement strategy. Basically, like they say, “you can’t manage what you don’t measure”. Defect stats should be used in conjunction with other measurements to determine not only the software’s quality but also the productivity and effectiveness of the dev/QA team. Regarding the latter, the usage and interpretation of defect stats needs to be done carefully in order not to encourage the wrong behaviors.

  39. Very interesting topic. In my experience a bug tracker is necessary, and I don’t see any other way to handle a QA department and increase collaboration between teams without one. I love it, because you can rapidly create some statistics and show progress/time/frequency/quantity, etc.
    The only issue is when the bug count is very low and the remaining bugs start getting old.

  40. Defect tracking tools are a placebo? They are no more a placebo than an agile team at the end of a release reviewing subjectively authored sticky notes and giving their gut-feeling quality judgement.

    It seems to me that you’ve had bad experiences with defect trackers. Perhaps they were implemented poorly, mismanaged, or not managed at all? Why does your opinion of the tool matter so much when other teams find so much benefit in it?

    Perhaps in the past when you’ve used defect trackers, you were the type of developer who marked corrected issues as “fixed” with no other information and caused your testing partner to work inefficiently and inaccurately.

    Tracking defects (whether fixed in the end or not) allows testers to understand the sensitive areas of the application and where to focus their efforts. There are plenty of product and process trends that can be extrapolated with data from defect tracking.

    Defects, test cases, and time spent testing are the 3 critical components of accurate quality metrics. You might not agree, but those are what is needed to get an INTERNAL assessment of the product’s quality.

    If the business decides that a defect isn’t important or severe enough to fix, that isn’t justification not to track it. The fact is, whether you track it or not, it’s still there.

    You can choose not to track it and let the customer find it, but you’ll need superhuman recollection to remember why the hell we didn’t fix that thing we *think* we saw.

    I can hear it now,
    Bob: “Didn’t your automated test find that?”
    Ted: “I think it did, I see the failed case. Why didn’t we enter a defect?”
    Bob: “Maybe we decided we wouldn’t fix it?”
    Ted: “Dunno, but my manager is ready to rip me a new one”

    One could say, as Lisa does in some of her posts, that the failed automated test case should be the documentation of the defect. The problem with that is that the test case states the actions performed and the expected results. It doesn’t relay the actual results or communicate the details of the correction, the things that let my tester know exactly what was changed and what other places could be affected (when code is shared).

    Lastly, a defect and its corrective measures aren’t only a piece of information beneficial to the tester and developer. Defect trackers also help spread knowledge among the team through public views, email notifications, defect subscriptions, etc.

    Your argument is not unique, but that doesn’t mean it’s right. Radical thinkers like yourself have a ton to offer a stagnant and outdated industry. However, there ARE still fundamentally good things that should continue to be done. This is one of them. It just needs to be done right.

    Here’s my radical idea: when a defect is found and the team agrees not to fix it, rip out the code that caused the issue. Keep removing broken functionality until all you have left is what works.

  41. It is not really a placebo. It can actually harm. Automating failure demand is simply not a good idea. If you have the chance, just work with a zero open bugs policy.

  42. I’m quite late to the topic, but I really don’t see the point of the “no bug tracking is better” argument. Unless you work with 10 coders in the same office, maybe.
    Consider several dev teams working together on the same application, each located in a different country, the application being a mammoth industrial app with several branches kept alive because clients do not always want the latest version. Would a board with colored sticky notes be able to replace a centralized defect database?

    And where do you keep your already-fixed defects, so one could know that the bug experienced by this client is already fixed in version +1 and could be backported? With just an automated test? Are you kidding?

    Bug tracking is as essential as a version control system for your source code.

    As for bug statistics, they’re not always good and we know it, but don’t throw out the essential defect tracking tool with them.


  44. Hi Gojko,

    You pick some truths, add some spice and mix everything to cook a nice troll.

    Maybe you’ve met too many bug reports that were not *actionable*, or not defined as Hubbard would have them.

    Fortunately, that does not make tracking tools a placebo for most people.
