A new problem for the agile testing community

As an industry, I think we are now well beyond the perception that quality can be controlled for success. Lots of good thinkers and thought leaders over the last 20 years have moved our thinking from control to assurance, but for the majority of teams I work with, the responsibility of QA folks isn’t really assuring quality. In order to deliver effectively with short iterations, quality assurance has to be the responsibility of the entire team. Many of the techniques and practices that assure quality live in the requirements, analysis, programming and operations space, so people with testing hats on can participate in, but not really drive, most of that. Trying to define the responsibility of testers in an agile world at one of my clients, we came to the conclusion that testers in their team should have two key roles:

  • supporting the team in deciding what needs to be tested and how to test it, and performing the specialist parts of testing
  • visualising the current state of quality so that stakeholders can make more informed decisions

This struck a chord with something I picked up from Elisabeth Hendrickson (it’s incredible how many gold nuggets of knowledge she drops everywhere she goes) at Agile Testing Days 2009. She said that people asking for control really want visibility. So quality control is out of the door and assurance is following; what people really want is quality visibility. Most of what we’ve been focusing on as a community over recent years has been in the space of supporting the team (Specification by Example, exploratory testing, TDD), while the most effective things we have in the visibility space are still horrible tables with incomprehensible amounts of data or static code analysis tools.

So I’d like to throw a new problem at the agile testing community: how do we effectively visualise quality? How do we do it in a way that is easy to maintain, but provides people with enough information to make decisions on releasing and on process improvement?

The answers to these questions will no doubt be very contextual, but we can probably repurpose, reuse or invent some good tools that work in particular contexts. I’ve started collecting tools and techniques from lots of different places. I’m not sure yet what that collection will become, but most likely either a wiki or a book, or both.

If you have a good way of visualising quality or a story to tell, please get in touch. I’d like to talk to you about it.

mail: gojko@gojko.com
skype: gojkoadzic
twitter: gojkoadzic

The wolf who cried boy

Hotel room. Time check: 10PM. Go to sleep, big workshop tomorrow. Buzzing. I’m half awake. Buzzing continues. Time check: 2AM. More buzzing. Buzzing? Definitely buzzing, outside my room door. Door open, and the buzzing is now really loud. I guess it’s some kind of burglar alarm. A burglar alarm in a hotel? Doesn’t make sense. Probably a fire alarm then. Close the door and go back to sleep – big workshop tomorrow. I’m half-awake. Time check: 8AM. Did I sleep through a fire alarm? A quick call to reception confirms it.

Faced with the horrifying possibility that someday I’ll end up in a reverse game of the boy who cried wolf, I realised that I care as much about electronic alarms as the government cares about climate change. Someone shouting “Fire!” would probably get me running down the stairs in my shorts, but an electronic siren has the same emotional effect on me as Wuthering Heights has on Klingons. I’ve survived far too many fire alarm tests to care. My brain sends such sounds straight from the ear canal to /dev/null, stopping only to curse car owners who install stupid alarms that can’t tell rain from a thief. I hope your car really does get stolen one day. As far as I’m concerned, electronic alerts are worth less than a Zimbabwean dollar.

I wonder whether there is a lesson in that for software development. I got an e-mail this morning from a production support mailing list of one of my clients:

The first issue we had found during regression but had incorrectly identified as an existing issue.

Maybe that is it.

How is it even possible for code to be this bad?

If you have a few minutes to spare, have a look at the Hudson/Jenkins source code on GitHub. Writing code like this wouldn’t just be a firing offence for any self-respecting commercial software team; it should also be enough to sue the writer for gross negligence and damaging company property. Yet this project is undoubtedly a huge success, and its authors deserve respect for creating it. Where does this leave the whole debate on software craftsmanship?

Product Management using Effect Maps: Slides and Books

Here is the presentation from my talk at Scan Dev 2011 on Product Management using Effect Maps:

These are the books I mentioned:

  • Mijo Balic, Ingrid Ottersten, Effect Managing IT, 2007, Copenhagen Business School Press, ISBN 978-8763001762
  • Mike Cohn, User Stories Applied: For Agile Software Development, 2004, Addison-Wesley Professional, ISBN 978-0321205681
  • Gerald Weinberg, Quality Software Management, 1991, Dorset House Publishing, ISBN 978-0932633224
  • Gojko Adzic, Specification by Example, 2011, Manning, ISBN 978-1617290084
  • Scott Berkun, The Art of Project Management, 2005, O’Reilly, ISBN 978-0596007867
  • Mark Denne, Jane Cleland-Huang, Software by Numbers, 2003, Prentice Hall, ISBN 978-0131407282
  • Tom Gilb, Competitive Engineering, 2005, Butterworth-Heinemann, ISBN 978-0750665070
  • Douglas W. Hubbard, How to Measure Anything: Finding the Value of “Intangibles” in Business, Wiley, ISBN 978-0470625682