Are tools necessary for acceptance testing, or are they just evil?

While doing research for my new book, I was very surprised to find out that Jim Shore gave up on acceptance testing. I use his “describe-demonstrate-develop” process description all the time in my workshops, so I guess I’d better stop doing that. Jim Shore wrote:

My experience with Fit and other agile acceptance testing tools is that they cost more than they’re worth. There’s a lot of value in getting concrete examples from real customers and business experts; not so much value in using “natural language” tools like Fit and similar.

The two failure patterns that Shore describes in his post are falling back on testers to write everything, and merging acceptance and integration tests. I’ve experienced both of these myself, and they seem to be fairly common. We discussed both during the “top 10 ways to fail with acceptance testing” open-space session at CITCON Europe last year. However, there are good ways to solve both problems.

I never really expected customers to write anything themselves, but I was relatively successful in persuading them to participate in specification workshops that produced examples, which were later converted into acceptance tests. Another idea I discovered while doing the research for my new book is to discuss the key examples with customers and then go off to write detailed test pages, which the customers then verify. A third good idea is running ad-hoc specification-writing sessions when a developer needs information, involving a tester and a business analyst. This is a lot less formal than a specification workshop and gives you similar benefits, provided you have all the knowledge in the room (or readily available) most of the time.
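
For illustration, here is a minimal sketch of how one key example from such a workshop might end up as an executable Fit test. The discount rule, the table and the DiscountFixture class are all hypothetical, made up for this post; only the fit.ColumnFixture base class comes from the actual Fit library.

```java
// A key example agreed in a specification workshop might be captured
// as a table on a test page (hypothetical business rule):
//
//   | DiscountFixture |
//   | order total | discount() |
//   | 99.00       | 0.00       |
//   | 100.00      | 5.00       |
//
// Fit's ColumnFixture binds each input column to a public field and
// each column header ending in "()" to a public method of the fixture:

import fit.ColumnFixture;

public class DiscountFixture extends ColumnFixture {

    // the "order total" column is mapped onto this field
    public double orderTotal;

    // the "discount()" column is checked against this method's result
    public double discount() {
        // hypothetical rule: 5% discount on orders of 100.00 or more
        return orderTotal >= 100.0 ? orderTotal * 0.05 : 0.0;
    }
}
```

The point is that the table stays readable to the customers who produced the examples, while the small fixture class is the only glue code developers have to maintain.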

Not preserving acceptance tests as a separate group, and mixing quick and slow tests, is something that most people, at least according to my ongoing research, get burned by at some point; but again, teams learn from that and improve.
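
One way teams commonly fix this is to tag the slow acceptance tests and keep them out of the quick suite that developers run on every change. Below is a minimal sketch using JUnit 4 categories; the marker interface and the test classes are invented for illustration.

```java
// SlowAcceptanceTest.java -- a marker interface used purely as a tag
public interface SlowAcceptanceTest {}

// CheckoutAcceptanceTest.java -- a slow, end-to-end acceptance test
import org.junit.Test;
import org.junit.experimental.categories.Category;

@Category(SlowAcceptanceTest.class)
public class CheckoutAcceptanceTest {
    @Test
    public void completesAnOrderEndToEnd() {
        // drives the whole system, so it is slow; tagged and excluded below
    }
}

// PriceCalculatorTest.java -- a fast, in-memory unit test, not tagged
import org.junit.Test;

public class PriceCalculatorTest {
    @Test
    public void appliesNoDiscountBelowThreshold() {
        // pure unit test, always part of the quick run
    }
}

// QuickSuite.java -- what developers run on every change:
// everything except tests tagged as slow acceptance tests
import org.junit.experimental.categories.Categories;
import org.junit.experimental.categories.Categories.ExcludeCategory;
import org.junit.runner.RunWith;
import org.junit.runners.Suite.SuiteClasses;

@RunWith(Categories.class)
@ExcludeCategory(SlowAcceptanceTest.class)
@SuiteClasses({ CheckoutAcceptanceTest.class, PriceCalculatorTest.class })
public class QuickSuite {}
```

The slow acceptance suite can then run separately, for example on a continuous integration server, so its speed never discourages anyone from running the quick tests.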

One of the biggest benefits of acceptance testing, for me, is that teams finally get a source of information about what goes on in the system that is as reliable as the code itself. Without acceptance tests, code is the only thing you can really trust, and any other documentation gets outdated very quickly. Such tests are also much easier to read and understand than the code, because they are on a higher level and written in natural language. Having a live specification helps me quite a lot when change requests come in later. It also helps with handing over and taking over code. Acceptance tests stay relevant throughout the project because they are automated, and automated tests are kept up to date in order for them to pass. Automation, and consequently a tool, is necessary to get this benefit. With the informal agreements and on-site reviews that Jim Shore describes, I guess something else needs to be in place to facilitate this.

I agree with Shore that it takes a while for the problems with tools such as Fit to surface, but I’m not sure whether that is tool-related or not. Most people I have spoken to so far said that it took them between six months and a year to discover that acceptance testing isn’t about the tools but about communication, and that the biggest benefit is in the examples, as Shore wrote. A notable exception to the six-months-to-a-year rule was Lisa Crispin’s team, who started out knowing what they needed (but that’s because she had done it before). Clear examples and improved communication are the biggest benefits of the process, but using a tool brings some additional nice benefits as well. A tool gives us an impartial measure of progress. Ian Cooper said during the interview for my new book that “the tool keeps developers honest”, and I can certainly relate to that. With tests that are evaluated by an impartial tool, “done” is really “what everyone agreed on”, not “almost done with just a few things to fill in tomorrow”. I’m not sure whether an on-site review is enough to guard against this completely.


Comments

  1. Hmm, interesting reasons. I would say the biggest problems I have seen are long-term readability, maintainability and speed. Of course, these are not necessarily issues with the tool either, and they are correctable. But these are the reasons I see teams give up on automated acceptance tests. And like the examples you listed, it takes a while for the issues to surface to the point where the team gives up.

  2. For me the crucial question is: How good – compared to some ideal way of acceptance testing – must AT be so that it will pull its own weight? I’ve seen projects where AT has gone terribly wrong and others where it was the essential element for success. How can I tell beforehand if it’s going to be one or the other?

  3. Gojko, the link to Jim Shore’s post points to his webpage, not to the blog posting itself. You might want to fix that.

