While doing research for my new book, I was very surprised to find out that Jim Shore gave up on acceptance testing. I use his “describe-demonstrate-develop” process description all the time in my workshops, so I guess I’d better stop doing that. Jim Shore wrote:

My experience with Fit and other agile acceptance testing tools is that they cost more than they’re worth. There’s a lot of value in getting concrete examples from real customers and business experts; not so much value in using “natural language” tools like Fit and similar.

The two failure patterns Shore describes in his post are falling back on testers to write everything and merging acceptance and integration tests. I’ve experienced both of these myself, and they seem to be quite common. We discussed both during the “top 10 ways to fail with acceptance testing” open-space session at CITCON Europe last year. However, there are good ways to solve both problems.

I never really expected customers to write anything themselves, but I was relatively successful in persuading them to participate in specification workshops that produced examples, which were later converted into acceptance tests. Another idea I discovered while doing the research for my new book is discussing the key examples with customers and then going off to write detailed test pages, which the customers then verify. The third good idea is holding ad-hoc specification-writing sessions, involving a tester and a business analyst, whenever a developer needs information. This is a lot less formal than a specification workshop and gives you similar benefits if you have all the knowledge in the room (or readily available) most of the time.
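To make the idea of converting workshop examples into executable acceptance tests a bit more concrete, here is a minimal sketch of what a FIT fixture behind such an example might look like. The fixture name, the fields and the discount rule are all hypothetical, invented purely for illustration; the only real API involved is FIT’s fit.ColumnFixture, where public fields map to input columns and no-argument methods (whose header cells end with “()”) map to checked output columns.

```java
import fit.ColumnFixture;

// Hypothetical fixture backing a workshop example such as
// "a 50 EUR order with a 10% member discount comes to 45 EUR".
public class OrderDiscountFixture extends ColumnFixture {
    public double orderTotal;       // input column: order total
    public double discountPercent;  // input column: member discount

    // Output column (header ends with "()" on the test page); FIT compares
    // the value returned here with the expected value in the table cell.
    public double amountDue() {
        return orderTotal * (1 - discountPercent / 100.0);
    }
}
```

The test page itself would then hold the customers’ examples as rows in a table, which is what keeps them readable to non-programmers while still being executable.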

Not keeping acceptance tests as a separate group, and mixing quick and slow tests, is something most teams get burned by at some point, at least according to my ongoing research, but again they learn from it and improve.
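One common way of keeping that separation, assuming a JUnit-based build (my assumption here, not something Shore or the people I interviewed prescribe), is simply to tag the slow acceptance tests and run them as their own suite. A minimal sketch with hypothetical class and test names:

```java
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

// Slow, end-to-end acceptance tests carry their own tag so the build can run
// them separately (for example nightly or in a dedicated CI stage), keeping
// the fast unit-test feedback loop quick.
@Tag("acceptance")
class CheckoutAcceptanceTest {

    @Test
    void memberDiscountIsAppliedToTheFinalInvoice() {
        // Exercise the system end to end here (through the UI, service or API
        // layer); the body is elided because only the grouping matters.
        assertTrue(true);
    }
}
```

The build tool then includes or excludes the tag as needed, for example with Gradle’s useJUnitPlatform { includeTags("acceptance") } for the acceptance run and excludeTags for the regular one.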

One of the biggest benefits of acceptance testing for me is that teams finally get a source of information about what goes on in the system that is as reliable as the code itself. Without acceptance tests, code is the only thing you can really trust, and any other documentation gets outdated very quickly. Such tests are also much easier to read and understand than the code, because they sit at a higher level and are written in natural language. Having living documentation helps me quite a lot when change requests come in later. It also helps with handing over and taking over code. Acceptance tests stay relevant throughout the project because they are automated, and automated tests have to be kept up to date in order to pass. Automation, and consequently a tool, is necessary to get this benefit. With the informal agreements and on-site reviews that Jim Shore describes, I guess something else needs to be in place to provide it.

I agree with Shore that it takes a while for the problems with tools such as FIT to surface, but I’m not sure whether that is tool-related or not. Most people I have spoken to so far said it took them between six months and a year to discover that acceptance testing isn’t about the tools but about communication, and that the biggest benefit is in the examples, as Shore wrote. A notable exception to the six-months-to-a-year rule was Lisa Crispin’s team, which started out knowing what it needed (but that’s because she had done it before). Clear examples and improved communication are the biggest benefits of the process, but using a tool brings some additional nice benefits as well. A tool gives us an impartial measure of progress. Ian Cooper said during the interview for my new book that “the tool keeps developers honest”, and I can certainly relate to that. With tests that are evaluated by an impartial tool, “done” is really “what everyone agreed on”, not “almost done with just a few things to fill in tomorrow”. I’m not sure whether an on-site review is enough to guard against this completely.