How do we sell the test-driven approach on a wider scale to business analysts and customers? How do we get them involved in that process? A lot of the problems and confusion obstructing that effort seem to stem from the misleading term “acceptance tests”.

Dan North suggests using the word “behaviour” instead of “test” to point out how acceptance tests express system behaviours. I found this approach very useful for describing the role of TDD to “outsiders”, who are not interested in low-level functional testing and tune out as soon as people start talking about anything related to testing, including Test-Driven Development. Dan writes:

It suddenly occurred to me that people's misunderstandings about TDD almost always came back to the word “test”. That's not to say that testing isn't intrinsic to TDD - the resulting set of methods is an effective way of ensuring your code works. However, if the methods do not comprehensively describe the behaviour of your system, then they are lulling you into a false sense of security. I started using the word “behaviour” in place of “test” in my dealings with TDD and found that not only did it seem to fit but also that a whole category of coaching questions magically dissolved.
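To see how much difference the vocabulary makes, here is a minimal JUnit sketch of the renaming Dan describes; the Account class and its rules are invented purely for illustration. With “should” names, the test list reads as a description of behaviour rather than a set of checks.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // A minimal Account class, invented purely for this illustration.
    class Account {
        private int balance;
        Account(int openingBalance) { balance = openingBalance; }
        void withdraw(int amount) {
            if (amount > balance) throw new IllegalStateException("insufficient funds");
            balance -= amount;
        }
        int getBalance() { return balance; }
    }

    public class AccountBehaviour {
        // Named "testWithdraw" this would read as a check on a method;
        // named with "should" it reads as a sentence about behaviour.
        @Test
        public void shouldReduceBalanceWhenFundsAreWithdrawn() {
            Account account = new Account(100);
            account.withdraw(30);
            assertEquals(70, account.getBalance());
        }

        @Test(expected = IllegalStateException.class)
        public void shouldRefuseWithdrawalThatExceedsTheBalance() {
            new Account(100).withdraw(150);
        }
    }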

Naresh Jain suggested that I should use “executable specification” instead of “acceptance test” in a book that I am currently working on. It amazed me how much better that name fits what we call acceptance tests and how that change in perspective automatically answers some frequent questions.

Do we need acceptance tests?

“Acceptance test” is a very misleading name for this kind of automated verification of the completion criteria for a user story or a piece of functionality. But if you start thinking about acceptance tests as executable specifications and ask the question again, the focus changes completely. The question becomes whether that specification should be executable, not whether it should be written at all. The benefits of automation quickly become obvious, even to the users, as soon as they want to verify the system more than once, so it should not be too hard to guess the answer.
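As a sketch of what “executable” means here, the following JUnit test states a business rule in the customer's vocabulary; the PricingService class and the discount rule are hypothetical. Read top to bottom it is a specification; run under a test harness it verifies the system, every time instead of just once.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // Hypothetical domain code, sketched so the specification below can run.
    class PricingService {
        // Rule under specification: orders of 100 items or more get a 10% discount.
        int priceFor(int quantity, int unitPrice) {
            int total = quantity * unitPrice;
            return quantity >= 100 ? total - total / 10 : total;
        }
    }

    public class BulkDiscountSpecification {
        @Test
        public void ordersOfAHundredItemsOrMoreGetATenPercentDiscount() {
            assertEquals(900, new PricingService().priceFor(100, 10));
        }

        @Test
        public void smallerOrdersPayTheFullPrice() {
            assertEquals(990, new PricingService().priceFor(99, 10));
        }
    }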

Why should we write tests before the code?

Automated acceptance tests are not just something you do in the User Acceptance Testing (UAT) phase that leads to the sign-off before moving to production. But the two do sound very similar, and that is a major source of confusion for teams that are just starting to use agile methods. Everyone is familiar with UAT, and it comes at the end. How, then, does it make sense to write acceptance tests at the start? If the question is asked as “why should we write an executable specification at the start”, the answer becomes obvious: it does not make sense to write specifications after development.

Should we use acceptance tests or unit tests?

There seems to be some confusion about where to draw the line. A common misunderstanding is that one group can replace or exclude the other, but in my experience it is best to use both. The two types of tests serve different purposes: unit tests should focus on the code, acceptance tests on customer benefits and requirements. Acceptance tests alone can drive development at a good pace, but they rarely check edge cases and the infrastructural parts of the code, so unit tests have to be written to cover at least those areas. A failing acceptance test signals that there is a problem, but it does not locate the source as precisely as a unit test does, so unit tests are also better at pinpointing problems.
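A small sketch of that division of labour, with a hypothetical QuantityParser invented for the purpose: the first test is phrased in terms of what the customer asked for, while the remaining ones cover edge cases no customer specification would bother to mention, and point straight at the code that breaks.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // A hypothetical parser, invented for illustration.
    class QuantityParser {
        int parse(String input) {
            if (input == null || input.trim().isEmpty()) return 0;
            return Integer.parseInt(input.trim());
        }
    }

    public class QuantityTests {
        // Acceptance-level check: phrased in the customer's terms.
        @Test
        public void customerCanOrderByTypingAQuantity() {
            assertEquals(12, new QuantityParser().parse("12"));
        }

        // Unit-level checks: edge cases a customer specification would rarely
        // mention, but which pinpoint exactly where the code fails.
        @Test
        public void blankInputIsTreatedAsZero() {
            assertEquals(0, new QuantityParser().parse("   "));
        }

        @Test
        public void surroundingWhitespaceIsIgnored() {
            assertEquals(7, new QuantityParser().parse(" 7 "));
        }
    }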

If the question is asked as “Should we use executable specifications or unit tests?”, it becomes pointless straight away. One group no longer sounds similar to the other, so there is no confusion about which is better. You need both a specification and tests for your code, and there is no reason to choose between them.

Who should write acceptance tests?

If developers are left to write the tests on their own, the tests turn out too technical and task-oriented. If the tests are written by customers on their own, developers miss a chance to learn about the problem domain. So, ideally, a developer and a customer representative, or a business analyst, should write those tests together. This was recently suggested as a best practice.

If you think of acceptance tests as an executable specification, the answer becomes clear. Customers must be involved because it is a specification (of what they want), and developers must be involved because it is executable. So both groups must work together.