Long-term maintenance cost is one of the biggest issues teams face today when implementing agile acceptance testing. Tests that are written and automated without any long-term planning are guaranteed to cost you more than they are worth. A properly designed testing framework, on the other hand, saves a lot of money, time and effort in the long run. The community now seems to be going through the same learning cycle we went through with unit tests: people wrote any crap code in unit tests at first, then learned that test code maintenance hurts just as much as it does for production code, and cleaned up their act. The ongoing research for my new book has helped me understand that, in the case of acceptance tests, the problem runs much deeper.
At the recent open space session on the future of acceptance testing at Open Volcano 10, we identified treating test code as second-class as one of the biggest problems with agile acceptance testing adoption and one of the biggest challenges for teams in the future. It also appeared in the top 10 reasons why teams fail with acceptance testing at CITCON Europe 09. Long-term test maintenance costs also sparked a recent discussion on tools in the blogosphere (see Jim Shore’s original article and responses from George Dinwiddie, Ron Jeffries and my reply, along with the comments on those posts). I now believe that most of these issues stem from the same problem.
An acceptance testing framework is a product on its own
Although most of them don’t realise it themselves, the most successful teams I interviewed actually treat their acceptance testing framework as a product in its own right. I’m not talking about the automation tool – though some of the teams have developed their own tools or significantly extended open-source tools – but about the test specifications, the infrastructure around them and the integration layer for automation (fixtures, step definitions, keyword implementations).
Many teams ended up designing a domain-specific language for specifications and tests (implemented using FIT, Cucumber, Robot Framework or even directly in code), structuring their acceptance tests around it to keep them consistent and easy to understand, and to minimise long-term maintenance costs. When retrofitting acceptance testing into existing products, they often had to make the system and its architecture more testable, which required senior people to plan and implement serious design changes. Implementing agile acceptance testing successfully, and getting the most benefit out of it, requires a significant investment in all of this. It requires structure, design and planning. It requires thinking about how to maximise the return on investment in the framework. It requires ongoing cultivation and maintenance.
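To make the layering concrete, here is a minimal sketch in plain Python of the structure such teams tend to converge on: a specification layer that reads like the business rule, a thin domain-specific layer underneath it, and a technical driver at the bottom. All class and method names here are hypothetical illustrations, not any particular team’s framework.

```python
class AccountDriver:
    """Technical layer: talks to the system under test (here, an in-memory stub)."""
    def __init__(self):
        self._balances = {}

    def post(self, account, amount):
        self._balances[account] = self._balances.get(account, 0) + amount

    def balance(self, account):
        return self._balances.get(account, 0)


class AccountDSL:
    """Domain-specific layer: expresses *what* happens, not *how* it is automated."""
    def __init__(self, driver):
        self._driver = driver

    def deposit(self, amount, into):
        self._driver.post(into, amount)

    def withdraw(self, amount, from_):
        self._driver.post(from_, -amount)

    def assert_balance(self, account, expected):
        actual = self._driver.balance(account)
        assert actual == expected, f"{account}: expected {expected}, got {actual}"


# Specification layer: reads like the business rule it verifies.
def test_withdrawal_reduces_balance():
    accounts = AccountDSL(AccountDriver())
    accounts.deposit(100, into="savings")
    accounts.withdraw(30, from_="savings")
    accounts.assert_balance("savings", 70)


test_withdrawal_reduces_balance()
```

The point of the middle layer is that when the system’s interface changes, only the driver needs to change; the specifications above it stay stable, which is where the long-term maintenance savings come from.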
If we think of tests as by-products of the development process, this investment is very hard to justify, because it doesn’t deliver direct value to any of the project stakeholders. The stakeholders of the framework, however, aren’t the customers or the business users: they are the members of the development teams (and maintenance teams, if separate). Many teams I interviewed have at some point ripped out the heart of their system and replaced it, or rewritten the entire system, keeping their acceptance tests and using them to guide the whole effort. This is where the investment in living documentation really pays off. Such a framework is genuinely a separate product, with a different lifecycle and a different group of stakeholders.
Understanding that the acceptance testing framework is actually a product resolves a ton of things I considered genuine problems in the past, which I now see as mere symptoms of this deeper issue. For example, once you start thinking about the framework as a product, the question of whether to keep the tests in a version control system goes away immediately. Cleaning up “test code” no longer requires a separate justification. Working on the structure or clarity of tests is no longer something to put on the technical debt list – it is part of the backlog for a completely separate product. The flaw in delegating work on acceptance tests to junior developers and testers suddenly becomes obvious. On the other side of the equation, looking at the testing framework as a separate product also prevents teams from falling into the trap of gold-plating the tests at the expense of the primary product.
I'm Gojko Adzic, author of Impact Mapping and Specification by Example.