I facilitated an open space session on acceptance testing and collaboration between business people, developers and testers at CITCON Europe last week. We started with a list of five common reasons why I’ve seen teams fail with acceptance testing, but added five more during the discussion, so here’s an expanded top 10 list:

  1. No collaboration — as a specification of the system that acts as the shared definition of done, acceptance tests need to be collaboratively produced and agreed upon. Business people, developers and testers should all provide input, each from their own area of expertise. The discussion that happens around the specification is actually more valuable than the tests themselves, as people build a shared understanding of the domain and the problem at hand. If this discussion doesn't happen, teams still feel the effects of the telephone game. Symptoms of this problem are developers writing acceptance tests themselves, testers being expected to handle everything about acceptance testing, and the business dictating tests without any feedback from developers or testers. The solutions for this are collaborative specifications and specification workshops.
  2. Focusing on how, not on what — business specifications should be clear and understandable, and should allow the team to implement the best possible solution without prescribing the design or the implementation. Too often tests describe how something should be implemented rather than what the system should do. Tests like these are too low level, hard to understand and always turn out to be a pain to maintain long-term. Instead of facilitating system change, they inhibit it, as people will be reluctant to introduce changes that break a lot of tests. Symptoms of this problem are action-oriented acceptance tests, lots of workflow or duplication in tests, and tests that are very technical. The solutions for this are communicating intent and distilling the specification (the first sketch after this list shows the contrast in code).
  3. Tests unusable as live documentation — one of the greatest benefits of agile acceptance testing is that we gradually build a human-readable specification of the system. This specification is ideally automated to a great degree so that we can be confident that it is in sync with the code, which solves the problem of the running code being the only thing you can really trust about what the system does. Correct, easily understandable and accessible documentation is crucial for future change. If tests don't serve as live documentation, those benefits are lost. Symptoms of this problem are very technical tests, long tests, hard-to-understand tests, and tests that are so poorly organised that you can't quickly find the ones relevant to a particular piece of functionality. The solutions for this are ensuring that tests can be used as live documentation, reorganising them, and updating them when the ubiquitous language changes.
  4. Expecting acceptance tests to be a full regression suite — acceptance tests are primarily a specification of what the system should do and should deal with exemplars that represent whole sets of cases. Specifying too much and covering every possible edge case makes them hard to understand and can introduce an effect similar to paralysis by analysis. Acceptance tests therefore cannot provide a full regression test suite, and additional tests are required (including exploratory testing). Symptoms of this problem are tests that are very long and verify a large number of similar cases (the second sketch after this list shows representative examples standing in for whole classes of cases). Note that this doesn't mean that regression tests can't be written and automated using the same tools as acceptance tests.
  5. Focusing on tools — some teams focus too much on particular tools and their features, disregarding things that do not fit naturally into their chosen toolset. This leaves the specification vague in the parts a particular tool doesn't cover. Instead, teams should focus on the business specifications and then choose the right tool for the job. In particular, tools should not play an important part in the specification workshop. Symptoms of this problem are ignoring the UI (because UI tests are hard to automate) and spending the specification workshop converting specifications into a format suitable for a particular tool (which breaks the flow of the discussion and wastes valuable time).
  6. Not considering acceptance testing a value-adding activity — agile acceptance testing is not about QA, but about specifying and agreeing on what needs to be done. Teams that consider it part of QA often delegate the responsibility for it to junior programmers and testers, which effectively means that junior team members will be writing the specification (which is wrong on so many levels that I don't know where to start). The solution for this is specification workshops.
  7. "Test code" not maintained with love — tests and the code that automates them are often considered less important than production code, so they are handed to less experienced developers and testers to implement and maintain. This leads to huge maintenance problems later and can cause teams to abandon whole suites of tests. Acceptance tests are the specification of the system, so they are crucial to the success of a project. They also need to be maintained and kept in sync with the live project language and design in order to serve as live documentation (the third sketch after this list illustrates this kind of test-code refactoring).
  8. Objectives of team members not aligned — if the job of business analysts is only to deliver the specifications, the job of developers only to ship the system, and the job of testers only to ensure quality, then people won't pull in the same direction or invest time in producing good acceptance tests. This is a management issue, but failures with acceptance testing might make it more visible.
  9. No management buy-in — successful acceptance testing requires the active participation of the business, and without management buy-in the benefits simply won't be there (this is also related to #1).
  10. Underestimating the skill required to do this well — introducing acceptance testing is a tall order and often means changing the way teams are organised and how they approach specifications. This requires a lot of time and investment in building skills, mastering tools, facilitating workshops and dealing with resistance to change. Underestimating the effort required to do this properly might cause teams to conclude too early that they have failed and become disappointed.
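
To make the "how, not what" distinction from point 2 concrete, here is a minimal sketch in Python. The free-delivery rule (orders of 100 or more ship free), the `Checkout` class and all the names are made up for illustration; they are not from any particular project or tool.

```python
class Checkout:
    """Hypothetical checkout used only for this sketch; not from any real system."""

    def __init__(self):
        self.items = []

    def add_item(self, price):
        self.items.append(price)

    def total(self):
        return sum(self.items)

    def delivery_charge(self):
        # Assumed business rule for illustration: orders of 100 or more ship free.
        return 0.0 if self.total() >= 100 else 5.0


def test_free_delivery_workflow_style():
    # "How"-focused: replays every interaction step, so the test is coupled to
    # the current workflow and breaks whenever that workflow changes.
    checkout = Checkout()
    checkout.add_item(40)
    checkout.add_item(40)
    checkout.add_item(40)
    assert checkout.total() == 120
    assert checkout.delivery_charge() == 0.0


def test_orders_of_100_or_more_ship_free():
    # "What"-focused: states the business rule and its boundary directly,
    # without prescribing how an order gets assembled.
    at_threshold, below_threshold = Checkout(), Checkout()
    at_threshold.add_item(100)
    below_threshold.add_item(99.99)
    assert at_threshold.delivery_charge() == 0.0
    assert below_threshold.delivery_charge() == 5.0
```

Run with pytest; the second test keeps passing even if the checkout flow is redesigned, which is exactly the property the "what" style buys you.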
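
For point 4, one way to keep acceptance tests as exemplars rather than an exhaustive enumeration is to pick one representative example per class of cases plus the boundary. A small sketch, assuming the same made-up free-delivery rule and using pytest's parametrisation:

```python
import pytest

FREE_DELIVERY_THRESHOLD = 100


def delivery_charge(order_total):
    """Same assumed rule as in the previous sketch: orders of 100 or more ship free."""
    return 0.0 if order_total >= FREE_DELIVERY_THRESHOLD else 5.0


# One example per class of cases plus the boundary, instead of hundreds of
# near-identical totals that add noise without adding meaning.
@pytest.mark.parametrize("total, expected_charge", [
    (20, 5.0),     # clearly below the threshold
    (99.99, 5.0),  # just below the boundary
    (100, 0.0),    # exactly on the boundary
    (250, 0.0),    # clearly above the threshold
])
def test_delivery_charge_examples(total, expected_charge):
    assert delivery_charge(total) == expected_charge
```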
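
And for point 7, treating automation code like production code is mostly the usual refactoring discipline: remove duplication and name helpers after the ubiquitous language so tests survive renames and design changes. A sketch of the idea, with an assumed 10% VIP discount rule and entirely made-up names:

```python
import pytest


# Before: every test repeats the same low-level setup, so a change to how
# orders are represented forces edits all over the suite.
def test_vip_discount_without_a_helper():
    order = {"lines": [{"price": 60}, {"price": 60}], "customer": "vip"}
    total = sum(line["price"] for line in order["lines"])
    assert total * 0.9 == pytest.approx(108)  # assumed 10% VIP discount


# After: the setup lives in one helper named in the team's ubiquitous
# language; when the domain model or vocabulary changes, only the helper changes.
def place_vip_order(*prices):
    """Hypothetical order builder; names and structure are made up for this sketch."""
    return {"lines": [{"price": p} for p in prices], "customer": "vip"}


def discounted_total(order, discount=0.1):
    return sum(line["price"] for line in order["lines"]) * (1 - discount)


def test_vip_customers_get_ten_percent_off():
    order = place_vip_order(60, 60)
    assert discounted_total(order) == pytest.approx(108)
```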

Xavier Warzee filmed the session, so watch the videos if you are interested in seeing more of this discussion (scroll down on the page; it’s the last batch of videos). You can also check out two more write-ups by Nicolas Martignole and Xavier Bourguignon (both in French) and the session notes on the CITCON wiki.

Image credits: Cirilo Wortel