Let’s change the tune
As a community, we’re very guilty of using technical terms and confusing business users. If we want to get them more involved, we have to use the right names for the right things and stop confusing people. Within acceptance tests themselves this lesson is obvious: we know we need to keep the naming consistent and avoid misleading terms. Yet we don’t apply it when we talk about the process. For example, when we say continuous integration in the context of agile acceptance testing, we don’t really mean running integration tests. So why use that term, and then have to explain how acceptance tests are different from integration tests? Until I started using Specification Workshops as the name for a collaborative meeting about acceptance tests, it was very hard to convince business users to participate. A simple change in naming made the problem go away.
By using better names, we can avoid a lot of completely meaningless discussions and get people started on the right path straight away. So here is what I propose:
One of the biggest issues teams have with acceptance testing is who should write what and when. So we need to come up with a good name for the start of the process that clearly says that everyone should be involved, and that this needs to happen before developers start developing and testers start testing, because we want to use acceptance tests as a target for development. Test first is a good technical name for it, but business users don’t get it. I propose we talk about Specifying Collaboratively instead of test first or writing acceptance tests.
It sounds quite normal to put every single numerical possibility into an automated functional test; after all, why wouldn’t you, if it is automated? But such complex tests are unusable as a communication tool. So instead of writing functional tests, let’s talk about Illustrating with Examples (thanks, Dave), and expect the output of that to be Key Examples, to point out that we only want enough examples to explain the context properly.
Key examples are raw material. If we just talk about acceptance testing, why not dump all those complicated 50-column, 100-row tables into an acceptance test without any explanation? It’s going to be checked by a machine anyway. But these tests are for humans as well as for machines, so let’s talk about a whole new step after this: the process of extracting the minimal set of attributes and examples needed to specify a business rule, and of adding a title, a description and so on. I propose we call this Distilling the Specification.
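To make the distilling idea concrete, here is a minimal sketch in Python. The business rule (free delivery for orders of 100.00 or more) and all the values are invented for illustration; the point is the shape of the result: a handful of annotated key examples instead of an exhaustive generated table.

```python
# Hypothetical business rule, invented purely for illustration:
# orders of 100.00 or more qualify for free delivery.
def qualifies_for_free_delivery(order_total):
    return order_total >= 100.00

# A distilled specification: not thousands of generated rows, just the
# key examples, each carrying the reason it was kept.
KEY_EXAMPLES = [
    # (order total, expected result, why this example matters)
    (99.99,  False, "just below the threshold"),
    (100.00, True,  "exactly on the threshold"),
    (100.01, True,  "just above the threshold"),
    (0.00,   False, "empty order"),
]

for total, expected, reason in KEY_EXAMPLES:
    actual = qualifies_for_free_delivery(total)
    assert actual == expected, f"{reason}: {total} -> {actual}"
print("all key examples pass")
```

Each row earns its place by explaining something about the rule; a row that teaches nothing new is exactly what distilling removes.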
I just don’t want to spend any more time arguing with people who have already paid for a QTP licence that it is completely unusable for acceptance tests. As long as we talk about test automation, there will always be a push to use whatever horrible contraption testers already use for automation, because it seems logical to use a single tool. Acceptance testing tools don’t compete with QTP or anything like it; they address a completely different problem. Instead of talking about test automation, let’s talk about automating a specification without distorting any information: Literal Automation. Literal automation also avoids the whole scripting horror and the use of technical libraries directly in test specifications. If it’s literal, it should look as it looked on the whiteboard; it should not be translated into Selenium commands.
After Literal Automation, we get a specification that can be checked against the code automatically, an Executable Specification.
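A minimal sketch of what that can look like, again with an invented rule and invented values: the specification text stays exactly as it was written in the workshop, and a thin automation layer reads it and checks it against the system, with no scripting leaking into the specification itself.

```python
# The specification stays literal: a heading, a sentence, and the
# example table as it looked on the whiteboard. Rule and numbers are
# invented for illustration.
SPECIFICATION = """
Free delivery
Orders of 100.00 or more qualify for free delivery.

| order total | free delivery? |
| 99.99       | no             |
| 100.00      | yes            |
| 100.01      | yes            |
"""

def qualifies_for_free_delivery(order_total):
    # Stand-in for the real system under test.
    return order_total >= 100.00

def run_specification(spec_text):
    """Check every example row in the spec against the system;
    return the list of order totals that failed."""
    failures = []
    rows = [line for line in spec_text.splitlines() if line.strip().startswith("|")]
    for row in rows[1:]:  # skip the header row
        total_cell, expected_cell = [c.strip() for c in row.strip().strip("|").split("|")]
        expected = expected_cell == "yes"
        actual = qualifies_for_free_delivery(float(total_cell))
        if actual != expected:
            failures.append(total_cell)
    return failures

assert run_specification(SPECIFICATION) == []
```

The automation layer is the only place that knows how to talk to the system; the specification carries no trace of it, which is what keeps it readable for business users.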
We want to run all the acceptance tests frequently to make sure that the system still does what it is supposed to do (and, equally important, to check that the specification still says what the system does). If we call this regression testing, it’s very hard to explain to testers why they should not go and add five million other test cases to a previously nice, small and focused specification. If we talk about continuous integration, we get into the trouble of explaining why these tests should not always be end-to-end and run against the whole system. On top of that, for some legacy systems we need to run acceptance tests against a live, deployed environment, whereas technical integration tests run before deployment. So let’s not talk about regression testing or continuous integration; let’s talk about Continuous Validation (or even Frequent Validation).
The long-term pay-off from agile acceptance testing is having a reference on what the system does that is as relevant as the code itself, but much easier to read. That makes development much more efficient in the long term, facilitates collaboration with business users, leads to an alignment of software design and business models, and just makes everyone’s life much easier. But for this to work, the reference really has to be relevant: it has to be maintained, and it has to be consistent with itself and with the code. We should not have silos of tests that use the terms we had three years ago, others with the terms we used a year ago, and so on. Going back and updating tests is a very hard thing to sell to busy teams, but going back to update documentation after a big change is expected. So let’s not talk about folders filled with hundreds of tests; let’s talk about a Living Documentation system. That makes it much easier to explain why things should be self-explanatory, why business users need access to it as well, and why it has to be nicely organised so that things are easy to find.
What do you think about this? Does it make sense? Will it help? Do you have a better name for one of these concepts, that explains it more clearly?