I’ve been ranting, writing and teaching about the danger of using scripts as specifications for a while; it is one of the top reasons why teams fail with specification by example. I’m by no means the first or the only one to warn about this. Ward Cunningham and Rick Mugridge warned against it in Fit for Developing Software, David Peterson wrote about it in Concordion Technique, and Dan North covered it recently in Whose Domain is it Anyway? We all presented different heuristics for deciding whether the examples in a specification are at the right level of abstraction, mostly focusing on the difference between a series of steps that explains how something is done and a more direct explanation of what a system needs to do. But a relative novice, looking at this with a fresh pair of eyes, came up with a much better way to explain it.
It is truly amazing how much clarity a fresh perspective, uninhibited by lots of background knowledge, can offer. During a recent workshop in Oslo, while we were discussing the difference between scripts and specifications, John Inge asked:
So if I summarise something and I use different words, then it is not a good specification?
In this question, he formulated a fantastically simple yet powerful way to decide whether a specification needs further refinement with examples. If I try to summarise a set of examples and end up using different concepts, skipping over parts or explaining them differently, then the examples will probably be hard to understand later. It might also point to unnecessary detail or duplication, which means that the examples are not at the right level of abstraction. It also means that the names, concepts and relationships in the mental model of the person explaining the specification aren’t directly represented in the examples. If that person is a business user, this is a failure from the point of view of aligning business and software models, regardless of whether the examples illustrate a script composed of a series of steps or not.
If a summary of the examples requires only the concepts, names and relationships already mentioned in the examples, those examples are probably very direct and cannot be simplified further. It also means that the examples are aligned with the mental model of the person explaining them, which is a good start for applying a ubiquitous language.
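To make the contrast concrete, here is a hypothetical sketch in Gherkin, one common format for this kind of example (the shop, the VIP rule, the five-book threshold and all of the names are invented for illustration; they are not from the workshop, nor are they the payroll examples discussed below). Both versions are meant to capture the same rule: VIP customers who buy enough books get free delivery.

```gherkin
Feature: Delivery charges

  # Script-style: a series of steps explaining how to drive the system.
  Scenario: Free delivery
    Given I open the shop homepage
    And I log in as "john.smith@example.com"
    And I add 5 books to the basket
    When I click "Checkout"
    Then the "Delivery cost" field shows "0.00"

  # Specification-style: the rule stated directly, with key examples.
  Scenario Outline: VIP customers who buy enough books get free delivery
    Given a <customer type> customer with <books> books in the basket
    When delivery is calculated
    Then delivery is <delivery>

    Examples:
      | customer type | books | delivery |
      | VIP           | 5     | free     |
      | VIP           | 4     | paid     |
      | regular       | 5     | paid     |
```

The first version buries the rule in UI mechanics, which is exactly the “how” rather than the “what”; the second states the rule directly.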
John wrote a blog post about this, summarising the idea:
To see if you have captured the “what” in your examples/tests, you can use these criteria:
- You are using keywords from the examples to summarize them
- You can’t summarize the examples in simpler terms.
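Applied to the hypothetical sketch above, these criteria separate the two versions immediately: summarising the script-style scenario forces you to bring in words like “VIP” and “free” that appear nowhere in its steps (only an email address and a “0.00” field value do), so it fails both tests, while the tabular version can be summarised by its own title, using only keywords that already appear in the examples.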
Try it now, for example, on the two sets of payroll examples in this post. They both represent the same thing, so you can summarise or explain the examples in both cases in the same way (the header of the second version is probably a good summary). In the first case, many of the concepts and relationships from the summary are not directly expressed in the examples, and the examples contain a lot of additional cruft. In the second case, the summary and the examples are completely aligned.
To me, this looks like a very simple but powerful heuristic. How does it work when you try it on your examples?