Let’s change the tune

As a community, we’re very guilty of using technical terms that confuse business users. If we want to get them more involved, we have to use the right names for the right things and stop confusing people. This lesson is obvious for acceptance tests themselves: we know we need to keep the naming consistent and avoid misleading terms. But we don’t apply it when we talk about the process. For example, when we say continuous integration in the context of agile acceptance testing, we don’t really mean running integration tests. So why use that term, and then have to explain how acceptance tests differ from integration tests? Until I started using Specification Workshops as the name for a collaborative meeting about acceptance tests, it was very hard to convince business users to participate. A simple change in naming made the problem go away.

By using better names, we can avoid a lot of completely meaningless discussions and get people started on the right path straight away. So here is what I propose:

One of the biggest issues teams have with acceptance testing is who should write what, and when. So we need a good name for the start of the process, one that clearly says that everyone should be involved, and that this needs to happen before developers start developing and testers start testing, because we want to use acceptance tests as a target for development. Test-first is a good technical name for it, but business users don’t get it. I propose we talk about Specifying Collaboratively instead of test-first or writing acceptance tests.

It sounds quite normal to put every single numerical possibility into an automated functional test; why wouldn’t you, if it is automated? But such complex tests are unusable as a communication tool. So instead of writing functional tests, let’s talk about Illustrating with Examples (thanks, Dave) and expect the output of that to be Key Examples, to point out that we only want enough examples to explain the context properly.
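To make the contrast concrete, here is a minimal sketch in Python, assuming an invented free-delivery rule (five or more books ship free). The point is that a handful of well-chosen examples around the boundary communicate the rule far better than a table of every possible quantity:

```python
# Hypothetical rule, invented purely for illustration: orders of five or
# more books qualify for free delivery.
def qualifies_for_free_delivery(book_count: int) -> bool:
    return book_count >= 5

# Key examples: just enough cases to illustrate the rule and its boundary,
# rather than every numerical possibility.
KEY_EXAMPLES = [
    (4, False),   # just below the threshold
    (5, True),    # exactly on the threshold
    (100, True),  # well above it; adds nothing new, kept only as a sanity check
]

for books, expected in KEY_EXAMPLES:
    assert qualifies_for_free_delivery(books) == expected
```

A 100-row table of quantities would pass the same checks, but these three rows are the ones a business user can read and confirm at a glance.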

Key examples are raw material. If we just talk about acceptance testing, why not dump all those complicated 50-column, 100-row tables into an acceptance test without any explanation? It’s going to be checked by a machine anyway. But these tests are for humans as well as for machines, so let’s talk about a whole new step after this: the process of extracting the minimal set of attributes and examples that specify a business rule, and of adding a title, a description and so on. I propose we call this Distilling the Specification.

I just don’t want to spend any more time arguing with people who have already paid for a QTP licence that it is completely unusable for acceptance tests. As long as we talk about test automation, there will always be a push to use whatever horrible contraption testers already use for automation, because it seems logical to use a single tool. Acceptance testing tools don’t compete with QTP or things like that; they address a completely different problem. Instead of talking about test automation, let’s talk about automating a specification without distorting any information: Literal Automation. Literal automation also avoids the whole scripting horror and the use of technical libraries directly in test specifications. If it’s literal, it should look as it looked on the whiteboard; it should not be translated into Selenium commands.
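As a rough sketch of the idea (the table, the rule and the `validate` helper are all invented here; in practice a tool such as FitNesse does this wiring for you): the specification stays in its literal, whiteboard form, and only a thin automation layer knows how to execute it.

```python
# The specification stays exactly as it looked on the whiteboard:
# a plain table of key examples, with no scripting in it.
SPECIFICATION = """
books ordered | free delivery?
4             | no
5             | yes
6             | yes
"""

# Hypothetical domain rule under specification (invented for illustration).
def qualifies_for_free_delivery(book_count: int) -> bool:
    return book_count >= 5

# The automation layer is the only place that knows how to turn the literal
# table into checks; any workflow or scripting lives here, never in the
# specification itself.
def validate(spec: str) -> bool:
    rows = [line.split("|") for line in spec.strip().splitlines()[1:]]
    return all(
        qualifies_for_free_delivery(int(count)) == (answer.strip() == "yes")
        for count, answer in rows
    )

print(validate(SPECIFICATION))  # prints: True
```

If the rule changes, the table changes in the same words the team used on the whiteboard; the translation into clicks or driver commands is the automation layer’s private business.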

After Literal Automation, we get a specification that can be checked against the code automatically, an Executable Specification.

We want to run all the acceptance tests frequently to make sure that the system still does what it is supposed to do (and, equally important, to check that the specification still says what the system does). If we call this regression testing, it’s very hard to explain to testers why they should not go and add five million other test cases to a previously nice, small and focused specification. If we talk about continuous integration, we get into the trouble of explaining why these tests should not always be end-to-end and run against the whole system. On top of that, for some legacy systems we need to run acceptance tests against a live, deployed environment, whereas technical integration tests run before deployment. So let’s not talk about regression testing or continuous integration; let’s talk about Continuous Validation (or even Frequent Validation).

The long-term pay-off from agile acceptance testing is having a reference for what the system does that is as relevant as the code itself, but much easier to read. That makes development much more efficient in the long term, facilitates collaboration with business users, leads to an alignment of software design and business models, and just makes everyone’s life much easier. But for this to work, the reference really has to be relevant: it has to be maintained, and it has to be consistent with itself and with the code. We should not have silos of tests that use the terms we had three years ago, alongside those we used a year ago, and so on. Going back to update tests is a very hard thing to sell to teams who are busy, but going back to update documentation after a big change is expected. So let’s not talk about folders filled with hundreds of tests; let’s talk about a Living Documentation system. That makes it much easier to explain why things should be self-explanatory, why business users need access to this as well, and why it has to be nicely organised so that things are easy to find.

What do you think about this? Does it make sense? Will it help? Do you have a better name for one of these concepts, that explains it more clearly?

I'm Gojko Adzic, author of Impact Mapping and Specification by Example. My latest book is Fifty Quick Ideas to Improve Your Tests.

25 thoughts on “Let’s change the tune”

  1. Hi Gojko – very timely indeed. I am currently wrestling with this very problem on an in-flight project. I have used both “Test-First” and “ATDD” but neither really communicates the message. Not only does it fail to capture the imagination, it also means that we are pushing out that scary “Test” word.

    I am currently writing a slide deck entitled “To Improve Quality we MUST stop testing!!” – and I am hoping that as I work in QA this may attract some attention. What I am waiting for is that moment of inspiration that will lead me to the alternative, inspirational name for what we do.

    I currently refer to “executable specifications”, “specification by example”, “domain modelling”, “distilling the specifications”,”continuous validation” etc … but they are just bits and pieces of a grander theme.

    However, just like “test” the “specification” word also carries baggage with it … as does “automation”. Maybe we just need to invent some language of our own in order to move forward baggage free …

    Development by Examplification – combo of example and specification
    Testification or Testample – for dual combination of test/specification and test/example.

    Hopefully Gojko you have started a thread that will lead us to Nirvana …

  2. We’re humans, hence not comfortable with large amounts of information (big specs or huge lists of use cases), but we are somehow smart instead, which is why I like the idea of “Distilling the specification” (complementary to DDD’s “Distilling the domain”).

    I’ve always loved this idea, but I have found it quite hard to materialize the distilled specifications into automated tests. The only easy and obvious examples that work are invariants of all kinds (e.g. conservation of money in a banking system).

    The other wording you suggest makes equal sense to me, putting the emphasis on what we really want and the value we expect for ourselves (“Frequent Validation”, “Living Documentation”), rather than on the various means to achieve it.


  3. I really like some of the terms you have suggested… I know Brian Marick has been trying for years to get a perfect language too. The problem I see is trying to get everyone to change.

    Most of the terms are simple, which makes them easy to adopt. I especially like Illustrating with Examples, and Key Examples, to help explain ATDD. I also like continuous (or frequent) validation, and living documentation.

    Some of the others though, like distilling the specification, don’t catch my imagination, and I think it may be hard to get mainstream testers to use those terms.

  4. I would go even further and argue that our users don’t care how beautiful the source code of our acceptance tests is. In the YouTube world that we live in now, users are quite happy with a 30-second demo video of some feature. Just look at almost any iPhone commercial — they are short and sweet example tests of iPhone functionality. I would argue that as a profession, we’re all doing it wrong — obsessing over creating the perfect “screenplay” — when we should be focused on making great movies. If you want to really do “Illustrating with Examples”, I believe movies, not source code, are *better* examples for showing off the applications we create.

  5. Very nice.

    The only term I somewhat dislike is “Literal Automation”, but I can’t really say what makes me uncomfortable nor do I have a better suggestion.

    “Illustrating with Examples” and “Key Examples” nails it perfectly.

  6. +1 to “Illustrating with Examples” and “Key Examples”

    I like these because customers can understand them. I plan to use them in my next customer meeting.

    I take “Key Examples” to represent the core value of the feature to be implemented, so that one or two “key examples” might account for 75%+ of the intended value. Is that how you’re using that term? (I apologize if I’m out in left field and missing your point here.)

    Also, to keep things simple, perhaps we could then refer to the next step as simply “enacting the examples” (i.e., acceptance-test-driven-development).

  7. I’m not set on any of these terms. Suggest a better one if you don’t like it. But let’s create a consistent vocabulary.

  8. I am fond of saying “Automated Specification”. I also talk a lot about “Concrete Examples” which are the ideas that we get from the customer that are capable of becoming “Automated Specifications.” These have to be specific and collectively representative.

    I have never liked the term “ATDD”. It sounds too much like TDD which confuses both my TDD students and my customers. The “ATDD” rhythm is completely different than the TDD rhythm. While the approaches are conceptually similar they feel entirely different. It takes a while to get that across, and I suspect it takes longer if we insist on using these nearly identical initialisms.

  9. “Executable Specifications” is the global term encompassing the following three activities:

    1. Illustrating with Examples (expect the output of that to be “Key Examples”)

    2. Distilling the specification (extract the minimal set of behaviors from “Key Examples” and connect behaviors with code)

    3. Ensuring implementation correctness (Continuous Validation)

  10. I don’t think “Literal Automation” works at all – I have no idea what it’s trying to say, because “literal” has multiple meanings. I might like “Lossless Automation” or “Precise Automation” or “Undistorted Automation”, which more clearly represent the idea of exactitude.

    I like “Continuous Validation” a lot.

    “Key Examples” is OK, though I would have liked “Essential Examples” better – you’re looking for the examples that cover the essence of the product and it captures more of the idea of required behaviors.

    However, I’m not sure what the distinction between your “Key Examples” step and your “Distilling the Specification” step is – I would expect the result of distilling to be the core (essential) examples. I think “Visualizing the Specification” or “Expressing the Specification” might be more descriptive of what happens here.

    I share Brian’s discomfort with “Executable Specification”, because it’s inherently confusing to practitioners (even if it makes sense to customers) – it already has a different meaning. Tests are *negative* specifications – they identify what is NOT the product. But I’m not coming up with anything I like better that would be suitable for use with customers. “Executable Metrics” captures a little of the idea, but is also a little misleading. Maybe “Executable Rules”?

  11. I found “Literal Automation” jarring too. It doesn’t seem to encapsulate the idea.

    I’m posting to suggest that maybe “animation” is a better word than “automation”. I think it has less technical baggage – not sure what to combine it with though: “Story animation”, “Feature animation”, “Example animation”. Hmmm… not quite there yet.

    Anyway. Great post Gojko!

  12. My thoughts are distilled here but there is a longer write-up and supporting diagrams as a blog post (link below).

    – the conversation would benefit from focusing on a single noun
    – ‘specification’ is the noun that seems to work best
    – the specification goes from ‘identified’, to ‘illustrated’, to ‘refined’, to ‘executable’

    After having done the exercise of mapping the terminology to use the single noun ‘specification’, it strikes me that this post has nothing to do with testing, although it requires testing skills to perform (hence the collaborative specification label that Gojko gave it). Nice. We get to leave the term ‘test automation’ behind since that’s not what we’re talking about.


  13. Good stuff! I am also not a fan of ‘Literal Automation’.

    A specification can be readable by product owners, written by product owners and executable. I seldom see product owners write executable specifications because of their inability to effectively deal with run time errors from lower layers of the app. (See The Law of Leaky Abstractions by Joel Spolsky – http://www.joelonsoftware.com/articles/LeakyAbstractions.html) Thus, executable specifications are best written by developers. In Java and .NET the fluent style seems to be readable enough, and any attempt at making the specification more literal than that has questionable ROI because the developers (writers) lose the power of their IDE and static typing.
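    As a small illustration of that fluent style (a hypothetical sketch in Python rather than Java or .NET, reusing an invented free-delivery rule), the specification reads almost like a sentence while remaining ordinary code that the developers’ tooling understands:

    ```python
    # A hypothetical fluent-style executable specification: readable enough
    # for a product owner to review, while staying plain code.
    class OrderSpec:
        def __init__(self):
            self._books = 0

        def given_an_order_of(self, books: int) -> "OrderSpec":
            self._books = books
            return self

        def should_have_free_delivery(self) -> bool:
            # Invented rule: five or more books ship free.
            return self._books >= 5

    assert OrderSpec().given_an_order_of(5).should_have_free_delivery()
    assert not OrderSpec().given_an_order_of(4).should_have_free_delivery()
    ```

    Each method returns the spec object itself, which is what lets the calls chain into a readable sentence.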

    My current shop has product owners write readable, but not executable, specifications (wiki/Jira) and developers write executable, but not very readable, specifications via UI/functional tests. So far I have not been able to make a business case that moved either party in the direction of combining the two. Our ‘communication gap’ is perceived to be quite minimal because we all sit in close proximity to each other and attend all of the meetings together (dev, PMs, testers).


  14. Specification does mean many things to many people, but it (whatever it be called) also does run consistently through the process as the object that is shaped from requirement / story into fully formed (and automated) acceptance tests.

    1. Collaborated Specification – illustrated or defined by key examples

    2. Refined or Elaborated Specification (instead of distilled which carries images of watering it down to me) – turn the examples into what we could call a set of test cases (description, expected results, data etc)

    3. Turned into an Executable Specification by literal automation, maybe liberal automation :-)
    The only adjectives I can think of to replace this are negative like unadulterated, undiminished or uncondensed automation. Maybe “Whole” or “Total” Automation?!?

    4. Continuous Validation is spot on for me (also fits well with V,V&T I think)

    5. as is the Living Documentation that the executable specs form and are maintained as.

    To note though, ISO have been writing a new glossary of terms along with their new standard for about 3 or 4 years (working draft issued to an elite for review http://softwaretestingstandard.org) and with other standards out there (BS-7925-1, IEEE etc) grounding terms is tough. This is accentuated by the increasingly prevalent blurring of methodology and practices between traditional and agile, albeit that most recognised standards seem to be traditionally bent.

    So the practices themselves are the important things to convey. Terms that describe the underlying practices better definitely serve to bridge the gap between the business and IT, something no standard I’ve ever read has managed to do.

  15. Yes, good stuff!

    I particularly like “Specifying Collaboratively” and “Continuous Validation”. I would however include conversations in the latter, it’s too important to be left only to the tools! Perhaps it even includes feedback about or from already-implemented systems.

  16. I like Gojko’s attempt to reframe our thinking by reframing our vocabulary. A number of these ideas make terrific sense, and I hope they get adopted. (And I hope we are soon able to dispense with the idea of “acceptance tests”, as I propose here.)

    For those who don’t follow the link: acceptance tests tell us NOTHING about whether the product is really acceptable, since all the acceptance tests can pass even when the product has terrible problems. As a trivial example, run all the “acceptance tests”, watch the bars turn green, rejoice, and then observe that immediately after you observe the green bar, the product crashes and deletes all your data. Passing acceptance tests can only tell us that the product appears to meet some requirement to some degree in some circumstance—typically in an insulated test environment of some kind. On the other hand, failing acceptance tests can tell us that the product isn’t ready. They don’t tell us if the product is acceptable; they DO tell us if the product is unacceptable. Ergo, they should be called rejection tests. (Rejection checks, actually, but I’ll leave that for another day.)

    I have a number of things that I’m still confused about, though.

    Gojko says, “Instead of talking about test automation, let’s talk about automating a specification without distorting any information – Literal Automation.” I’m curious how one would automate a specification without distorting any information. Isn’t a specification an expression of an idea? Isn’t an automated test a different expression of that same idea? And doesn’t any change in the medium of an idea introduce some distortion?

    Martin says, “It not only doesn’t capture the imagination it also means that we are pushing out that scary ‘Test’ word.” What, exactly, is scary about any word? Surely what is scary is the ideas behind the word. So what is scary about (either the word or the idea) “test”? It means to question the product in order to evaluate it (Bach); to gather information for the purpose of informing a decision (Weinberg); an empirical, technical investigation of a product, done on behalf of stakeholders with the intention of revealing quality-related information of the kind that they seek (Kaner); or “an investigation of the behaviour and interactions of computer programs, people, systems, and services” (me). All of these add up to the same thing: examining how well the product serves the customer and the other stakeholders, and finding possible problems in it. There’s often talk of “maturity” in our business, and most of it is tosh. But there is a way in which maturity is important: we do need to be able to deal with the fact that despite our best intentions and our most careful thinking, we can still misinterpret things, we can still fail to understand a problem before we are provided with evidence of it, and things still go awry. The mature approach, it seems to me, is to develop acceptance of our limitations and manage them without being afraid.

    —Michael B.

  17. @Michael,

    Re automating literally – most of the workshops I facilitate end up with people writing the acceptance test examples on a whiteboard in almost the same form that they will end up in FitNesse. When putting them into FitNesse we go through the process of refining or distilling the specification, clean it up a bit and add a title and a description, but mostly it’s the same. The idea with automating literally is to point out that any test-specific workflow or scripting should go into the automation layer, not the acceptance test.

  18. Pingback: managing expectations » Blog Archive » def’n 2: managing *Expectations* - a blook for mastering agile requirements

  19. Interesting!

    I’m a bit too scientific to like the term specification. To me, specifications must be complete, not specific/pointwise. Examples/tests/scenarios/and-so-on only prove correctness at points, not over infinite sets. But I also dislike “test”, since it is scary and overloaded ad infinitum (another misfeature of ‘specification’, as Steve mentions).

    I liked Adam’s view, trying to see one concept (he chose ‘specification’ for lack of a better word) in transition, instead of a plethora of disconnected words.

    When reading Adam’s blog post, I saw him use the well-known terminology “business rule”. That wording is also used in Gojko’s blog post.

    Isn’t that what we are talking about? “Rules”? Then, we’d get something like:

    “Writing rules together” instead of “specifying collaboratively”. Whole team.
    Illustrating rules with (key) examples. Or maybe even “explaining rules”, which feels more natural. Whole team.

    Then, developers “pinpoint” the rules even more with detailed, numerous examples, looking at the key examples as a guide. This time executable.

    Gojko; did you post any follow-up on this subject..?

  20. Oh :O)

    Great. Hmm would you recommend me reading that one or Bridging the communications gap first? Is there sneak preview examples online somewhere?

  21. BtCG is about my experiences. SBE is about other peoples’ experiences. I guess read both. It doesn’t matter which one comes first.
