The false dichotomy of tests

It seems that there is a new misleading trend in the community to believe that tests are either unit or acceptance checks. I’m not sure where this is coming from, but it is becoming more widespread, at least judging by the questions I get at conferences and workshops. A perfect example of that is this tweet:

After all, isn’t unit testing about isolating the class under test? Testing a set of classes is called by another name: acceptance tests.

It seems that the development community has a good grasp of what is and what isn’t a unit test, but everything else seems to be bundled in a Kraken of integration, acceptance, component, system and end-to-end tests. The situation is much much more complex than that. That’s why I like Michael Feathers’ description of what a unit test is not (scroll to the middle). He says clearly that tests that are not unit tests can sometimes be written in a unit testing harness, and if they are clearly separated from unit tests there isn’t necessarily a problem. He doesn’t try to put every non-unit test in a single box.

So let me repeat something that was said many times over:

Tests are code

This might be confusing at first, so let me rephrase it:

Tests are important long-term artefacts of your delivery process, as important as other long-term artefacts including code. You will need to maintain them in the long term. You should write tests in a way that makes them easy to maintain, or you’ll end up crying in the shower.

Why do we need to learn this lesson over and over again? It took the community years to grasp that unit tests have to be clean or there will be trouble. The situation is the same for all other types of tests, regardless of what you call them. Teams that do TDD, BDD, specification by example or something similar properly will end up with tests making up a huge percentage of the overall code, sometimes even over 50%. Disregarding good coding principles for such a huge chunk of code makes no sense. So let’s try to apply one of the most important guidelines for good code design, the Single Responsibility Principle, to the Kraken problem.

A test should have a single responsibility. Look at a Kraken test from that perspective and the flaw becomes obvious: a single test checks whether a business function works from a business perspective, whether the UI plugs in nicely to the application layer, whether the application layer talks to the database, whether different classes talk to each other properly from a technical perspective, and a lot more.
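As a sketch of that idea (all names here are hypothetical, using a simple assert-style harness), compare one Kraken test with several responsibilities against focused tests that each have a single reason to fail:

```python
# Hypothetical example: the Single Responsibility Principle applied to tests.

def apply_discount(total, percent):
    """Business rule under test: apply a percentage discount."""
    return round(total * (1 - percent / 100), 2)

# Kraken test: checks the business rule, the storage wiring and the
# display format all at once, so three unrelated changes can break it.
def test_discount_kraken():
    total = apply_discount(200.0, 10)          # business rule
    db = {}
    db["order-1"] = total                      # persistence wiring
    assert db["order-1"] == 180.0
    assert f"Total: {total:.2f} EUR" == "Total: 180.00 EUR"  # presentation

# Focused tests: each has a single responsibility and a single reason to fail.
def test_discount_rule():
    assert apply_discount(200.0, 10) == 180.0

def test_total_is_stored():
    db = {}
    db["order-1"] = 180.0
    assert db["order-1"] == 180.0

def test_total_formatting():
    assert f"Total: {180.0:.2f} EUR" == "Total: 180.00 EUR"
```

When the discount rule changes, only `test_discount_rule` should need attention; the Kraken version forces you to re-read all three concerns every time any one of them moves.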

We can also apply the high cohesion/low coupling principle to a set of Kraken tests. As they are all over the place, they will have very low conceptual cohesion. In terms of code sharing, in practice these tests often share a lot of utility code because they touch the same areas of the system, so they will be quite coupled.

It’s not important what you call it, but what it does

The testing terminology is quite confusing even at a level below what is a unit or not a unit test. (For a complete brain-melt about what different institutions call the same things, see John Kent’s attempt to classify test entities.) People will disagree on what an integration test is, or what a system test is, or what needs to be in an acceptance test. They have disagreed on this for decades and will continue disagreeing on it. But that doesn’t mean that anything that’s not a unit test belongs in a single Kraken. Don’t accept a false dichotomy of unit tests and everything else. Some of those other tests will be used for driving the design, some for derisking, some for inspecting, some for specifying, some will look at the system from a higher-level technical perspective, some will look at it from a higher-level business perspective.

My take on this is: don’t worry too much about what you call a test, as long as you are clear on what it does and it does a single thing. That rule of thumb will help you figure out whether you have specified, inspected and derisked everything properly, and will make your test code base easier to maintain in the future.

I'm Gojko Adzic, author of Impact Mapping and Specification by Example. I'm currently working on 50 Quick Ideas to Improve Your User Stories. To learn about discounts on my books, conferences and workshops, sign up for Impact or follow me on Twitter. Join me at these conferences and workshops:

Specification by Example Workshops

Product Owner Survival Camp

Conference talks and workshops

2 thoughts on “The false dichotomy of tests”

  1. Hi Gojko -

    This is an interesting topic to me. I have to admit, I’ve partially fallen into this trap, but I want to keep myself honest. From an automated test execution approach, what has been your experience with running buckets of test types? For example, in my mind, there are two clear types to run, at least:

    1. Unit tests run with every CI dev build, quickly for rapid dev team feedback (under 10 min)

    2. Acceptance tests run daily, or more often if you can afford it, to get feedback on existing user functionality and possible regressions in it.

    Do the ones that don’t fit nicely into those buckets get their own “integration” execution as well, or something like that? Just curious what your experience has been.

    thanks,
    Sean

  2. That all depends on how long it takes to run such tests. I’ve often seen them bundled with all the other slow tests if the whole slow-test run takes a reasonable time (say 20–40 minutes). In cases where the slow tests take longer to run, I’d suggest identifying the most critical tests and running them first (e.g. current iteration only, with the rest overnight).
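One way such buckets can be wired up (a minimal sketch, assuming plain Python rather than any particular CI tool or test runner; all names are illustrative) is to tag each test with a category and let each CI job filter by tag:

```python
# Sketch of test "buckets": tag each test with a speed/purpose category
# and let the CI job choose which buckets to run. Names are hypothetical.

TESTS = []

def bucket(name):
    """Decorator that files a test function under a named bucket."""
    def register(fn):
        TESTS.append((name, fn))
        return fn
    return register

@bucket("unit")
def test_fast_rule():
    assert 2 + 2 == 4

@bucket("acceptance")
def test_slow_user_journey():
    assert "checkout" in "browse-checkout-pay"

def run(*buckets):
    """Run only the tests in the given buckets; return the names run."""
    ran = []
    for name, fn in TESTS:
        if name in buckets:
            fn()
            ran.append(fn.__name__)
    return ran
```

A per-commit job would call `run("unit")` for fast feedback, while a nightly job could call `run("unit", "acceptance")`. Real test runners offer equivalent tagging mechanisms (pytest markers, JUnit categories/tags, NUnit traits), which is usually where this filtering belongs in practice.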
