Applying TDD to our project

Test-driven development is just a set of simple practices supported by a few lightweight tools, most of which are free. On the surface, these practices and tools aim to improve the quality of software with more rigorous and more efficient code checking. However, that is just the tip of the iceberg. There is much more to TDD than verifying code. As the name suggests, in test-driven development, tests play a proactive, leading role in the development process, rather than the reactive role of checking software after it is written.

Guiding the development

A strict implementation of a feature with TDD consists of three stages:

  1. Write a test for the feature and write just enough code to make the test compile and run, expecting it to fail the first time.

  2. Change the underlying code until the test passes.

  3. Clean up the code, integrate it better with the rest of the system and repeat the tests to check that we have not broken anything in the process.

This sequence is often called “red-green-refactor” or “red bar-green bar-refactor”. The name comes from the status icon or status bar in most GUI test runners, which is red when tests fail and green when they pass.
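One pass through this cycle might look like the following sketch, here in Python with the standard unittest module (the Stack class and its test are hypothetical, invented for illustration; any xUnit-style framework would do):

```python
import unittest

# Step 1 (red): the test below is written first, together with just enough
# of the Stack class to make it compile and run.
# Step 2 (green): pop() is then filled in with the simplest code that passes.
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        # The simplest implementation that makes the test pass.
        return self._items.pop()

class StackTest(unittest.TestCase):
    def test_pop_returns_last_pushed_item(self):
        stack = Stack()
        stack.push("first")
        stack.push("second")
        self.assertEqual("second", stack.pop())

# Run the test the way a GUI or console test runner would:
suite = unittest.defaultTestLoader.loadTestsFromTestCase(StackTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```

Step 3, the refactoring, would follow once the bar is green: clean up the code, re-run the same test, and confirm it still passes.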

Once the first test passes, we write another test, write more code, make the new test run, clean up again and retest. After we have repeated this cycle for all the tests for a specific feature, our work on the feature is done and we can move on to the next feature. Robert C. Martin summarised this connection between tests and production code in his Three Rules of TDD:[8]


The Three Rules of TDD

Over the years I have come to describe test driven development in terms of three simple rules. They are:

  1. You are not allowed to write any production code unless it is to make a failing unit test pass.

  2. You are not allowed to write any more of a unit test than is sufficient to fail; and compilation failures are failures.

  3. You are not allowed to write any more production code than is sufficient to pass the one failing unit test.

 --Robert C. Martin

This approach may seem too strict or radical at first. Additional tests are often written after the first version of the code has been developed. Some people write the code first, then do the tests and modify the code until all the tests pass. This technique also works, as long as the tests are written in the same development cycle as the code, and extra care is taken to focus on the user stories, not the code.

[Important]Think about the intention, not the implementation

A test is pointless as a quality check for the code if it just replicates what the code does. Any bugs that found their way into the code will just be replicated in the test. Tests that just replicate code are very dangerous because they will give you a feeling of comfort, thinking that the code is tested properly, while bugs are waiting round the corner. Do not write the test by looking at the code, as you may get blinded by the implementation details. This is one of the reasons why it is better to write tests before the code, not after.

Tests that guide the development should always reflect the intention, not the implementation.
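A contrived sketch of this point (the function and its bug are invented for illustration): both checks below exercise the same buggy function, but only the one that states the intention catches the bug.

```python
def full_price(unit_price, quantity):
    # Hypothetical production code with a bug: it should multiply
    # unit_price by quantity.
    return unit_price * unit_price

# Implementation-mirroring test: it computes the expectation the same way
# the code does, so it replicates the bug and passes without proving anything.
assert full_price(3, 5) == 3 * 3

# Intention-based test: it states the expected result directly, from the
# business rule "price times quantity". This one would fail and expose the bug:
# assert full_price(3, 5) == 15
```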

If you are just getting to know TDD, I strongly recommend doing it by the book at first, as described above. Once you get accustomed to the practices and underlying principles and see how they fit into your particular way of working, you can optimise and adjust the process.

Automated acceptance testing

Although the three rules of TDD deal with unit tests, they apply equally to tests that operate on a much larger scale. Iteration goals and business rules can also be translated into automated tests and used to help programmers focus their development effort. Tests that describe these goals are known as acceptance tests or customer-oriented tests. They do not focus on the code, but on the customer's expectations. Unit tests check whether the code does what programmers wanted it to do. Acceptance tests check whether the product satisfies the customer's requirements. In the same way that unit tests act as a target for code functionality, acceptance tests act as a target for the whole project. So, the first step of development is to ask business analysts or customers how we can verify that the code we are about to write works correctly. Having the acceptance criteria written down helps to flush out any inconsistencies and unclear requirements. Having the criteria in the form of an automated test makes it easy to check that we are on the right track.
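As a minimal sketch of the idea, here is what acceptance criteria recorded as an executable check might look like. Everything in it is hypothetical, invented for illustration: the business rule, the example values and the function under test. The point is that the examples speak in domain terms (order totals, discounts), not in terms of classes or methods.

```python
def discount_for(order_total):
    # Hypothetical production code: a 10% discount on orders of 100 or more.
    return round(order_total * 0.10, 2) if order_total >= 100 else 0.0

# Examples agreed with the customer, written down before development:
# (order total, expected discount)
examples = [
    (50.00, 0.00),    # small orders earn no discount
    (100.00, 10.00),  # the threshold itself qualifies
    (250.00, 25.00),  # the discount grows with the order
]

for total, expected in examples:
    assert discount_for(total) == expected, (total, expected)
print("all examples pass")
```

In practice a tool such as FitNesse would hold the examples in a table maintained by the business side, but the principle is the same: the agreed examples become an automated check of the whole feature.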

The name “acceptance test” is a bit misleading. When used properly, an acceptance test is more a specification for development than a test. Naresh Jain suggested that I should use “executable specification” instead of “acceptance test” in this book. I like that name much more because it truly reflects the role of acceptance tests in modern programming, and clears up a lot of questions that I often get. In spite of being misleading, “acceptance test” is an established name for this concept, and I decided to stay with it in this book. However, keep in mind that we are actually talking more about specifications than tests.

Who should write acceptance tests?

Acceptance tests should reflect the customers' perception of when the system meets requirements, so they must be defined by a customer or a business analyst. This still leaves us with the question of who should translate this definition into FitNesse tables. There was an interesting discussion on this topic at the XPDay 2007 conference in London, during a workshop called “Working With Customers towards Shared Understanding”. Several participants noted that if developers are left to write acceptance tests on their own, the tests turn out too technical and task-oriented. Acceptance tests are more effective if they are focused on larger activities and expressed in the language of the business domain. Although FitNesse allows customers and business analysts to write tests directly without involving developers, that may be a step too far. Customers often forget about edge cases and focus only on general rules. Antony Marcano pointed out that discussions between developers and customers during test writing help a lot to clarify the domain and enable developers to understand the problem better. If tests are written by customers on their own, then the value of these discussions is lost. So, ideally, a developer and a customer representative, or a business analyst, should write the tests together.

[Tip]Is it better to use acceptance or unit tests?

When acceptance tests drive the development, large parts of the production code are covered by these tests. Some people tend to write fewer unit tests because acceptance tests already check the functionality. Although this practice does save some time, it may make it harder to pinpoint problems later. Failing acceptance tests signal that there is a problem, but do not locate the source as precisely as unit tests do. Acceptance tests also rarely check edge cases, so unit tests have to be written at least to cover those. Infrastructural parts of the code, not especially related to any user story, will also not be properly covered by acceptance tests.

In my experience, it is best to use both unit and acceptance tests; one group does not exclude the other. If this question bothers you, ask it again, but use “executable specification” instead of “acceptance test”. You need both the specification and the tests for your code, and should not have to choose between them.

Looking at the bigger picture, James Shore offers his “Describe-Demonstrate-Develop” best practice for using FitNesse:[9]



Now I draw a very sharp distinction between FIT and TDD. I use FIT at a higher level than TDD. If TDD is “red-green-refactor”, then my use of FIT (“Example-Driven Requirements”) is “describe-demonstrate-develop”. The “develop” step includes all of TDD.

  1. Describe. In the FIT document, use a short paragraph to describe part of the functionality that the software supports. (This should be done by business experts.)

  2. Demonstrate. Provide some examples of the functionality, preferably examples that show the differences in possibilities. Sometimes only one or two examples are enough. (This should be done by business experts, too, possibly with help from testers and programmers.)

  3. Develop. Develop the functionality using TDD. Use the structure and terms of the examples to provide direction for the domain model, per Eric Evans' Ubiquitous Language. Use each kind of concept in the examples (such as “Currency”) to drive the creation of new types, per Ward Cunningham's Whole Value. (This should be done by programmers.) Don't run FIT during this stage until the small increment of functionality is done. When it is, create the FIT fixture and hook it up. Use your Whole Value classes rather than primitive types. When you run FIT, the examples should pass.

  4. Repeat. Continue with the next section of the document. Often, the business experts can go faster than the developers and finish several more Describe/Demonstrate sections before the programmers finish Developing the first section. That's okay and I encourage that they do so. There's no need for the business experts to wait for the programmers to finish Developing a section before the business experts Describe and Demonstrate the next one.

As you expand the FIT document, you should see opportunities to reorganize, change, and improve sections. Please do. You'll end up with a much better result.

 --James Shore
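The Whole Value idea from step 3 can be sketched as follows. The Money class and its parsing format are hypothetical, invented for illustration: instead of passing raw floats and strings around, each concept from the examples becomes its own type, which a fixture can then use when hooking the examples up.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Money:
    amount: float
    currency: str

    @classmethod
    def parse(cls, text):
        # Parse values as they might appear in the examples, e.g. "5.50 USD".
        amount, currency = text.split()
        return cls(float(amount), currency)

    def __add__(self, other):
        # A Whole Value protects its own invariants.
        if self.currency != other.currency:
            raise ValueError("cannot add different currencies")
        return Money(self.amount + other.amount, self.currency)

price = Money.parse("19.50 USD")
shipping = Money.parse("5.50 USD")
assert price + shipping == Money(25.0, "USD")
```

A fixture built on such a class works with domain values directly, rather than re-parsing primitive strings and numbers at every step.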

Testing to prevent defects, not to find them

TDD brings into the software world a lot of ideas from zero quality control (ZQC), Toyota’s approach to achieving product quality. Understanding the principles of ZQC and applying them while writing tests can significantly improve the effectiveness of TDD.

The basic idea of zero quality control is that quality has to be built into the product, and does not come from controlling and sorting out defects at the end. Toyota’s solution consists of a design approach that aims to create mistake-proof products and uses successive inexpensive tests to detect problems at the source.

Poka-Yoke, or mistake-proofing, is one of the most important principles of zero quality control.[12] It is an approach to manufacturing that aims to prevent problems either by making products error-proof by design or by providing early warning signals for problems. Although Toyota made these practices famous, other designers have been applying them for quite a while. For example, a typical elevator checks the load before departing and stops working if it is overloaded. Some also give a clear warning with a flashing light or a sound. This is how an elevator prevents a potential problem by design.

Checking at the source, rather than at the end, was one of the most important ideas described by Shigeo Shingo (1909-1990) in his book on zero quality control [9]. Mary Poppendieck often comments on the idea that “inspection to find defects is waste, inspection to prevent defects is essential”.

On production lines, the mistake-proofing principles are applied using Poka-Yoke devices: test tools that check, inexpensively, whether a produced item is defective. Poka-Yoke devices enable workers to identify problems straight away on the manufacturing line. Because the checks are quick and cheap, they can be performed often, verifying quality at different stages of production.

Software tests and testing tools are our Poka-Yoke devices, allowing us to check quickly whether procedures and classes are defective, straight after we write them. Tests are automated so that they can be quickly and effectively executed later to confirm that the code is still working.