Test-driven development is just a set of simple practices supported by a few lightweight tools, most of which are free. On the surface, these practices and tools aim to improve the quality of software with more rigorous and more efficient code checking. However, that is just the tip of the iceberg. There is much more to TDD than verifying code. As the name suggests, in test-driven development, tests play a proactive, leading role in the development process, rather than the reactive role of checking software after it is written.
A strict implementation of a feature with TDD consists of three stages:
1. Write a test for the feature and write just enough code to make the test compile and run, expecting it to fail the first time.
2. Change the underlying code until the test passes.
3. Clean up the code, integrate it better with the rest of the system, and repeat the tests to check that we have not broken anything in the process.
This sequence is often called “red-green-refactor” or “red bar-green bar-refactor”. The name comes from the status icon or status bar in most GUI test runners, which is red when tests fail and green when they pass.
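The red-green-refactor cycle can be sketched in a few lines of Python. The `discount` function and its business rule are hypothetical, invented purely to illustrate the sequence:

```python
# Red: write the test first. Running it before discount() exists (or with a
# stub) produces a failure -- the "red bar".
def test_discount():
    assert discount(total=120) == 108   # 10% off orders over 100
    assert discount(total=80) == 80     # no discount below the threshold

# Green: write just enough code to make the test pass.
def discount(total):
    return total * 0.9 if total > 100 else total

# Refactor: names and structure can now be cleaned up safely,
# re-running test_discount() after every change.
test_discount()
print("green")
```

Only after the bar is green again does work start on the next test.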
Once the first test passes, we write another test, write more code, make the new test run, clean up again and retest. After we have repeated this cycle for all the tests for a specific feature, our work on the feature is done and we can move on to the next feature. Robert C. Martin summarised this connection between tests and production code in his Three Rules of TDD:
The Three Rules of TDD
Over the years I have come to describe test driven development in terms of three simple rules. They are:

1. You are not allowed to write any production code unless it is to make a failing unit test pass.
2. You are not allowed to write any more of a unit test than is sufficient to fail; and compilation failures are failures.
3. You are not allowed to write any more production code than is sufficient to pass the one failing unit test.

-- Robert C. Martin
This approach may seem too strict or radical at first. Additional tests are often written after the first version of the code has been developed. Some people write the code first, then write the tests and modify the code until all the tests pass. This technique also works, as long as the tests are written in the same development cycle as the code, and extra care is taken to focus on the user stories, not the code.
If you are just getting to know TDD, I strongly recommend doing it first by the book, as described above. Once you get accustomed to the practices and underlying principles and see how they fit into your particular way of working, you can optimise and adjust the process.
Although the three rules of TDD deal with unit tests, they apply equally to tests that operate on a much larger scale. Iteration goals and business rules can also be translated into automated tests and used to help programmers focus their development effort. Tests that describe these goals are known as acceptance tests or customer-oriented tests. They do not focus on the code, but on the customer's expectations. Unit tests check whether the code does what programmers wanted it to do. Acceptance tests check whether the product satisfies the customer's requirements. In the same way that unit tests act as a target for code functionality, acceptance tests act as a target for the whole project. So, the first step of development is to ask business analysts or customers how we can verify that the code we are about to write works correctly. Having the acceptance criteria written down helps to flush out any inconsistencies and unclear requirements. Having the criteria in the form of an automated test makes it easy to check that we are on the right track.
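The difference in focus can be illustrated with a small example. FitNesse expresses acceptance criteria as wiki tables bound to fixture classes; the Python sketch below captures the same idea in plain code, with each row being an example the customer agreed on. The `shipping_cost` function and its business rule are hypothetical:

```python
# An executable specification: examples phrased in the language of the
# business domain, not in terms of classes or methods.
def shipping_cost(order_total, destination):
    # assumed business rule: free domestic shipping over 50, else flat fees
    if destination == "domestic":
        return 0 if order_total >= 50 else 5
    return 15

# order total, destination, expected shipping cost
examples = [
    (60, "domestic", 0),
    (49, "domestic", 5),
    (60, "international", 15),
]

for total, destination, expected in examples:
    assert shipping_cost(total, destination) == expected
print("all examples pass")
```

A unit test for the same feature would instead exercise individual methods of the shipping calculator; the acceptance test only cares that the agreed examples come out right.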
The name “acceptance test” is a bit misleading. When used properly, an acceptance test is more a specification for development than a test. Naresh Jain suggested that I should use “executable specification” instead of “acceptance test” in this book. I like that name much more because it truly reflects the role of acceptance tests in modern programming, and clears up a lot of questions that I often get. In spite of being misleading, “acceptance test” is an established name for this concept, and I decided to stay with it in this book. However, keep in mind that we are actually talking more about specifications than tests.
Acceptance tests should reflect the customers' perception of when the system meets requirements, so they must be defined by a customer or a business analyst. This still leaves us with the question of who should translate this definition into FitNesse tables. There was an interesting discussion on this topic at the XPDay 2007 conference in London, during a workshop called “Working With Customers towards Shared Understanding”. Several participants noted that if developers are left to write acceptance tests on their own, the tests turn out too technical and task-oriented. Acceptance tests are more effective if they are focused on larger activities and expressed in the language of the business domain. Although FitNesse allows customers and business analysts to write tests directly without involving developers, that may be a step too far. Customers often forget about edge cases and focus only on general rules. Antony Marcano, one of the maintainers of TestingReflections.com, pointed out that discussions between developers and customers during test writing help a lot to clarify the domain and enable developers to understand the problem better. If tests are written by customers on their own, then the value of these discussions is lost. So, ideally, a developer and a customer representative, or a business analyst, should write the tests together.
Looking at the bigger picture, James Shore offers his “Describe-Demonstrate-Develop” best practice for using FitNesse:
Now I draw a very sharp distinction between FIT and TDD. I use FIT at a higher level than TDD. If TDD is “red-green-refactor”, then my use of FIT (“Example-Driven Requirements”) is “describe-demonstrate-develop”. The “develop” step includes all of TDD.
As you expand the FIT document, you should see opportunities to reorganize, change, and improve sections. Please do. You'll end up with a much better result.
TDD brings into the software world a lot of ideas from zero quality control (ZQC), Toyota’s approach to achieving product quality. Understanding the principles of ZQC and applying them while writing tests can significantly improve the effectiveness of TDD.
The basic idea of zero quality control is that quality has to be built into the product, and does not come from controlling and sorting out defects at the end. Toyota’s solution consists of a design approach that aims to create mistake-proof products and uses successive inexpensive tests to detect problems at the source.
Poka-Yoke, or mistake-proofing, is one of the most important principles of zero quality control. It is an approach to manufacturing that aims to prevent problems by either making products error-proof by design or by providing early warning signals for problems. Although Toyota made these practices famous, other designers have been applying them for quite a while. For example, any average elevator checks the load before departing and stops working if it is overcrowded. Some also give a clear warning using a flashing light or sound. This is how an elevator stops a potential problem by design.
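The elevator analogy translates directly into code. The sketch below is a hypothetical model, with an invented capacity, showing both halves of Poka-Yoke: the design refuses to proceed when the check fails, and it raises a clear warning at the source:

```python
MAX_LOAD_KG = 630  # hypothetical rated capacity

def can_depart(load_kg):
    """Mistake-proofing by design: an overload blocks departure."""
    if load_kg > MAX_LOAD_KG:
        print("OVERLOAD: please reduce the load")  # early warning signal
        return False
    return True
```

The defective state (an overloaded car) is caught where it occurs, not discovered later as a failure.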
Checking at the source, rather than at the end, was one of the most important ideas described by Shigeo Shingo (1909-1990) in his book on zero quality control. Mary Poppendieck often comments on the idea that “inspection to find defects is waste, inspection to prevent defects is essential”.
On production lines, the mistake-proofing principles are applied using Poka-Yoke devices: test tools that check, inexpensively, whether a produced item is defective. Poka-Yoke devices enable workers to identify problems straightaway on the manufacturing line. Because the checks are quick and cheap, they can be used often to verify quality at different stages.
Software tests and testing tools are our Poka-Yoke devices, allowing us to check quickly whether procedures and classes are defective, straight after we write them. Tests are automated so that they can be quickly and effectively executed later to confirm that the code is still working.
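In Python, such a Poka-Yoke device might be a small automated suite written with the standard `unittest` module, cheap enough to run after every change. The `Stack` class here is a hypothetical example:

```python
import unittest

class Stack:
    """A minimal stack used only to have something to check."""
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

class StackTest(unittest.TestCase):
    def test_pop_returns_last_pushed_item(self):
        s = Stack()
        s.push(1)
        s.push(2)
        self.assertEqual(s.pop(), 2)

    def test_pop_on_empty_stack_signals_the_defect_at_source(self):
        with self.assertRaises(IndexError):
            Stack().pop()

# Run the checks immediately, like a Poka-Yoke device on the line.
result = unittest.TextTestRunner().run(
    unittest.defaultTestLoader.loadTestsFromTestCase(StackTest))
```

Because the suite is automated, re-running it later costs nothing, so any regression is caught straight after the change that introduced it.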
See http://gojko.net/2007/05/09/the-poka-yoke-principle-and-how-to-write-better-software/ for a more detailed discussion of how Poka-Yoke applies to programming.