During the “Test Driven Development: Ten Years Later” talk today at QCon London 2009, Steve Freeman and Michael Feathers presented a “random walk through the history of TDD” and the key lessons learned by the community in that period.
Feathers and Freeman started the talk by noting that it has been around ten years since TDD went public and started gaining wider adoption, but that its origins go back much earlier than most people would expect. Gerald Weinberg, working on punch card machines in the 1960s, had to get things right quickly, so he articulated requirements as tests and guided development with those tests. Feathers quoted Weinberg: “It was just assumed that any pro would do a damn good job of this activity”. In the late eighties, Ward Cunningham was working on a spreadsheet-based system at Wyatt Software, where he short-circuited the development process by reading spreadsheets into a test framework he wrote in a day. That helped his team produce software faster, and it also paid off when external auditors verified the correctness of the software. Feathers quoted Cunningham: “We sailed through a Big Five audit when an auditor admitted that he’d never seen anything like our test browser and passed us before lunch”. So people were doing things similar to TDD before, but they did not really advertise it, so it did not get wider adoption.
However, during the last ten years, TDD has moved from a black-belt technique known to a few early practitioners to something that has become the norm. Walking through a series of events, from the C3 project, the publication of Extreme Programming Explained and the emergence of mock objects in the mid and late nineties, to BDD-style tests and frameworks, Feathers and Freeman traced the changes in the community and culture that led to modern TDD, where books like Head First Software Development introduce TDD to newcomers as the way software should be produced, not as something that only real pros do. Freeman said that “TDD is built into the culture, it is now becoming a working assumption”. Feathers quoted an article from the IEEE Software special issue on TDD in 2007, which summarized various research results, stating that TDD brings:
- more effort (10-30% longer), better quality (40-90% reduced bug counts)
- effects more visible on real projects
- very hard to get meaningful results
- some dissent about the results, particularly from people concerned with high-level architecture
As the key lessons the community learned during the last ten years, the presenters pointed out:
- professionals test their code
- separate what from how [the key to efficient TDD is to describe what a system should do, not how; see the sketch after this list]
- automated tests confirm features
- it's a change in culture
- working isn't good enough [writing testable and tested code is now the norm]
- listen to the tests [if the system is hard to test, revisit the design]
- a working system provides feedback
- focus on intent
- when you're lost, slow down [working in small steps helps build better software]
- it's not only about testing [TDD tests describe what the software should do; they are not merely a way of verifying quality]
- legacy code is code without tests
- understand the principles behind practices
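
To illustrate the “separate what from how” lesson, here is a minimal JUnit 4 sketch (not taken from the talk; the ShoppingCart class is a hypothetical example): the test names and assertions describe what the code should do, while the production class remains free to change how it does it.

```java
import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.List;

import org.junit.Test;

// Hypothetical example: the tests state WHAT a cart should do,
// never HOW it stores items or computes totals.
public class ShoppingCartTest {

    @Test
    public void emptyCartHasZeroTotal() {
        ShoppingCart cart = new ShoppingCart();
        assertEquals(0, cart.total());
    }

    @Test
    public void totalIsTheSumOfItemPrices() {
        ShoppingCart cart = new ShoppingCart();
        cart.add("book", 20);
        cart.add("pen", 5);
        assertEquals(25, cart.total());
    }

    // Class under test; its internal list is an implementation detail the
    // tests never mention, so it can be refactored without touching them.
    static class ShoppingCart {
        private final List<Integer> prices = new ArrayList<>();

        void add(String name, int price) {
            prices.add(price);
        }

        int total() {
            return prices.stream().mapToInt(Integer::intValue).sum();
        }
    }
}
```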