Two new BDD workshops available

I’m launching two new Behaviour-Driven Development workshops in late January.

Introduction to BDD is an intensive one-day workshop that introduces behaviour-driven development to developers, business analysts and testers. The optional programming module is offered in Java or .NET, using Cucumber to automate BDD scenarios.

Hands-on BDD with Cucumber is a three-day workshop that immerses participants in a project driven by Specification by Example and Behaviour-Driven Development. The workshop is tailored to your business domain and particular needs, so that participants get real-world experience and immediate benefits. It combines the Introduction to BDD, a day of working on a realistic domain example taken from a recent project or a future phase of a project, and a day of programming exercises for test automation developers. During the workshop, we use Cucumber to manage BDD scenarios. The programming modules are available in either Java or .NET.
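For readers who have not seen Cucumber before, here is a minimal sketch of how a BDD scenario binds to executable code. The scenario and the AccountSteps class are invented for illustration (they are not workshop material), and the annotations are from the current cucumber-jvm API rather than any particular workshop setup:

    import io.cucumber.java.en.Given;
    import io.cucumber.java.en.When;
    import io.cucumber.java.en.Then;
    import static org.junit.Assert.assertEquals;

    // A Gherkin scenario (normally kept in a .feature file) might read:
    //
    //   Scenario: Cash withdrawal reduces the balance
    //     Given an account with a balance of 100
    //     When the customer withdraws 30
    //     Then the remaining balance should be 70
    //
    // Cucumber matches each step against the annotated methods below.
    public class AccountSteps {

        private int balance;

        @Given("an account with a balance of {int}")
        public void anAccountWithABalanceOf(int amount) {
            balance = amount;
        }

        @When("the customer withdraws {int}")
        public void theCustomerWithdraws(int amount) {
            balance -= amount;
        }

        @Then("the remaining balance should be {int}")
        public void theRemainingBalanceShouldBe(int expected) {
            assertEquals(expected, balance);
        }
    }

The point of the exercise is that the business-readable scenario and the automation code stay in sync: when the expected behaviour changes, the scenario changes, and the failing step tells you exactly which part of the system to revisit.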

These two workshops are offered at a 20% discount for all bookings made before 31st January 2010. For more information on these and my other workshops, check out the training page.

Both workshops are offered as on-site training only (there are no public workshops scheduled at this time). Contact my company, Neuri Ltd, for more information and to book a workshop.

Improving testing practices at Google

Mark Striebeck from Google opened XPDay 2009 today with a talk titled “Developer testing, from the dark ages to the age of enlightenment”. Suggesting that software testing is going through a renaissance, Striebeck said that the community is now rediscovering “ancient” practices. Most of the things we use in testing today were invented a long time ago and then forgotten, said Striebeck. Over the last fifteen years the community started rediscovering these practices, but people focused on advancing the art, not teaching it. As a result, there are many good testing practices out there, yet having testable code is still more an art than a science, according to Striebeck.

Google had a team of Test Mercenaries, who joined different teams for short periods of time to help them with testing. In most cases they could see what was wrong after a few days and started helping the teams, but the effort wasn’t a success: after they left, teams would not improve significantly. Striebeck said that testing “wasn’t effective to teach”, because knowing what makes a good test often relied on personal opinion and gut feel. Doing what Google often does in similar situations, Striebeck said that they decided to collect data. They wanted to work out the characteristics of good tests and testable code, and how to tell in the first place whether a test is effective. They settled on a return-on-investment criterion: low investment (easy to write, easy to maintain) and high return (alerts to real problems when it fails). According to Striebeck, Google spends $100M per year on test automation, and wanted to know whether they are actually getting a good return on that investment. They estimated that a bug found during TDD costs $5 to fix, which surges to $50 for tests during a full build and $500 during an integration test, and goes to $5000 during a system test. Fixing bugs earlier would save them an estimated $160M per year.
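To see how those per-stage costs add up, here is a back-of-the-envelope calculation. The per-bug costs are the figures Striebeck quoted; the bug counts are entirely made up for the sake of the arithmetic and are not Google’s numbers:

    // Illustrative only: per-bug fix costs as quoted in the talk,
    // bug volumes invented purely to demonstrate the arithmetic.
    public class BugCostEstimate {
        public static void main(String[] args) {
            int[] costPerBug = {5, 50, 500, 5000};               // TDD, full build, integration, system test
            int[] bugsFound  = {100_000, 20_000, 5_000, 1_000};  // hypothetical counts per stage

            long currentCost = 0;
            long ifCaughtDuringTdd = 0;
            for (int i = 0; i < costPerBug.length; i++) {
                currentCost += (long) costPerBug[i] * bugsFound[i];
                // same bugs, all caught at the cheapest stage
                ifCaughtDuringTdd += (long) costPerBug[0] * bugsFound[i];
            }
            System.out.printf("current: $%,d, all caught during TDD: $%,d, saving: $%,d%n",
                    currentCost, ifCaughtDuringTdd, currentCost - ifCaughtDuringTdd);
        }
    }

Even with these invented volumes, the bulk of the cost comes from the small number of bugs that survive to the latest stages, which is the point the talk was making.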

To collect data, they set up a code-metrics store to keep all test execution analytics in a single place. Striebeck pointed out that Google has a single code repository, completely open to all of its 10,000 developers. Although all systems are released independently (with release cycles ranging from a week to a month), everything is built from HEAD without any binary releases, and the repository receives several thousand changes per day, with spikes of 20+ changes per minute. This resulted in 40+ million tests executed per day from the continuous integration service plus IDE and command-line runs; for these, they collected test results, coverage, build time, binary size, static analysis and complexity analysis. Instead of anyone deciding whether a test is good or not, the system observed what people do with tests in order to rank them. They looked at what a developer does after a test fails. If production code was changed or added, the test was marked as good. If people changed the test code when it failed, it was marked as a bad test (especially if everyone was changing it), meaning the test was brittle and had a high maintenance cost. They also measured which tests were ignored in releases and which tests often failed in the continuous build but weren’t executed during development.
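The ranking signal Striebeck described, reduced to its core, looks something like the sketch below. The class and field names, and the simple majority rule at the end, are my assumptions; the talk did not go into implementation details:

    import java.util.List;

    // Rough sketch of the signal described in the talk: after a test fails,
    // look at what the developer changed next. Production-code changes suggest
    // the test caught a real problem; test-code changes suggest brittleness.
    enum Verdict { GOOD, BAD, UNKNOWN }

    class TestRanker {

        // Hypothetical record of one failure and the follow-up change.
        record FailureEvent(String testName, boolean productionCodeChanged,
                            boolean testCodeChanged) {}

        Verdict rank(String testName, List<FailureEvent> history) {
            long caughtRealBugs = history.stream()
                    .filter(e -> e.testName().equals(testName))
                    .filter(FailureEvent::productionCodeChanged)
                    .count();
            long neededMaintenance = history.stream()
                    .filter(e -> e.testName().equals(testName))
                    .filter(FailureEvent::testCodeChanged)
                    .count();
            if (caughtRealBugs == 0 && neededMaintenance == 0) return Verdict.UNKNOWN;
            // A test that needs fixing more often than it catches bugs is brittle.
            return neededMaintenance > caughtRealBugs ? Verdict.BAD : Verdict.GOOD;
        }
    }

The attraction of this approach is that nobody has to argue about test quality in the abstract: the verdict falls out of what thousands of developers actually did after each failure.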

The first step was to provide developers with reactive feedback on tests. For example, the system suggested deleting tests that teams spent a lot of time maintaining. They then collected metrics on whether people actually acted on those suggestions. The system also provided metrics to tech leads and managers to show how their teams were doing with tests.
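A rough sketch of what tracking that kind of feedback could look like; again, all names are hypothetical and nothing here is taken from Google’s actual system:

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical sketch: record deletion suggestions and measure
    // how often developers actually act on them.
    class SuggestionTracker {
        private final Map<String, Boolean> suggestions = new HashMap<>(); // test name -> acted on?

        void suggestDeletion(String testName) {
            suggestions.putIfAbsent(testName, false);
        }

        void recordTestDeleted(String testName) {
            suggestions.computeIfPresent(testName, (test, acted) -> true);
        }

        double acceptanceRate() {
            if (suggestions.isEmpty()) return 0.0;
            long acted = suggestions.values().stream().filter(b -> b).count();
            return (double) acted / suggestions.size();
        }
    }

Measuring acceptance closes the loop: if developers ignore a class of suggestions, that is itself a signal that the heuristic behind them needs recalibrating.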

The second step, which is in progress at the moment, is to find patterns and indicators. Now that lots of good and bad tests have been identified, the system is looking for common characteristics among them. Once these patterns are collected, algorithms will be designed to identify good and bad tests, and then manually calibrated by experts.

The third step will be to provide constructive feedback to developers, telling them how to improve tests, what tests to write and how to make the code more testable.

The fourth step in this effort will be to provide prognostic feedback: analysing code evolution patterns and warning developers that a change might result in a particular problem later on.

I will be covering XPDay 2009 on this blog in detail. Subscribe to my RSS feed to get notified when I post new articles.

FitNesse book now free online

As of now, the second edition of Test Driven .NET Development with FitNesse is free online. You can download the full PDF version or read the book online in HTML at http://gojko.net/fitnesse.

What’s new in this version?

Since the book was originally released, both FitNesse and the .NET FIT test runner have improved significantly. All the examples in the book are now updated to be compatible with the latest releases of FitNesse (20091121) and FitSharp (1.4). I rewrote the parts that no longer apply to the new FitSharp test runner, especially around Cell Operators. In a classic example of self-inflicted scope creep, I also wrote a new chapter on using domain objects directly.
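For readers new to FitNesse, the basic idea the book builds on is that wiki tables drive application code through fixtures. The book’s examples are in .NET; below is an equivalent minimal sketch in Java, using the classic Division column fixture as an assumed illustration rather than an excerpt from the book:

    import fit.ColumnFixture;

    // A FitNesse wiki table such as:
    //
    //   !|Division                       |
    //   |numerator|denominator|quotient?|
    //   |10       |2          |5        |
    //   |12.6     |3          |4.2      |
    //
    // binds each column to the fields and methods of this fixture:
    // plain columns set public fields, and columns ending in "?"
    // call the matching method and compare the result.
    public class Division extends ColumnFixture {
        public double numerator;
        public double denominator;

        public double quotient() {
            return numerator / denominator;
        }
    }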

I also changed the tool used for assembling the book. Instead of Apache FOP, I used XEP, which will hopefully make the layout a bit better. The fonts (especially the code font) were also changed to make the book easier to read.

What about the paperback?

I will make the paperback available soon. At the moment, the second edition is only available online.

Dark arts of TDD explained

Growing Object-Oriented Software, Guided by Tests, by Steve Freeman and Nat Pryce, is a TDD book, but one unlike any other on the market today. First, it deals mostly with advanced unit testing topics, such as designing tests for readability and mocking, and it addresses many of the stumbling points that people hit a few years into their unit testing journey, such as applying unit testing in multi-threaded and asynchronous environments. Second, it explains and demonstrates in practice the dynamics of designing software through TDD, which is still a dark art for many programmers. And third, it gives the reader insight into how Freeman and Pryce think, which is why this book is a must-read for anyone serious about unit testing, even people who have been doing it since the last century.