Elisabeth Hendrickson presented a keynote titled ‘Agile testing, uncertainty, risk and why it all works’ today at the Agile Testing Days conference in Berlin, focusing on key practices for agile testing success.

Hendrickson started by quoting Jerry Weinberg: “It’s easy to build a product you can’t sell and it’s easy to sell a product you can’t build. It’s building a product you can sell and selling a product you can build that’s difficult”. She added that we’re here because we need to build products that can be sold, as cheaply and as efficiently as possible.

The only way to measure agility, Hendrickson said, is with results:

Agile software teams deliver value in the form of releasable software at frequent regular intervals (at least monthly), at a sustainable pace, while adapting to the changing needs of the business. […] Bottom line is delivering releasable software at least monthly. Releasable in this case means that we actually know that it is doing what we wanted it to do. Delivering frequently is not let’s ship it and see what happens. It’s not releasable until it’s Done. Done means both implemented and tested. Tested means checked [for expectations] and explored [for risks].

Talking about risks, she listed four big sources of technical risks:

  • Ambiguity – we don’t know what we’re building or haven’t expressed it clearly
  • Assumptions – we presume that something isn’t going to go wrong without actually ensuring it
  • Dependencies – moving parts make the system harder to understand, require many assumptions about how things connect and create lots of potential for things to go wrong
  • Capacity – we might not have the velocity or capability to build this

She then talked about seven key testing practices in agile that mitigate these risks:

  • Acceptance Test Driven Development – collaboratively define expectations to reduce ambiguity. As a side effect, we get automated system tests.
  • Test Driven Development – begin development with the effect that we want our code to exhibit in mind. It reduces the risk of ambiguity. As a side-effect, we get automated unit tests.
  • Exploratory Testing – simultaneously learning about the software while designing and executing tests, using feedback from the last test to inform the next (see Jon and James Bach’s work on session-based exploratory testing). This reduces the risk of assuming that there are no bugs and that our expectations cover all functionality.
  • Automated system tests – automatically check the system from a business perspective. If at any point the system doesn’t meet expectations, we’ll get a quick warning. They reduce the risk of assuming that everything is constantly fine.
  • Automated unit tests – automatically check the system from a technical perspective. If at any point the system doesn’t meet technical expectations, we’ll get a quick warning. They reduce the risk of assuming that everything is constantly fine.
  • Collective test ownership – tests shouldn’t be handled separately from code and specifications. Treating them separately incurs a lot of management overhead and promotes isolation in teams. Collective test ownership reduces capacity risks.
  • Continuous integration – provides visibility into potential issues. This reduces the risk of assuming that everything is fine.
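To make the test-first idea concrete, here is a minimal sketch (not from the talk; the `apply_discount` function and its rules are invented for illustration) of what “checked for expectations” looks like as automated unit tests, written in Python with the standard `unittest` module:

```python
import unittest

# Hypothetical domain function, invented for this example.
def apply_discount(price, percent):
    """Return price reduced by percent; reject percentages outside 0-100."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    # Each test encodes one expectation, written before (or alongside)
    # the implementation, so a regression triggers a quick warning.
    def test_applies_percentage(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_zero_discount_keeps_price(self):
        self.assertEqual(apply_discount(19.99, 0), 19.99)

    def test_rejects_invalid_percent(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```

Run in a continuous integration build, a suite like this is what turns “let’s ship it and see what happens” into the quick-warning feedback loop Hendrickson describes; exploratory testing then probes the risks these checks don’t cover.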

These practices work together to reduce sources of risk and allow teams to deliver releasable software at a sustainable pace, said Hendrickson.