Focus on key examples

It can be tempting to add a ton of scenarios and test cases to the acceptance criteria for a story, covering every possible variation for the sake of completeness. Teams that automate most of their acceptance testing frequently end up doing exactly that. Although it might seem counter-intuitive, this is a sure way to destroy a good user story.

Throw user stories away after they are delivered

This is an excerpt from my upcoming book 50 Quick Ideas to Improve Your User Stories. If you want to try this idea in practice, I’ll be running a workshop on improving user stories at the Product Owner Survival Camp.

Many teams get stuck by using previous stories as documentation. They attach test cases, design diagrams or specifications to stories in digital task-tracking tools, and those stories then become the reference for future regression testing plans or impact analysis. The problem with this approach is that it is unsustainable for even moderately complex products.

A user story will often change several functional aspects of a software system, and the same functionality will be affected by many stories over a longer period of time. One story might put a feature in, another story might modify it later or take it out completely. To understand the current situation, someone has to discover all the relevant stories, put them in reverse chronological order, work out any conflicts and changes between them, and only then piece together a complete picture.

Designs, specifications and tests explain how a system works currently – not how it changes over time. Using previous stories as a reference is like looking at a history of credit card purchases, instead of the current balance, to find out how much money is available. It is an error-prone, time-consuming and labour-intensive way of getting important information.

Kanban in Action – book review

This wonderful little book is a gentle introduction to Kanban by Marcus Hammarberg and Joakim Sunden. It explains the theory behind flow-based processes and provides a ton of practical implementation tips on everything from visualising work to how to properly take out a sticky note.

The first part deals with the basic principles of Kanban: using visual boards to show and manage work in progress, managing queues and bottlenecks, and distributing and limiting work across team members. The second part explains how to manage continuous process improvement, how to deal with estimation and planning, and how to define and implement different classes of service.

My impression is that this book will be most useful to people completely new to Kanban, who are investigating the concepts or starting to adopt this process. If you already use Kanban, you might find the chapters on managing bottlenecks and process metrics interesting.

Compared to David Anderson’s book, Kanban in Action is more approachable for beginners. Each important concept is described with lots of small, concrete examples, which will help readers new to Kanban put things into perspective, but also reinforce the message that there is no one-size-fits-all solution. Anderson’s book goes into more depth explaining the theory behind the practice, while this book has more practical information and concrete advice on topics such as setting work-in-progress limits, managing different types of items on a visualisation board and choosing workflow metrics. If you’re researching this topic or starting to implement Kanban, it’s worth reading both books.

Budget instead of estimating

This is an excerpt from my upcoming book 50 Quick Ideas to Improve Your User Stories. If you want to try this idea in practice, I’ll be running a workshop on improving user stories at the Product Owner Survival Camp.

Detailed estimation works against the whole idea of flexible scope, but many companies I have worked with fall into a common trap when they insist on work-duration estimates. The typical story is that someone wants to know a rough delivery date for a large piece of work. Scope gets broken down into small, detailed stories, which are then discussed and estimated, and the estimates are added up. Dan North said at Oredev in 2011, “We are terrified of uncertainty – we would rather be wrong than uncertain”, and this is where the problem starts. A nice precise number feels good; it feels as if someone is in control.

The premise of this process is deeply flawed, because every estimate comes with a margin of error that is rarely considered. Simply adding the numbers up ignores the compound effect of those errors, so the end result is precise, but horribly wrong. There are several popular error-reduction techniques, such as estimating with confidence intervals or basing estimates on statistical averages, but in many contexts this is not even the right problem to solve. Long-term estimates give a false impression of precision and promote long-term commitment on scope, which eliminates the biggest benefit businesses can get from agile delivery – flexible planning.
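To make the compounding effect concrete, here is a minimal simulation sketch – my own illustration, not from the book. It assumes each task’s actual duration follows a skewed (log-normal) distribution, as delivery times often do; the task count, point estimate and distribution parameters are all arbitrary assumptions chosen for the example:

```python
import random
import statistics

random.seed(42)

NUM_TASKS = 20
POINT_ESTIMATE = 3.0                      # days per task: the number everyone adds up
PLANNED_TOTAL = NUM_TASKS * POINT_ESTIMATE

def actual_task_duration():
    # Skewed distribution: usually close to the estimate, occasionally much longer.
    return random.lognormvariate(mu=1.0, sigma=0.5)

# Simulate the whole project many times to see how the actual total spreads.
totals = [sum(actual_task_duration() for _ in range(NUM_TASKS))
          for _ in range(10_000)]

print(f"planned total:       {PLANNED_TOTAL:.0f} days")
print(f"median actual total: {statistics.median(totals):.1f} days")
print(f"90th percentile:     {statistics.quantiles(totals, n=10)[-1]:.1f} days")
```

The planned total is a single tidy number, but the simulated actual totals spread widely around it; it is the long tail of that spread, not the point estimate, that makes delivery dates wrong.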

Instead of estimating, try an experiment and start with a budget for a bigger piece of work, both in terms of operational costs and time. This budget can then become a design constraint for the delivery team, similar to scalability or performance. Essentially, instead of asking “how long will it take?”, more useful questions to ask are “when do you need this by?” and “how much can you afford to pay for it?”

User stories should be about behaviour changes

This is an excerpt from my upcoming book 50 Quick Ideas to Improve Your User Stories. If you want to try this idea in practice, I’ll be running a workshop on improving user stories at the Booster conference in Bergen, Norway, next month. I’m also participating in the Product Owner Survival Camp in Zurich in March, and we’ll be playing around with hierarchical backlogs and behaviour changes there as well.

Bill Wake’s INVEST set of user story characteristics contains two conflicting forces: Independent and Valuable are often difficult to reconcile with Small. The value of software is a vague and esoteric concept in the domain of business users, but task size is under the control of the delivery team, so many teams end up choosing size over value. The result is “technical stories” – things that don’t really produce any outcome – and a disconnect between what the team is pushing out and what the business sponsors really care about.

Many delivery teams also implicitly assume that something has value just because business users asked for it, so it’s difficult to argue about that. Robert Brinkerhoff, in Systems Thinking in Human Resource Development, argues that valuable initiatives produce an observable change in someone’s way of working. This principle is a great way to start a conversation about the value of stories, or to unblock a sticky situation. In essence, translating Brinkerhoff’s idea to software means it’s not enough to describe someone’s behaviour; we should aim to describe a change in that behaviour. This trick is particularly useful with user stories that have an overly generic value statement, or where the value statement is missing altogether.