This week at the XPDay 09 conference in London, I facilitated a discussion on practices, ideas and tools that help us focus on building software that matters. We started by quickly going over the conclusions from a similar workshop held in August during the Alt.NET UK conference, and then opened the floor to new ideas. Unlike the Alt.NET workshop, where most people in the room seemed to be server-side developers, this time web developers were in the majority, so the discussion often centred on mass-market software. The main theme of the workshop turned out to be feedback as a tool for focusing projects on things that matter.
Quality over speed
Although most agile literature focuses on fast feedback as the primary way to align software with customers’ needs, the discussion suggested to me that the quality of feedback matters more than its speed. One good idea for improving the quality of feedback was to invite passionate users to join in, giving them early access and letting them influence the product. Another was to observe how the software is actually used (I proposed that as a new year’s resolution on this blog in 2008, and with the new year approaching again, maybe it’s time to repeat the proposal).
Release and then measure
Of course, this doesn’t mean that quick feedback is unimportant. Quickly finding out whether we’re going in the right direction is key to successful software development. One idea for improving this is to focus early deliveries on the smallest chunks of software that could provoke and provide meaningful feedback. Expanding on that, we discussed rapid prototyping instead of analysis. I must admit I’m not completely in favour of that idea (I recently wrote against releasing unpolished software), but it might work better for mass-market software than for the in-house systems I mostly build.
This led to a discussion on releasing a bunch of features quickly and then asking users to choose which ones they like and which should be binned. 37 Signals and Twitter were mentioned as examples, both having released and then dropped many features that didn’t work. The key to doing this successfully for mass-market applications such as web sites is good monitoring of feature usage. Tealeaf customer experience monitoring was mentioned as a tool to support this.
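To make the "release, measure, then keep or bin" idea concrete, here is a minimal sketch of per-feature usage tracking. The class and feature names are hypothetical illustrations, not any tool mentioned in the workshop; a real web application would record events in a database or an analytics service rather than in memory.

```python
from collections import Counter
from datetime import datetime, timezone

class FeatureUsageTracker:
    """Hypothetical sketch: count how often each feature is used,
    so rarely-used features can be spotted and considered for binning."""

    def __init__(self):
        self._counts = Counter()
        self._last_used = {}

    def record(self, feature: str) -> None:
        # Called wherever a feature is exercised, e.g. from a
        # request handler or a client-side event hook.
        self._counts[feature] += 1
        self._last_used[feature] = datetime.now(timezone.utc)

    def usage_report(self):
        """Features sorted from most to least used."""
        return self._counts.most_common()

tracker = FeatureUsageTracker()
for f in ["search", "search", "export", "search"]:
    tracker.record(f)
print(tracker.usage_report())  # → [('search', 3), ('export', 1)]
```

Even a crude counter like this answers the workshop's core question for each feature: is anyone actually using it?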
Another idea for improving the quality of feedback is to build feature partitioning into the system, enabling features to be turned on and off on demand. This supports measuring the effects of individual features, and it also allows the system to gracefully degrade its service if it starts running out of capacity. Mixcloud was mentioned as a good example of this in practice.
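The partitioning idea can be sketched as a simple runtime toggle registry. This is an illustrative assumption about how such a system might look (the class and feature names are invented, and this is not how Mixcloud implements it): each feature is flagged as essential or not, and non-essential ones can be shed together when capacity runs low.

```python
class FeatureToggles:
    """Hypothetical feature-partitioning sketch: features can be
    switched on and off at runtime, and non-essential features
    can be shed when the system nears capacity."""

    def __init__(self, features):
        # features maps feature name -> essential? Essential
        # features are never turned off by degrade().
        self._essential = dict(features)
        self._enabled = {name: True for name in features}

    def is_enabled(self, name: str) -> bool:
        return self._enabled.get(name, False)

    def set(self, name: str, on: bool) -> None:
        self._enabled[name] = on

    def degrade(self) -> None:
        """Turn off all non-essential features, e.g. when
        monitoring reports the system is close to capacity."""
        for name, essential in self._essential.items():
            if not essential:
                self._enabled[name] = False

toggles = FeatureToggles({"checkout": True, "recommendations": False})
toggles.degrade()
print(toggles.is_enabled("checkout"), toggles.is_enabled("recommendations"))
# → True False
```

The same switches double as a measurement instrument: turning a single feature off for a slice of users shows what difference it actually makes.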
The right sample
As an idea for increasing the amount of feedback we can get from end-users, we discussed offering something in exchange for it. This is probably most applicable to web sites, where you can give users small gifts or virtual goods in exchange for filling in a form.
We also discussed how choosing the right sample is crucial to getting good feedback. An idea I especially liked was to align the sample with the user personas chosen for the project. Clearly defining the target market before a project starts, and basing the feedback sample on it, is one of those things that seem like common sense but have yet to be adopted by the software community at large. Another key idea for mass-market software was to grow the sample gradually. An example of what can happen if you only listen to the early feedback of a small group is the iSnack 2.0 blunder. Kraft Foods chose iSnack 2.0 as the new name for their Cheesybite product based on positive feedback from focus groups, which provoked “Facebook hate groups, blogs and angry Tweets [..] and T-shirts trashing the name” when it went public, finally causing the company to admit defeat and bin the name.
Here’s the updated mind map with all the ideas from both workshops:
Some more food for thought
Adewale Oshineye spoke about one of his earlier projects (from what I remember, for Dixons) where an uncaught security bug turned out to be a very important feature, allowing stores to share information. This reminded me of one of my own projects, where a feature developed essentially to make testing a network-flow algorithm easier proved to be a key selling point of the system (I wrote about this already; see Blinded by the user interface). Jeff Patton observed that both stories are examples of value produced by accident, and then raised probably the most provoking question of the session: how do we intentionally discover features that users have not even thought of, but that could bring a lot of value? That question was, unfortunately, left without a satisfactory answer. Round 3 of this discussion is scheduled for the SPA conference next May, and it will surely be one of the major themes of that workshop.
See other articles from XP Day 09