A more polished version of this article is in my book Fifty Quick Ideas To Improve Your Tests
Exploratory testing requires a clear mission. The mission statement provides focus and enables teams to triage what is important and what is out of scope. A clear mission prevents exploratory testing sessions from turning into unstructured playing with the system. As software features are implemented, and user stories get ready for exploratory testing, it’s only logical to set the mission for exploratory testing sessions around new stories or changed features. Although it might sound counter-intuitive, story-oriented missions lead to tunnel vision and prevent teams from getting the most out of their testing sessions.
Stories and features are a solid starting point for coming up with good deterministic checks. However, they aren’t so good for exploratory testing missions. When exploratory testing is focused on a feature, or a set of changes delivered by a user story, people end up evaluating whether the feature works, and rarely stray off the path. In a sense, teams end up proving what they expect to see. However, exploratory testing is most powerful when it deals with the unexpected and the unknown. For that, we need to allow tangential observations and insights, and design new tests around unexpected discoveries. To achieve that, the mission teams set for exploratory testing can’t be focused purely on features.
Good exploratory testing deals with unexpected risks, and for that we need to look beyond the current piece of work. On the other hand, we can’t cast the net too widely, or testing will lack focus. A good perspective that balances a wider scope with focus is user capabilities. Features give users the capability to do something useful, or take away the capability to do something dangerous or damaging. A good way to look for unexpected risks is not to explore features, but to explore the related capabilities instead.
Key benefits
Focusing exploratory testing on capabilities instead of features leads to better insights and prevents tunnel vision.
A nice example of that is the contact form we built for MindMup last year. The related software feature was sending support requests when users fill in the form. We could have explored that feature using multiple vectors, such as field content length, e-mail formats, international character sets in the name or the message, but ultimately this would only focus on proving that the form works. Casting the net a bit wider, we identified two capabilities related to the contact form. People should be able to contact us for support easily in case of trouble. We should be able to support them easily, and solve their problems. Likewise, there was a capability we wanted to prevent: nobody should be able to block or break the contact channels for other users through intentional or unintentional misuse. We set those capabilities as the mission of our exploratory testing session, and that led us to look at the accessibility of the contact form in case of trouble, and the ease of reporting typical problem scenarios. We discovered two critically important insights.
The first insight was that a major cause of trouble would not be covered by the initial solution. Flaky and unreliable network access was responsible for a lot of incoming support requests. But when a user’s internet connection drops randomly, the browser might fail to reach our servers even though the form is filled in correctly. If someone suddenly goes completely offline, the contact form won’t help at all. People might fill in the form, but the lack of reliable network access will still disrupt their capability to contact us. The same goes for our servers suddenly dropping offline. None of those situations should happen in an ideal world, but when they do, that’s when users actually need support. So the feature was implemented correctly, but there was still a big capability risk. This led us to offer an alternative contact channel when the network is not accessible. We displayed the contact e-mail prominently on the form, and also repeated it in the error message if the form submission failed.
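To make that concrete, here is a minimal sketch of what such a fallback could look like in a browser form handler. It is illustrative only, assuming hypothetical names: the SUPPORT_EMAIL constant, the /support endpoint and the contact-status element are not MindMup’s actual code.

```typescript
// Hypothetical sketch of a contact-form handler with an offline fallback.
// SUPPORT_EMAIL and the /support endpoint are illustrative, not MindMup's real values.
const SUPPORT_EMAIL = 'contact@example.com';

async function submitSupportRequest(form: HTMLFormElement): Promise<void> {
  const statusBox = document.getElementById('contact-status')!;
  try {
    const response = await fetch('/support', {
      method: 'POST',
      body: new FormData(form),
    });
    if (!response.ok) {
      throw new Error(`Server responded with ${response.status}`);
    }
    statusBox.textContent = 'Thanks! We will get back to you shortly.';
  } catch (err) {
    // The network or the server is unreachable: point the user at the
    // alternative contact channel instead of leaving a dead end.
    statusBox.textContent =
      `We could not send your request right now. Please e-mail us directly at ${SUPPORT_EMAIL}.`;
  }
}
```

The point of the sketch is not the error handling itself, but that the fallback address is always visible as a second contact channel, even when the primary one fails.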
The second big insight was that people might be able to contact us, but without knowing the internals of the application, they wouldn’t be able to provide enough information for troubleshooting in case of data corruption or software bugs. That would pretty much leave us in the dark, and disrupt our capability to provide support. As a result, we decided not to ask for common troubleshooting information (such as browser and operating system version) at all, but instead to read it and send it automatically in the background. We also pulled out the last 1000 events that happened in the user interface, and sent them automatically with the support request, so that we could replay and investigate exactly what happened.
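A rough sketch of that idea follows, again with assumed names: the event buffer, the payload shape and the recordUiEvent helper are hypothetical, not MindMup’s real implementation.

```typescript
// Hypothetical sketch of collecting diagnostics in the background.
const MAX_EVENTS = 1000;
const recentEvents: Array<{ time: number; name: string; detail?: string }> = [];

// Called from the UI layer whenever something notable happens.
function recordUiEvent(name: string, detail?: string): void {
  recentEvents.push({ time: Date.now(), name, detail });
  if (recentEvents.length > MAX_EVENTS) {
    recentEvents.shift(); // keep only the most recent events
  }
}

// Attach environment info and the event log to the support request,
// so users never have to dig this information out themselves.
function buildSupportPayload(message: string, email: string) {
  return {
    message,
    email,
    userAgent: navigator.userAgent,  // browser and operating system details
    screen: `${window.screen.width}x${window.screen.height}`,
    events: recentEvents.slice(),    // last 1000 UI events, for replaying the session
  };
}
```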
How to make it work
To get to good capabilities for exploring, brainstorm what a feature allows users to do, or what it prevents them from doing. When exploring user stories, try to focus on the user value part (‘In order to…’) rather than the feature description (‘I want …’).
If you use impact maps for planning work, the third level of the map (actor impacts) is a good starting point for discussing capabilities. Impacts will typically be changes to a capability. If you use user story maps, then the top-level item in the user story map spine related to the current user story is a nice starting point for discussion.