Maaret Pyhäjärvi posted the quote above on LinkedIn a few weeks ago. It speaks strongly to me, so I asked Maaret if she'd written more (because she's written a lot) on this specific point. She hasn't, and the sentence keeps coming back into my head, so I thought I'd try to say what I take from it.
I think it's easy to skim-read it as some kind of definition of exploratory testing, but that would be a mistake in my eyes. Testing by Exploring summarises how I felt last time I went into the definition in any depth and, for me, Maaret's quote is concerned with the why but says nothing of the what or how.
But let's say we have a shared definition of exploratory testing: would I make this statement so baldly, so generally? No, I probably would not. Why? First, it's written in very personal terms ("my time", "the best testing I could") and, second, as a context-driven tester I find it hard to assert anything as "best" without caveats, either explicit or understood from the context.
There is a caveat or two here, of course: best within the allowed time, and best that the tester themselves could do. Even so, I still have a degree of tension with it: can we be sure we couldn't have done anything differently that might have moved the needle a touch more in a positive direction? Really sure? Really?
Having said that, even if we agree on a definition of exploratory testing and assume that the quote can be applied across testers and scenarios, there's still the risk that we don't do good testing. We've all sometimes made poor choices, planned one thing and done another, and just missed the obvious or those things that someone else with different knowledge would spot immediately. Or is that just me?
Perhaps we could argue that a poor outcome was the best we could do in the end, given the choices that were made along the way. Yes, perhaps. Key for me, though, is that exploratory testing, the practice, doesn't make guarantees.
If I'm sounding too down on the quote at this point let me be clear: I am not down on the quote. I read it as naturally related to Cem Kaner's description of exploratory testing which I've hacked about here for brevity to make my point:
[Exploratory testing] emphasizes the ... responsibility of the ... tester to continually optimize the quality of her work ...
Any technique, according to my interpretation, is in play here. The important things are the intent, the specific actions performed, and how the result is used. For example, running a set of scripted test cases by hand can be "valid" exploratory testing if that's what you think makes sense given what you know, what resources are available to you, and the question you have right now.
A very obvious way to optimise your testing is to think about how you're prioritising from the options you have, and where those options come from. I might gloss this as something like:
Prioritise what's important and, when you don't know that, prioritise finding out.

This might sound like risk-based testing. And it is like risk-based testing in the sense that perceived risk is one metric by which we can understand what is important. But it's not the only one; others include the available time vs the time to run particular tests, the level of expertise or privilege required to run specific tests, the resources available for this round of testing, and so on.
If I only have one day left to test, should I start a test run that takes two days? I don't have time to get the results before testing should end, so perhaps I should do something else instead. That might be reasonable if this run addresses a perceived low risk, or you think no action will be taken on the result anyway. But if it addresses a perceived high risk, perhaps you should start the run. If it finds nothing, all good; if it does find something, you have some valuable information, even if a little later than stakeholders would prefer.
Or perhaps the most important thing in this situation is to lobby for an additional day to run the test, and you'll do this by explaining what you want to try, and why, and how this could be a worthwhile investment on the part of whoever is controlling the testing budget.
These kinds of considerations do not sit comfortably inside prescriptive frameworks. They require judgement, situational awareness, and thinking at multiple levels across systems. Understanding the pieces in these contexts requires a range of skills, not least technical, intellectual, and social.
And all of this is for nothing without action: getting to reasonable outcomes for the people who matter requires us to act, to share our results in consumable ways, to inform those who need to know, and to choose the important tests to run in order to answer the important questions.
And that brings me back to Maaret's quote and why I find it so powerful. If I don't read it as an outcome but rather as my goal, I feel like a hand inside its glove. This is what I strive to achieve with my exploratory testing:
The day my time to test runs out, I have done the best testing I could with the time I was given.
And now reflecting on where I've ended up I think that it's the "so that" in Maaret's original that triggers my other thoughts. I naturally want to read it as "to ensure that" but in my practice I want it to be "that strives for an outcome where."