
The Best Testing I Could

The day my time to test runs out, I have done the best testing I could with the time I was given.

Maaret Pyhäjärvi posted the quote above on LinkedIn a few weeks ago. It speaks strongly to me, so I asked Maaret if she'd written more (because she's written a lot) on this specific point. She hasn't, and the sentence keeps coming back into my head, so I thought I'd try to say what I take from it.

I think it's easy to skim-read it as some kind of definition of exploratory testing, but that would be a mistake in my eyes. Testing by Exploring summarises how I felt last time I went into the definition in any depth and, for me, Maaret's quote is concerned with the why but says nothing of the what or the how.

But let's say we had a shared definition of exploratory testing: would I make this statement so baldly, so generally? No, I probably would not. Why? First, it's written in very personal terms ("my time", "the best testing I could") and, second, as a context-driven tester I find it hard to assert anything as "best" without caveats, either explicit or understood in the context.

There is a caveat or two here, of course: best within the allowed time, and best that the tester themselves could do. Even so, I still have a degree of tension with it: can we be sure we couldn't have done anything differently that might have moved the needle a touch more in a positive direction? Really sure? Really?

Having said that, even if we agree on a definition of exploratory testing and assume that the quote can be applied across testers and scenarios, there's still the risk that we don't do good testing. We've all sometimes made poor choices, planned one thing and done another, and just missed the obvious or those things that someone else with different knowledge would spot immediately. Or is that just me?

Perhaps we could argue that a poor outcome was the best we could do in the end, given the choices that were made along the way. Yes, perhaps. Key, for me, though is that exploratory testing, the practice, doesn't make guarantees. 

If I'm sounding too down on the quote at this point let me be clear: I am not down on the quote. I read it as naturally related to Cem Kaner's description of exploratory testing which I've hacked about here for brevity to make my point:

[Exploratory testing] emphasizes the ... responsibility of the ... tester to continually optimize the quality of her work ...

Any technique, according to my interpretation, is in play here. The important things are the intent, the specific actions performed, and how the result is used. For example, running a set of scripted test cases by hand can be "valid" exploratory testing if that's what you think makes sense given what you know, what resources are available to you, and the question you have right now.

A very obvious way to optimise your testing is to think about how you're prioritising from the options you have, and where those options come from. I might gloss this as something like:

Prioritise what's important and, when you don't know that, prioritise finding out.

This might sound like risk-based testing. And it is like risk-based testing in the sense that perceived risk is one metric by which we can understand what is important. But it's not the only one; others include the available time versus the time it takes to run particular tests, the level of expertise or privilege required to run specific tests, the resources available for this round of testing, and so on.

If I only have one day left to test, should I start a test run that takes two days? I won't have the results before testing is due to end, so perhaps I should do something else instead. That might be reasonable if the run addresses a perceived low risk, or if I think no action will be taken on the result anyway. But if it addresses a perceived high risk, perhaps I should start the run. If it finds nothing, all good; if not, I have some valuable information, even if a little later than stakeholders would prefer.

Or perhaps the most important thing in this situation is to lobby for an additional day to run the test, and I'll do that by explaining what I want to try, and why, and how it could be a worthwhile investment on the part of whoever controls the testing budget.

These kinds of considerations do not sit comfortably inside prescriptive frameworks. They require judgement, situational awareness, and thinking at multiple levels across systems. Understanding the pieces in these contexts requires a range of skills, not least technical, intellectual, and social.

All of this is for nothing without action: getting to reasonable outcomes for the people who matter requires us to act, to share our results in consumable ways, to inform those who need to know, and to choose the important tests to run to answer the important questions.

And that brings me back to Maaret's quote and why I find it so powerful. If I don't read it as an outcome but rather as my goal, I feel like a hand inside its glove. This is what I strive to achieve with my exploratory testing:

The day my time to test runs out, I have done the best testing I could with the time I was given.

And now, reflecting on where I've ended up, I think it's the "so that" in Maaret's original that triggers my other thoughts. I naturally want to read it as "to ensure that" but in my practice I want it to be "that strives for an outcome where."

