
Better Averse?




What is testing? In one of the sessions at the Cambridge Software Testing Clinic the other night, the whole group collaborated on a mind map on that very question.

I find it interesting how the mind can latch on to one small aspect of a thing. I attend these kinds of events to find something new, perhaps a new idea, a different perspective on something I already know, or something about the way I act. In this case, under a node labelled mindset, one of the participants proposed risk-averse. I challenged that and counter-proposed risk-aware. You can see them both on the map on this page, centre-left, near the bottom. And that's the thing I've been coming back to since: accepting that there is a testing mindset (with all the sociological and semantic challenges that might bring), is it reasonable to say that it includes risk aversion or risk awareness?

Let's start here: why did I challenge? I challenged because the interpretation that I took in the moment was that of testers saying "no, let's not ship because we know there are problems." And that's not something that I want to hear testers who work for me saying. At least not generally, because I can think of contexts in which that kind of phrase is entirely appropriate. But I won't follow that train of thought just now.

But do I even have a decent idea what risk-aversion is? I think so: according to Wikipedia's opening sentence on the topic, risk-aversion "is a preference for a sure outcome over a gamble with higher or equal expected value." In my scenario, not shipping is a more certain outcome (in terms of the risk that those known problems will be found in the product by customers) than shipping. But I can think of other aspects of shipping that might be more certain than in the case of not shipping. Perhaps I'll come back to them.
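That Wikipedia definition can be made concrete with a little arithmetic. Here's a minimal sketch, with invented numbers, showing a sure outcome and a gamble that have equal expected value; the risk-averse preference is for the sure thing even though, on average, neither option is better:

```python
def expected_value(outcomes):
    """Sum of value * probability over (value, probability) pairs."""
    return sum(value * prob for value, prob in outcomes)

# Hypothetical payoffs, purely for illustration.
sure_thing = [(50, 1.0)]             # a certain, known result
gamble = [(100, 0.5), (0, 0.5)]      # maybe great, maybe nothing

ev_sure = expected_value(sure_thing)     # 50.0
ev_gamble = expected_value(gamble)       # 50.0

# Equal expected values, yet the risk-averse choice is the sure thing,
# because it avoids the possibility of the worst outcome.
assert ev_sure == ev_gamble == 50.0
```

In shipping terms: "don't ship" is the sure thing (we know exactly which problems customers won't find), while "ship" is the gamble.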

But then, do I have a decent idea what the person who suggested risk-aversion meant by it? In all honesty, no. I have some evidence, because they were keen on risk-awareness when I suggested it, that they overlap with me here, but I can't be sure. Even if they were with me right now, and we could spend hours talking about it, could we be certain that we view this particular conceptual space in the same way? A good question for another time.

So what did I mean by risk-aware? I meant that I want testers who work for me to be alert to the risks that might be present in any piece of work they are doing. In fact, I want them to be actively looking for those risks. And I want them to be able to characterise the risks in various ways.

For example, if they have found an issue in the product I'd like them to be able to talk to stakeholders about it: what the associated risks are, why this matters and to whom, and ideally something about the likelihood of it occurring and something about the impact if it were to occur. I'd also like my testers to be able to talk to stakeholders in the same way about risks that they observe in the processes we're using to build our product, in the approach taken to testing, and in the results of testing. If I thought harder could I add more to this? Undoubtedly, and perhaps I will, one of these days.
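One common (though by no means the only) way to characterise risks in terms of likelihood and impact is a simple scoring scheme. This is a minimal sketch, with invented scales and example risks, of how such a characterisation might order a conversation with stakeholders:

```python
def risk_score(likelihood, impact):
    """Both on a 1 (low) to 5 (high) scale; higher score = more concerning."""
    return likelihood * impact

# Hypothetical issues, purely for illustration.
risks = [
    ("rendering glitch on one browser", 2, 2),
    ("data loss on concurrent edits",   2, 5),
    ("typo in the company logo",        5, 4),
]

# Rank the risks so the conversation starts with the most concerning.
for name, likelihood, impact in sorted(
        risks, key=lambda r: risk_score(r[1], r[2]), reverse=True):
    print(f"{risk_score(likelihood, impact):>2}  {name}")
```

The scores themselves matter less than the conversation they provoke: a tester who can say "low likelihood but severe impact" is already doing the risk-awareness work.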

While we're here, this kind of risk assessment is inherently part of testing for me. (Perceived) risk is one of the data points that should be in the mix when deciding what to test at any given time. Actually, I might elaborate on that: risk and uncertainty are two related data points that should be considered when deciding what to test. But I don't really want to open up another front in this discussion, so see Not Sure About Uncertainty for further thoughts on that.

Would it be fair to say that a tester engaging in risk-based testing is risk-averse to some extent or another? Try this: through their testing they are trying to obtain the surest outcome (in terms of understanding of the product) for the stakeholders by exploring those areas of the product that are most risky. So, well, yes, that would seem to satisfy the definition from Wikipedia wouldn't it?

Permission to smirk granted. You are observing me wondering whether I am arguing that a tester who is risk-aware, and uses that risk-awareness to engage in risk-based testing, must necessarily be risk-averse. And now I'm also thinking that perhaps this is possible because the things at risk might be different, but I time-boxed this post (to limit the risk of not getting to my other commitments, naturally) and I've run out of time.

So here's the thing that I enjoy so much: one thought blown up into a big claggy mess of conceptual strands, each of which themselves could lead to other interesting places, but with the knowledge that following one means that I won't be following another, with an end goal that can't be easily characterised, alert for synergy, overlap, contradiction, ambiguity and other relationships, and which might simply reflect my personal biases, with the intellectual challenge of disentanglement and value-seeking, all limited by time.

Hmm. What is testing?
Image: Cambridge Software Testing Clinic

Comments

Anonymous said…
If you are in an environment where the decision on whether to ship or not rests with your testers, then testers WILL be risk-averse!

In my view, the decision on whether to ship or not always rests with business managers. The testers can advise on detected issues; they should be able to articulate clearly what risks those issues pose; and that advice should be listened to and acted on by business managers. That's risk awareness, and that's a good thing.

A couple of times, I've identified bugs in new features under development; but I'm also aware that my personal policy is to record all the bugs I find, so that they can be fixed at a time that the business determines they are worth fixing. Show-stoppers are show-stoppers, no doubt about that. But below them in the hierarchy of issues are bugs that won't affect performance or functionality, and it should be the business that decides if those are worth expending time, effort and (ultimately) money on fixing. (That's not to say that minor presentational bugs, such as typos, aren't worth fixing. I've seen and heard of instances where typos in an application reflect badly on an organisation, such as an egregious typo in a graphic put over the top of the CEO's audio message in an introduction to an on-line tool, or the company name misspelt in its own logo - and those, I would argue strongly with any business manager, need fixing NOW and pretty much qualify as show-stoppers as much as any failure of functionality.)

But if testers are actually being given the responsibility of "signing off" releases, that's not a good corporate practice in my book.
