We Don't Know?


The topic at CEWT #7 last weekend was Dirty Testing Secrets. I decided to present something reasonably provocative as a conversation starter. I think it worked. The essay below is a pretty version of the notes I prepared in advance.


--00--

Quality Assurance. QA. It's getting less common, but it's still not unusual for people in software to talk about getting something into QA, or to ask us to QA their stuff.

I've worked hard over the years at our place to spread the word that I don't think of my team in that way. I do an induction for all new employees and explain how testing is a creative and intellectual activity, not checkbox-ticking drudgery.

Sadly, I still encounter career testers who think that their role is to confirm that requirements are met and no more. But my sense is that that's an open secret rather than a dirty one. This isn't a dirty secret either, although it might be a surprise to some:


That's not to say that we are gatekeepers or owners or creators of quality, and we certainly can't test the quality in. But, for me, we're in the quality business because we're in a team that builds things for people to use and it's part of our role to help our stakeholders gauge the level of quality of those things.

Given that, I think it's reasonable for stakeholders to expect that we have some kind of handle on what quality is. I looked in the literature ... and also Twitter:
  • Quality is value to some person. (Weinberg)
  • Quality is conformance to requirements. (Crosby)
  • The quality of software product is not in how many bugs are found and fixed before release. It is in how the team responds to bugs found in production. (Sussman)

While the first two are well-known, the third is less so, although serviceability is amongst the factors that Garvin lists in Competing on the Eight Dimensions of Quality.

I asked the CEWT participants for their definitions of quality too. Here's a selection, and the rest can be found in In One Sentence, Define Quality, Bug, Testing:
  • How well someone perceives something works and meets a set of requirements.
  • External quality is a positive characteristic of software encompassing robustness, correctness, and lack of bugs.
  • An outcome that satisfies all stakeholders + customers.

So what is quality, then? Let's be honest, we don't know.


But our stakeholders are intelligent people in general, I'd say. They'll probably cut us a bit of slack here, recognising that context is crucial and that quality is a relationship between a product, a person, a time, and a task.

But what about bugs? Many of our colleagues see our bug reports, those by-products of our testing work that help them to understand the quality of their thing. Surely if we're writing them we must understand what bugs are. Mustn't we? Again, there are definitions to be found. Here are three:
  • Anything about the product that threatens its value. (Rapid Software Testing 3.0)
  • Something that bugs someone. (James Bach)
  • Anything that causes an unnecessary or unreasonable reduction of the quality of a software product. (BBST Bug Advocacy)

Perhaps unlike the varied definitions of quality, these three exist in a reasonably confined space, one where something about a product matters or might matter to someone for some reason. Taking the CEWT participants' definitions as well, that space broadens out:
  • The difference between something as desired and something as perceived that's an unwanted incongruity.
  • A bug is a piece of unintended behaviour in software that negatively affects a user - or will do so, when the software is released.
  • A perceived failure to meet an expectation.

So what is a bug? We don't know.


But I think our colleagues would recognise that, again, context is in play. They well know that product managers can deem something "not P1" or "not a bug" as easily as snapping their fingers, particularly as the release date gets closer. (Even if some testers I've known find this hard to accept.)

So they'll again be lenient and not push us too hard here. Bugs are a relationship between a product, a person, a task, and a time.

However, even if we can't agree on what a problem is, or how good something is, our colleagues will likely be less forgiving when we can't explain what we do all day, what testing is.

There's no shortage of thought on this topic amongst our peers and CEWTees. These are quoted from What is Software Testing? and, as before, In One Sentence, Define Quality, Bug, Testing:
  • Testing is the process of executing a program with the intent of finding errors. (Myers)
  • Testing is done to find information. Critical decisions about the project or the product are made on the basis of that information. (Kaner, Bach, Pettichord)
  • ... interact with the software or system, observe its actual behavior, and compare that to your expectations. (Hendrickson)
  • Assessing the integrity of specific functionality using as many perspectives as any potential user that might ever utilise it.
  • Uncovering unknowns, experimentation and problem solving.
  • A verb! The activity of assessing an object/person (thing) to determine the quality of one or more of its attributes.

Hmm. So what is testing? We don't know.


Testing, like quality and bugs, is contextual. It's a relationship between a product, a person, a task, and a time.

I've said "we don't know" a lot here. But whether that matters is arguable. The fact that I'm talking about it certainly reflects my bias towards understanding the semantics of the area I am working in. I find that the theory helps to guide my practice, but others are able to get on and do testing work without ever considering that there might be subtleties beyond "breaking the product".

For me, testing is done in a constant state of not knowing. Testing is about shining light into the dark, trying to make the best sense we can of the situation we find ourselves in. We don't know whether what we did is right, we don't know whether what we'll do next will help, we don't know whether the data we gathered is usable, or the conclusions we drew from it acceptable, nor whether the next thing we do will invalidate everything we have done so far.


In order to be a great tester you have to embrace that not knowing. You have to be able to work within uncertainty, without being confident that anything will stand still, taking into account that your lack of knowledge of something might be the key issue. Here's another dirty testing secret for you:


To summarise, then: we don't know what it is we're looking for, we can't tell what we've found, we don't know how we do it, and we can't have confidence in it in any case.

I might say that we don't know what we're doing. But, ironically, if there's one thing we do know, it's that most people would rather not hear that. So let's keep it between ourselves, eh?

Here are my slides:


Photo: Neil Younger
