
What We Found Not Looking for Bugs

This post is a conversation and a collaboration between Anders Dinsen and me. Aside from a little commentary at the top and edits to remove repetition and side topics, to add links, and to clarify, the content is as it came out in the moment, over the course of a couple of days.

A question I asked about not looking for bugs at Lean Coffee in Cambridge last month initiated a fun discussion. The discussion suggested it’d be worth posing the question again in a tweet. The tweet in turn prompted a dialogue.

Some of the dialogue happened on public Twitter, some via DM, and on Skype, and yet more in a Google doc, at first with staggered participation and then in a tight synchronous loop where we were simultaneously editing different parts of the same document, asking questions and answering them in a continuous flow. It was at once exhilarating, educational and energising.

The dialogue exposes some different perspectives on testing and we decided to put it together in a way that shows how it could have taken place between two respectful, but different, testers.

--00--

James: Testing can’t find all the bugs, so which ones shouldn’t we look for? How?

Anders: My brain just blew up. If we know which bugs not to look for, why test?

James: Do you think the question implies bugs are known? Could they be expected? Suspected?

Anders: No, but you appear to know some bugs not to find.

James: I don't think I'm making any claims about what I know, am I?

Anders: Au contraire, "which bugs" seems quite specific, doesn't it?

James: By asking "which" I don't believe I am claiming any knowledge of possible answers.

Anders: I think this is a valid point.

Testing takes place in time, and there is a before and an after. Before, things are fundamentally uncertain, so if we know bugs specifically to look for, uncertainty is an illusion.

That testing takes place in time is obvious, but still easily forgotten, like most other things that relate to time.

In our minds, time does not seem as real as it is. Remember that we can imagine the future and remember the past just as vividly as we can experience the present. In our thoughts, we jump back and forth between imagination, the present, and memory of the past, often without even realising that we are in fact jumping.

When I test, I hope an outcome of testing will be test results which will give me certainty so that I can communicate clearly to decision makers and help them achieve certainty about things they need to be certain about to take decisions. This happens in time.

So before testing, there is uncertainty. After testing, some kind of certainty exists in someone (e.g. me, the tester) about the thing I am testing.

Considering that, testing is simple. But it follows that expecting, or even suspecting, bugs implies some certainty, which will mislead our testing away from the uncertain.

James: I find it problematic to agree that testing is simple here - and I’ve had that conversation with many people now. Perhaps part of it is that "testing" is ambiguous in at least two interesting senses, or at least at two different resolutions:
  • the specific actions of the tester
  • a black box into which stakeholders put requirements and from which they receive reports

These are micro and macro views. In The Shape of Actions, Harry Collins talks about how tasks are ripe for automation when the actors have become indifferent to the details of them. I wrote on this in Auto Did Act, noting that the perspective of the actor is significant.

I would want to ask this: from whose perspective is testing simple? Maybe the stakeholder can view testing as simple, because they are indifferent to the details: it could be off-shore workers, monkeys, robots, or whatever doing the work so long as it is "tested".

I am also a little uncomfortable with the idea of certainty as you expressed it. Are we talking about certainty in the behaviour of the product under test, or some factor(s) of the testing that has been done, or something else?

I think I would be prepared to go this far:
  • Some testing, t, has been performed
  • Before t there was an information state i
  • After t there is an information state j
  • It is never the case that i is equal to j (or, perhaps, if i is equal to j then t was not testing)
  • It is not the case that only t can provide a change from i to j. For example, other simultaneous work on the system under test may contribute to a shared information state.
  • The aim of testing is that j is a better state than i for the relevant people to use as the basis for decision making
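In the hand-wavy spirit of the above, the model could be sketched in code. This is purely illustrative: the types, claims, and confidence numbers are invented for the example and are not part of any real system.

```python
from dataclasses import dataclass

# Purely illustrative sketch of an "information state": a set of
# assertions about the world relevant to the system under test,
# each carrying a hand-wavy confidence score.


@dataclass(frozen=True)
class Assertion:
    claim: str         # e.g. "login succeeds with valid credentials"
    confidence: float  # 0.0 (no confidence) .. 1.0 (certainty)


def perform_testing(i: frozenset) -> frozenset:
    """Some testing, t: takes information state i, returns state j.
    On this model, if j equals i then t was not testing."""
    j = set(i)
    # Testing raised confidence in an existing claim ...
    j.discard(Assertion("login succeeds with valid credentials", 0.5))
    j.add(Assertion("login succeeds with valid credentials", 0.9))
    # ... and contributed a new one.
    j.add(Assertion("login fails safely with an expired password", 0.7))
    return frozenset(j)


i = frozenset({Assertion("login succeeds with valid credentials", 0.5)})
j = perform_testing(i)
assert i != j  # it is never the case that i equals j
```

Note that nothing in the sketch says only testing may update the state, and nothing says j is complete: both limitations are deliberate, per the bullets above.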

Anders: But certainty is important, as it links to someone, a stakeholder, a human. Certainty connotes a state of knowledge in something that has a soul, not just a mathematical or mechanical entity.

This leads me to say that we cannot have human testing without judgement.
Aside: It’s funny that the word checking, which we usually associate with automatic testing, might actually better describe at least part of human testing, as the roots of ‘check’ are the same as those of the game of chess: shah, the Persian word for king. The check is therefore the king’s judgement, a verdict of truth, gamified in chess, but in the real world always something that requires judgement. But that was a stray thought ...
What’s important here is that some way or another testing is not only about information.

I accept that as testers we produce information, even streams of tacit and explicit knowledge in testing, and some of that can be mechanistically or algorithmically produced. But if we are to use it as humans, and not only leave it to the machines to process, we must not only accept what we observe in our testing, we must judge it. At the end of the day (or the test) we must at least judge whether to keep what we have observed to ourselves, or whether we should report it.

James: I did not define what I mean by an information state. If you pushed me to say something formal, I might propose it’s something like a set of assertions about the state of the world that is relevant to the system under test, with associated confidence scores. I might argue that much of it is tacitly understood by the participants in testing and the consumption of test results. I might argue that there is the potential for different participants to have different views of it - it is a model, after all. I might argue that it is part of the dialogue between the participants to get a mutual understanding of the parts of j that are important to any decisions.

This last sentence is critical. While there will (hopefully) be some shared understanding between the actors involved, there will also be areas that are not shared. Those producing the information for the decision-maker may not share everything that they could. But even if they were operating in such a way as to attempt to share everything that was relevant to the decision, their judgement is involved and so they could miss something that later turns out to be important.
Aside: I wonder whether it is also interesting to consider that they could over-share and so cloud the decision with irrelevant data. It is a real practical problem but I don’t know whether it helps here. If it does, then the way in which information is presented is also likely to be a factor.
Similarly, the decision-maker may have access to information from other sources. These may be contemporary or historical, from within the problem domain or not, ...

So, really, I think that having two information states - pre and post t - is an oversimplification. In reality, each actor will have information states taking input from a variety of sources, changing asynchronously. The states i and j should be considered (in a hand-wavy way) the shared states. But we must remember that useful information can reside elsewhere.

Anders: I feel this is too much PRINCE2, where people on the shop floor attach tuples of likelihood and consequence scores to enumerated risks, thereby essentially hiding important information needed to make good, open-eyed decisions about risks.

James: Perhaps. I have been coy about exactly what this would look like because I don't have a very well-formed or well-informed story. In Your Testing is a Joke, I reference Daniel Dennett, who proposes that our mental models are somewhat like the information state I've described. But I don't think it's possible or desirable to attempt to do this in practice for all pieces of information, if it were even possible to enumerate all pieces of information.

Anders: I have witnessed such systems in operation and had to live with the consequences of them. I have probably developed a very sceptical attitude because of that.

But we should not forget that testing is a human activity in a context and it is my human capacity to judge what I observe in testing and convey messages about it to stakeholders.

James: I’m still not comfortable with the term "certainty".

I might speculate that certainty as you are trying to use it could be a function of the person and the information states I’m proposing. Maybe humans have some shared feeling about what this function is, but it can differ by person. So perhaps a dimension of the humanity in this kind of story is in the way we "code" the function that produces certainty from any given information state.

The data in the information state can be produced by any actor, including a machine, but the interpretation of that information to provide confidence (a term I'm more comfortable with, but see e.g. this discussion) is of course a human activity. (But recent advances in AI suggest that perhaps it won’t necessarily always be so, for at least some classes of problem.)

Anders: Can I please ask you to join "team human", i.e. that all relevant actors (except the tools we use and the item under test) are humans with human capabilities, i.e. real thoughts and perhaps most importantly gut feelings?

Can you accept that fundamentally a test result produced by a human is not produced mechanistically, but by human interpretation of what the human senses (e.g. sees), experiences, and imagines, and ultimately by judgement?

James: Think of statistics. There are numerous tools that take information and turn it into summaries of that information. Some of them are named to suggest that they give confidence (confidence intervals, for example, or significance). Those tools are things that humans can drive without thought, so essentially machines.
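To make the point concrete, here is a 95% confidence interval for a sample mean computed entirely mechanically with Python's standard library. The sample data are invented for illustration; the machine names its output "confidence", but whether anyone should feel confident is not something the arithmetic can decide.

```python
import statistics
from math import sqrt

# Invented sample data, e.g. response times in some unit.
sample = [9.8, 10.1, 10.0, 9.9, 10.3, 10.2, 9.7, 10.0]

mean = statistics.mean(sample)
sem = statistics.stdev(sample) / sqrt(len(sample))  # standard error of the mean
z = 1.96  # normal approximation for 95% coverage

low, high = mean - z * sem, mean + z * sem
print(f"95% CI for the mean: ({low:.2f}, {high:.2f})")
# A human can drive this without thought; interpreting it in context
# is where the judgement comes in.
```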

Anders: I fundamentally cannot follow you there. Nassim Taleb is probably the most notable critic of statistics interpreted as something that can give confidence. His point (and mine) is that confidence as a mathematical term should not be confused with real confidence, that which a person has.

James: I think we are agreeing. Although the terms are named in that way, and may be viewed in that way by some - particularly those with a distant perspective - the results of running those statistical methods on data must inherently be interpreted by a human in context to be meaningful, valuable.

Anders: Ethically, decisions should be taken on the basis of an understanding of information. Defining "understanding" is difficult, but there must be some sort of judgement involved, and then I’m back at square one: I use all my knowledge and experience, and connect to my values, but at the end of the day what I do is in the hands of my gut feeling.

James: Perhaps another angle is that data can get into (my notion of) an information state from any source. This can include gut, experiment, hearsay, lies. I want each of the items of data to have some level of confidence attached to them (in some hand-wavy way, again).

The humanistic aspect that you desire can be modelled here. It’s just not the only or even necessarily the most important factor, until the last step where judgement is called for.

Anders: This leads me to think about kairos: That there is a moment in which testing takes place, the point in time where the future turns to become the past. Imagine your computer clock shows 10.24 am and you know you have found a bug. When is the right time to tell it to the devs? They are in a meeting now about future features. Let’s tell them after lunch.

Kairos for communicating the bug seems to be "after lunch".

But it is not just about communication, there could even be a supreme moment for performing a test. It could be one that I have just had the idea for, one I have sketched out yesterday in a mind map, noted on a post-it, or prepared in a script months ago.

Kairos in testing could be the moment when our minds are open to the knowledge stream of testing so we can let it help us reach certainty about the test performed.

James: I am interested in the extent to which you can prepare the ground for kairos. What can someone do to make kairos more likely? As a tester, I want to find the important issues. Kairos would be a point at which I could execute the right action to identify an important issue. How to get to that moment, with the capacity to perform that action?

Anders: There is, to me, no doubt that kairos is a "thing" in testing in the human-to-human relating parts of what we do: communication, particularly; but also in leadership. A sense of kairos involves having an intuition of what is opportune to communicate in a given moment, and when is an opportune moment to communicate what seems important to you, but of course it could also be about having a sense of some testing to carry out at a particular moment to cause a good effect on the project.

Whether kairos is a thing in what happens only between the tester and the object being tested (and possibly other machines), I would doubt; or, if it were, we would certainly have reached far beyond the original meanings of kairos.

James: I think this is tied to your desire for a dialogue to be only between two souls, as we discussed on Skype. We agreed then that it is possible for one person to have an internal dialogue, and so two souls need not be necessary in at least that circumstance. I’d argue it's also not necessary in general. (Or we have to agree some different definition of dialogue.)

Anders: I do appreciate that some testers have a "technical 6th sense", e.g. when people experience themselves as "bug magnets". I think, however, that that comes from creative talents, imagination, technical understanding, and understanding of the types of mistakes programmers make, more than about human relations or "relations" to machines. I think it would then be better to talk about "opportune conditions", which, I think, would then probably be the same as "good heuristics".

James: From Wikipedia: In rhetoric, kairos is "a passing instant when an opening appears which must be driven through with force if success is to be achieved."

Whether at a time or under given conditions (and I'm not sure the distinction helps), it seems that kairos requires the speaker and listener (to give the roles anthropomorphic names for a moment) to both be in particular states:
  • the speaker must be in a position to capitalise on whatever opportunity is there, but also to recognise that it is there to be acted upon.
  • the listener must (appear to the speaker to) be in a state that is compatible with whatever the speaker wants to do.

Whether or not the opportunity is acted upon, I think these are true. Notice that they include both time and conditions. Time can exist (forgetting metaphysical concerns) without conditions being true, but the conditions must necessarily exist in a time. So I argue that if you want to tie to conditions you are necessarily tying to time also. If I follow your reasoning, then I think this means you might be open to kairos existing in human-machine interactions?

A difference that is apparent at several points in our dialogue here, I think, is that I want to make (software) testing be about more than interaction of a human with a product. I want it to include human-human interactions around the product. (See e.g. Testing All the Way Down and The Anatomy of a Definition of Testing.)

It’s my intuition that many useful techniques of testing cross over between interactions with humans and those with machines. And so I am interested in seeing what happens when you try to capture them in the same model of testing. And in the course of our discussion I’ve realised that I’ve been thinking along these lines for a while - see Going Postel or even Special Offers, for example.

I think that you want to separate these two worlds more distinctly than I do, and reserve more concepts, approaches and so on for humans only. But I think we have a shared desire to recognise the humanity at the heart of testing and to expect that human judgement is important to contextualise the outcomes of testing.

Anders: Yes you are right, I want to separate the two worlds, and I realise now that the reason is that I hope testers will more actively recognise humanity and especially what it means being human. Too often, testers try to model humanity using terminology and understandings which are fundamentally tied to the technical domain.

This leads to a lot of (hopefully only unconscious) reductionism in the testing world. It’s probably caused by leading thinkers in testing having very technical, not humanistic backgrounds.

So I am passionate that we do not confuse the technical moment in time at which I hit a key on my keyboard to start an automatic test suite, thereby altering the states of the system under test and the testing tools but not yet influencing any humans, with the kairos of testing, which is tied only to the human relations we have, including those we have with ourselves, and not to any machines.

Kairos happens when we let it happen.

Kairos is when we look down on the computer screen, sense what is on the screen, allow it to enter our minds, and start figuring out what happened and what that might mean.

...

Comments

Unknown said…
Interesting read. Two thoughts that occurred to me while reading:

-
Perhaps the term "less uncertainty" might be more appropriate?

-
A new term, to me. After looking it up, it reminded me of the "Today is the Day" concept in improv. In improv, (generally) you shouldn't talk about the past or future. The action and scene should be now, in the moment. "Today is the day" things change for your character…a scene should be about a life-altering experience. Scenes that follow will be inherently interesting because we see the character in a new light. … We want to see a character finally stand up to their boss, declare their love, get a divorce, get a job, get fired…anything to break the routine. Don't relate things that happened in the past, or imagine things that might happen in the future. Perform them now! The scene and characters should be in a moment of profound importance! Kairos isn't exactly the same concept, but it seemed similar and related, to me.
James Thomas said…
Hi Damian, can you prepare for "Today is the Day", to give increased likelihood of it happening when you want it to?
Unknown said…
It appears parts of my comments (between and including the less-than & greater-than characters) were stripped. I originally quoted: 'James: I’m still not comfortable with the term "certainty".' (with regards to my first comment) AND 'Anders: This leads me to think about kairos' (with regards to my second comment).

'can you prepare for "Today is the Day", to give increased likelihood of it happening when you want it to?'
I think so. One way to become accustomed to the “today is the day” concept is to practice “starting in the middle” (of the action/scene), instead of “starting at the start”.

Typically, less interesting scenes and less experienced improvers might begin scenes with (something like):
“Hey son, how are you?”
“Fine, dad! How are you?”
“Great! It sure is cold today.”
“Yes, it is cold, and we should build a fire.”
“Yes, we should build a fire, and I’ll get some matches.”

These opening lines are not particularly interesting. It would take a while to slowly build to the “interesting” part of the scene.

However, more interesting and experienced improvers might begin scenes “in the middle”, with (something like):
“Son, I don’t care how cold you are, our fireplace is for wood, not your mother’s wedding dress!”

This single, opening line immediately raises the stakes of the scene, and also establishes who, what, and where. It “starts in the middle”. Like kairos, it is “a moment of indeterminate time in which an event of significance happens”.

To prepare for this - In improv training, when trying to practice for "today is the day", students will be given a situation (characters, environment, etc.) and then try to “start in the middle” (think of opening lines that not only establish who, what, where, when, etc. but also join the scene with the action already taking place). They will do this over and over and over. Through this practice, I think that improv performers can “prepare for Today is the Day”.
James Thomas said…
And can you map that back to testing?
