Context-Driven Answering

To the context-driven tester there are no best practices, merely practices to be applied in contexts where they are appropriate on missions to which they contribute. Context-driven is differentiated from context-aware and other similar-sounding terms by virtue of the total freedom it gives to (and requires of) the tester to approach each situation afresh, driving the choice of practice from the context and not vice versa.

That's not to say that expertise and experience can't play a part - we'd hope that knowledge of the range of practices that could be applied will mean a more productive selection - merely that the organisation, strategy, reporting and so on of the project are considered part of the project and not predetermined factors.

All options being open, and context being the ultimate arbiter of the value of an activity to a project, it's interesting to wonder whether there is anything that is indisputably never appropriate. Perhaps burning our test materials? But what if the context was that we're testing fire extinguishers for efficacy on paper fires and no-one ever reads those 1000-page test plans we got that bored contractor in another country to write before we even began coding (and they're backed up on disk in any case)?

Shooting the Dev team, then? While this would probably help with the bug count, and could be appealing in other ways too, illegal and immoral acts need exceptionally exceptional contexts (and viewpoints). How about this: you're the despot of a small country who wants to evaluate the efficiency of the new Dev team against the despicable bunch of lazy unskilled revolutionary treasonistas that coded v1.0 of your population subjugation software. In order to prevent contamination of the new team, you eliminate any scope for interaction with the old. You might consider this to be test setup (although actually you are most interested in execution).

Joking aside, a testing role should not be restricted to the act of testing. Reporting is a significant part of a tester's responsibility and, if neglected, can negate everything else. And reporting is not just about writing reports. A significant element of reporting is answering questions. Which leads to the tweets that kicked off this train of thought.

Ilari Henrik Aegerter posted this on Twitter in December 2012:
@ilarihenrik: If after a horrible project somebody asks 'How could we've found this bug?', then it's the wrong question being asked
The short thread that followed concentrated on the idea that the key discussion to have was the one about the dysfunctional project. And I agree that in this context that's a reasonable thing to want to do. But if we can agree that - in a believable world - there are contexts in which many practices can be argued for, then, even in the wake of a horrible project and taking account of the bluntness required by tweet length, there should be contexts in which it's a fair question for a tester to be asked, and in which they should answer it straight and honestly.

Here's one: the test team lobbied for some expensive new infrastructure changes after the last horrible project. They were implemented, but the following project was horrible too. Management will wonder what the value of the infrastructure change was.

Here's another: just five minutes after installing the latest release, your most valuable customer encountered an error dialog containing the string "puleeze just fix this fucken shite" in red characters on a flashing green background, 80 points tall, spilling from the mouth of a gurning Super Mario who is also flipping you an 8-bit pixelated bird. The Dev manager is likely getting a beating with a rubber truncheon right now, but your boss is going to be getting some heat from their boss and will surely in turn feel entitled to ask why you didn't prevent this misjudged in-joke from shipping.

Coincidentally, on the same day as Ilari's tweet, Paul Holland and Louise Perold had an exchange that went:
@lerpold: U get email asking "please explain what was covered in regression & why this was missed in testing" after prod incident - response is?
@PaulHolland_TWN: If u would like us 2 test more thoroughly then we will need more time and resources. Even then we cannot catch all bugs. Lets talk.
This response - again restricted in scope by its length - didn't admit the possibility of contexts in which the test team was at fault, apparently assuming that the test team was sufficiently thorough but didn't have enough time or resources (to test whatever was under test to whatever level was agreed).

We testers have no divine right to be right (although we mostly are right, right?) and in any case we should not be waiting until the end of a horrible project to attempt to engage with the rest of the team about the way the project is going. Of course, attempts to do this may not be successful, but that would form part of the answer to later questions about bugs found in production.

In any case, whatever we think about it, to some stakeholders, asking why a bug was not found in testing is always going to be a reasonable question. There's something of a parallel in metrics, where stakeholders may request metrics that we see as having low or even negative value. Some testers, including Cem Kaner, feel that we should ultimately provide such metrics regardless of our view of them, with caveats and discussion if needed.

Kaner also talks about construct validity: a notion of whether a metric actually measures the attribute it is intended to measure. I wonder whether there's an analogous (if less rigorously defined) question validity, concerned with whether the question being asked actually represents a request for the information desired (see the Five Whys), and whether part of the skill of the tester in this scenario is to address both the direct question and any underlying concern in a single answer, making clear which is which.

I'm making an analogy between test practices and questions here. In the former, the tester is context-driven by admitting that no (reasonable) practices are inappropriate in all contexts. In the latter, the answerer might be seen as context-driven by admitting that no questions are inappropriate in all contexts. I've also related questions and metrics to try to justify a position in which direct questions should be answered directly.

However, another take on context-driven approaches to question-answering could be that the answerer should regard the question as a mission and use appropriate practice (such as style or content of answer) to fulfil it. This would likely involve meta discussion on the intent of the mission ("question validity" is still interesting here) and might ultimately mean that the direct interpretation of the question would not be answered.

Maybe the two notions collapse into more or less the same approach: answer the question in such a way as to provide the best value to the person asking it. In practice, though, the apparent inability or lack of desire to answer a direct question, or the apparent need to always ask more questions before providing an answer at all, can be seen as prevarication on the part of the tester and be irritating to the questioner. We shouldn't forget that the psychology of the participants is an important part of the context.

Are you a context-driven tester? Are you a context-driven answerer?
Image: http://flic.kr/p/6nCmik

As I'd quoted them, I asked Ilari and Paul if they'd like to respond to a draft of this post. I'm grateful to them both for their suggestions on the earlier version and for these comments:

Ilari said:
Reading my tweet a couple of months later, I would probably replace 'wrong' with 'not the most valuable'. Asking how a bug could have been found is not wrong; there are, however, moments where this question fits better. When I wrote this tweet, my underlying thinking was: when you ask questions, try to go to the root of things and do not spend too much time looking for solutions that only mitigate the symptoms.
Re "question validity": another dimension would be the time a question is asked. There might be moments when asking a question is more appropriate than others. E.g. as long the general mood is heated, some of the questions only lead to the situation becoming more heated.
Paul said:
I agree with your assessment of my brief tweet that it did not allow for the instances where the test team was at fault. I have a story about that from my time as a test manager at Alcatel-Lucent. My group had just delivered a new patch to a release on our DSL gear to one of our main customers. I'll call them Bel Canada instead of their "real name" to protect their anonymity.

Within a few hours of them receiving this new build they were on the phone with our support team asking why there was a 20% drop in their max attainable line rates. As I was the manager of the team that should have tested that, I was immediately called by the R&D director to ask what was going on. It only took about 10 minutes to recreate the issue. I asked my team why they hadn't seen this very obvious issue in the 2 weeks of testing we did on this minor patch. They informed me that all of their testing had been done as if the patch was being delivered to a different customer that I'll call AT&TT (again not their real name). Bel and AT&TT use very different modems in their setups. There was no problem with the AT&TT modem but the Bel modem had an interoperability bug which caused the performance issue. Apparently, I had neglected to inform my team that the patch was destined for Bel Canada and not AT&TT. In this context, the blame fell very clearly on my shoulders and I accepted responsibility. I created a new policy which made it very clear to the team which customers were targeted to receive any patch or release.

I like how you point out that as context-driven testers we are not only responsible for asking questions to determine our own context but also for answering questions that others ask us. It is also important to assess the validity of these questions and to ask clarifying questions when needed.

My stance on providing metrics I disagree with differs from the one you attribute to Cem Kaner. I am not claiming that you are misquoting Cem, but I am stating that I disagree with the approach - as do Michael Bolton and James Bach. We may eventually provide bad metrics but only with many caveats as to their uselessness. As I have heard both Michael and James claim, "we are not in the business of misleading our customers." We will offer different ways of measuring our test progress that are less flawed and provide better information to decision makers.

Finally, I really like your final paragraph, where you indicated that sometimes context-driven testers should actually just answer the damn question and not point out all the alternatives. The same goes for safety language. There are times to be cautious and cover your butt ("We have not found any critical issues so far - after executing a subset of sessions that we had previously prioritized and realizing that in the time we had available we have only executed roughly 50% of our planned sessions") and other times where the situation calls for just answering the question ("No problems so far").
