Context-Driven Answering

To the context-driven tester there are no best practices, merely practices to be applied in contexts where they are appropriate on missions to which they contribute. Context-driven is differentiated from context-aware and other similar-sounding terms by virtue of the total freedom it gives to (and requires of) the tester to approach each situation afresh, driving the choice of practice from the context and not vice versa.

That's not to say that expertise and experience can't play a part - we'd hope that knowledge of the range of practices that could be applied will mean a more productive selection - merely that the organisation, strategy, reporting and so on of the project are considered part of the project and not a predetermined factor.

All options being open, and context being the ultimate arbiter of the value of an activity to a project, it's interesting to wonder whether there is anything that is indisputably never appropriate. Perhaps burning our test materials? But what if the context was that we're testing fire extinguishers for efficacy on paper fires and no-one ever reads those 1000-page test plans we got that bored contractor in another country to write before we even began coding (and they're backed up on disk in any case)?

Shooting the Dev team, then? While this would probably help with the bug count, and could be appealing in other ways too, illegal and immoral acts need exceptionally exceptional contexts (and viewpoints). How about this: you're the despot of a small country who wants to evaluate the efficiency of the new Dev team against the despicable bunch of lazy unskilled revolutionary treasonistas that coded v1.0 of your population subjugation software. In order to prevent contamination of the new team, you eliminate any scope for interaction with the old. You might consider this to be test setup (although actually you are most interested in execution).

Joking aside, a testing role should not be restricted to the act of testing. Reporting is a significant part of a tester's responsibility and, if neglected, can negate everything else. And reporting is not just about writing reports. A significant element of reporting is answering questions. Which leads to the tweets that kicked off this train of thought.

Ilari Henrik Aegerter posted this on Twitter in December 2012:
@ilarihenrik: If after a horrible project somebody asks 'How could we've found this bug?', then it's the wrong question being asked
The short thread that followed concentrated on the idea that the key discussion to have was the one about the dysfunctional project. And I agree that in this context that's a reasonable thing to want to do. But if we can agree that - in a believable world - there are contexts in which many practices can be argued for, then, even in the wake of a horrible project and taking account of the bluntness required by tweet length, there should be contexts in which it's a fair question for a tester to be asked, and in which they should answer it straight and honestly.

Here's one: the test team lobbied for some expensive new infrastructure changes after the last horrible project. They were implemented, but the following project was horrible too. Management will wonder what the value of the infrastructure change was.

Here's another: just five minutes after installing the latest release, your most valuable customer encountered an error dialog containing the string "puleeze just fix this fucken shite" in red characters, with flashing green background, 80 points tall, spilling from the mouth of a gurning Super Mario who is also flipping you an 8-bit pixelated bird. The Dev manager is likely getting a beating with a rubber truncheon right now, but your boss is going to be getting some heat from their boss and will surely in turn feel entitled to ask why you didn't prevent this misjudged in-joke from shipping.

Coincidentally, on the same day as Ilari's tweet, Paul Holland and Louise Perold had an exchange that went:
@lerpold: U get email asking "please explain what was covered in regression & why this was missed in testing" after prod incident - response is?
@PaulHolland_TWN: If u would like us 2 test more thoroughly then we will need more time and resources. Even then we cannot catch all bugs. Lets talk.
This response - again restricted in scope by its length - didn't admit the possibility of contexts in which the test team was at fault, apparently assuming that the test team were sufficiently thorough, didn't have enough time and didn't have enough resources (to test whatever was under test to whatever level was agreed).

We testers have no divine right to be right (although we mostly are right, right?) and in any case we should not be waiting until the end of a horrible project to attempt to engage with the rest of the team about the way the project is going. Of course, attempts to do this may not be successful, but that would form part of the answer to later questions about bugs found in production.

In any case, whatever we think about it, to some stakeholders, asking why a bug was not found in testing is always going to be a reasonable question. There's something of a parallel in metrics, where stakeholders may request metrics that we see as having low or even negative value. Some testers, including Cem Kaner, feel that we should ultimately provide those metrics, regardless of our view of them, with caveats and discussion if needed.

Kaner also talks about construct validity: a notion of whether a metric is actually measuring the attribute it is intended to measure. I wonder whether there's an analogous (if less rigorously defined) question validity, concerned with whether the question being asked actually represents a request for the information desired (see the Five Whys), and whether part of the skill of the tester in this scenario is to address both the direct question and any other underlying concern in a single answer, making clear which is which.

I'm making an analogy between test practices and questions here. In the former, the tester is context-driven by admitting that no (reasonable) practices are inappropriate in all contexts. In the latter, the answerer might be seen as context-driven by admitting that no questions are inappropriate in all contexts. I've also related questions and metrics to try to justify a position in which direct questions should be answered directly.

However, another take on context-driven approaches to question-answering could be that the answerer should regard the question as a mission and use appropriate practice (such as style or content of answer) to fulfil it. This would likely involve meta discussion on the intent of the mission ("question validity" is still interesting here) and might ultimately mean that the direct interpretation of the question would not be answered.

Maybe the two notions collapse into more or less the same approach: answer the question in such a way as to provide the best value to the person asking it. In practice, though, the apparent inability or lack of desire to answer a direct question, or the apparent need to always ask more questions before providing an answer at all, can be seen as prevarication on the part of the tester and be irritating to the questioner. We shouldn't forget that the psychology of the participants is an important part of the context.

Are you a context-driven tester? Are you a context-driven answerer?
Image: http://flic.kr/p/6nCmik

As I'd quoted them, I asked Ilari and Paul if they'd like to respond to a draft of this post. I'm grateful to them both for their suggestions on the earlier version and these comments:

Ilari said:
Reading my tweet a couple of months later, I would probably replace 'wrong' with 'not the most valuable'. Asking how a bug could have been found is not wrong; there are - however - moments where this question fits better. When I wrote this tweet, my underlying thinking was: when you ask questions, try to go to the root of things and do not spend too much time looking for solutions that only mitigate the symptoms.
Re "question validity": another dimension would be the time at which a question is asked. There might be moments when asking a question is more appropriate than others. E.g. as long as the general mood is heated, some of the questions only lead to the situation becoming more heated.
Paul said:
I agree with your assessment of my brief tweet that it did not allow for the instances where the test team was at fault. I have a story about that from my time as a test manager at Alcatel-Lucent. My group had just delivered a new patch to a release on our DSL gear to one of our main customers. I'll call them Bel Canada instead of their "real name" to protect their anonymity.

Within a few hours of them receiving this new build they were on the phone with our support team asking why there was a 20% drop in their max attainable line rates. As I was the manager of the team that should have tested that, I was immediately called by the R&D director to ask what was going on. It only took about 10 minutes to recreate the issue. I asked my team why they hadn't seen this very obvious issue in the 2 weeks of testing we did on this minor patch. They informed me that all of their testing had been done as if the patch was being delivered to a different customer that I'll call AT&TT (again not their real name). Bel and AT&TT use very different modems in their setups. There was no problem with the AT&TT modem but the Bel modem had an interoperability bug which caused the performance issue. Apparently, I had neglected to inform my team that the patch was destined for Bel Canada and not AT&TT. In this context, the blame fell very clearly on my shoulders and I accepted responsibility. I created a new policy which made it very clear to the team which customers were targeted to receive any patch or release.

I like how you point out that as context-driven testers we are not only responsible for asking questions to determine our own context but also for answering questions that others ask us. It is also important to assess the validity of these questions and for us to ask clarifying questions when needed.

My stance on providing metrics I disagree with differs from your account of Cem Kaner's stance. I am not claiming that you are misquoting Cem, but I am stating that I disagree with the approach - as do Michael Bolton and James Bach. We may eventually provide bad metrics but only with many caveats as to their uselessness. As I have heard both Michael and James claim, "we are not in the business of misleading our customers." We will offer different ways of measuring our test progress that are less flawed and provide better information to decision makers.

Finally, I really like your final paragraph, where you indicated that sometimes context-driven testers should actually just answer the damn question and not point out all the alternatives. The same goes for safety language. There are times to be cautious and cover your butt ("We have not found any critical issues so far - after executing a subset of sessions that we had previously prioritized and realizing that in the time we had available we have only executed roughly 50% of our planned sessions") and other times where the situation calls for just answering the question ("No problems so far").
