To the context-driven tester there are no best practices, merely practices to be applied in contexts where they are appropriate on missions to which they contribute. Context-driven is differentiated from context-aware and other similar-sounding terms by virtue of the total freedom it gives to (and requires of) the tester to approach each situation afresh, driving the choice of practice from the context and not vice versa.
That's not to say that expertise and experience can't play a part - we'd hope that knowledge of the range of practices that could be applied will mean a more productive selection - merely that the organisation, strategy, reporting and so on of the project are treated as part of the project itself and not as predetermined factors.
All options being open, and context being the ultimate arbiter of the value of an activity to a project, it's interesting to wonder whether there is anything that is indisputably never appropriate. Perhaps burning our test materials? But what if the context was that we're testing fire extinguishers for efficacy on paper fires and no-one ever reads those 1000-page test plans we got that bored contractor in another country to write before we even began coding (and they're backed up on disk in any case)?
Shooting the Dev team, then? While this would probably help with the bug count, and could be appealing in other ways too, illegal and immoral acts need exceptionally exceptional contexts (and viewpoints). How about this: you're the despot of a small country who wants to evaluate the efficiency of the new Dev team against the despicable bunch of lazy unskilled revolutionary treasonistas that coded v1.0 of your population subjugation software. In order to prevent contamination of the new team, you eliminate any scope for interaction with the old. You might consider this to be test setup (although actually you are most interested in execution).
Joking aside, a testing role should not be restricted to the act of testing. Reporting is a significant part of a tester's responsibility and, if neglected, can negate everything else. And reporting is not just about writing reports. A significant element of reporting is answering questions. Which leads to the tweets that kicked off this train of thought.
Ilari Henrik Aegerter posted this on Twitter in December 2012:
@ilarihenrik: If after a horrible project somebody asks 'How could we've found this bug?', then it's the wrong question being asked
The short thread that followed concentrated on the idea that the key discussion to have was the one about the dysfunctional project. And I agree that in this context that's a reasonable thing to want to do. But if we can agree that - in a believable world - there are contexts in which many practices can be argued for then, even in the wake of a horrible project and taking account of the bluntness required by tweet length, there should be contexts in which it's a fair question for a tester to be asked, and in which they should answer it straight and honestly.
Here's one: the test team lobbied for some expensive new infrastructure changes after the last horrible project. They were implemented, but the following project was horrible too. Management will wonder what the value of the infrastructure change was.
Here's another: just five minutes after installing the latest release, your most valuable customer encountered an error dialog containing the string "puleeze just fix this fucken shite" in red characters, with flashing green background, 80 points tall, spilling from the mouth of a gurning Super Mario who is also flipping you an 8-bit pixelated bird. The Dev manager is likely getting a beating with a rubber truncheon right now, but your boss is going to be getting some heat from their boss and will surely in turn feel entitled to ask why you didn't prevent this misjudged in-joke from shipping.
Coincidentally, on the same day as Ilari's tweet, Paul Holland and Louise Perold had an exchange that went:
@lerpold: U get email asking "please explain what was covered in regression & why this was missed in testing" after prod incident - response is?
@PaulHolland_TWN: If u would like us 2 test more thoroughly then we will need more time and resources. Even then we cannot catch all bugs. Lets talk.
This response - again restricted in scope by its length - didn't admit the possibility of contexts in which the test team was at fault, apparently assuming that the test team were sufficiently thorough, didn't have enough time and didn't have enough resources (to test whatever was under test to whatever level was agreed).
We testers have no divine right to be right (although we mostly are right, right?) and in any case we should not be waiting until the end of a horrible project to attempt to engage with the rest of the team about the way the project is going. Of course, attempts to do this may not be successful, but that would form part of the answer to later questions about bugs found in production.
In any case, whatever we think about it, to some stakeholders, asking why a bug was not found in testing is always going to be a reasonable question. There's something of a parallel in metrics, where stakeholders may request metrics that we see as having low or even negative value. Some testers, including Cem Kaner, feel that we should ultimately provide such metrics, regardless of our view of them, with caveats and discussion if needed.
Kaner also talks about construct validity: a notion of whether a metric is actually measuring the attribute it is intended to measure. I wonder whether there's an analogous (if less rigorously defined) question validity, concerned with whether the question being asked actually represents a request for the information desired (see the Five Whys), and whether part of the skill of the tester in this scenario is to address both the direct question and any other underlying concern in a single answer, making clear which is which.
I'm making an analogy between test practices and questions here. In the former, the tester is context-driven by admitting that no (reasonable) practices are inappropriate in all contexts. In the latter, the answerer might be seen as context-driven by admitting that no questions are inappropriate in all contexts. I've also related questions and metrics to try to justify a position in which direct questions should be answered directly.
However, another take on context-driven approaches to question-answering could be that the answerer should regard the question as a mission and use appropriate practice (such as style or content of answer) to fulfil it. This would likely involve meta discussion on the intent of the mission ("question validity" is still interesting here) and might ultimately mean that the direct interpretation of the question would not be answered.
Maybe the two notions collapse into more or less the same approach: answer the question in such a way as to provide the best value to the person asking it. In practice, though, the apparent inability or lack of desire to answer a direct question, or the apparent need to always ask more questions before providing an answer at all, can be seen as prevarication on the part of the tester and be irritating to the questioner. We shouldn't forget that the psychology of the participants is an important part of the context.
Are you a context-driven tester? Are you a context-driven answerer?
Image: http://flic.kr/p/6nCmik
As I'd quoted them, I asked Ilari and Paul if they'd like to respond to a draft of this post. I'm grateful to them both for their suggestions on the earlier version and these comments:
Ilari said:
Reading my tweet a couple of months later, I would probably replace 'wrong' with 'not the most valuable'. Asking how a bug could have been found is not wrong; there are, however, moments where this question fits better. When I wrote this tweet, my underlying thinking was: when you ask questions, try to go to the root of things and do not spend too much time on looking for solutions that only mitigate the symptoms.
Re "question validity": another dimension would be the time a question is asked. There might be moments when asking a question is more appropriate than others. E.g. as long the general mood is heated, some of the questions only lead to the situation becoming more heated.Paul said:
I agree with your assessment of my brief tweet that it did not allow for the instances where the test team was at fault. I have a story about that from my time as a test manager at Alcatel-Lucent. My group had just delivered a new patch for a release of our DSL gear to one of our main customers. I'll call them Bel Canada instead of their "real name" to protect their anonymity.
Within a few hours of them receiving this new build they were on the phone with our support team asking why there was a 20% drop in their max attainable line rates. As I was the manager of the team that should have tested that, I was immediately called by the R&D director to ask what was going on. It only took about 10 minutes to recreate the issue. I asked my team why they hadn't seen this very obvious issue in the 2 weeks of testing we did on this minor patch. They informed me that all of their testing had been done as if the patch was being delivered to a different customer that I'll call AT&TT (again not their real name). Bel and AT&TT use very different modems in their setups. There was no problem with the AT&TT modem, but the Bel modem had an interoperability bug which caused the performance issue. Apparently, I had neglected to inform my team that the patch was destined for Bel Canada and not AT&TT. In this context, the blame fell very clearly on my shoulders and I accepted responsibility. I created a new policy which made it very clear to the team which customers were targeted to receive any patch or release.
I like how you point out that as context-driven testers we are not only responsible for asking questions to determine our own context but also answering questions that others ask us. It is also important to assess the validity of these questions and for us to ask clarifying questions when needed.
My stance on providing metrics I disagree with differs from your claim of Cem Kaner's stance. I am not claiming that you are misquoting Cem, but I am stating that I disagree with the approach - as do Michael Bolton and James Bach. We may eventually provide bad metrics but only with many caveats as to their uselessness. As I have heard both Michael and James claim, "we are not in the business of misleading our customers." We will offer different ways of measuring our test progress that are less flawed and provide better information to decision makers.
Finally, I really like your final paragraph where you indicated that sometimes context-driven testers should actually just answer the damn question and not point out all the alternatives. The same goes for safety language. There are times to be cautious and cover your butt ("We have not found any critical issues so far - after executing a subset of sessions that we had previously prioritized and realizing that in the time we had available we have only executed roughly 50% of our planned sessions") and other times where the situation calls for just answering the question ("No problems so far").