The topic at CEWT #7 last weekend was Dirty Testing Secrets. I decided to present something reasonably provocative as a conversation starter. I think it worked. The essay below is a pretty version of the notes I prepared in advance.
--00--
Quality Assurance. QA. It's getting less common, but it's still not unusual for people in software to talk about getting something into QA or to ask us to QA their stuff.
I've worked hard over the years at our place to spread the word that I don't think of my team in that way. I do an induction for all new employees and explain how testing is a creative and intellectual activity, not checkbox-ticking drudgery.
Sadly, I still encounter career testers who think that their role is to confirm that requirements are met and no more. But my sense is that that's an open secret rather than a dirty one. This isn't a dirty secret either, although it might be a surprise to some:
That's not to say that we are gatekeepers or owners or creators of quality and we certainly can't test the quality in. But, for me, we're in the quality business because we're in a team that builds things for people to use and it's part of our role to help our stakeholders gauge the level of quality of those things.
Given that, I think it's reasonable for stakeholders to expect that we have some kind of handle on what quality is. I looked in the literature ... and also Twitter:
- Quality is value to some person. (Weinberg)
- Quality is conformance to requirements. (Crosby)
- The quality of software product is not in how many bugs are found and fixed before release. It is in how the team responds to bugs found in production. (Sussman)
While the first two are well-known, the third is less so, although serviceability is amongst the factors that Garvin lists in Competing on the Eight Dimensions of Quality.
I asked the CEWT participants for their definitions of quality too. Here's a selection, and the rest can be found in In One Sentence, Define Quality, Bug, Testing:
- How well someone perceives something works and meets a set of requirements.
- External quality is a positive characteristic of software encompassing robustness, correctness, and lack of bugs.
- An outcome that satisfies all stakeholders + customers.
So what is quality, then? Let's be honest, we don't know.
But our stakeholders are intelligent people in general, I'd say. They'll probably cut us a bit of slack here, recognising that context is crucial and that quality is a relationship between a product, a person, a time, and a task.
But what about bugs? Many of our colleagues see our bug reports, those by-products of our testing work that help them to understand the quality of their thing. Surely if we're writing them we must understand what bugs are. Mustn't we? Again, there are definitions to be found. Here are three:
- Anything about the product that threatens its value. (Rapid Software Testing 3.0)
- Something that bugs someone. (James Bach)
- Anything that causes an unnecessary or unreasonable reduction of the quality of a software product. (BBST Bug Advocacy)
Perhaps unlike the varied definitions of quality, these three exist in a reasonably confined space, one where something about a product matters or might matter to someone for some reason. Taking the CEWT participants' definitions as well, that space broadens out:
- The difference between something as desired and something as perceived that's an unwanted incongruity.
- A bug is a piece of unintended behaviour in software that negatively affects a user - or will do so, when the software is released.
- A perceived failure to meet an expectation.
So what is a bug? We don't know.
But I think our colleagues would recognise that, again, context is in play. They well know that product managers can deem something "not P1" or "not a bug" as easily as snapping their fingers, particularly as the release date gets closer. (Even if some testers I've known find this hard to accept.)
So they'll again be lenient and not push us too hard here. Bugs are a relationship between a product, a person, a task, and a time.
However, even if we can't agree on what a problem is, or how good something is, our colleagues will likely be less forgiving when we can't explain what we do all day, what testing is.
There's no shortage of thought on this topic amongst our peers and CEWTees. These are quoted from What is Software Testing? and, as before, In One Sentence, Define Quality, Bug, Testing:
- Testing is the process of executing a program with the intent of finding errors. (Myers)
- Testing is done to find information. Critical decisions about the project or the product are made on the basis of that information. (Kaner, Bach, Pettichord)
- ... interact with the software or system, observe its actual behavior, and compare that to your expectations. (Hendrickson)
- Assessing the integrity of specific functionality using as many perspectives as any potential user that might ever utilise it.
- Uncovering unknowns, experimentation and problem solving.
- A verb! The activity of assessing an object/person (thing) to determine the quality of one or more of its attributes.
Hmm. So what is testing? We don't know.
Testing, like quality and bugs, is contextual. It's a relationship between a product, a person, a task, and a time.
I've said "we don't know" a lot here. But whether that matters is arguable. The fact that I'm talking about it certainly reflects my bias towards understanding the semantics of the area I am working in. I find that the theory helps to guide my practice, but others are able to get on and do testing work without ever considering that there might be subtleties beyond "breaking the product".
For me, testing is done in a constant state of not knowing. Testing is about shining light into the dark, trying to make the best sense we can of the situation we find ourselves in. We don't know whether what we did is right, we don't know whether what we'll do next will help, we don't know whether the data we gathered is usable, or the conclusions we drew from it acceptable, nor whether the next thing we do will invalidate everything we have done so far.
In order to be a great tester you have to embrace that not knowing. You have to be able to work within uncertainty, without being confident that anything will stand still, taking into account that your lack of knowledge of something might be the key issue. Here's another dirty testing secret for you:
To summarise, then: we don't know what it is we're looking for, we can't tell what we've found, we don't know how we do it, and we can't have confidence in it in any case.
I might say that we don't know what we're doing. But, ironically, if there's one thing we do know, it's that most people would rather not hear that. So let's keep it between ourselves, eh?
Here are my slides:
Photo: Neil Younger