I've been enjoying The Vernon Richard Show a lot recently because the vibe Vern and Richard have created is one where two knowledgeable and experienced mates talk around a topic they are both interested in with curiosity and open minds.
Also, there's less football than there used to be.
This week's show is titled When Everything Sounds Like Testing… How Do You Explain What You Really Do? and sees the pair discussing the meanings of quality assurance, quality engineering, testing and other related terms.
There are no firm conclusions, and enough left unsaid that they're carrying the dialogue over into the next episode, but I was still interested to see these two models put forward:
In the first, proposed by Vernon, quality engineering consists of preventing bugs (QA) and detecting bugs (testing). In the second, floated by Richard almost in spite of his better judgement, quality assurance is the holistic term. In this version there's a journey which begins with testing a product, shifts left into quality engineering, and develops metrics that tell us whether "quality" is "assured."
I think that searching for the one true definition of concepts like these is unlikely to be successful. Language is like water: its path today is shaped by the environment and its movement in turn progressively shapes the environment, which then reshapes the path. A classic feedback loop and so a classic case for systems thinking rather than prescriptivism.
In a different time and place, or for different people, the same terms can always mean different things and the way we use them influences the way we and others use them in future. But even if we could freeze the flow and remove the possibility of change, it's still the case that categories are notoriously hard to define precisely because at the edges concepts are fuzzy.
To give just one example: the perspective we take on a task can alter our categorisation. A developer spends a day working on a feature. At a coarse granularity this was a day of development, but zoom in and we find that the morning was spent co-ordinating with stakeholders to check requirements and then thinking, and for most of the afternoon she was checking what turned out to be a one-line code change in a variety of important scenarios. Now what was the work? What can development include? What if we did the same kind of work outside of a "development task"?
Here's an alternative model of QA, QE, and testing that roughly sketches that thought:
In spite of any objections I might have, I do love this kind of conversation because it encourages us to think about our work, how we do it, and how we communicate both of those things. It reminds us that others may have a different perspective, that our own view can change, and that it may even already have changed without us noticing.
Knowing that others draw the boundaries of our work differently should be a warning to check for shared understanding at the level of detail that is important at any given time. Stepping away from the terminology can be a good thing at that point. Instead of "I'll test that" you might say "I'll check for X, Y, and Z."
This is one of the reasons that context-driven testing appeals to me: I want to take account of where I am, with whom, and under what conditions in order to communicate and contribute effectively.
I also don't feel constrained by my job title to only do particular kinds of work. I would self-identify as a tester but frequently do work that is not testing in order to move things forward. It's also why, when I came up with my definition of testing, I was clear that it's my definition for me.
All of which means that I'm interested to hear where the next episode of the podcast goes, but also that when I'm asked if I QA or QE I'll still respond Q what, mate?
Image: Wikimedia via Pressbooks