
The Tester as Engineer?



Much of Definition of The Engineering Method by Billy Vaughn Koen chimes with what I've come to believe about testing. 

In part, I think, this is because thinkers who have influenced my thinking were themselves influenced by Koen's thoughts. In part, also, it's because some of my self-learned experience of testing is prefigured in the article. In part, finally, it's because I'm reading it through biased lenses, wanting to find positive analogy to something I care about. 

I recognise that this last one is dangerous. As Richard Feynman said in Surely You're Joking, Mr. Feynman!: "I could find a way of making up an analog with any subject ... I don't consider such analogs meaningful."

This is a short series of posts which will take an aspect of Definition of The Engineering Method that I found interesting and explore why, taking care not to over-analogise.
--00--

In this series so far I've pulled out a couple of Koen's key concepts for attention: sotas and the Rule of Engineering. I find them both aesthetically pleasing and practically applicable. However, they are cast explicitly for engineers and I'm a tester. I wonder whether, by Koen's intention, they'd apply to me? Are testers engineers? Does testing overlap with engineering? If so, where? If not, why not?

The definition of an engineering problem and its derived definition of an engineer might help to judge the answer (p. 42-3):
If you desire change; if this change is to be the best available; if the situation is complex and poorly understood; and if the solution is constrained by limited resources, then you too are in the presence of an engineering problem ... If you cause this change by using the heuristics that you think represent the best available, then you too are an engineer ... the engineer is defined by a heuristic — all engineering is heuristic.
Let's take each of the criteria in turn:

  • change: it's the remit of testers to cause change, in the information state of a project if not directly in any deliverable.
  • best available: in Koen's world, "best" is conditional on context and the participants. It doesn't mean objectively maximal. So I interpret this as doing the perceived most important things in an attempt to uncover the most important information.
  • complex and poorly understood: looked at from an appropriate level of granularity, pretty much everything is complex and contains unknowns.
  • limited resources: there was never a project where the manager said "take all the time you like testing this, I don't care when it ships".
  • use heuristics: I would like to think that testers (consciously) use heuristics in their work.

I'm uncomfortable aligning testing and engineering by this route. If I were prepared to say that an activity is only testing when a problem is complex and poorly understood then I could define testers as people who take on complex and poorly understood problems. Unfortunately, I don't agree with the premise: I think it's possible to test something that is not complex, and I think it's possible to test something that's well understood (to whatever degree is relevant in context). In those circumstances, though, I'd suggest that the chances of provoking a change in the information state are likely to be reduced.

Is Koen saying that engineers can't work on trivial things? Or perhaps that they are not doing engineering when they do?

There's a long-running debate in the testing world about whether testing is a role or a job title. I've mused on it myself over the years and concluded that activities we might agree are testing are not the sole remit of people we might call testers. From #GoTesting:
To get to the desired (bigger picture) quality involves asking the (bigger picture) questions; that is, testing the customer's assumptions, testing the scope of the intended solution - you can think of many others - and indeed testing the need for any (small picture) testing, on this project, at this time.
Whether this is done by someone designated as a tester or not, it is done by a human and, as Rands said this week, I believe these are humans you want in the building. #GoTesting
You can play this the other way too: not everything someone with the role title tester does is necessarily what we might call testing.

I spent some time wondering what to make of this paragraph (p. 51):
We have noted that the engineer is different from other people ... The engineer is more inclined to give an answer when asked and to attempt to solve problems that are [non-trivial, but seem practically possible] ... The engineer is also generally optimistic ... and willing to contribute to a small part of a large project as a team member and receive only anonymous glory.
Although he's careful to caveat most of these attributes ("more inclined", "generally") I am allergic to all-encompassing assertions. With respect to testing, I wrote about it in You Shouldn't be a Tester If ...:
A belief that you should conform to a list of context-free statements about what a tester must be would concern me. I'd ask whether you really have testerly tendencies if you prefer that idea to a pragmatic attitude, to doing the best thing you can think of, for the task in hand, under the constraints that exist at that point.
This, to me, is closely allied with Koen's idea of what engineering is and only serves to enhance the dissonance I feel with his assertions about what an engineer is.

Koen does make role comparisons in his article, in particular the engineer and the scientist. He is not keen on the idea of engineering as applied science, apparently wanting instead to regard science as a tool within engineering (p. 63):
Misunderstanding the art of engineering, [some people] become mesmerised by the admittedly extensive use made of science by engineers and ... identify [science] with engineering [but] the engineer recognizes both science and its use as heuristics.
Tellingly, I don't recall him permitting scientists to use what he might call engineering methods. To me, it is simply not the case that all science proceeds by induction, hypothesis generation, and comparison to some natural state of affairs.

There's a sweet definition of a heuristic "in its purest form" (p. 48) that I thought might be relevant:
it does not guarantee an answer, it competes with other possible values, it reduces the effort needed to obtain a satisfactory answer to a problem and it depends on time and context for its choice.
Scientists conduct thought experiments. What are they if not heuristic by this definition? In fact, what are any experiments if not heuristic — hundreds of factors in any experimental setup and methodology could, unknowingly, invalidate the result. One of the points of pride for committed scientists is that their findings, though valuable for a time, are likely to be shown wrong in some respect by a later scientist.

Koen also compares engineering with systems thinking and notes the crucial role of feedback (p. 56):
The success or failure of the engineer's effort is fed back to modify the heuristics in the engineer's sota
This fits naturally with how I want to view testing. I like the idea of sotas and I really like the idea of overlapping and shared sotas in a given environment. On a project, for example, as we learn more about how the system under development behaves we modify our expectations of it and the way we engage with it. But we also take actions that we desire will provoke other changes. The sotas evolve based on feedback.
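Koen's feedback loop can be caricatured in a few lines of code. This is a toy sketch of my own, not anything from the book: a sota as a collection of heuristics, each with a weight that success or failure adjusts over time.

```python
import random

# Toy model (mine, not Koen's): a sota is a set of heuristics, each
# carrying a weight reflecting how well it has served us so far.
class Sota:
    def __init__(self, heuristics):
        self.weights = {h: 1.0 for h in heuristics}

    def choose(self):
        # "Best available" is contextual; here it's simply a
        # weighted random choice among the current heuristics.
        heuristics = list(self.weights)
        return random.choices(
            heuristics,
            weights=[self.weights[h] for h in heuristics])[0]

    def feed_back(self, heuristic, success):
        # The success or failure of the effort is fed back to
        # modify the heuristics in the sota (p. 56).
        self.weights[heuristic] *= 1.1 if success else 0.9

sota = Sota(["boundary values", "tour the UI", "read the logs"])
h = sota.choose()
sota.feed_back(h, success=True)
```

The heuristic names here are invented examples; the point is only the shape of the loop: choose, act, feed back, repeat.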

A few years ago I had a deep and wide-ranging landslide rush of a conversation with Anders Dinsen that we documented in What We Found Not Looking For Bugs. In trying to characterise what testing does in the abstract, I wrote:
  • Some testing, t, has been performed
  • Before t there was an information state i
  • After t there is an information state j
  • It is never the case that i is equal to j (or, perhaps, if i is equal to j then t was not testing)
  • It is not the case that only t can provide a change from i to j. For example, other simultaneous work on the system under test may contribute to a shared information state.
  • The aim of testing is that j is a better state than i for the relevant people to use as the basis for decision making
... I might propose [an information state is] something like a set of assertions about the state of the world that is relevant to the system under test, with associated confidence scores. I might argue that much of it is tacitly understood by the participants in testing and the consumption of test results. I might argue that there is the potential for different participants to have different views of it - it is a model, after all. I might argue that it is part of the dialogue between the participants to get a mutual understanding of the parts of j that are important to any decisions.
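That proposal can be sketched as a toy data structure. This is entirely my own illustration, not something from the documented conversation: an information state as a mapping from assertions to confidence scores, with some testing t producing a new state j from i.

```python
# Toy illustration (mine): an information state is a set of assertions
# about the world relevant to the system under test, each with an
# associated confidence score in [0, 1].
InformationState = dict  # assertion -> confidence

def some_testing(i: InformationState,
                 findings: InformationState) -> InformationState:
    """Some testing t: merge new findings into state i, giving state j."""
    j = dict(i)          # i itself is left unchanged
    j.update(findings)   # new or revised assertions overwrite old ones
    return j

i = {"login works for valid users": 0.8}
j = some_testing(i, {"login works for valid users": 0.95,
                     "login rejects empty passwords": 0.6})

# If i equals j then, by the proposal above, t was not testing.
assert j != i
```

The assertions and scores are made up; the sketch only shows the claim that testing is a transformation between information states, and that different participants could hold different versions of the same model.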
Casting around for non-heuristic definitions of engineers to contrast his ideas with, Koen explores the possibility of there being a recipe, a set of steps which, if followed, will lead to good engineering. He concludes (p. 62):
... more candid authors admit that engineers cannot simply work their way down a list of steps but must circulate freely within the proposed plan — iterating, backtracking and skipping stages almost at random. Soon structure degenerates into a set of heuristics badly in need of other heuristics to tell what to do when.
Again, this feels like what I do when I'm testing. I wrote about it in Testing All The Way Down, and Other Directions:
It's not uncommon to view testing as a recursive activity ... I feel like I follow that pattern while I'm testing. But ... testing can be done across, and around, and inside and outside, and above and below, and at meta levels of a system ... Sometimes multiple activities feed into another. Sometimes one activity feeds into multiple others. Activities can run in parallel, overlap, be serial. A single activity can have multiple intended or accidental outcomes, ... all the way down, and the other directions.
So, are testers engineers? Frankly I find myself bothered when Koen talks about engineers as a group, and about what they are like and not like. I have the same problem making generalisations about testers or pretty much any other set of people defined by a variable in common. I can't in good faith say that (all) testers are engineers.

But I don't think that matters. There's so much to like and exploit in what Koen writes about engineering methodology. I can see many parallels with the way that I like to think about testing, and the context in which testing takes place.

But, and it's a big but, I find that also with science: the scientific method and the notion of mandated science are a useful tool and a useful lens through which to view my day job. And I also find it with design, and software development, and editing, and detective work, and ...

Again, I don't think that "but" matters. I accept that the engineering method is heuristic and I can say that it's a tool I can, do, and will use in my testing.
