Saturday, May 23, 2020

How to Test Anything

This post is a prettied-up version of the notes I made in advance of my talk, How To Test Anything, at the OnlineTestConf 2020 this week. Here's the abstract:

Sometimes you’re asked to start testing in a context that is not ideal: you’ve only just joined the project, the test environment is broken, the product is migrating to a new stack, the developer has left, no-one seems quite sure what’s being done or why, and there is not much time. 

Knowing where to begin and what to focus on can be difficult and so in this talk I’ll describe how I try to meet that challenge.

I’ll share a definition of testing which helps me to navigate uncertainty across contexts and decide on a starting point. I’ll catalogue tools that I use regularly such as conversation, modelling, and drawing; the rule of three, heuristics, and background knowledge; mission-setting, hypothesis generation, and comparison. I’ll show how they’ve helped me in my testing, and how I iterate over different approaches regularly to focus my testing.

The takeaways from this talk will be a distillation of hard-won, hands-on experience that has given me
    • an expansive, iterative view of testing
    • a comprehensive catalogue of testing tools
    • the confidence to start testing anything from anywhere

How to test anything, then. The title felt gooooood when I proposed it after being invited to speak at the conference, but not so much when I came to write the talk! 

I'm very much not an egotist and the message I want to convey here is not that you should test in the patented, certified, Thomas Way. Rather, I think that there are useful approaches to testing independent of the application and the context, and I want to share the ones that I use and how I use them.

Let's start with a thought experiment.

You are watching a robot and me — a tester — interacting with the same system. Our actions are, to the extent that you can tell, identical and the system is in the same state at each point in the sequence of actions for both of us. The machine and I performed the same actions on the same system with the same visible outcomes.

If I told you I was testing, would you feel comfortable saying that the robot was testing too? I’d have a hard time saying that it was. Harry Collins and Martin Kusch, in The Shape of Actions, reckon that:

Automation of some task becomes tractable at the point where we become indifferent to the details of it.

I'm not bashing automation — I’m a regular user of automation as a tool in testing — but whatever complexity you put into your robot, my instinct is that it's not going to be as flexible as a human could be when encountering a given situation, particularly an unforeseen one. Automation naturally can’t consider all the details and filter out only the ones that seem interesting, given knowledge of the context, in the way that a human can.

For me, testing requires there to be intent, deliberate actions, agency, and responsiveness to observation on the part of the tester. I also have a strong idea of what is required for something to be tested.

In this post, I’ll describe what testing is for me, I’ll list some of the testing tools that I think are useful across contexts, and I'll give a simple heuristic for starting testing when you're stuck.

So what is testing? Arborosa has collected many definitions dating back to the 1950s and I spent some time a couple of years ago looking over them and reflecting on how I like to work, before coming up with a definition that works for me:

Testing is the pursuit of relevant incongruity.

Boom! It’s a mouthful, but I can unpack it.

Incongruity: Oxford Dictionaries define this as "not in harmony or keeping with the surroundings". I interpret lack of harmony as a potential problem and lack of keeping as an actual problem, and those two interpretations are interesting and useful in testing.

Pursuit: Again, there are two senses that capture important aspects of testing for me. You can pursue something that you don't know is there and that you may never find, like a dream. Or you can pursue the solution to a problem that you know you have, that's right in front of you. 

Relevant: if this work doesn't matter to anyone, why are we doing it? Whoever that is can help us to understand whether any incongruities we identify are valuable to them, relevant to the project.

Which is great, but so what? Well, I can use it as a yardstick to gauge my activity against: I might want to be testing but realise I’m doing something else; I might be happy to choose to do stuff that needs doing but isn’t testing.

Testing doesn't proceed in a linear fashion for me, either. I will typically choose to do something, get data from it, review that data, and then decide what to do next in a cycle.

To help me to test, I use tools. I've listed some of them here:

And what is a tool? For me it's simply a thing used to help perform a job and I've thought a lot about tools. (Take Your Pick: Part 1, Part 2, Part 3, Part 4.)

I have a toolbox that I carry with me, and I've taken care to become familiar with the tools so that I can reach for a tool that looks appropriate when I need it. My shed is organised the same way:

I like also to have a cache of stuff that isn't tools I'm familiar with and skilled at using, but which might come in handy, like this box of bits I've emptied out onto my bench. Sometimes the shape of the problem in front of you doesn't fit the shape of any of your tools, but there may be something in the box that can be offered up to it.

Perhaps I've used Selenium and have a grasp of its workings, its pros, and its cons. That's a tool and it's on my shelf. Let's say I've seen a webinar about Cypress and talked to a couple of team members who have experimented with it. That's in my box. If I see a problem that is similar to one I might use Selenium for, but isn't quite the right shape, I might reach for Cypress.

It's also important to practise with your tools. Learn when they apply well and when they don't. This tunes your intuition about when they'll be helpful or when they're actively working against your need. It also helps you to keep up to date and skilled with them.

Here are a few of the tools I use all the time:
The best testers will be layering their activities. They’ll have a mission in mind but will be consciously trying to approach it in a way that gives them the chance to uncover other things. For example, they might be able to think of three ways to check some functionality and they’ll choose the one that exposes them to a bit of the product they haven’t seen much of, or has recently changed; maybe they’ll see a usability issue, or a performance problem that way.

The skilled tester might leave environments around when they’re finished with them so that some later testing can be done in a dirty environment, not in something that has been set up just for the test. Sometimes just coming back to a system that has been running by itself for a few days can show a problem.

I urge you to do this kind of conscious, intentful testing! Of course, a prerequisite for that is starting, and sometimes it's not easy.

Yes, it can be challenging because you don't want to make a mistake, to look foolish in front of new team mates, or set the project off down the wrong path. But I have a helpful heuristic:

You don’t necessarily need to wait for the requirements, or stability, or even a build of the application under test to start testing. Begin where you are!

Some factors that can help you to understand where you are:
  • constraints: budget, resources, time, ...
  • context: what is this product for, who is it for, what do they want to do, ...
  • value: who are your stakeholders, what are they looking for from you, ...
Choosing what to do next to deliver value to the project is setting your mission and I like to frame my missions using this slight variant of Elisabeth Hendrickson's charter template:

On a recent project I joined, I thought that the biggest challenge to customer and business value was (the way I saw it) disagreement amongst three stakeholders. In this case, I wrote a 2-page product description that crystallised what I thought we were building and importantly what I thought we were not building. When this was put in front of the team, and the stakeholders, we were able to have a conversation that squeezed out the differences.

You might reasonably ask whether I was testing. I think that in the main I was, yes. I was pursuing relevant incongruity.

I said three key things were needed for testing but in fact there's a fourth: something to test. I realised while I was writing the talk that I've encapsulated pretty much everything I've said so far in a page on my team's wiki. I pair with someone from my team every week. As a manager, ad hoc pairing is tricky for me to set up, but a recurring calendar appointment works. So I came up with some guidelines to help others help me to be involved:

I like to set the mission so that we are intentional and I like to reflect so that we have a chance to learn and change, but the key thing here is that I'm happy to start anywhere on anything completely from cold. I'm confident that I can bring something to the party wherever, whenever, and whatever that party is.  

So that's how I test anything: I have an idea what testing means for me, I find and practise with tools that help me to achieve it, and I'm not afraid to start from where I am and iterate.

Here are the full slides:

Wednesday, May 20, 2020

Down With OTC

After I'd done my talk at OnlineTestConf yesterday I stuck around to watch Conor Fitzgerald and Lena Wiberg speak. It's been a while since I practised my sketchnoting so I thought I'd give it a go again and I enjoyed borrowing my kids' colouring pencils rather than just using the biros I happen to have in my bag like I usually do. 

You can't tell, of course, but for some reason these particular pencils were scented (!) so if you can imagine cinnamon, cola, grape, and raspberry as you're looking at this post you'll get the more authentic experience.

Conor Fitzgerald presented an evolution of the talk he gave at SoftTest Ireland a couple of years ago. Back then it was a catalogue of tools for working as a tester, curated from an exploration of other industries. In this iteration it focused primarily on learnings from the aviation industry, gave examples of healthcare practitioners applying them, and made suggestions for how we can do the same. The Checklist Manifesto and Black Box Thinking both feature heavily and I can recommend them both too. (Slides)

Lena Wiberg is a fellow member of the board over at the Association For Software Testing, and really knows her onions. In this case, the situation bringing her to tears was a new job with a set of persistently failing tests and a team that seemed resigned to the fact that that was just how things were. The daily grind of arriving at work to investigate test failures from overnight runs and then go home seemed to be expected and no attempt was made to step back and take a broader perspective. 

Enter Lena! In her talk she stepped through a sequence of data gathering, analysis, review of the stakeholders of the results of these suites, and the generation of strategies for making the tests more reliable to run and more likely to deliver the desired data. 

Wednesday, May 13, 2020

Is it Good Enough?

The other day, the Ministry of Testing tweeted this:

Great question from Cassandra:  "Could you share any tips on how to let go of that idea of personal perfection, when part of our job as testers is to aim for perfection?".  Is this something you have advice on?

I've certainly had this conversation with testers in the past but I've had it with teams from other disciplines that I've managed too. This was the answer I gave to the tweet:

Reframe "success" from being the pursuit of perfection to something like getting a good solution at the right time for a reasonable cost.  

"Good", "right", and "reasonable" are context-dependent and stakeholders should be able to guide the team on what they mean and when.

The more general version is that I've found that people with skills in a particular area can tend to feel compromised if they haven't utilised their skills to the fullest possible extent on a piece of work. This is especially true if they align their role with those skills, such as a tester and testing, a technical author and writing, or a developer and the production of code.

I have been here myself: I might not be walking around with a bag of chisels, covered in sawdust but I can be precious about what I see as a craft, as my craft.

The more general reframing that I have found helpful is this: 

A working definition of a good job is the one that helps the project to achieve its goals. Your perfect result is unhelpful if it delays the project so that a market opportunity is missed, or if its pursuit puts you under immense pressure and unbearable stress.

Try to see your role as exercising your skills, your judgement, and, yes, your craft to facilitate the successful outcome of the project, given what you know, at this time. How can you find the right compromise given all the constraints in play? How can your expertise and experience find a productive path through the space of all possible options to a reasonable outcome without breaking the bank? 

Having said that, you are part of the context and I recommend that you try to find, in every project, something that is satisfying for you. Sometimes it'll be using a new tool, or working in a new area, or collaborating with someone you've never worked with before. Sometimes it'll be inventing an approach which cuts corners in the right places, or doing analysis to inform the appropriate compromise, or proposing alternative ideas to the stakeholders that you believe achieves their aims in a different way.

"Perfect is the enemy of good," they say. "Perfect is the enemy of good enough" is more where I'm coming from. That's not to say I'm looking to do the bare minimum, more that the pleasure of the role, the game if you like, is in using our knowledge and craft to understand what standard is required by the relevant set of people this time around and then achieving it at the right cost.

Thursday, April 23, 2020

In Test State

The State of Testing report for 2020 has just been released. I continue to support the collection and publication of this kind of analysis because it can help us to see ourselves from a different perspective and identify changes to our work and context over time. 

This time around, the authors comment:
We are seeing many indications reinforcing the increasing collaboration of test and dev, showing how the lines between our teams are getting blurrier with time.

We are also seeing how the responsibility of testers is expanding, and the additional tasks that are being required from us in different areas of the team's tasks and challenges. We were also able to see some low level changes and expansions in the technologies tested, the technologies used to test, and the technologies that may be relevant in the future.
I'd love to see more longer-term trend data published and, while I doubt it's easy to generate, something about approaches and outcomes similar to State of DevOps surveys that led to the Accelerate book.

Monday, April 13, 2020

Unsung Love Song

Back when I had hair and an East German army surplus jacket and carried a record bag with me everywhere, we wrote a fanzine and released records together. Our first contact was twenty years ago this year. Twenty years since he sent me a copy of his debut 7" single, the first release on his Kitchen Records label. Twenty years since I gushed this review onto the poorly-photocopied pages of my zine, Robots and Electronic Brains:
The Fabulous Nobody, Love and the City (Kitchen) 7"
Some days I just feel like I'm getting old and other days I know it's so. Given the kick I'm getting out of the three cuts on this limited-edition 7", today must be one of the latter. Three dreamsongs of naive romance for the big city lights that could've been written for a 1930s stage play and revived for a 1940s screen adaptation starring Fred Astaire who'd do a slow soft shoe routine to the whistle solo and lean against a lamppost smoking a fag for the rest.
I asked to interview him, and he agreed — but only by letter. Those were simpler times but even then paper was an oddly retro choice given that Mr. Watson had been summoned by Alexander Graham Bell over a hundred years before. Oddly retro, perhaps, but fitting.

We established a rapport with the written word and then eventually talked on the telephone too, creating a bond that led to him becoming my partner at Robots and Electronic Brains and instrumental in the production of the vinyl and CDs we gave away with it over the years. He is godfather to my daughters. He is my friend.

He is Laurence Dillon and he's written a book, Unsung Love Song published by Zuleika, about his life as a eunuch.

Laurence's testicles were surgically removed when he was still a young man, after a cancer diagnosis. Agonisingly, the tiny shred of self-respect he possessed was excised at the same time. The book, a collection of quick-shot thoughts and potted essays, gives us an insight into the person he was before the trauma of his operations, the numerous and varied pains he's suffered since, and the ways in which he has tried to come to terms with both.

His writing is laden with melancholy which, despite the low regard in which he has clearly always held himself, cannot hide the essential goodness within. He opens the gates to his head wide, inviting the reader to leaf through the hinterlands of his mind. Now describing the vicarious joy of seeing two lovers hold hands or share a brief kiss, now forgiving anyone who has ever done him ill, now reflecting on the depths of his hatred for himself and his weakness for being the kind of person who lets hate into their life that way, now theorising that had he been brave enough to express his emotions earlier he could have fashioned a different outcome for himself.

Regular punctuation is provided by descriptions of eunuchs from history. Occasionally one will gain riches or power but any victory is typically Pyrrhic and the overriding sense is that societies down the ages have viewed eunuchs as people to abuse and denigrate. To castrate a man is to dominate him, to deprive him of whatever his society deems manhood to be, and to replace it with some kind of gender limbo. The parallel to Laurence's own stories is sharp and not much blunted by the fact that his castration was thought to be medically necessary.

Less regularly, there are stories of good citizens who frequent suicide hot spots in the hope of dissuading those clutching their ticket to oblivion from using it. It's shocking if not surprising when Laurence talks about his own self-destructive thoughts and actions, and saddening if not surprising to find that he feels he must be lacking in some way for not being able to step off the ledge when he finds himself standing at it.

Laurence is my friend, yet he mentioned almost nothing from this book to me for almost the entire time that we've known each other. It breaks my heart to know that he was living in torment, consciously suppressing his feelings with make-busy displacement activities such as running sporting clubs, and trying to fill the black emotional hole at his centre with escorts, phone sex lines, and pornography. His words are raw and true and depressing and it seems impossible not to feel enormous empathy for him, although to add another layer of awfulness to his situation he describes how he was taunted and abused by others after his operation.

What prevents the book descending into maudlin navel-gazing and self-pity is the strength Laurence shows, although doesn't credit himself with, in being able to see that energy spent that way is energy wasted. It may feel cosy, he says, being coddled by a thick blanket of resentment and spite, but those who indulge should be aware that the comfort eventually turns to restriction and then to suffocation, and eventually to the death of an outside perspective.

It's that perspective that he has somehow found his way into and, near the end of the book, he asks the reader to promise him something:
I very much wish that you will have a better life than I did, that you will not allow yourself to be a loser as I was ... I hope that I have shown you what not to do, and that you have learnt something from this sad, old eunuch. Promise me that you will not waste the opportunities for happiness that this world offers ... Please take good care of yourself.
If you're wondering what relevance Unsung Love Song has to software development, it's right there in that quote. Software is made by and for people and, while we might be fortunate not to be in such desperate straits as Laurence, for the sake of us all we should act with empathy, have internal respect, reflect on our choices and feelings, be assertive about our needs, and, yes, take good care of ourselves.
Image: Amazon

P.S. Laurence wrote a short piece for The Guardian's My Life In Sex column in 2018: The Eunuch.

Saturday, April 4, 2020

The Tester as Engineer?

Much of Definition of The Engineering Method by Billy Vaughn Koen chimes with what I've come to believe about testing. 

In part, I think, this is because thinkers who have influenced my thinking were themselves influenced by Koen's thoughts. In part, also, it's because some of my self-learned experience of testing is prefigured in the article. In part, finally, it's because I'm reading it through biased lenses, wanting to find positive analogy to something I care about. 

I recognise that this last one is dangerous. As Richard Feynman said in Surely You're Joking, Mr. Feynman!: "I could find a way of making up an analog with any subject ... I don’t consider such analogs meaningful.” 

This is a short series of posts which will take an aspect of Definition of The Engineering Method that I found interesting and explore why, taking care not to over-analogise.

In this series so far I've pulled out a couple of Koen's key concepts for attention: sotas and the Rule of Engineering. I find them both aesthetically pleasing and practically useful. However, they are cast explicitly for engineers and I'm a tester. I wonder whether, by Koen's intention, they'd apply to me? Are testers engineers? Does testing overlap with engineering? If so, where? If not, why not?

The definition of an engineering problem and its derived definition of an engineer might help to judge the answer (p. 42-3):
If you desire change; if this change is to be the best available; if the situation is complex and poorly understood; and if the solution is constrained by limited resources, then you too are in the presence of an engineering problem ... If you cause this change by using the heuristics that you think represent the best available, then you too are an engineer ... the engineer is defined by a heuristic — all engineering is heuristic.
Let's take each of the criteria in turn:

  • change: it's the remit of testers to cause change, in the information state of a project if not directly in any deliverable.
  • best available: in Koen's world, "best" is conditional on context and the participants. It doesn't mean objectively maximal. So I interpret this as doing the perceived most important things in an attempt to uncover the most important information.
  • complex and poorly understood: looked at from an appropriate level of granularity, pretty much everything is complex and contains unknowns.
  • limited resources: there was never a project where the manager said "take all the time you like testing this, I don't care when it ships".
  • use heuristics: I would like to think that testers (consciously) use heuristics in their work.

I'm uncomfortable aligning testing and engineering by this route. If I was prepared to say that an activity is only testing when a problem is complex and poorly understood then I could define testers as people who take on complex and poorly understood problems. Unfortunately, I don't agree with the premise: I think it's possible to test something that is not complex, and I think it's possible to test something that's well understood (to whatever degree is relevant in context). In those circumstances, though, I'd suggest that the chances of provoking a change in the information state are likely to be reduced.

Is Koen saying that engineers can't work on trivial things? Or perhaps that they are not doing engineering when they do?

There's a long-running debate in the testing world about whether testing is a role or a job title. I've mused on it myself over the years and concluded that activities we might agree are testing are not the sole remit of people we might call testers. From #GoTesting:
To get to the desired (bigger picture) quality involves asking the (bigger picture) questions; that is, testing the customer's assumptions, testing the scope of the intended solution - you can think of many others - and indeed testing the need for any (small picture) testing, on this project, at this time.
Whether this is done by someone designated as a tester or not, it is done by a human and, as Rands said this week, I believe these are humans you want in the building. #GoTesting
You can play this the other way too: not everything someone with the role title tester does is necessarily what we might call testing.

I spent some time wondering what to make of this paragraph (p. 51):
We have noted that the engineer is different from other people ... The engineer is more inclined to give an answer when asked and to attempt to solve problems that are [non-trivial, but seem practically possible] ... The engineer is also generally optimistic ... and willing to contribute to a small part of a large project as a team member and receive only anonymous glory.
Although he's careful to caveat most of these attributes ("more inclined", "generally") I am allergic to all-encompassing assertions. With respect to testing, I wrote about it in You Shouldn't be a Tester If ...:
A belief that you should conform to a list of context-free statements about what a tester must be would concern me. I'd ask whether you really have testerly tendencies if you prefer that idea to a pragmatic attitude, to doing the best thing you can think of, for the task in hand, under the constraints that exist at that point.
This, to me, is closely allied with Koen's idea of what engineering is and only serves to enhance the dissonance I feel with his assertions about what an engineer is.

Koen does make role comparisons in his article, in particular the engineer and the scientist. He is not keen on the idea of engineering as applied science, apparently wanting instead to regard science as a tool within engineering (p. 63):
Misunderstanding the art of engineering, [some people] become mesmerised by the admittedly extensive use made of science by engineers and ... identify [science] with engineering [but] the engineer recognizes both science and its use as heuristics.
Tellingly, I don't recall him permitting scientists to use what he might call engineering methods. To me, it is simply not the case that all science proceeds by induction, hypothesis generation, and comparison to some natural state of affairs.

There's a sweet definition of a heuristic "in its purest form" (p. 48) that I thought might be relevant:
it does not guarantee an answer, it competes with other possible values, it reduces the effort needed to obtain a satisfactory answer to a problem and it depends on time and context for its choice.
Scientists conduct thought experiments. What are they if not heuristic by this definition? In fact, what are any experiments if not heuristic — hundreds of factors in any experimental setup and methodology could, unknowingly, invalidate the result. One of the points of pride for committed scientists is that their findings, though valuable for a time, are likely to be shown wrong in some respect by a later scientist.

Koen also compares engineering with systems thinking and notes the crucial role of feedback (p. 56):
The success or failure of the engineer's effort is fed back to modify the heuristics in the engineer's sota
This seems natural to how I want to view testing. I like the idea of sotas and I really like the idea of overlapping and shared sotas in a given environment. On a project, for example, as we learn more about how the system under development behaves we modify our expectations of it and the way we engage with it. But we also take actions that we desire will provoke other changes. The sotas evolve based on feedback.

A few years ago I had a deep and wide-ranging landslide rush of a conversation with Anders Dinsen that we documented in What We Found Not Looking For Bugs. In trying to characterise what testing does in the abstract, I wrote:
  • Some testing, t, has been performed
  • Before t there was an information state i
  • After t there is an information state j
  • It is never the case that i is equal to j (or, perhaps, if i is equal to j then t was not testing)
  • It is not the case that only t can provide a change from i to j. For example, other simultaneous work on the system under test may contribute to a shared information state.
  • The aim of testing is that j is a better state than i for the relevant people to use as the basis for decision making
... I might propose [an information state is] something like a set of assertions about the state of the world that is relevant to the system under test, with associated confidence scores. I might argue that much of it is tacitly understood by the participants in testing and the consumption of test results. I might argue that there is the potential for different participants to have different views of it - it is a model, after all. I might argue that it is part of the dialogue between the participants to get a mutual understanding of the parts of j that are important to any decisions.
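The information-state idea above can be made concrete with a toy model. This is my own illustrative sketch, not anything from the original conversation: the assertions, the confidence numbers, and the class and method names are all invented for the example.

```python
# A toy model of an information state: a set of assertions about the
# world relevant to the system under test, each with a confidence score
# in [0, 1]. Testing t takes state i to state j by adding assertions or
# revising confidences. All specifics here are hypothetical.

from dataclasses import dataclass, field

@dataclass
class InfoState:
    assertions: dict = field(default_factory=dict)  # claim -> confidence

    def after_testing(self, observations):
        """Return a new state j incorporating observations from some testing t."""
        j = InfoState(dict(self.assertions))  # copy, so i is unchanged
        j.assertions.update(observations)
        return j

# Before t: state i, with one weakly-held claim.
i = InfoState({"export handles unicode": 0.5})

# After t: state j, with a revised confidence and a new claim.
j = i.after_testing({"export handles unicode": 0.9,
                     "export fails on empty file": 0.8})

assert i.assertions != j.assertions  # t changed the information state
```

The model is deliberately crude, but it captures the shape of the claim: if i equals j then, on this view, t wasn't testing.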
Casting around for non-heuristic definitions of engineers to contrast his ideas with, Koen explores the possibility of there being a recipe, a set of steps which, if followed, will lead to good engineering. He concludes (p. 62):
... more candid authors admit that engineers cannot simply work their way down a list of steps but must circulate freely within the proposed plan — iterating, backtracking and skipping stages almost at random. Soon structure degenerates into a set of heuristics badly in need of other heuristics to tell what to do when.
Again, this feels like what I do when I'm testing. I wrote about it in Testing All The Way Down, and Other Directions:
It's not uncommon to view testing as a recursive activity ... I feel like I follow that pattern while I'm testing. But ... testing can be done across, and around, and inside and outside, and above and below, and at meta levels of a system ... Sometimes multiple activities feed into another. Sometimes one activity feeds into multiple others. Activities can run in parallel, overlap, be serial. A single activity can have multiple intended or accidental outcomes, ... all the way down, and the other directions.
So, are testers engineers? Frankly I find myself bothered when Koen talks about engineers as a group, and about what they are like and not like. I have the same problem making generalisations about testers or pretty much any other set of people defined by a variable in common. I can't in good faith say that (all) testers are engineers.

But I don't think that matters. There's so much to like and exploit in what Koen writes about engineering methodology. I can see many parallels with the way that I like to think about testing, and the context in which testing takes place.

But, and it's a big but, I find that also with science: the scientific method and the notion of mandated science are a useful tool and a useful lens through which to view my day job. And I also find it with design, and software development, and editing, and detective work, and ...

Again, I don't think that "but" matters. I accept that the engineering method is heuristic and I can say that it's a tool I can, do, and will use in my testing.

Tuesday, March 31, 2020

Meta is Better

Much of Definition of The Engineering Method by Billy Vaughn Koen chimes with what I've come to believe about testing. 

In part, I think, this is because thinkers who have influenced my thinking were themselves influenced by Koen's thoughts. In part, also, it's because some of my self-learned experience of testing is prefigured in the article. In part, finally, it's because I'm reading it through biased lenses, wanting to find positive analogy to something I care about. 

I recognise that this last one is dangerous. As Richard Feynman said in Surely You're Joking, Mr. Feynman!: "I could find a way of making up an analog with any subject ... I don't consider such analogs meaningful."

This is a short series of posts which will take an aspect of Definition of The Engineering Method that I found interesting and explore why, taking care not to over-analogise.

It is Koen's contention that engineering is applied heuristics. He takes the term sota to be the set of heuristics known by an individual or a group at a specific time (see Sota so Good) and expects that an outcome will be motivated by heuristics from the sotas of the parties involved in the problem definition and solution.

Having determined that, he looks for a general rule for engineering (p. 41):
Since every specific implementation of the engineering method is completely defined by the heuristic it uses, [the quest for a rule to implement the engineering method] is reduced to finding a heuristic that will tell the individual engineer what to do and when to do it.
For those of you worrying quietly at the back, by this point Koen has acknowledged that heuristics are fallible rules of thumb. However, I'm worrying with you when wondering quite what it means for an implementation to be defined by a heuristic. My current interpretation is something like this: "choosing and applying a relevant engineering heuristic is an instance of the engineering method (for Koen)".

I am categorically not worrying about where he goes next, though. Just look at this (p. 42):
My Rule of Engineering is in every instance to choose the heuristic to use from what my personal sota takes to be the sota representing the best engineering practice at the time I am required to choose.

I covered "best practice" in Sota so Good so I'll ignore it here and move on to what I love about this rule:

  • it reminds the engineer to check their personal biases
  • it admits time (and so broader context) to the equation
  • it shows us that we can use heuristics to choose a suitable heuristic
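The idea that a heuristic can itself choose between heuristics can be sketched in code. This is purely my own toy illustration, not anything Koen writes: the `Heuristic` class, the scoring functions, and the contexts are all invented for the example, and real sotas are of course not reducible to numeric scores.

```python
# Toy sketch (my invention, not Koen's): a meta-heuristic that selects
# which heuristic to apply, given a personal "sota" and a context.

class Heuristic:
    def __init__(self, name, score_fn):
        self.name = name
        self.score_fn = score_fn  # the sota's (fallible) judgement of fitness

    def fitness(self, context):
        return self.score_fn(context)


def rule_of_engineering(personal_sota, context):
    """Choose the heuristic my sota rates as best practice right now.

    Note the circularity Koen embraces: this choice is itself a
    heuristic, so it can be wrong.
    """
    return max(personal_sota, key=lambda h: h.fitness(context))


# A tiny, made-up sota of two testing heuristics.
sota = [
    Heuristic("explore first", lambda ctx: 0.9 if ctx["unknowns"] else 0.2),
    Heuristic("automate checks", lambda ctx: 0.8 if ctx["repetitive"] else 0.3),
]

chosen = rule_of_engineering(sota, {"unknowns": True, "repetitive": False})
print(chosen.name)  # under this context, "explore first" scores highest
```

The point of the sketch is only the shape of the rule: the decision procedure lives one level above the individual heuristics, and its answer depends on both the sota and the moment of choosing.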

Koen stretches the concept further by saying that the intersection of the sotas of all engineers across all times will contain only one heuristic (p. 42):
The Rule of Engineering is: Do what you think represents best practice at the time you must decide, and only this rule must be present.
Worriers, if you're concerned that this has a whiff of the No True Scotsman about it, then so am I: is the heuristic defined by being in the sota of all engineers, or is the set of engineers defined by their having this heuristic in their sota?

I'm prepared to put this quibble to one side, though. I find Koen's formulation beautiful. A heuristic which helps us decide which approach to take is a useful step back from any situation and implicitly admits context into its interpretation and application.

When defining testing for myself I strove to achieve that kind of generality, to pack layers of nuance into a simple principle, to provide tools for decision-making at the point of use. This is what I came up with:
Testing is the pursuit of relevant incongruity.
I like it and find it valuable, but still have concerns that the terms I've used hinder easy interpretation by others. I have always admired how Jerry Weinberg managed to turn that kind of trick so well. These three lines have been part of my daily life for years:

  • A problem is the difference between what is perceived and what is desired.
  • Quality is value to some person.
  • Things are the way they are because they got that way.

I think Koen's Rule of Engineering might be joining them.