Reasonable Doubt


In Your job is to deliver code you have proven to work, Simon Willison writes:

As software engineers we ... need to deliver code that works — and we need to include proof that it works as well. 

He is coming at this from the perspective of LLM-assisted coding, but most of what he says applies in general. I think this is a reasonably concise summary of his requirements for developers:

  • Manual happy paths: get the system into an initial state, exercise the code, check that it has the desired effect on the state.
  • Manual edge cases: no advice given, just a note that skill here is a sign of a senior engineer. 
  • Automated tests: should demonstrate the change like Manual happy paths but also fail if the change is reverted. 
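The third point can be sketched as a minimal automated test following the happy-path recipe: arrange an initial state, exercise the code, check the desired effect on the state. The Basket class here is hypothetical, standing in for any code under test.

```python
# Minimal sketch of the happy-path recipe as an automated test.
# Basket is a hypothetical example, not from the original post.

class Basket:
    def __init__(self):
        self.items = []

    def add(self, item):
        # The change under test: adding an item stores it in the basket.
        self.items.append(item)


def test_add_item_happy_path():
    basket = Basket()                  # get the system into an initial state
    basket.add("apple")                # exercise the code
    assert basket.items == ["apple"]   # check the desired effect on the state


test_add_item_happy_path()
```

Note that the assertion fails if `add()` is reverted to a no-op, which is exactly the property the third bullet asks for.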

He notes that, even though LLM tooling can write automated tests, it's humans who are accountable for the code and it's on us to "include evidence that it works as it should."

Coincidentally, just the week before I read his post I told one of my colleagues that I love her testing. Her PRs often come with screen recordings, test notes, and automated tests and I have confidence that she will have thought of and looked at the obvious stuff.

But does this prove that her code works? 

Well, no, not really.

--00-- 

Informally I think testing explores how the code CAN work, but not that it always DOES. More formally (and I'm not a logician so still with some hand-waving) it's analogous to proof by induction rather than proof by deduction.

Under induction, a general conclusion is drawn from specific data. Famously, black swan events are failures of inductive reasoning but the strength of it in any given scenario depends on what cases form the sample set, how well the context is understood, and whether the results were correctly interpreted.

Under deduction, axiomatic rules are used to derive certainty that a given claim is true: for example, that the code is correct in every possible scenario. This is plausible in maths, but much harder in the messier world of software development, although type systems do exploit it in a limited way and there is research and tooling in formal methods that aims to apply it to whole programs.

--00-- 

Manual happy paths takes no account of side effects. For sure we want the desired change to happen, but we also don't want undesirable changes alongside it. Simple confirmatory tests have a narrow focus which blinds them to black swans.

For any non-trivial code the state space is effectively infinite, so applying testing effort efficiently and effectively to cover the important parts is crucial. This might mean, in different scenarios, for example:

  • carefully researching end-user needs and behaviours in order to choose an appropriate sample set.
  • exhaustively testing a function because it is on the critical path. 
  • deciding not to test X, because risks there are already well understood, and putting the available time into Y instead.

Manual edge cases offers no suggestions to help developers go beyond the happy path. So here are a handful of generic tips.

Ask yourself questions such as: what would wrong look like? Under what circumstances might that happen? How could I set that up? How could I easily identify that it had gone wrong?

Don't test the same path every time you run the code. Deliberately give different inputs, especially if you think the input shouldn't matter. Over time you will find things this way.

Do write unit tests that clearly separate data from test machinery, whose overall coverage can be gauged by reading the tests, and whose intent is clear. This will help others to understand what's being tested and so where there might still be risk worth reviewing.

Property testing is an interesting tool for straddling CAN vs DOES. By defining the space of valid inputs and outputs along with properties that must hold whichever input is chosen, and then running multiple times, it attempts to broaden the inductive impact. If the inputs are available afterwards, they can also be assessed for coverage, and inspire further testing. This is an approach I use regularly myself in regression tests and in the model-based walkers I've built.
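Libraries such as Hypothesis do this properly, with shrinking and input reporting; as a hand-rolled sketch of the idea (the sort wrapper and its properties are chosen purely for illustration):

```python
import random
from collections import Counter


def my_sort(xs):
    # Hypothetical code under test: any sort implementation would do.
    return sorted(xs)


def test_sort_properties(runs=200, seed=42):
    rng = random.Random(seed)
    for _ in range(runs):
        # The space of valid inputs: lists of small integers.
        xs = [rng.randint(-100, 100) for _ in range(rng.randint(0, 30))]
        out = my_sort(xs)
        # Properties that must hold whichever input is chosen:
        assert all(a <= b for a, b in zip(out, out[1:]))  # output is ordered
        assert Counter(out) == Counter(xs)                # a permutation of the input
        # Logging xs here would let coverage be assessed afterwards.


test_sort_properties()
```

Each run draws a fresh input, so repeated runs broaden the inductive sample set rather than confirming the same path again.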

The Test Heuristics Cheat Sheet is a concise reminder of numerous things that might be worth checking around inputs, outputs, variable types, execution environments, and so on. 

--00-- 

A significant skill in the art of testing is choosing how, when, and where to invest your time so that, even if you can't prove correctness, you can at least remove reasonable doubt.
Image: Google Gemini 

P.S. Simon's free weekly newsletter is a treasure trove fire hose of notes, insights, and experiments around software and, in a wonderfully Pascalian move, you can sponsor him to get a monthly short version.

P.P.S. This is the sequence of prompts I used to create the image at the top. 

  1. How about making a rubber-stamp image of "QED?" I want it on a transparent background. 
  2. No, I want the question mark on the stamp as well. 
  3. Good. Now make it green.
  4. This doesn't have a transparent background. You've just made a grid on the image. 

Naturally, I got another non-transparent image with a wonky grid at this point, but it made me smile: how far should I go to check that what was produced is what I asked for? On a simple happy path check it looked OK.
