In "Your job is to deliver code you have proven to work", Simon Willison writes:
As software engineers we ... need to deliver code that works — and we need to include proof that it works as well.
He is coming at this from the perspective of LLM-assisted coding, but most of what he says applies in general. I think this is a reasonable, concise summary of his requirements for developers:
- Manual happy paths: get the system into an initial state, exercise the code, check that it has the desired effect on the state.
- Manual edge cases: no advice given, just a note that skill here is a sign of a senior engineer.
- Automated tests: should demonstrate the change, as the manual happy paths do, but also fail if the change is reverted (a minimal sketch follows this list).
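For example, here is a minimal sketch in Python of a test that demonstrates a change and fails on revert; the function and its fix are hypothetical, invented for illustration:

```python
def average(numbers: list[float]) -> float:
    # The change under test: return 0.0 for an empty list instead of
    # letting sum()/len() raise ZeroDivisionError.
    if not numbers:
        return 0.0
    return sum(numbers) / len(numbers)

def test_average_of_empty_list_is_zero():
    # Demonstrates the change; reverting the fix reintroduces
    # ZeroDivisionError and this test fails.
    assert average([]) == 0.0
```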
He notes that, even though LLM tooling can write automated tests, it's humans who are accountable for the code and it's on us to "include evidence that it works as it should."
Coincidentally, just the week before I read his post I told one of my colleagues that I love her testing. Her PRs often come with screen recordings, test notes, and automated tests and I have confidence that she will have thought of and looked at the obvious stuff.
But does this prove that her code works?
Well, no, not really.
--00--
Informally I think testing explores how the code CAN work, but not that it always DOES. More formally (and I'm not a logician so still with some hand-waving) it's analogous to proof by induction rather than proof by deduction.
Under induction, a general conclusion is drawn from specific data. Famously, black swan events are failures of inductive reasoning but the strength of it in any given scenario depends on what cases form the sample set, how well the context is understood, and whether the results were correctly interpreted.
Under deduction, axiomatic rules are used to derive certainty that a given claim is true: for example, that the code is correct in every possible scenario. This is plausible in maths but much harder in the messier world of software development, although type systems exploit it in a limited way, and formal methods research and tooling aim to apply it to whole programs.
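To make the type-system point concrete, here is a small Python sketch (my own example, assuming Python 3.11+ for typing.assert_never): a checker such as mypy can deduce, without running anything, that every case is handled for every possible value of the type.

```python
from enum import Enum
from typing import assert_never  # Python 3.11+

class Light(Enum):
    RED = "red"
    AMBER = "amber"
    GREEN = "green"

def action(light: Light) -> str:
    # mypy proves this match is exhaustive for every member of Light.
    # Add a new member and the deduction fails at analysis time,
    # before any test has run.
    match light:
        case Light.RED:
            return "stop"
        case Light.AMBER:
            return "get ready"
        case Light.GREEN:
            return "go"
        case _:
            assert_never(light)
```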
--00--
The manual happy paths step gives no consideration to side effects. For sure we want the desired change to happen, but we also don't want undesirable changes. Simple confirmatory tests have a narrow focus which blinds them to black swans.
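One cheap defence is to assert on more than just the thing you changed. A sketch, using a toy repository invented for this illustration:

```python
class InMemoryUserRepo:
    """Toy repository used only for this illustration."""
    def __init__(self, names):
        self.active = {name: True for name in names}

    def deactivate(self, name):
        self.active[name] = False

def test_deactivating_one_user_leaves_others_untouched():
    repo = InMemoryUserRepo(["alice", "bob", "carol"])
    before = dict(repo.active)

    repo.deactivate("alice")

    assert repo.active["alice"] is False  # the desired change happened...
    # ...and nothing else did: the rest of the state is untouched.
    unchanged = {k: v for k, v in repo.active.items() if k != "alice"}
    assert unchanged == {k: v for k, v in before.items() if k != "alice"}
```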
For any non-trivial code the state space is effectively infinite, so applying testing effort efficiently and effectively to cover the important parts is crucial. Depending on the scenario, this might mean, for example:
- carefully researching end-user needs and behaviours in order to choose an appropriate sample set.
- exhaustively testing a function because it is on the critical path.
- deciding not to test X, because risks there are already well understood, and putting the available time into Y instead.
The manual edge cases item offered no suggestions to help developers go beyond the happy path. So here are a handful of generic tips.
Ask yourself questions such as: what would 'wrong' look like? Under what circumstances might that happen? How could I set that up? How could I easily identify that it had gone wrong?
Don't exercise the same path every time you run the code. Deliberately give different inputs, even if you think the input shouldn't matter. Over time, you will find things this way.
Do write unit tests that clearly separate data from test machinery, whose overall coverage can be gauged by reading the tests, and whose intent is clear. This will help others to understand what's being tested and so where there might still be risk worth reviewing.
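A sketch of the shape I mean, using pytest's parametrize, with a hypothetical slugify function standing in for the code under test:

```python
import re
import unicodedata
import pytest

def slugify(text: str) -> str:
    """Hypothetical function under test: lower-case, ASCII-fold,
    and hyphenate a string for use in URLs."""
    text = unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode()
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

# Data is separated from the machinery: coverage can be gauged just
# by reading the rows, and intent is carried by the comments.
CASES = [
    ("Hello World", "hello-world"),    # typical input
    ("  padded  ", "padded"),          # surrounding whitespace
    ("", ""),                          # empty string
    ("Crème brûlée", "creme-brulee"),  # accented characters
]

@pytest.mark.parametrize("raw,expected", CASES)
def test_slugify(raw, expected):
    assert slugify(raw) == expected
```

The table of cases can then be reviewed, and extended, without touching the machinery at all.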
Property testing is an interesting tool for straddling CAN vs DOES. By defining the space of valid inputs and outputs along with properties that must hold whichever input is chosen, and then running multiple times, it attempts to broaden the inductive impact. If the inputs are available afterwards, they can also be assessed for coverage, and inspire further testing. This is an approach I use regularly myself in regression tests and in the model-based walkers I've built.
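As a minimal sketch with the Hypothesis library, with Python's built-in sorted() standing in for the code under test:

```python
from collections import Counter
from hypothesis import given, strategies as st

# Properties that must hold for *every* generated list of integers:
# the output is ordered, and it is a permutation of the input.
@given(st.lists(st.integers()))
def test_sorted_is_an_ordered_permutation(xs):
    out = sorted(xs)
    assert all(a <= b for a, b in zip(out, out[1:]))  # ordered
    assert Counter(out) == Counter(xs)                # same elements
```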
The Test Heuristics Cheat Sheet is a concise reminder of numerous things that might be worth checking around inputs, outputs, variable types, execution environments, and so on.
--00--
A significant skill in the art of testing is choosing how, when, and where to invest your time so that, even if you can't prove correctness, you can at least remove reasonable doubt.
Image: Google Gemini
P.S. Simon's free weekly newsletter is a treasure trove (and fire hose) of notes, insights, and experiments around software and, in a wonderfully Pascalian move, you can sponsor him to get a monthly short version.
P.P.S. This is the sequence of prompts I used to create the image at the top.
- How about making a rubber-stamp image of "QED?" I want it on a transparent background.
- No, I want the question mark on the stamp as well.
- Good. Now make it green.
- This doesn't have a transparent background. You've just made a grid on the image.
Naturally, I got another non-transparent image with a wonky grid at this point, but it made me smile: how far should I go to check that what was produced is what I asked for? On a simple happy path check it looked OK.
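For what it's worth, going one step beyond the glance would have been cheap. A sketch with Pillow, with a made-up filename:

```python
from PIL import Image  # Pillow

def has_transparent_corner(path: str) -> bool:
    # Crude check: does the image carry an alpha channel with at
    # least one fully transparent corner pixel?
    img = Image.open(path).convert("RGBA")
    w, h = img.size
    corners = [(0, 0), (w - 1, 0), (0, h - 1), (w - 1, h - 1)]
    return any(img.getpixel(xy)[3] == 0 for xy in corners)

print(has_transparent_corner("qed-stamp.png"))  # hypothetical filename
```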
