There was a time when testing was all about the mnemonics. Well, we had no AI back then, so thinking about our human craft and how to share what we had learned using our human intelligence with other engaged humans for later recall in their human heads seemed important.
But this post isn't about dumping on AI. It isn't about mnemonics either even if WTTWTH does look like something that'd fit well into that ancient world.
No, this post is just a snappier version of what I said to my team this week when I was walking through some testing I'd done the day before. The testing concerned a change to a particular turn in the dialog system we're building, where multiple variables are in play, some populated by an external call to an LLM service.
I wanted to make the point that the bulk of the testing work was done in the research I did and the spreadsheet I made, not in the interaction with our product.
That spreadsheet was the result of me looking in our service's codebase, exercising our product a little via the UI, background knowledge of another relevant downstream service, and talking to a couple of developers:
As the colouring shows, two of the variables are the focus: v1 takes three values and v5 takes four, giving 12 combinations of interest. The other variables should not make a difference, but in one particular case I wanted to confirm that specifically, so I varied v6 a little too.
I exported the spreadsheet to a CSV file and asked Cursor to make mock payloads from the external service for each of the rows, naming the files for the Id column. (In the before times, the mnemonic times, I would have written a script for that, but ...)
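For flavour, the kind of script I'd have written by hand in the mnemonic times might look something like this. It's a sketch, not what Cursor actually produced: the CSV filename, the output directory, and the payload shape (every column except Id, verbatim) are all assumptions for illustration.

```python
import csv
import json
from pathlib import Path

def make_mocks(csv_path: str, out_dir: str) -> list[Path]:
    """Write one JSON mock payload per spreadsheet row, named for the Id column."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    written = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            # Everything except the Id column becomes the mock payload.
            payload = {k: v for k, v in row.items() if k != "Id"}
            path = out / f"{row['Id']}.json"
            path.write_text(json.dumps(payload, indent=2))
            written.append(path)
    return written
```

Ten minutes of scripting, give or take, now delegated in one prompt.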
With the test data, I mocked the external service using Wiremock and set up a nice tight controlled loop where I could exhaustively exercise the scenarios that matter, using our product's UI as a user would, but totally certain about the data being passed.
The loop went like this, for each row of the spreadsheet:
- configure the mock data for this row
- refresh the mock server
- enter the dialog turn in one way
- observe the outcome
- enter the same dialog turn a different way
- observe the outcome again
- record findings in the spreadsheet
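The "configure the mock data" and "refresh the mock server" steps can be sketched against WireMock's admin API (`POST /__admin/reset` to clear state, `POST /__admin/mappings` to install a stub). The stubbed URL path and HTTP method here are hypothetical stand-ins for our external service's real endpoint:

```python
import json
import urllib.request

WIREMOCK = "http://localhost:8080"  # assumed local WireMock admin address

def stub_for(payload: dict, url_path: str = "/llm/endpoint") -> dict:
    """Build a WireMock stub mapping that returns this row's mock payload."""
    return {
        "request": {"method": "POST", "urlPath": url_path},
        "response": {
            "status": 200,
            "headers": {"Content-Type": "application/json"},
            "jsonBody": payload,
        },
    }

def configure_row(payload: dict) -> None:
    """Reset the mock server, then install the stub for the current row."""
    for path, body in [("/__admin/reset", b""),
                       ("/__admin/mappings", json.dumps(stub_for(payload)).encode())]:
        req = urllib.request.Request(WIREMOCK + path, data=body, method="POST")
        urllib.request.urlopen(req)
```

With that in place, each iteration of the loop is one `configure_row(...)` call followed by driving the UI.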
I did this before and after our change, to generate a couple of tables of 28 rows each that I could easily compare to understand the behavioural difference that we'd implemented. During the testing I was able to explore around the behaviours too, when I felt the need, but I didn't detect any unwanted side-effects of the work.
I also got Cursor to make a second set of mocks, this time for all possible permutations, not just the ones I thought would be of interest (there were 3000 of them and, amusingly, it chose to write a script to generate them, which I archived for later re-use.) I configured the mock server to choose from these randomly each time it was called and ran my dialog walker, configured to hit this particular dialog turn with different user inputs, against our service to look for potentially interesting cases.
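The permutation generator Cursor chose to write probably boiled down to something like this. The variable domains below are toy values, not the real ones (which gave around 3000 permutations):

```python
from itertools import product

# Illustrative domains only; the real variable set was larger.
DOMAINS = {
    "v1": ["a", "b", "c"],
    "v5": ["w", "x", "y", "z"],
    "v6": ["on", "off"],
}

def all_permutations(domains: dict) -> list[dict]:
    """Enumerate every combination of variable values, one dict per row."""
    keys = list(domains)
    return [dict(zip(keys, combo)) for combo in product(*domains.values())]

rows = all_permutations(DOMAINS)
print(len(rows))  # 3 * 4 * 2 = 24 with these toy domains
```

Each resulting dict can then be dumped to its own mock file, exactly as for the hand-picked rows.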
--00--
So, what did I do here?
First, I thought about why I was testing, and it wasn't to confirm a couple of happy paths, it was to explore the behaviour on relevant data in a complex piece of distributed logic that needs to be correct for the integrity of our product. (For reasons, it's hard to test this exhaustively in code right now, although that's something we're working on.)
Next I looked into what I should test to satisfy that need. I identified what looked like the key variables and decided that there were sufficiently few cases that I would look at them all by hand. But, because the area is complex, I also decided to use tooling to give me a chance of spotting edge cases that I hadn't considered.
Finally, I exercised the product in a couple of different ways, with a couple of different evaluation processes in mind. The mocks help me to control the data coming back from the external service which is important because it's not quite deterministic and the behaviours in our product I was exploring are subtle. I wanted to be "in" the product for the key scenarios to see how it looked for users and make it possible to spot side-issues, but I was prepared to go "big data" for the wider-scope test and just run the tooling and search the results for outliers.
This isn't to say that all testing needs to be planned up the wazoo, but it is to say that testing should be intentional. The actions follow from that intent and from any other constraints such as time, resources, perceived risk, and so on. Remaining exploratory and being prepared to pivot give flexibility.
--00--
Why Test, Test What, Then How? That's WTTWTH. Use it to find those mnemonic WTF!?s.
