

Not Strictly for the Birds

One of my chores takes me outside early in the morning and, if I time it right, I get to hear a charming chorus of birdsong from the trees in the gardens down our road, a relaxing layered soundscape of tuneful calls, chatter, and chirruping. Interestingly, although I can tell from the number and variety of trills that there must be a large number of birds around, they are tricky to spot. I have found that by staring loosely at something, such as the silhouette of a tree's crown against the slowly brightening sky, I see more birds out of the corner of my eye than if I scan to look for them. The reason seems to be that my peripheral vision picks up movement against the wider background that direct inspection can miss. An optometrist I am not, but I do find myself staring at data a great deal, seeking relationships, patterns, or gaps. I idly wondered whether, if I filled my visual field with data, I might be able to exploit my peripheral vision in that quest. I have a wide monitor ...
Recent posts

Oblique Strategies

In Obliquity, John Kay argues that success may be best achieved indirectly. When the goal is non-trivial, the environment is unpredictable, and the system in which we are operating is complex, then top-down working, planned to completion, is fragile. He recommends that we instead proceed obliquely, taking small steps, making choices opportunistically, and accepting that we do not have all the information or all of the control we might feel we want. Kay presents numerous examples of people and organisations that have done well with the oblique approach, and some that have suffered when their indirect strategy straightened up. That's not to say the direct approach can't work, or that it is a mistake to apply it to some situations, just that the set of real-world scenarios where directness is a good first choice is pretty constrained. It doesn't take much imagination to see the strong parallel between obliquity and agile software development. Likewise ...

The Best Laid Test Plans

The Association for Software Testing is crowd-sourcing a book, Navigating the World as a Context-Driven Tester, which aims to provide responses to common questions and statements about testing from a context-driven perspective. It's being edited by Lee Hawkins who is posing questions on Twitter, LinkedIn, Mastodon, Slack, and the AST mailing list and then collating the replies, focusing on practice over theory. I've decided to contribute by answering briefly, and without a lot of editing or crafting, by imagining that I'm speaking to someone in software development who's acting in good faith, cares about their work and mine, but doesn't have much visibility of what testing can be. Perhaps you'd like to join me? --00-- "What's the best format for a test plan?" I'll side-step the conversation about what a test plan is and just say that the format you should use is one that works for you, your colleagues ...

Vanilla Flavour Testing

I have been pairing with a new developer colleague recently. In our last session he asked me "is this normal testing?", saying that he'd never seen anything like it anywhere else that he'd worked. We finished the task we were on and then chatted about his question for a few minutes. This is a short summary of what I said. I would describe myself as context-driven. I don't take the same approach to testing every time, except in a meta way. I try to understand the important questions, who they are important to, and what the constraints on the work are. With that knowledge I look for productive, pragmatic ways to explore whatever we're looking at to uncover valuable information or find a way to move on. I write test notes as I work in a format that I have found to be useful to me, colleagues, and stakeholders. For me, the notes should clearly state the mission and give a tl;dr summary of the findings, and I like them to be public while I'm working, not just ...

Is the Variability Variable?

I'm definitely no expert in Large Language Models but expertise is not required, given all the publicity, to know that variation in the responses from the technology is expected. To illustrate that, I asked ChatGPT "In one sentence, why do LLM responses vary on the same prompt?" and got three different, although clearly related and overlapping, answers: (1) LLM responses vary on the same prompt due to the model's stochastic nature, diverse training data, and sensitivity to slight changes in input phrasing. (2) LLM responses vary on the same prompt due to the model's probabilistic nature, employing diverse sampling techniques and leveraging its extensive training data to generate a range of contextually plausible and grammatically coherent outputs. (3) LLM responses vary on the same prompt due to the model's inherent stochasticity and diverse training data, leading to ...
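To make that "stochastic nature" a little more concrete, here's a minimal sketch in Python (my illustration, not anything from ChatGPT or the post): it samples a next word from a made-up, hypothetical set of candidate scores using temperature-scaled softmax, so repeated runs on the same input can produce different outputs.

    import math
    import random

    def sample_next_token(scores, temperature=0.8):
        # Softmax over temperature-scaled scores, then draw one candidate at random.
        scaled = [s / temperature for s in scores.values()]
        top = max(scaled)
        weights = [math.exp(s - top) for s in scaled]
        return random.choices(list(scores.keys()), weights=weights, k=1)[0]

    # Hypothetical scores a model might assign to candidate next words.
    scores = {"stochastic": 2.1, "probabilistic": 2.0, "random": 1.2}

    for _ in range(3):
        print(sample_next_token(scores))  # repeated runs can differ: that's the variability

Turn the temperature down towards zero and the highest-scoring word wins almost every time; turn it up and the less likely candidates get more of a look in.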

On Skinning the Cat

This week I was working with a developer to make a change to a legacy codebase in an area neither of us is very familiar with. The need is easy to describe in general and some occurrences of the behaviour we want to alter are common, straightforward to identify in use, and clear in the code. Unfortunately, the logic in the application is complex, the data used is domain-specialised, and the behaviour we are interested in can occur in extremely specific combinations that are hard for a layperson to predict. I had no confidence that the cases we knew about were all of the cases. My colleague did a round of work and asked me to take a look. I exercised the application in a few ways while inspecting its logs in real time so that I could see the effect of the changes (or not) immediately. This turned up some rarer examples which had not been covered in the code. I added a couple of tests to characterise them and he identified another code change. Next, rather than continue exercising the product looking ...
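For what it's worth, when I say "tests to characterise them" I mean something like the following sketch: a test that pins down the behaviour we actually observed, right or wrong, so that a later change which alters it fails loudly. The function and data here are hypothetical stand-ins, not the real legacy code.

    # Hypothetical stand-in for the legacy logic whose behaviour we were exploring.
    def classify_order(order):
        if order["region"] == "EU" and order["total"] > 100:
            return "priority"
        return "standard"

    # Characterisation tests: capture today's observed behaviour as-is.
    def test_characterise_eu_high_value_order():
        assert classify_order({"region": "EU", "total": 150}) == "priority"

    def test_characterise_eu_small_order():
        assert classify_order({"region": "EU", "total": 50}) == "standard"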

Build Quality

The Association for Software Testing is crowd-sourcing a book, Navigating the World as a Context-Driven Tester, which aims to provide responses to common questions and statements about testing from a context-driven perspective. It's being edited by Lee Hawkins who is posing questions on Twitter, LinkedIn, Mastodon, Slack, and the AST mailing list and then collating the replies, focusing on practice over theory. I've decided to contribute by answering briefly, and without a lot of editing or crafting, by imagining that I'm speaking to someone in software development who's acting in good faith, cares about their work and mine, but doesn't have much visibility of what testing can be. Perhaps you'd like to join me? --00-- "When the build is green, the product is of sufficient quality to release" An interesting take, and one I wouldn't agree with in general. That surprises you? Well, how ...

What are the Chances?

I was listening to the Radiolab podcast Stochasticity yesterday as I walked to the shop. The presenters were talking to two women, both named Laura Buxton, who became friends years ago because one of them released a balloon with their name and address attached from their back garden and the other found it 150 miles away in their back garden. After Laura Two got in touch with Laura One they discovered other incredible similarities: they were both tall for their age, had brown hair and blue eyes, and both had a black labrador, a grey rabbit, and a guinea pig with orange markings. They ended up in the local newspaper, and on national and even international television, talking about how glad they were that fate had brought them together. If that sounds astonishing, how about this? As I strolled along the river and the podcast turned to the question of the probability of Laura-like events, David Spiegelhalter, a renowned statistician, jogged past me. What are the chances? The podcast used ...

Make, Fix, and Test

A few weeks ago, in A Good Tester is All Over the Place, Joep Schuurkes described a model of testing work based on three axes: do testing yourself or support testing by others; be embedded in a team or be part of a separate team; do your job or improve the system. It resonated with me and the other testers I shared it with at work, and it resurfaced in my mind while I was reflecting on some of the tasks I've picked up recently and what they have involved, at least in the way I've chosen to address them. Here are three examples. Documentation Generation: We have an internal tool that generates documentation in Confluence by extracting and combining images and text from a handful of sources. Although useful, it ran very slowly or not at all, so one of the developers performed major surgery on it. Up to that point, I had never taken much interest in the tool and I could have safely ignored this piece of work too because it would have been tested by ...

We Missed You

Dear Bug, After all we've been through I didn't expect to see you again the other day. Perhaps you thought I'd forgotten about us? Well, no, I remember you very well, although without fondness. Our relationship was intense but short-lived. It started in the dev environment when I glimpsed you out of the corner of my eye and knew immediately that I wanted to hold you close. Love at first sight? I wouldn't say that, but I recollect you teasingly jinking this way and that as I followed you through the architecture of our service, laughing and crying in turn. Eventually I caught up and found that I had been pursuing twins, two effects from the same underlying cause. We went on double dates (triple dates?) with a developer friend of mine until it became apparent that she understood you way better than I did. And that was the beginning of the end for us. As soon as she had finished with the bug she was seeing, you left me to be with her. That's OK, even before you and she ...