
Posts

My Frame, Your Thing

I was talking with a colleague the other week and we got onto the topic of framing our work. This is one of my suggestions: I want to help whoever I'm working with build the best version of their thing, whatever 'best' means for them, given the constraints they have. That's it. Chef's kiss. I like it because it packs in, for example:

- exploration of ideas, software, process, business choices, and legal considerations
- conversations about budget, scope, resources, dreams, and priorities
- communicating findings, hypotheses, and suggestions
- helping to break down the work, organise the work, and facilitate the work
- making connections, pulling information from outside, and sharing information from inside

It doesn't mean that I have no core expertise to bring, no scope for judgement, no agency, and no way to be creative or express myself, and it specifically does not mean that I'm going to pick up all the crap that no-one else wants to do. Of course, I might pick up…
Recent posts

Why Question?

Questions are a powerful testing tool and, like any tool, can be used in different ways in different scenarios with different motivations and different results. A significant part of my role is generating questions and I will generally have a lot of them. I will rarely ask them all, though, and I've put a lot of time and effort into learning to be comfortable with that. A couple of examples: I was in a meeting this week where the technical conversation was too deep for me to give a perspective from a position of knowledge. I could have disengaged, but I didn't. Instead, I asked occasional questions, not wanting to derail the discussion or disrupt the flow. Some were detail questions, to help grow my understanding. Some were scoping questions, to help understand motivations. The one that really landed, however, was about the focus of the meeting. Although I couldn't contribute at a low level, I understood enough to suspect that we were not discussing the key problem that…

ChatGPTesters

The Association for Software Testing is crowd-sourcing a book, Navigating the World as a Context-Driven Tester, which aims to provide responses to common questions and statements about testing from a context-driven perspective. It's being edited by Lee Hawkins, who is posing questions on Twitter, LinkedIn, Mastodon, Slack, and the AST mailing list and then collating the replies, focusing on practice over theory. I've decided to contribute by answering briefly, and without a lot of editing or crafting, by imagining that I'm speaking to someone in software development who's acting in good faith, cares about their work and mine, but doesn't have much visibility of what testing can be. Perhaps you'd like to join me? --00-- "Why don’t we replace the testers with AI?" We have a good relationship so I feel safe telling you that my instinctive reaction, as a member of the Tester's Union, is to ask why we don't…

A Model Prank

Yesterday I was listening to an episode of Arts and Ideas hosted by Matthew Sweet. The topic was pranks and the first request he made of his guests was for a typology of the terms prank, hoax, stunt, and practical joke. No one was prepared to give one but, through the course of the programme, they clearly preferred one term over the others in specific instances or tried to bypass the distinctions by claiming that what mattered was whether there was a laugh. This is no great surprise. Categories invariably have fuzzy boundaries although, famously, we like to think that we can know where something belongs "when we see it." My thoughts turned to work, and the problem of stakeholders using sets of overlapping terms when discussing what they want with no time for conversations about the meanings ("don't give me all that semantics!"). So I thought, on 1st April, I would take the fool's errand of trying to imagine working on a project where those concepts were…

Not Strictly for the Birds

One of my chores takes me outside early in the morning and, if I time it right, I get to hear a charming chorus of birdsong from the trees in the gardens down our road, a relaxing layered soundscape of tuneful calls, chatter, and chirruping. Interestingly, although I can tell from the number and variety of trills that there must be a large number of birds around, they are tricky to spot. I have found that by staring loosely at something, such as the silhouette of a tree's crown against the slowly brightening sky, I see more birds out of the corner of my eye than if I scan to look for them. The reason seems to be that my peripheral vision picks up movement against the wider background that direct inspection can miss. An optometrist I am not, but I do find myself staring at data a great deal, seeking relationships, patterns, or gaps. I idly wondered whether, if I filled my visual field with data, I might be able to exploit my peripheral vision in that quest. I have a wide monitor…

Oblique Strategies

In Obliquity, John Kay argues that success may be best achieved indirectly. When the goal is non-trivial, the environment unpredictable, and the system in which we are operating is complex, then top-down working, planned to completion, is fragile. He recommends that we instead proceed obliquely, taking small steps, making choices opportunistically, and accepting that we do not have all the information or all of the control we might feel we want. Kay presents numerous examples of people and organisations that have done well with the oblique approach and some that have suffered when their indirect strategy straightened up. That's not to say the direct approach can't work or that it is a mistake to apply it in some situations, just that the set of real-world scenarios where directness is a good first choice is pretty constrained. It doesn't take much imagination to see the strong parallel between obliquity and agile software development. Likewise…

The Best Laid Test Plans

The Association for Software Testing is crowd-sourcing a book, Navigating the World as a Context-Driven Tester, which aims to provide responses to common questions and statements about testing from a context-driven perspective. It's being edited by Lee Hawkins, who is posing questions on Twitter, LinkedIn, Mastodon, Slack, and the AST mailing list and then collating the replies, focusing on practice over theory. I've decided to contribute by answering briefly, and without a lot of editing or crafting, by imagining that I'm speaking to someone in software development who's acting in good faith, cares about their work and mine, but doesn't have much visibility of what testing can be. Perhaps you'd like to join me? --00-- "What's the best format for a test plan?" I'll side-step the conversation about what a test plan is and just say that the format you should use is one that works for you, your colleagues…

Vanilla Flavour Testing

I have been pairing with a new developer colleague recently. In our last session he asked me "is this normal testing?", saying that he'd never seen anything like it anywhere else that he'd worked. We finished the task we were on and then chatted about his question for a few minutes. This is a short summary of what I said. I would describe myself as context-driven. I don't take the same approach to testing every time, except in a meta way. I try to understand the important questions, who they are important to, and what the constraints on the work are. With that knowledge I look for productive, pragmatic ways to explore whatever we're looking at to uncover valuable information or find a way to move on. I write test notes as I work in a format that I have found to be useful to me, colleagues, and stakeholders. For me, the notes should clearly state the mission and give a tl;dr summary of the findings, and I like them to be public while I'm working, not just…

Is the Variability Variable?

I'm definitely no expert in Large Language Models but expertise is not required, given all the publicity, to know that variation in the responses from the technology is expected. To illustrate that, I asked ChatGPT "In one sentence, why do LLM responses vary on the same prompt?" and got three different, although clearly related and overlapping, answers:

- LLM responses vary on the same prompt due to the model's stochastic nature, diverse training data, and sensitivity to slight changes in input phrasing.
- LLM responses vary on the same prompt due to the model's probabilistic nature, employing diverse sampling techniques and leveraging its extensive training data to generate a range of contextually plausible and grammatically coherent outputs.
- LLM responses vary on the same prompt due to the model's inherent stochasticity and diverse training data, leading to…
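The "stochastic nature" those answers mention usually comes down to sampling: at each step the decoder turns the model's scores into a probability distribution and draws from it, so the same prompt can yield different tokens. A minimal sketch of temperature sampling, with made-up logits (real decoders layer on extras like top-k or top-p that this ignores):

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=random):
    """Sample one index from raw model scores ("logits").

    Temperature rescales the scores before the softmax: higher values
    flatten the distribution (more varied picks), lower values sharpen
    it (more repeatable picks).
    """
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw an index in proportion to its probability
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

# Hypothetical next-token scores for three candidate words
logits = [2.0, 1.0, 0.1]
picks = [sample_with_temperature(logits, temperature=1.0) for _ in range(20)]
print(picks)  # varies run to run: the "same prompt" yields different tokens
```

At a very low temperature the highest-scoring token wins almost every time, which is why low-temperature settings are often suggested when repeatability matters more than variety.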

On Skinning the Cat

This week I was working with a developer to make a change to a legacy codebase in an area neither of us is very familiar with. The need is easy to describe in general, and some occurrences of the behaviour we want to alter are common, straightforward to identify in use, and clear in the code. Unfortunately, the logic in the application is complex, the data used is domain-specialised, and the behaviour we are interested in can occur in extremely specific combinations that are hard for a layperson to predict. I had no confidence that the cases we knew about were all of the cases. My colleague did a round of work and asked me to take a look. I exercised the application in a few ways while inspecting its logs in real time so that I could see the effect of the changes (or not) immediately. This gave up some rarer examples which had not been covered in the code. I added a couple of tests to characterise them and he identified another code change. Next, rather than continue exercising the product looking…
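The characterisation tests mentioned above pin down behaviour as observed, including the surprising cases, so that later changes which alter it get flagged. A minimal sketch, where `normalise_code` is a hypothetical stand-in for the legacy logic, not the actual codebase:

```python
def normalise_code(raw):
    """Hypothetical legacy behaviour, as discovered via the logs:
    codes are trimmed, upper-cased, and padded, and one odd special
    case exists that no one would predict from the spec alone."""
    code = raw.strip().upper()
    if code == "X":  # rarer example spotted while exercising the app
        return "UNKNOWN"
    return code.ljust(4, "0")

def test_characterise_normalise_code():
    # Assert what the code *does*, not what we assume it should do
    assert normalise_code(" ab ") == "AB00"
    assert normalise_code("X") == "UNKNOWN"  # the surprising case

test_characterise_normalise_code()
print("characterisation tests pass")
```

The value is less in the assertions themselves than in the safety net: once the observed behaviour is pinned down, the next code change can be made with some confidence that unintended differences will show up as failures.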