

Control and Observe

This week I was testing a new piece of functionality that will sanitise query parameters on an API endpoint. The developer had implemented a first cut and I was looking at the code and the unit tests for a few minutes before we got together for a walkthrough. After exercising the new function through the unit tests, trying other values, and checking that the tests could fail, I wanted to send some requests from an external client. I already had the service set up in IntelliJ and running under the debugger, so ... Sending requests from Postman and inspecting sanitised values in a breakpoint was OK, but there was too much jumping back and forth between applications, clicking on fields, scrolling, and so on. Each iteration was many seconds long. Luckily, we'd already decided to log sanitised strings for monitoring and later review, so I could disable the breakpoints and just look at the service console. That was better but still too heavy for me.
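The post doesn't show the sanitiser itself, but the observability idea it describes can be sketched: if the sanitiser logs each before/after pair, the service console shows every transformation without a breakpoint. Everything below (the sanitisation rule, the names) is a hypothetical illustration, not the code under test:

```python
import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("sanitiser")

def sanitise(param: str) -> str:
    # Hypothetical rule: keep only word characters, hyphens, and spaces.
    cleaned = re.sub(r"[^\w\- ]", "", param)
    # Logging the before/after pair here means each request leaves a trace
    # on the service console, so no debugger is needed to observe it.
    log.info("sanitised %r -> %r", param, cleaned)
    return cleaned

print(sanitise("<script>alert(1)</script>"))  # -> scriptalert1script
```

With this in place, each Postman request produces one console line per sanitised parameter, which is the faster feedback loop the post describes.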
Recent posts


Last night I attended a Consequence Scanning workshop at the Cambridge Tester Meetup. In it, Drew Pontikis walked us through the basics of an approach for identifying opportunities and risks and selecting which ones to target for exploitation or mitigation. The originators of Consequence Scanning recommend that it's run as part of planning and design activities, with the outcomes being specific actions added to a backlog and a record of all of the suggested consequences for later review. So, acting as a product team for the Facebook Portal pre-launch, we:

- listed potential intended and unintended consequences
- sorted them into action categories (control, influence, or monitor)
- chose several consequences to work on
- explored possible approaches for the action assigned to each selected consequence

In the manual there are various resources for prompting participants to think broadly and laterally about consequences. For example, a product can have an effect on people other than its users.

An Outside Chance

An idea came up in several conversations in the last few weeks: the analogy between externalities in economics and the location of activity and costs of software development. This post just dumps the thoughts I had around it. In economics, an externality is an indirect cost or benefit to an uninvolved third party that arises as an effect of another party's ... activity ... Air pollution from motor vehicles is one example. The cost of air pollution is not paid by either the producers or the users of motorized transport; it falls on the rest of society. Externalities may be beneficial; my neighbour might buy and plant flowers that I can enjoy from my side of our shared garden fence at no cost. But the recent discussions have been more about negative externalities. Everywhere that software is developed by more than one person there is process around the development of the software. In any given shop it will be more or less defined, refined, and front-of-mind, but it exists.

Best Heuristics

The Association for Software Testing is crowd-sourcing a book, Navigating the World as a Context-Driven Tester, which aims to provide responses to common questions and statements about testing from a context-driven perspective. It's being edited by Lee Hawkins, who is posing questions on Twitter, LinkedIn, Slack, and the AST mailing list and then collating the replies, focusing on practice over theory. I've decided to contribute by answering briefly, and without a lot of editing or crafting, by imagining that I'm speaking to someone in software development who's acting in good faith, cares about their work and mine, but doesn't have much visibility of what testing can be. Perhaps you'd like to join me? --00-- "There are no best practices, really?" The context-driven testing principles do say that, yes, but not quite so baldly: There are good practices in context, but there are no best practices.

Hidden Thoughts

I'm in a work book club that's reading The DevOps Handbook. This week part of our discussion was around work which is invisible and should be exposed. The conversation was fun, interesting, and relevant to our day-to-day activities but I felt like it had touched only isolated areas of a potentially very large space. For example, work might be visible to some but not others, at some times but not others, for some kinds of work but not others, for justifiable reasons or not, with positive effects or not. So this morning I set aside an hour to factor my thoughts into the mind map above. Interestingly, I found myself preferring "hidden" over "invisible" because it more explicitly acknowledges the contextuality of views of work. I'd love to hear other people's thoughts or experiences, so if you have some please don't keep them to yourself.

Got Legs

Time for another anniversary reflection. My goal back in October 2011 was to write once a week for a year, with a couple of weeks off for holidays. Two years and 100 posts in I paused to consider how things were going, and since then I've done the same every 50 posts or calendar year. Today is post number 550. I am regularly asked how I manage this. It helps that I like writing and find it valuable to get my ideas straight, record experiences, and document my thinking and learning but mostly, I think, it's because I've made it my habit. Seth Godin calls it showing up: When we commit to a practice, we don’t have to wonder if we’re in the mood, if it’s the right moment, if we have a headache or momentum or the muse by our side. We already made those decisions. ... Outcomes are important ... But the outcome isn’t the practice, the practice leads us to the outcome. Find work worth doing, and begin there.

Notes on Testing Notes

Ben Dowen pinged me and others on Twitter last week, asking for "a nice concise resource to link to for a blog post - about taking good Testing notes." I didn't have one so I thought I'd write a few words on how I'm doing it at the moment for my work at Ada Health, alongside Ben. You may have read previously that I use a script to upload Markdown-based text files to Confluence. Here's the template that I start from:

# Date + Title

# Mission

# Summary
WIP!

# Notes

Then I fill out what I plan to do. The Mission can be as high or low level as I want it to be. Sometimes, if deeper context might be valuable, I'll add a Background subsection to it. I don't fill in the Summary section until the end. It's a high-level overview of what I did, what I found, risks identified, value provided, and so on. Between the Mission and Summary I hope that a reader can see what I initially intended and what actually happened.
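The upload script itself isn't shown in the excerpt. As a hedged sketch of the workflow around such a template, a small helper could stamp out a dated notes file ready for later upload; the file naming, function name, and directory handling here are my invention, not the author's actual script:

```python
from datetime import date
from pathlib import Path

# The template from the post, with the date and title filled at creation
# time and "WIP!" left as the Summary placeholder.
TEMPLATE = """# {today} + {title}

# Mission
{mission}

# Summary
WIP!

# Notes
"""

def new_note(title: str, mission: str, directory: str = ".") -> Path:
    # Hypothetical helper: write a Markdown notes file for one session.
    today = date.today().isoformat()
    slug = title.lower().replace(" ", "-")
    path = Path(directory) / f"{today}-{slug}.md"
    path.write_text(TEMPLATE.format(today=today, title=title, mission=mission))
    return path
```

A separate upload step (not sketched here) would then push the finished Markdown file to Confluence.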

Reasons to be Cheerful, Part 2

Last week I posted a long list of reasons for testing software, crowdsourced from members of the Association for Software Testing and software professionals on Twitter and LinkedIn. This week I've done some rough and ready analysis to see what's common across them and the results are at the top. A few notes:

- Information includes learning and feedback.
- Risks are unspecified or explicit (such as damage to reputation and costs).
- Confidence covers reducing uncertainty and sleeping well.
- Finding problems incorporates the fallibility of (other) humans.
- There are definitely other ways to classify and cluster this data.

Testing, for me, is the pursuit of relevant incongruity. I didn't include my answers in the previous post but this is what I dropped into the conversation on the AST Slack:

- To check that it can do what we intended it should do.
- To look for ways in which it does things it wasn't intended to do.

I think they're in the space the leftmost columns cover.
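The clustering step described above is hand-tagging each crowdsourced reason with one or more categories, then counting the tags. A minimal sketch of that analysis, with made-up tag assignments rather than the post's actual data:

```python
from collections import Counter

# Hypothetical re-creation of the analysis: each reason is hand-tagged
# with categories like those named in the post.
tagged_reasons = [
    ("To learn, and identify risks", {"information", "risks"}),
    ("reducing the risk of harming people", {"risks"}),
    ("To check that it can do what we intended it should do.", {"confidence"}),
    ("Because someone asked/told us to", {"other"}),
]

counts = Counter(tag for _, tags in tagged_reasons for tag in tags)
for tag, n in counts.most_common():
    print(tag, n)
```

Counting tags rather than reasons lets one answer contribute to several clusters, which matches the observation that there are many ways to classify this data.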

Why Do They Test Software?

My friend Rachel Kibler asked me the other day "do you have a blog post about why we test software?" and I was surprised to find that, despite having touched on the topic many times, I haven't. So then I thought I'd write one. And then I thought it might be fun to crowdsource, so I asked in the Association for Software Testing members' Slack, on LinkedIn, and on Twitter for reasons, one sentence each. And it was fun! Here are the varied answers, a couple lightly edited, with thanks to everyone who contributed. Edit: I did a bit of analysis of the responses in Reasons to be Cheerful, Part 2. --00--

Software is complicated, and the people that use it are even worse. — Andy Hird

Because there is what software does, what people say it does, and what other people want it to do, and those are often not the same. — Andy Hird

Because someone asked/told us to — Lee Hawkins

To learn, and identify risks — Louise Perold

sometimes: reducing the risk of harming people —

Testing and Semantics

The other day I got tagged on a Twitter thread started by Wicked Witch of the Test about people with a background in linguistics who’ve ended up in testing. That prompted me to think about the language concepts I've found valuable in my day job, then I started listing them, and then realised how many of them I've mentioned here over the years. This post is one of an occasional series collecting some of those thoughts. --00-- In this series so far we've looked at words and syntax. In both cases we've found that natural language is an imprecise medium for communication. We might know the same words and grammar as others ... but they will have their own idea about what they mean ... and even where we agree there is ambiguity ... and all of us, the world, and the language are evolving ... all the time. Today we'll add semantics which, in a pleasing twist, is itself ambiguous.