

AST Lean Coffee

My second Lean Coffee in a week, this one online with the Association for Software Testing. Here are a few aggregated notes from the conversation.

Why do people want to speak at conferences, and can we support them to get what they need in other ways? There's been lots of noise recently on Twitter about people being rejected, and discussion of tactics for getting in. So why do people want to speak at conferences?
- Increase their brand.
- Be more employable.
- Be better known.
- Go to the conference for free. Company will only pay if they are speaking.
- Share what I've learned.
- Share my story.
- Challenge yourself.
- Because they see others doing it.
- Personal access to "big names."

Conferences always have the same speakers. Do people need better access to conferences? It can be a vicious cycle: accepted because we know you; we know you because you speak. Perhaps the return to in-person conferences has increased the demand for speaking slots. People don't know how to sell their talk to confere…
Recent posts

Agile in the Ether

What I particularly like about Lean Coffee are the timeboxes. At Agile in the Ether yesterday it was an hour for the whole event and just eight minutes per topic. At that level, the investment is low and the potential returns are high: some ideas for right-now problems and background for those that will surely come later. On top of that, there's the possibility that I can share something that will be a win for someone else. Here are my notes aggregated from the conversations.

Ideas for agile coaching teams when you're in there on a very ad hoc basis. How to not disturb but still bring value. Is it possible? The teams are typically overconstrained, which makes change difficult. It's common for the coach to suggest an approach but not return for a couple of iterations, so they don't know whether it landed. Can you build a relationship with someone in the team and have close communication with them to get the feedback? Ideally this would be with someone who cares about improvements…

Control and Observe

This week I was testing a new piece of functionality that will sanitise query parameters on an API endpoint. The developer had implemented a first cut and I was looking at the code and the unit tests for a few minutes before we got together for a walkthrough. After exercising the new function through the unit tests, trying other values, and checking that the tests could fail, I wanted to send some requests from an external client. I already had the service set up in IntelliJ and running under the debugger, so ... Sending requests from Postman and inspecting sanitised values in a breakpoint was OK, but there was too much jumping back and forth between applications, clicking on fields, scrolling and so on. Each iteration was many seconds long. Luckily, we'd already decided to log sanitised strings for monitoring and later review, so I could disable the breakpoints and just look at the service console. That was better but still too heavy for me. Ther…
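The kind of fast, client-free iteration described above can be sketched in miniature: exercise the function directly with interesting values rather than round-tripping through Postman and breakpoints. The post doesn't show the real sanitiser, so the function name, the allowlist, and the example values below are all hypothetical:

```python
import re

def sanitise_query_param(value: str) -> str:
    """Hypothetical sanitiser: keep only characters in a safe allowlist.
    The real implementation isn't shown in the post; this is illustrative."""
    return re.sub(r"[^A-Za-z0-9 _.\-]", "", value)

# Fast feedback loop: a table of inputs and expected outputs, run in one go.
cases = {
    "plain": "plain",
    "semi;colon": "semicolon",
    "<script>": "script",
    "a b_c-d.e": "a b_c-d.e",
}
for raw, expected in cases.items():
    actual = sanitise_query_param(raw)
    assert actual == expected, (raw, actual)
print("all cases passed")
```

Each new idea for an interesting value becomes one more entry in the table, and a whole run takes milliseconds rather than the many seconds per iteration described above.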


Last night I attended a Consequence Scanning workshop at the Cambridge Tester Meetup. In it, Drew Pontikis walked us through the basics of an approach for identifying opportunities and risks and selecting which ones to target for exploitation or mitigation. The originators of Consequence Scanning recommend that it's run as part of planning and design activities, with the outcomes being specific actions added to a backlog and a record of all of the suggested consequences for later review. So, acting as a product team for the Facebook Portal pre-launch, we:
- listed potential intended and unintended consequences
- sorted them into action categories (control, influence, or monitor)
- chose several consequences to work on
- explored possible approaches for the action assigned to each selected consequence

In the manual there are various resources for prompting participants to think broadly and laterally about consequences. For example, a product can have an effect on people other than its u…
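The workshop output described above (consequences sorted into control, influence, or monitor categories, with actions attached to the chosen ones) can be modelled in a few lines. The category names come from Consequence Scanning; the class and the example entries are this sketch's invention, not artefacts from the workshop:

```python
from dataclasses import dataclass, field

@dataclass
class Consequence:
    description: str
    intended: bool
    category: str  # "control", "influence", or "monitor"
    actions: list = field(default_factory=list)

# Hypothetical entries for a Portal-style product, for illustration only.
scanned = [
    Consequence("Easier video calls for distant relatives", True, "influence"),
    Consequence("Always-on camera worries house guests", False, "control",
                actions=["Add a physical lens cover"]),
]

# Only consequences we chose to work on carry actions for the backlog;
# the rest are kept as a record for later review.
actionable = [c for c in scanned if c.actions]
for c in actionable:
    print(c.description, "->", c.actions)
```

This mirrors the two recommended outcomes: `actionable` feeds the backlog, while `scanned` as a whole is the record of everything suggested.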

An Outside Chance

An idea came up in several conversations in the last few weeks: the analogy between externalities in economics and the location of activity and costs in software development. This post just dumps the thoughts I had around it. In economics, an externality is an indirect cost or benefit to an uninvolved third party that arises as an effect of another party's ... activity ... Air pollution from motor vehicles is one example: the cost of air pollution is not paid to the rest of society by either the producers or users of motorized transport. Externalities may be beneficial; my neighbour might buy and plant flowers that I can enjoy from my side of our shared garden fence at no cost. But the recent discussions have been more about negative externalities. Everywhere that software is developed by more than one person there is process around the development of the software. In any given shop it will be more or less defined, refined, and front-of-mind, but it exists and has two…

Best Heuristics

The Association for Software Testing is crowd-sourcing a book, Navigating the World as a Context-Driven Tester, which aims to provide responses to common questions and statements about testing from a context-driven perspective. It's being edited by Lee Hawkins, who is posing questions on Twitter, LinkedIn, Slack, and the AST mailing list and then collating the replies, focusing on practice over theory. I've decided to contribute by answering briefly, and without a lot of editing or crafting, by imagining that I'm speaking to someone in software development who's acting in good faith, cares about their work and mine, but doesn't have much visibility of what testing can be. Perhaps you'd like to join me?

--00--

"There are no best practices, really?" The context-driven testing principles do say that, yes, but not quite so baldly: There are good practices in c…

Hidden Thoughts

I'm in a work book club that's reading The DevOps Handbook. This week part of our discussion was around work which is invisible and should be exposed. The conversation was fun, interesting, and relevant to our day-to-day activities, but I felt like it had touched only isolated areas of a potentially very large space. For example, work might be visible to some but not others, at some times but not others, for some kinds of work but not others, for justifiable reasons or not, with positive effects or not. So this morning I set aside an hour to factor my thoughts into the mind map above. Interestingly, I found myself preferring "hidden" over "invisible" because it more explicitly acknowledges the contextuality of views of work. I'd love to hear other people's thoughts or experiences, so if you have some please don't keep them to yourself.

Got Legs

Time for another anniversary reflection. My goal back in October 2011 was to write once a week for a year, with a couple of weeks off for holidays. Two years and 100 posts in, I paused to consider how things were going, and since then I've done the same every 50 posts or calendar year. Today is post number 550. I am regularly asked how I manage this. It helps that I like writing and find it valuable to get my ideas straight, record experiences, and document my thinking and learning, but mostly, I think, it's because I've made it my habit. Seth Godin calls it showing up: When we commit to a practice, we don’t have to wonder if we’re in the mood, if it’s the right moment, if we have a headache or momentum or the muse by our side. We already made those decisions. ... Outcomes are important ... But the outcome isn’t the practice, the practice leads us to the outcome. Find work worth doing, and begin there. When…

Notes on Testing Notes

Ben Dowen pinged me and others on Twitter last week, asking for "a nice concise resource to link to for a blog post - about taking good Testing notes." I didn't have one, so I thought I'd write a few words on how I'm doing it at the moment for my work at Ada Health, alongside Ben. You may have read previously that I use a script to upload Markdown-based text files to Confluence. Here's the template that I start from:

    # Date + Title

    # Mission

    # Summary

    WIP!

    # Notes

Then I fill out what I plan to do. The Mission can be as high or low level as I want it to be. Sometimes, if deeper context might be valuable, I'll add a Background subsection to it. I don't fill in the Summary section until the end. It's a high-level overview of what I did, what I found, risks identified, value provided, and so on. Between the Mission and Summary I hope that a reader can see what I initially intended and what actually…
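A template this small could be stamped out by a script at the start of each session, dated and titled, ready to fill in. The post only describes the template itself; the function below, its name, and its signature are this sketch's assumptions:

```python
from datetime import date

def new_note(title: str, when=None) -> str:
    """Render the Markdown note skeleton described above.
    The headings come from the post; the function is just a convenience."""
    when = when or date.today()
    return (
        f"# {when.isoformat()} {title}\n\n"
        "# Mission\n\n"
        "# Summary\n\n"
        "WIP!\n\n"
        "# Notes\n"
    )

print(new_note("Sanitising query params", date(2022, 2, 1)))
```

Writing the result to a text file leaves it ready for the kind of Markdown-to-Confluence upload script mentioned above.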

Reasons to be Cheerful, Part 2

Last week I posted a long list of reasons for testing software, crowdsourced from members of the Association for Software Testing and software professionals on Twitter and LinkedIn. This week I've done some rough and ready analysis to see what's common across them, and the results are at the top. A few notes:
- Information includes learning and feedback.
- Risks are unspecified or explicit (such as damage to reputation and costs).
- Confidence covers reducing uncertainty and sleeping well.
- Finding problems incorporates the fallibility of (other) humans.
- There are definitely other ways to classify and cluster this data.

Testing, for me, is the pursuit of relevant incongruity. I didn't include my answers in the previous post, but this is what I dropped into the conversation on the AST Slack: to check that it can do what we intended it should do; to look for ways in which it does things it wasn't intended to do. I think they're in the space the leftmost columns cover but I…