

Showing posts from May, 2022

Great Shot, Kid

This week I've been playing with altwalker, a model-based testing tool. To get the hang of it, I attempted to build a very simple model of a workflow that is supported by the service my team owns. Hacking away at the example code, and looking frequently at the docs, I was able to get up and running in a few hours, creating:

- a basic model: nodes for system states, edges for operations
- simple assertions: mainly consistency checks on the states
- a client: an HTTP client to implement the operations against the service's API

I configured this so that altwalker performs a random walk of the model, the starting state data is randomised, and the client chooses randomly whenever offered an option. Why so much randomness? Because it means that, over successive runs, more of the infinite space of possible workflow executions will be covered. Once I had that basically working I wrote a shell script that would run this loop a number of times: call altwalker
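The random-walk idea above can be illustrated without altwalker itself. Here's a minimal, self-contained sketch under stated assumptions: the workflow states, operations, and consistency check are invented for the example, not taken from the post's actual model.

```python
import random

# Hypothetical workflow model: nodes are system states, edges are
# (operation, next-state) pairs. Invented for illustration only.
EDGES = {
    "created":   [("submit", "submitted"), ("delete", "deleted")],
    "submitted": [("approve", "approved"), ("reject", "created")],
    "approved":  [("archive", "deleted")],
    "deleted":   [],  # terminal state: no outgoing edges
}

def random_walk(start="created", max_steps=10, seed=None):
    """Walk the model, choosing randomly whenever offered an option."""
    rng = random.Random(seed)
    state, path = start, []
    for _ in range(max_steps):
        options = EDGES[state]
        if not options:
            break  # reached a terminal state
        operation, state = rng.choice(options)
        path.append(operation)
        # Simple consistency check on the state, as in the post.
        assert state in EDGES, f"walked into unknown state: {state}"
    return path

# Successive runs with different seeds cover different executions.
for seed in range(3):
    print(seed, random_walk(seed=seed))
```

Each run prints a different sequence of operations; over many seeded runs, more of the space of possible paths gets exercised, which is the point of randomising both the data and the choices.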

AST Lean Coffee

My second Lean Coffee in a week, this one online with the Association for Software Testing. Here are a few aggregated notes from the conversation.

Why do people want to speak at conferences, and can we support them to get what they need in other ways? There has been lots of noise recently on Twitter about people being rejected, and discussion of tactics for getting in. So why do people want to speak at conferences?

- Increase their brand. Be more employable. Be better known.
- Go to the conference for free. Company will only pay if they are speaking.
- Share what I've learned. Share my story.
- Challenge yourself.
- Because they see others doing it.
- Personal access to "big names."

Conferences always have the same speakers. Do people need better access to conferences? It can be a vicious cycle: accepted because we know you; we know you because you speak. Perhaps the return to in-person conferences has increased the demand for speaking slots. People don't know how to sell their talk to confere

Agile in the Ether

What I particularly like about Lean Coffee are the timeboxes. At Agile in the Ether yesterday it was an hour for the whole event and just eight minutes per topic. At that level, the investment is low and the potential returns are high: some ideas for right-now problems and background for those that will surely come later. On top of that, the possibility that I can share something that will be a win for someone else. Here are my notes aggregated from the conversations. Ideas for agile coaching teams when you're in there on a very ad hoc basis. How to not disturb but still bring value. Is it possible? The teams are typically overconstrained, which makes change difficult. It's common for the coach to suggest an approach but not return for a couple of iterations, so they don't know whether it landed. Can you build a relationship with someone in the team and have close communication with them to get the feedback? Ideally this would be with someone who cares about improvements

Control and Observe

This week I was testing a new piece of functionality that will sanitise query parameters on an API endpoint. The developer had implemented a first cut and I was looking at the code and the unit tests for a few minutes before we got together for a walkthrough. After exercising the new function through the unit tests, trying other values, and checking that the tests could fail, I wanted to send some requests from an external client. I already had the service set up in IntelliJ and running under the debugger, so ... Sending requests from Postman and inspecting sanitised values in a breakpoint was OK, but there was too much jumping back and forth between applications, clicking on fields, scrolling and so on. Each iteration was many seconds long. Luckily, we'd already decided to log sanitised strings for monitoring and later review, so I could disable the breakpoints and just look at the service console. That was better but still too heavy for me. Ther
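The feedback loop described above can often be shortened further by driving the function directly with a batch of probe values. A hedged sketch follows: the allow-list rule in `sanitise` is invented for illustration and stands in for whatever the real implementation does.

```python
import re

def sanitise(value: str) -> str:
    """Hypothetical sanitiser: keep only alphanumerics, hyphens,
    and underscores, dropping everything else."""
    return re.sub(r"[^A-Za-z0-9_-]", "", value)

# A quick driver loop: each iteration takes milliseconds rather than
# the many seconds of a Postman-breakpoint-console round trip.
probes = ["plain", "a b c", "<script>alert(1)</script>", "../../etc/passwd"]
for probe in probes:
    print(f"{probe!r} -> {sanitise(probe)!r}")
```

Swapping in different probe lists (boundary lengths, unicode, encoded characters) makes it cheap to explore the function's behaviour between walkthroughs.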


Last night I attended a Consequence Scanning workshop at the Cambridge Tester Meetup. In it, Drew Pontikis walked us through the basics of an approach for identifying opportunities and risks and selecting which ones to target for exploitation or mitigation. The originators of Consequence Scanning recommend that it's run as part of planning and design activities, with the outcomes being specific actions added to a backlog and a record of all of the suggested consequences for later review. So, acting as a product team for the Facebook Portal pre-launch, we:

- listed potential intended and unintended consequences
- sorted them into action categories (control, influence, or monitor)
- chose several consequences to work on
- explored possible approaches for the action assigned to each selected consequence

In the manual there are various resources for prompting participants to think broadly and laterally about consequences. For example, a product can have an effect on people other than its u

An Outside Chance

An idea came up in several conversations in the last few weeks: the analogy between externalities in economics and the location of activity and costs of software development. This post just dumps the thoughts I had around it. In economics, an externality is an indirect cost or benefit to an uninvolved third party that arises as an effect of another party's ... activity ... Air pollution from motor vehicles is one example. The cost of air pollution is not paid by either the producers or users of motorized transport; it falls on the rest of society. Externalities may be beneficial; my neighbour might buy and plant flowers that I can enjoy from my side of our shared garden fence at no cost. But the recent discussions have been more about negative externalities. Everywhere that software is developed by more than one person there is process around the development of the software. In any given shop it will be more or less defined, refined, and front-of-mind, but it exists and has two

Best Heuristics

The Association for Software Testing is crowd-sourcing a book, Navigating the World as a Context-Driven Tester, which aims to provide responses to common questions and statements about testing from a context-driven perspective. It's being edited by Lee Hawkins, who is posing questions on Twitter, LinkedIn, Slack, and the AST mailing list and then collating the replies, focusing on practice over theory. I've decided to contribute by answering briefly, and without a lot of editing or crafting, by imagining that I'm speaking to someone in software development who's acting in good faith, cares about their work and mine, but doesn't have much visibility of what testing can be. Perhaps you'd like to join me? --00-- "There are no best practices, really?" The context-driven testing principles do say that, yes, but not quite so baldly: There are good practices in c