
Storming Risk

A couple of weeks ago I was asked to facilitate a group risk analysis for a project. The session would be remote and, having participated in this kind of thing before, I knew there were some things I wanted to avoid:

  • unclear mission
  • participant overwhelm and underwhelm
  • intangible outcome

The tools I had to work with were Miro and Google Meet.

Unclear Mission

I wanted to minimise time on the call where we were not looking at risk so I decided to prepare a concise intro to the project, the mission for this session, and our approach to the analysis.

I was the facilitator rather than a participant, and it's not my project, so I didn't need deep knowledge, but I did want some background in order to scope the mission. I briefly reviewed the project documentation, got a picture of its status from a couple of people, and proposed to the PO that we review only a slice of it.

That slice was one I thought we could plausibly fit into a two-hour session, that would have value to the project right then, and that fit with project phases so that subsequent analysis could address other concerns. He agreed.

Next I extracted a couple of sentences that summarised the project aims and its phases and added blocks to the Miro board for each of them. I also made a list of all the Confluence pages and Jira epics I'd found while doing my background research and put that in too.

I came across a couple of architecture diagrams that showed the key change under review, so I dropped them into the Miro board, along with one that showed the alternative approaches being considered. Only one approach was in scope for this session, so I put a large red cross over the other to emphasise that we should not be distracted by the possibility of it.

I wasn't sure how much experience any of the participants would have had with risk analysis so I took a definition of it from one of our SOPs and broke it down in a way that I thought helped to make the intent clear.

Before the session, I sent a note to all the participants pointing at the Miro board and asking them to prepare in advance as there would be no deep intro during the session. 

At the start of the session I took just a couple of minutes to run through the background and emphasise the mission. At that point we had a baseline common understanding of the mission and its context and could begin.

Overwhelm and Underwhelm

At the same time as thinking about the scope of the session I started looking at the risk templates already available in Miro at work.

The "official" one, for me, contributes to participant overwhelm. There is a huge canvas and loads of suggestions of quality attributes, test approaches, and risk mitigations. 

I have found that, unless people are familiar with that material already, just understanding what is available can be a significant effort. At in-person sessions this has been less of a problem because side conversations happen all the time without being a major distraction, but it's just not the same online.

I've been in some sessions where only a subset, the quality attributes, is used and I found that I liked them. There are only around 25, which makes them easy to get your head around in the session itself. I could see that one of our templates focused on only those and I decided to do the same.

Big group discussions contribute to underwhelm. We've all seen a core of vocal participants dominate a conversation while others hover on the sidelines or disengage completely. I wanted to give everyone the chance to contribute so I took some inspiration from the 1, 2, 4, All approach where small groups talk and then combine into bigger groups. 

Given the size of the group I was expecting to deal with, and the available time, I created three sub-groups, with a breakout room per group, to do each section of the exercise in parallel first and then combine outcomes in the session as a whole.

I was interested to see that another colleague had taken a similar approach in their Miro template, although I had never attended a session run by them to see it in action. There's a trade-off for the facilitator in this setup, though, because conversation happens in breakout rooms, so gauging the engagement and "buzz", or recognising that course correction is needed, is harder than in person.

I tried to overcome this in a couple of ways. First, I made a space on the board for each sub-group to do their work so that I could at least see what state they were in. Second, I was fortunate to have three testers in the group, and I assigned one to each sub-group. I didn't ask them to do anything in particular, but as they all have some experience of our approach to risk analysis I trusted that it would come out if needed.


In the first phase we chose the quality attributes to prioritise for risk analysis. Having three groups each choose a selection meant that there was already an implicit score: the number of groups that chose each one. Starting from a place where there was already some consensus made the group conversation more focussed on the differences, which I thought was a positive for time and engagement.

For the second phase, suggesting possible risks for the chosen quality attributes, participants returned to their groups. I asked each group to use a specific colour of sticky note, again so that it was straightforward to see who was doing what on the board.

The colours were also helpful when we got back together in the larger group because the "owner" of a suggestion was clear and, when looking for duplicate or related tickets, duplicates were much less likely to exist among tickets of the same colour. Again, this reduced time taken in housekeeping and retained focus on the conversation.

Tangible Outcome

I asked for risks to be written in a reasonably specific way, with the threat to business value in mind. It's so easy to write tasks or very technical behaviours but what we really want to see is threats and impacts and some idea of who would be affected. I added some examples to the board beforehand and, during the session, asked for a handful of tickets to be reframed.

I also used my research on the project to suggest some areas that people could consider when thinking about risk. It's been my experience that participants often struggle to think broadly around the area, so I thought having a list of starter suggestions could help.

After discussing and grouping the risks we moved to prioritisation. I took an idea from one of my colleagues to use a risk matrix to gauge how the group felt about each risk. I wanted some way to get the wisdom of the crowd without bikeshedding and lots of copy-pasting of dots on the board.

What I suggested, and what worked extremely well, was that participants hover over the square they chose for the risk under discussion. On my screen, which I was also sharing in the meeting, we could see everyone's avatar. At the appropriate point, I could count the number of people voting green, yellow, or red and take a majority decision on a simple prioritisation.

The final phase in our process is to think about mitigation of the identified risks. For this, I copied a nice prevent/observe/react canvas from a colleague's template and dropped each of the high- and medium-level risks onto its own copy for people to add their suggestions.

The mitigation did not fit into the time we'd allocated and so it was done asynchronously afterwards.

Reflection

As facilitator I felt that the engagement was good throughout the session and the feedback from participants was positive. Using sub-groups helped with the engagement and reduced friction too because it allowed us to focus on differences of opinion as a group.

I was particularly pleased with the way that risks were formulated and with the voting on priorities. Both of those things also helped to smooth the process out and contributed to the outcome of a coherent set of risks being produced and prioritised with mitigation suggestions.

Setting the board up in advance with some context and an explicit mission helped to get us going quickly but it's a useful historical device too. For others coming to this board later, it'll be clear what was done and why. 

Unless there's a good reason, I would probably not spend as much time getting familiar with the project under discussion as I did this time. I'd still want to link to relevant resources but I think I can do that with a lighter review.

I took some time over setting this up, anticipating that I'll need to facilitate other sessions in future, so the thought will pay off. I don't mean to sound immodest, but I thought this was a storming risk analysis.

Acknowledgements

With thanks to the participants but particularly to João Proença, Patrick Prill, and Cassandra Leung for their advice and inspiration. It was wonderful to be able to start from the great resources that others had already created.
Images: https://flic.kr/p/of25Jf


Comments

Nils said…
Hi,

I created a Risk Storming Card Set and uploaded it to GitHub. Maybe it helps with storming risks. https://github.com/nilsbert/Risk-Storming
