
Storming Risk

A couple of weeks ago I was asked to facilitate a group risk analysis for a project. The session would be remote and, having participated in this kind of thing before, there were some things I knew I wanted to avoid:

  • unclear mission
  • participant overwhelm and underwhelm
  • intangible outcome

The tools I had to work with were Miro and Google Meet.

Unclear Mission

I wanted to minimise time on the call where we were not looking at risk so I decided to prepare a concise intro to the project, the mission for this session, and our approach to the analysis.

I was the facilitator rather than a participant, and it's not my project, so I didn't need deep knowledge, but in order to scope the mission I wanted some background. I briefly reviewed the project documentation, got a picture of its status from a couple of people, and proposed to the PO that we review only a slice of it.

That slice was one I thought we could plausibly fit into a two-hour session, that would have value to the project right then, and that fit with project phases so that subsequent analysis could address other concerns. He agreed.

Next I extracted a couple of sentences that summarised the project aims and its phases and added blocks to the Miro board for each of them. I also made a list of all the Confluence pages and Jira epics I'd found while doing my background research and put that in too.

I came across a couple of architecture diagrams that showed the key change under review in the session so they were dropped into the Miro board, along with one which showed alternative approaches being considered. Only one was in scope for this session, so I put a large red cross over the other to emphasise that we should not be distracted by the possibility of it.

I wasn't sure how much experience any of the participants would have had with risk analysis so I took a definition of it from one of our SOPs and broke it down in a way that I thought helped to make the intent clear.

Before the session, I sent a note to all the participants pointing at the Miro board and asking them to prepare in advance as there would be no deep intro during the session. 

At the start of the session I took just a couple of minutes to run through the background and emphasise the mission. At that point we had a baseline common understanding of the mission and its context and could begin.

Overwhelm and Underwhelm

At the same time as thinking about the scope of the session I started looking at the risk templates already available in Miro at work.

The "official" one, for me, contributes to participant overwhelm. There is a huge canvas and loads of suggestions of quality attributes, test approaches, and risk mitigations. 

I have found that, unless people are familiar with that material already, just understanding what is available can be a significant effort. At in-person sessions this has been less of a problem because side conversations happen all the time without being a major distraction but it's just not the same online.

I've been in some sessions where only a subset, the quality attributes, are used and I found that I liked them. There are only around 25, which makes it easy to get your head around in the session itself. I could see that one of our templates focused only on those and I decided to do the same.

Big group discussions contribute to underwhelm. We've all seen a core of vocal participants dominate a conversation while others hover on the sidelines or disengage completely. I wanted to give everyone the chance to contribute so I took some inspiration from the 1, 2, 4, All approach where small groups talk and then combine into bigger groups. 

Given the size of the group I was expecting to deal with, and the available time, I created three sub-groups, with a breakout room per group, to do each section of the exercise first in parallel and then combine outcomes in the session as a whole. 

I was interested to see that another colleague had taken a similar approach in their Miro template, although I never attended a session run by them to see it in action. There's a trade-off for the facilitator in this setup, though, because conversation happens in breakout rooms, so gauging the engagement and "buzz," or recognising that course correction is needed, is harder than in person.

I tried to overcome this in a couple of ways. First, I made a space on the board for each sub-group to do their work so that I could at least see what state they were in. Second, I was fortunate to have three testers in the group, and I assigned one to each sub-group. I didn't ask them to do anything in particular, but as they all have some experience of our approach to risk analysis I trusted that it would come out if needed.


In the first phase we chose the quality attributes to prioritise for risk analysis. Having three groups each choose a selection meant that there was already an implicit score: the number of groups that chose each one. Starting from a place where there was already some consensus made the group conversation more focused on the differences, which I thought was a positive for time and engagement.
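The implicit scoring is just a tally across the sub-groups. As a sketch (the attribute names here are invented for illustration, not the session's actual choices):

```python
from collections import Counter

# Each sub-group's chosen quality attributes (hypothetical examples)
group_choices = [
    {"Security", "Performance", "Usability"},
    {"Security", "Reliability", "Performance"},
    {"Security", "Usability", "Maintainability"},
]

# The implicit score: how many groups picked each attribute
score = Counter(attr for choices in group_choices for attr in choices)

# Attributes picked by every group already have consensus;
# the whole-group conversation can focus on the rest
consensus = [a for a, n in score.items() if n == len(group_choices)]
to_discuss = [a for a, n in score.items() if n < len(group_choices)]
```

With the example data above, only "Security" is chosen by all three groups, so the combined discussion would start from the partially-agreed attributes rather than from scratch.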

For the second phase, suggesting possible risks for the chosen quality attributes, participants returned to their groups. I asked each group to use a specific colour of sticky note, again so that it was straightforward to see who was doing what on the board.

The colours were also helpful when we got back together in the larger group because the "owner" of a suggestion was clear and, when looking for duplicate or related tickets, they are much less likely to exist within the same colour. Again, this reduced time taken in housekeeping and retained focus on the conversation.

Tangible Outcome

I asked for risks to be written in a reasonably specific way, with the threat to business value in mind. It's so easy to write tasks or very technical behaviours but what we really want to see is threats and impacts and some idea of who would be affected. I added some examples to the board beforehand and, during the session, asked for a handful of tickets to be reframed.

I also used my research on the project to suggest some areas that people could consider when thinking about risk. It's been my experience that participants often struggle to think broadly around the area so I thought having a list of starter suggestions could help.

After discussing and grouping the risks we moved to prioritisation. I took an idea from one of my colleagues to use a risk matrix to gauge how the group felt about each one. I wanted some way to get the wisdom of the crowd without bikeshedding and lots of copy-pasting of dots on the board.

What I suggested, and which worked extremely well, was that participants hover over the square they chose for the risk under discussion. On my screen, which I was also sharing in the meeting, we could see everyone's avatar. At the appropriate point, I could count the number of people voting green, yellow, or red and take a majority decision on a simple prioritisation.
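The counting step amounts to taking the colour with the most avatars hovering over it. A minimal sketch, with hypothetical votes read off the shared screen for one risk:

```python
from collections import Counter

# Hypothetical hover-votes for a single risk under discussion
votes = ["green", "yellow", "yellow", "red", "yellow", "green"]

# Majority decision: the most common colour wins the prioritisation
priority, count = Counter(votes).most_common(1)[0]
```

Here "yellow" wins with 3 of 6 votes, so the risk would be recorded as medium priority. The appeal of the mechanism is that nothing needs to be copied or pasted on the board; the avatars themselves are the ballot.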

The final phase in our process is to think about mitigation of the identified risks. For this, I copied a nice prevent/observe/react canvas from a colleague's template and dropped each of the high and medium level risks onto one for people to add their suggestions.

The mitigation did not fit into the time we'd allocated and so it was done asynchronously afterwards.

Reflection

As facilitator I felt that the engagement was good throughout the session and the feedback from participants was positive. Using sub-groups helped with the engagement and reduced friction too because it allowed us to focus on differences of opinion as a group.

I was particularly pleased with the way that risks were formulated and with the voting on priorities. Both of those things also helped to smooth the process out and contributed to the outcome of a coherent set of risks being produced and prioritised with mitigation suggestions.

Setting the board up in advance with some context and an explicit mission helped to get us going quickly but it's a useful historical device too. For others coming to this board later, it'll be clear what was done and why. 

Unless there's a good reason, I would probably not spend as much time getting familiar with the project under discussion as I did this time. I'd still want to link to relevant resources but I think I can do that with a lighter review.

I took some time over setting this up, anticipating that I'll need to facilitate other sessions in future, so the thought will pay off. I don't mean to sound immodest, but I thought this was a storming risk analysis.

Acknowledgements

With thanks to the participants but particularly to João Proença, Patrick Prill, and Cassandra Leung for their advice and inspiration. It was wonderful to be able to start from the great resources that others had already created.
Images: https://flic.kr/p/of25Jf

