Storming Risk

A couple of weeks ago I was asked to facilitate a group risk analysis for a project. The session would be remote and, having participated in this kind of thing before, there were some things I knew I wanted to avoid:

  • unclear mission
  • participant overwhelm and underwhelm
  • intangible outcome

The tools I had to work with were Miro and Google Meet.

Unclear Mission

I wanted to minimise time on the call where we were not looking at risk, so I decided to prepare a concise intro to the project, the mission for this session, and our approach to the analysis.

I was the facilitator rather than a participant, and it wasn't my project, so I didn't need deep knowledge, but in order to scope the mission I wanted some background. I briefly reviewed the project documentation, got a picture of its status from a couple of people, and proposed to the PO that we review only a slice of it.

That slice was one I thought we could plausibly fit into a two-hour session, that would have value to the project right then, and that fit with project phases so that subsequent analysis could address other concerns. He agreed.

Next I extracted a couple of sentences that summarised the project aims and its phases and added blocks to the Miro board for each of them. I also made a list of all the Confluence pages and Jira epics I'd found while doing my background research and put that in too.

I came across a couple of architecture diagrams that showed the key change under review in the session, so I dropped them into the Miro board, along with one which showed the alternative approaches being considered. Only one was in scope for this session, so I put a large red cross over the other to emphasise that we should not be distracted by it.

I wasn't sure how much experience any of the participants would have had with risk analysis so I took a definition of it from one of our SOPs and broke it down in a way that I thought helped to make the intent clear.

Before the session, I sent a note to all the participants pointing at the Miro board and asking them to prepare in advance as there would be no deep intro during the session. 

At the start of the session I took just a couple of minutes to run through the background and emphasise the mission. At that point we had a baseline common understanding of the mission and its context and could begin.

Overwhelm and Underwhelm

At the same time as thinking about the scope of the session I started looking at the risk templates already available in Miro at work.

The "official" one, for me, contributes to participant overwhelm. There is a huge canvas and loads of suggestions of quality attributes, test approaches, and risk mitigations. 

I have found that, unless people are familiar with that material already, just understanding what is available can be a significant effort. At in-person sessions this has been less of a problem because side conversations happen all the time without being a major distraction but it's just not the same online.

I've been in some sessions where only a subset, the quality attributes, was used and I found that I liked them. There are only around 25, which makes it easy to get your head around them in the session itself. I could see that one of our templates focused on only those and I decided to do the same.

Big group discussions contribute to underwhelm. We've all seen a core of vocal participants dominate a conversation while others hover on the sidelines or disengage completely. I wanted to give everyone the chance to contribute so I took some inspiration from the 1, 2, 4, All approach where small groups talk and then combine into bigger groups. 

Given the size of the group I was expecting to deal with, and the available time, I created three sub-groups, with a breakout room per group, to do each section of the exercise first in parallel and then combine outcomes in the session as a whole. 

I was interested to see that another colleague had taken a similar approach in their Miro template, although I never attended a session run by them to see it in action. There's a trade-off for the facilitator in this setup, though, because conversation happens in breakout rooms, so gauging the engagement and "buzz", or recognising that course correction is needed, is harder than in person.

I tried to overcome this in a couple of ways. First, I made a space on the board for each sub-group to do their work so that I could at least see what state they were in. Second, I was fortunate to have three testers in the group, and I assigned one to each sub-group. I didn't ask them to do anything in particular, but as they all have some experience of our approach to risk analysis I trusted that it would come out if needed.


In the first phase we chose the quality attributes to prioritise for risk analysis. Having three groups each choose a selection meant that there was already an implicit score: the number of groups that chose each one. Starting from a place where there was already some consensus made the group conversation more focused on the differences, which I thought was a positive for time and engagement.

For the second phase, suggesting possible risks for the chosen quality attributes, participants returned to their groups. I asked each group to use a specific colour of sticky note, again so that it was straightforward to see who was doing what on the board.

The colours were also helpful when we got back together in the larger group because the "owner" of a suggestion was clear and, when looking for duplicate or related tickets, it's much less likely that they will exist among tickets of the same colour. Again, this reduces the time taken on housekeeping and retains focus on the conversation.

Tangible Outcome

I asked for risks to be written in a reasonably specific way, with the threat to business value in mind. It's so easy to write tasks or very technical behaviours, but what we really want to see is threats, impacts, and some idea of who would be affected. I added some examples to the board beforehand and, during the session, asked for a handful of tickets to be reframed.

I also used my research on the project to suggest some areas that people could consider when thinking about risk. It's been my experience that participants often struggle to think broadly around the area, so I thought having a list of starter suggestions could help.

After discussing and grouping the risks we moved to prioritisation. I took an idea from one of my colleagues to use a risk matrix to get a sense of how the group felt about each one. I wanted some way to get the wisdom of the crowd without bikeshedding and lots of copy-pasting of dots on the board.

What I suggested, and what worked extremely well, was that participants hover over the square they chose for the risk under discussion. On my screen, which I was also sharing in the meeting, we could see everyone's avatar, so at the appropriate point I could count the number of people voting green, yellow, or red and take a majority decision on a simple prioritisation.

The final phase in our process is to think about mitigation of the identified risks. For this, I copied a nice prevent/observe/react canvas from a colleague's template and dropped each of the high and medium priority risks onto a copy of it for people to add their suggestions.

The mitigation phase did not fit into the time we'd allocated, so it was done asynchronously afterwards.

Reflection

As facilitator I felt that the engagement was good throughout the session and the feedback from participants was positive. Using sub-groups helped with the engagement and reduced friction too because it allowed us to focus on differences of opinion as a group.

I was particularly pleased with the way that risks were formulated and with the voting on priorities. Both of those things also helped to smooth the process out and contributed to the outcome of a coherent set of risks being produced and prioritised with mitigation suggestions.

Setting the board up in advance with some context and an explicit mission helped to get us going quickly but it's a useful historical device too. For others coming to this board later, it'll be clear what was done and why. 

Unless there's a good reason, I would probably not spend as much time getting familiar with the project under discussion as I did this time. I'd still want to link to relevant resources but I think I can do that with a lighter review.

I took some time over setting this up, anticipating that I'll need to facilitate other sessions in future, so the thought will pay off. I don't mean to sound immodest, but I thought this was a storming risk analysis.

Acknowledgements

With thanks to the participants but particularly to João Proença, Patrick Prill, and Cassandra Leung for their advice and inspiration. It was wonderful to be able to start from the great resources that others had already created.
Images: https://flic.kr/p/of25Jf

