
Posts

Showing posts with the label Risk

Storming Risk

A couple of weeks ago I was asked to facilitate a group risk analysis for a project. The session would be remote and, having participated in this kind of thing before, there were some things I knew I wanted to avoid: unclear mission, participant overwhelm and underwhelm, and intangible outcome. The tools I had to work with were Miro and Google Meet. Unclear Mission I wanted to minimise time on the call where we were not looking at risk, so I decided to prepare a concise intro to the project, the mission for this session, and our approach to the analysis. I was the facilitator rather than a participant and it's not my project, so I didn't need deep knowledge, but in order to scope the mission I wanted some background. I briefly reviewed the project documentation, got a picture of its status from a couple of people, and proposed to the PO that we review only a slice of it. That slice was one I thought we could plausibly fit into a two-hour session, that would have value to the project ri...

Risk-Based Testing Averse

Joep Schuurkes started a thread on Twitter last week. What are the alternatives to risk-based testing? I listed a few activities that I thought we might agree were testing but not explicitly driven by a risk evaluation (with a light edit to take later discussion into account): Directed. Someone asks for something to be explored. Unthinking. Run the same scripted test cases we always do, regardless of the context. Sympathetic. Looking at something to understand it, before thinking about risks explicitly. In the thread, Stu Crook challenged these, suggesting that there must be some concern behind the activities. To Stu, the writing's on the wall for risk-based testing as a term because ... Everything is risk based, the question is, what risks are you going to optimise for? And I see this perspective but it reminds me that, as so often, there is a granularity tax in c...

Result!

Last night I attended a Consequence Scanning workshop at the Cambridge Tester Meetup. In it, Drew Pontikis walked us through the basics of an approach for identifying opportunities and risks and selecting which ones to target for exploitation or mitigation. The originators of Consequence Scanning recommend that it's run as part of planning and design activities, with the outcomes being specific actions added to a backlog and a record of all of the suggested consequences for later review. So, acting as a product team for the Facebook Portal pre-launch, we listed potential intended and unintended consequences, sorted them into action categories (control, influence, or monitor), chose several consequences to work on, and explored possible approaches for the action assigned to each selected consequence. In the manual there are various resources for prompting participants to think broadly and laterally about consequences. For example, a product can have an effect on people other than its u...

Of What? To Who? When?

Fiona Charles ran a workshop on business risk analysis for my team at Linguamatics last week. Across the day we covered risk-based testing, how it can help with prioritisation, and how it is often overlooked as a factor in test design. We also looked at how the presentation of risks and their potential impact to someone who matters can be a way to engage stakeholders in the testing effort. Hopefully, this would in turn encourage contribution to activities such as test idea generation, triage, and attempts to mitigate risk elsewhere during design and development. Stakeholders often expect a level of testing we can't deliver. (Fiona Charles) The approach to risk assessment that Fiona outlined has some similarity to a pre-mortem. Essentially: assume the system has been implemented, then look for ways in which it could go wrong. It's important to understand who the relevant stakeholders are — they are more than just your users — and to solicit diverse perspectives in you...