When I asked Twitter this:
Anyone know of a course on risk analysis that could work as in-house training for software testers?
It's a topic we're interested in but the stuff I'm turning up is more to do with corporate analysis, register building, mitigation at the business level #testing #risk

Paul Hankin suggested a book, Superforecasting: The Art and Science of Prediction, by Philip Tetlock and Dan Gardner.
As the title suggests, the book is about prognosis rather than peril, but the needs of prediction and risk analysis overlap in interesting ways: understanding possible outcomes, identifying factors that contribute to those outcomes, and weighting the factors and their interactions.
Tetlock and Gardner study forecasting and forecasters. They use a metric called the Brier score to assess the accuracy of forecasts and, over time and with repeated forecasts, it becomes clear that some people tend to make better forecasts than others.
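For readers who haven't met it, the Brier score is essentially the mean squared difference between the probabilities a forecaster gave and what actually happened. Here's a minimal sketch of that formulation in Python; the numbers are invented for illustration, and the book's own scoring uses a closely related variant that ranges from 0 to 2 rather than 0 to 1.

```python
# Brier score: mean squared difference between forecast probabilities and
# outcomes (1 = the event happened, 0 = it didn't). 0.0 is a perfect score;
# always saying "50%" earns 0.25 on this formulation.

def brier_score(forecasts, outcomes):
    """forecasts: probabilities in [0, 1]; outcomes: 1 or 0 per event."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A forecaster gave 90%, 70% and 20% to three events; the first two happened,
# the third didn't.
print(brier_score([0.9, 0.7, 0.2], [1, 1, 0]))  # ~0.047
```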
The Brier score relies on forecasts being made in a testable way, unambiguously and with sufficient specificity that evaluation is possible:
If we are serious about measuring and improving ... [forecasts] must have clearly defined terms and timelines. They must use numbers. And one more thing is essential: we must have lots of forecasts. (Kindle location 939-940)

It's striking that, according to the authors, many pundits appear to offer untestable, ambiguous, mutable predictions, ones that can be found post hoc to fit events. Confidence in delivery is a powerful device for helping the receivers of predictions to find them acceptable, and by the time the situation being assessed has played out no-one cares what was said.
Superforecasters are not like that. They seek out feedback on the outcomes of their predictions in order to improve their powers of prediction. They tend to have other characteristics too, such as:
... thinking that is open-minded, careful, curious, and—above all—self-critical. [Superforecasting] also demands focus. The kind of thinking that produces superior judgment does not come effortlessly. Only the determined can deliver it reasonably consistently, which is why our analyses have consistently found commitment to self-improvement to be the strongest predictor of performance. (335-338)

Through long-term scientific research such as The Good Judgment Project, the authors have been able to tease out techniques that separate superforecasters from merely good forecasters. They summarise them in a helpful appendix of commandments in the book, and I've crunched them down even further here:
- Triage: answer those questions where the return is likely to be worth the investment.
- Quantise: look for more possible outcomes than a simple will or won't; be informally Bayesian and update your expectations as evidence is uncovered (see the sketch after this list).
- Decompose: break big problems down into a sequence of smaller ones, isolate the unknowns, make best guesses where necessary and aggregate the analyses.
- Perspectives: look at the big (external) and small (local) picture; seek and take on board other perspectives but also question them with empathy.
- Judgement: be aware when situations change but don't overreact to them; spend only the right amount of time to collect evidence, make a decision, and justify it.
- Positive negatives: find views that are counter to yours and list their strengths; own your failures and learn from them but don't become biased by them.
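To make the "informally Bayesian" point in the second item a little more concrete, here's a minimal sketch of nudging an estimate when one piece of evidence arrives, rather than flipping between will and won't. The release-slip scenario and all the numbers are hypothetical.

```python
# Updating a probability estimate with Bayes' rule as evidence arrives.

def update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of the outcome after seeing the evidence."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

# Hypothetical: we estimate a 30% chance the release slips. A late regression
# bug turns up; suppose such bugs appear in 80% of slipping releases and 40%
# of on-time ones. The estimate moves up, but not all the way to certainty.
print(round(update(0.30, 0.80, 0.40), 2))  # 0.46
```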
Image: World of Books