I'm lucky that my current role at Ada Health gives me, and the rest of the staff, a fortnightly community day for sharing and learning.
I've done my, erm, share of sharing, but today I took advantage of the learning on offer to attend a workshop on our approach to making medical terminology accessible to non-experts, a presentation on how we manage our medical knowledgebase, another on the single sign-on infrastructure we're using in our customer integrations, and a riskstorming workshop using TestSphere to assess an air fryer.
So that would have been a great day by itself, but I, erm, capped it off by attending Capgemini's TestJam event, to see the keynotes by Janet Gregory and Lisi Hocke.
Janet talked about holistic testing, or the kinds of critical review, discovery, and mitigation activities that can take place at any point in the software development (and deployment, and release) life cycle. The foundation for all of this is good communication and relationships between the people and teams involved, and she often sees testers being the ones to cultivate that.
The key thing about a cycle is that there is no end. Release isn't where we wash our hands of the thing and relax, it's the point at which we can begin to observe what our customers are doing with it, and frame some hypotheses about how we could improve their experience. Testers should be here, framing experiments that feed into the next round of discovery that leads to planning and new features.
Experiments were the focus of Lisi Hocke's talk (slides), an experience report on a company-level experiment she ran to encourage teams to experiment with their own activities.
In a business with a large number of autonomous cross-functional teams, there was a perception that quality was a black box: no common perspective on what it meant, and hard to judge its level. Lisi, and others, co-ordinated an experiment to improve the quality culture in the company, hypothesising that transparency in approach and status, along with the sharing of ideas and techniques, would help to bring teams to a level where each of them had explicit test strategies and could talk about what quality meant to them.
Several teams volunteered and a few of them were selected as participants in a series of workshops which identified pain points, risks, and implicit test strategies. This was followed by the framing and running of experiments, each deliberately focused on improving one thing, with explicit hypotheses and criteria to judge success.
It was a lot of effort but definitely had some positive outcomes: lots of useful conversations, much more awareness of what was possible, and tangible improvements. Unfortunately there were also negatives: silos remained, some people felt inhibited from participating, and there was inertia in the face of change.
In addition, the overall project success criteria were only partially met. This might be acceptable in the first iteration, if not for the fact that the approach taken was so heavyweight that it clearly wouldn't scale. So, a second hypothesis: a leaner process, with less hand-holding and more facilitated peer-based activity, could have the same kinds of outcomes.
Good news, it did! Bad news, COVID hit and other activities were prioritised.
Reflection on the experience still gave some useful learning, including:
- perhaps don't solve the team's problems but instead support the team in solving them
- put effort into making improvement desirable by showing good outcomes
- make the system reward the behaviours you want to see