Last weekend I was talking Testability at DEWT 9. Across the two days I accumulated nodes on the mind map above from the presentations, the facilitated discussions, and the informal conversations. As you can see it's a bit dense (!) and I'll perhaps try to rationalise or reorganise it later on. For now, though, this post is a brain dump based on the map and a few other notes.
--00--
Testability is the ease with which the product permits itself to be tested (its intrinsic testability)
- ... but also factors outside of the product that enable or threaten its testing (so extrinsic)
- ... meaning that people, knowledge, opportunity, time, environments, and so on can be part of testability.
Desired outcomes of increasing testability might include
- easier, more efficient testing and learning
- better feedback, risk identification and mitigation
- building trust and respect across project teams
The term testability can be unhelpful to non-testers
- ... and also to testers (!)
- ... and so consider casting testability conversations in terms of outcomes
- ... and business value.
Actions that might be taken with the intention of increasing testability include
- changing the product (e.g. logging, control functions)
- collecting information (e.g. stakeholder requirements, user desires)
- applying tooling (e.g. for deployment, log analysis)
- acquiring expertise (e.g. for the customer domain, your own product range)
- obtaining more time (e.g. by moving deadlines, cutting low priority tasks)
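To make the first of those actions concrete, here is a minimal, hypothetical sketch of "changing the product" to add a control function: a time-of-day greeting whose clock can be injected. The function and its parameter are invented for illustration, not from the conference material; the point is that a small seam like this lets a tester control an otherwise flaky input.

```python
from datetime import datetime, timezone

def greet(name, now=None):
    """Return a time-of-day greeting.

    `now` is a testability seam: production callers omit it and get
    the real clock; a test passes a fixed datetime to control the
    output instead of racing the real time.
    """
    now = now or datetime.now(timezone.utc)
    period = "morning" if now.hour < 12 else "afternoon"
    return f"Good {period}, {name}"

# A test can now pin the clock:
fixed = datetime(2019, 1, 1, 9, 0, tzinfo=timezone.utc)
assert greet("DEWT", now=fixed) == "Good morning, DEWT"
```

The same pattern (an optional injectable dependency) applies to logging hooks, feature toggles, and other control points.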
Situations in which testability might be decreased include
- an attempt to increase testability backfiring (e.g. introducing bugs, wasting time on useless tools)
- losing motivation (e.g. because of poor working conditions, ill health)
- being asked to test to an unreasonable standard (e.g. to great depth in little time, in unhelpful circumstances)
- recognising that the existing test strategy misses important risks (e.g. there are previously unknown dependencies)
Blockers or challenges to testability might include
- only talking to other testers about it
- an inability to persuade others why it would be valuable
- bad testing
- previous failures
When requests for testability are denied, consider
- getting creative
- going under the radar
- finding allies
- the main mission
It might be appropriate to sacrifice testability when
- adding it would risk the health, safety, or commitment of the team, product, or company
- trading one kind of testability against another (e.g. adding dependencies vs increasing coverage)
- no-one would use the information that it would bring
- another term will be more acceptable and help to achieve the same goal
- another action for the same or lower costs will achieve the same goal
- a business argument cannot be made for it (this point may summarise all of the above)
Intrinsic changes for testability are features
- ... and should be reviewed alongside other requested product changes
- ... in whatever process is appropriate in context (of the product and the proposed change)
- ... by whoever is empowered to make those kinds of decisions.
Extrinsic changes for testability are less easily categorised
- ... but should still be reviewed by appropriate people
- ... to an appropriate level
- ... in relevant process for the context.
Unsurprisingly, there are some things that I want to think about more ...
I found intrinsic and extrinsic testability a useful distinction but too coarse because, for instance, I haven't distinguished factors that influence testability and factors that require testability.
Although it was easy to agree on intrinsic testability, there was less agreement on the existence or extent of extrinsic testability and no clear boundary on extrinsic testability for those who are prepared to accept it. I'm not sure that matters for practical purposes but definitional questions interest me.
There was consensus that we should sshhh! ourselves on testability and instead describe testability issues in terms of their impact on business value. Unfortunately, characterising business value is not itself an easy task.
--00--
The participants at DEWT 9 were: Adina Moldovan, Andrei Contan, Ard Kramer, Bart Knaack, Beren van Daele, Elizabeth Zagroba, Huib Schoots, James Thomas, Jean-Paul Varwijk, Jeroen Schutter, Jeroen van Seeter, Joep Schuurkes, Joris Meerts, Maaike Brinkhof, Peter Schrijver, Philip Hoeben, Ruud Cox, Zeger van Hese.
Material from DEWT is jointly owned (but not necessarily agreed with) by all of the participants. Any mistakes or distortions in the representation of it here are mine.
Thank you to the Association for Software Testing for helping to fund the event through their grant programme.
Other material referenced or posted during the conference:
Comments
It would help to see specific examples. Especially good to see examples of testability of applications (as opposed to infrastructure).
Also worth pointing out that testability is not the new shiny. Testability is a great tool/aspect. You still need the hard work of testing. It might be good to highlight that in the outcomes. Testability *improves* risk identification; *improves* building trust.
My notes on that are in Hard to Test