The Association for Software Testing is crowd-sourcing a book, Navigating the World as a Context-Driven Tester, which aims to provide responses to common questions and statements about testing from a context-driven perspective.
It's being edited by Lee Hawkins, who is posing questions on Twitter, LinkedIn, Mastodon, Slack, and the AST mailing list and then collating the replies, focusing on practice over theory.
I've decided to contribute by answering briefly, and without a lot of editing or crafting, by imagining that I'm speaking to someone in software development who's acting in good faith, cares about their work and mine, but doesn't have much visibility of what testing can be. Perhaps you'd like to join me?
--00--
"When is the best time to test?"
Twenty posts in, I hope you're not expecting an answer without nuance?
You are? Well, I'll do my best.
For me, the best time to test is when there's a good chance the effort will be worth the expense.
Hold that thought and let's make a simple model of software development as a set of steps, executed linearly:
- Identify a need worth addressing.
- Choose and build a solution for it.
- Release the new feature.
Did I say simple? Over-simple would have been better, but it'll do for today. Around, across, and inside those steps, tasks like these could all be productive and cost-effective testing at different times:
- Explore user data to look for patterns.
- Test a new market for opportunities.
- Analyse telemetry data for potential bottlenecks.
- Suggest pros and cons of potential projects.
- Build models of potential features.
- Look for evidence that the suggested needs are worth addressing.
- Synthesise previous discussion to provide a set of constraints on any solution.
- Prototype solutions to discover likely issues.
- Research similar solutions to identify missing features, failure modes, test ideas, etc.
- Explore the emerging solution, as it emerges.
- Explore the implemented solution in context.
- Compare the capabilities of the implementation to the original needs.
- Try to find new risks introduced by the solution.
- Explore the new feature in production.
- Monitor the behaviour and performance of the new feature.
- Look for evidence that the solution is satisfying the identified needs.
- Explore user data to look for patterns.
That might seem like a lot, but it's not even the half of it. As usual, the context is crucial.
Having said that, I do have some good news for you!
If you're unsure about what risks to investigate, the scale of a potential problem, or the cost of a range of solutions, or if you have any important unknown at all, I'd suggest that the best time to test, all other things being equal, is now.