This month's Lean Coffee was hosted by Abcam. Here are some brief, aggregated comments and questions on the topics covered by the group I was in.
Suggest techniques for identifying and managing risk on an integration project.
- Consider the risk in your product, risk in third-party products, risk in the integration
- Consider what kinds of risk your stakeholders care about, and to whom (e.g. risk to the bottom line, customer data, sales, team morale ...)
- ... your risk-assessment and mitigation strategies may be different for each
- Consider mitigating risk in your own product, or in those you are integrating with
- Consider hazards and harms
- Hazards are things that pose some kind of risk (objects and behaviours, e.g. a delete button, or database corruption)
- Harms are the effects those hazards might have (e.g. deleting unexpected content, or serving incomplete results)
- Consider the probability and impact of each harm, to provide a way to compare them (see the sketch after this list)
- Advocate for the resources that you think you need
- ... and explain what you won't (be able to) do without them
- Take a bigger view than a single tester alone can provide
- ... perhaps something like the Three Amigos (and other stakeholders)
- Consider what you can do in future to mitigate these kinds of risks earlier
- Categorise the issues you've found already; they are evidence for areas of the product that may be riskier
- ... or might show that your test strategy is biased
- Remember that the stuff you don't know you don't know is a potential risk too: should you ask for time to investigate that?
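To make the probability-and-impact bullet concrete, here's a minimal sketch in Python (my choice of language; the group didn't discuss any particular tooling) that scores each harm as probability × impact and ranks the results. The harms and numbers are invented for illustration, not taken from the session.

```python
# Minimal sketch: score each harm as probability x impact and rank them.
# The harms and numbers are invented for illustration.
harms = [
    {"harm": "deleting unexpected content", "probability": 0.1, "impact": 9},
    {"harm": "serving incomplete results",  "probability": 0.4, "impact": 5},
    {"harm": "slow page loads after sync",  "probability": 0.7, "impact": 2},
]

for h in harms:
    h["score"] = h["probability"] * h["impact"]

# Highest score first: a rough basis for comparing harms and deciding
# where mitigation effort might be best spent.
for h in sorted(harms, key=lambda x: x["score"], reverse=True):
    print(f"{h['harm']}: {h['score']:.2f}")
```

A single number like this is deliberately crude, but it gives stakeholders with different concerns a shared starting point for discussing which risks to mitigate first.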
Didn't get time to discuss some of my own interests: How-abouts and What-ifs, and Not Sure About Uncertainty.
Can templates be used to generate tests?
- Some programming languages have templates for generating code
- ... can the same idea apply to tests?
- The aim is to code tests faster; there is a lot of boilerplate code (in the project being discussed)
- How would a template know what the inputs and expectations are?
- Automation is checking rather than testing
- Consider data-driven testing and QuickCheck (see the sketch after this list)
- Consider asking for testability in the product to make writing test code easier (if you are spending time reverse-engineering the product in order to test it)
- ... e.g. ask for consistent IDs for objects within and across web pages
- Could this (perceived) problem be alleviated by factoring out the boilerplate code?
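To illustrate the data-driven and QuickCheck suggestions, here's a hedged sketch in Python using pytest's parametrize for table-driven cases and Hypothesis, a QuickCheck-style property-based library for Python. The normalise_id function is a hypothetical stand-in for product code, not something from the project that was discussed.

```python
# Sketch of data-driven and property-based checks with pytest and Hypothesis
# (a QuickCheck-style library for Python). normalise_id is a hypothetical
# stand-in for real product code.
import pytest
from hypothesis import given, strategies as st


def normalise_id(raw: str) -> str:
    """Example system under test: trim whitespace and lowercase an ID."""
    return raw.strip().lower()


# Data-driven: the boilerplate lives in one test; the cases live in a table.
@pytest.mark.parametrize("raw, expected", [
    ("  ABC-1 ", "abc-1"),
    ("abc-2", "abc-2"),
    ("\tAbC-3\n", "abc-3"),
])
def test_normalise_id_examples(raw, expected):
    assert normalise_id(raw) == expected


# Property-based: Hypothesis generates the inputs; we state a property that
# should hold for all of them (normalising twice changes nothing).
@given(st.text(alphabet=" \tABCabc-0123456789"))
def test_normalise_id_is_idempotent(raw):
    once = normalise_id(raw)
    assert normalise_id(once) == once
```

Parametrising like this is one way of factoring out the boilerplate mentioned in the last bullet, without needing a separate template or code-generation step.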
How can the coverage of manual and automated testing be compared?
- Code coverage tools could, in principle, give some idea of coverage
- ... but they have known drawbacks
- ... and it might be hard to tie particular tester activity to particular paths through the code to understand where overlap exists
- Tagging test cases with e.g. story identifiers can help to track where coverage has been added, but not what the coverage is (see the sketch after this list)
- What do we really mean by coverage?
- What's the purpose of the exercise? To retire manual tests?
- One participant is trying to switch to test automation for regression testing
- ... but finding it hard to have confidence in the automation
- ... because the automation does not show the things that testers naturally see around whatever they are looking at
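As one concrete form of the story-tagging idea, here's a hedged sketch using pytest's custom markers; the marker name and story identifiers are invented, and most test frameworks offer a similar tagging mechanism.

```python
# Sketch: tagging automated checks with story identifiers so runs can be
# filtered and reported per story. The marker name and STORY-* identifiers
# are invented for illustration.
import pytest

# The marker would be registered in pytest.ini (or equivalent), e.g.
#   [pytest]
#   markers =
#       story(id): link a check to a story identifier


@pytest.mark.story("STORY-123")
def test_search_returns_results_for_known_term():
    results = ["widget", "gadget"]  # placeholder for a real search call
    assert "widget" in results


@pytest.mark.story("STORY-456")
def test_empty_search_shows_no_results_message():
    message = "No results found"  # placeholder for a real page check
    assert message == "No results found"

# "pytest -m story" runs only the tagged checks; a small reporting hook could
# then list which stories have automated checks attached, showing where
# coverage has been added (though not what that coverage actually is).
```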
What are the pros and cons of being the sole tester on a project?
- Chance to take responsibility, build experience ... but can be challenging if the tester is not ready for that
- Chance to make processes etc. that work for you ... but perhaps there are efficiencies in sharing process too
- Chance to own your work ... but miss out on other perspectives
- Chance to express yourself ... but can feel lonely
- Could try all testers on all projects (e.g. to help when people are on holiday or sick)
- ... but this is potentially expensive and people complain about being thinly sliced
- Could try sharing testing across the project team (if an issue is that there's insufficient resource for the testing planned)
- Could set up sharing structures, e.g. team standup, peer reviews/debriefs, or pair testing across projects
What do (these) testers want from a test manager?
- Clear product strategy
- As much certainty as possible
- Allow and encourage learning
- Allow and encourage contact with testers from outside the organisation
- Recognition that testers are different and have different needs
- Be approachable
- Give advice based on experience
- Work with the tester
- ... e.g. coaching, debriefing, pointing out potential efficiency, productivity, testing improvements
- Show appreciation
- Must have been a tester