Greg Sypolt, Building a Better Tomorrow with Model-Based Testing
As he told us at CAST 2021, Greg's team have built a model-based testing system and integrated it into their continuous integration infrastructure, where it has scaled to exhaustively exercise the 30 or 40 deployment scenarios that each of their products supports.
The models bring advantages such as describing system behaviour in reusable chunks that are independent of implementation details, making maintenance straightforward, and broadening coverage.
Sounds great, and it is, but it comes at a price. Getting buy-in for this kind of approach, from both management and the team, can be tricky, and there's a lot of up-front effort, unfamiliar concepts and technology, and steep learning curves.
The models Greg needs can be quite simple because each product is basically a linear navigation through a sequence of pages. When the system runs, test cases are generated from the models, permutations of cases are created for each environment, and then Cypress and Applitools exercise the systems under test.
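Because each model is essentially an ordered walk through pages, generating cases amounts to enumerating the model's path and crossing it with the target environments. A minimal sketch of that idea in Python (the page and environment names are invented for illustration; in the real system the resulting cases drive Cypress and Applitools):

```python
# A "model" of one product: an ordered sequence of pages to navigate.
# Page names are invented for illustration.
checkout_model = ["landing", "sign-in", "cart", "payment", "confirmation"]

# Environments the cases are permuted across (also illustrative).
environments = ["chrome-desktop", "firefox-desktop", "safari-mobile"]

def generate_cases(model, environments):
    """Cross the model's navigation path with each environment,
    yielding one test case per (environment, path) pair."""
    path = " -> ".join(model)
    return [{"env": env, "steps": path} for env in environments]

cases = generate_cases(checkout_model, environments)
for case in cases:
    print(case["env"], ":", case["steps"])
```

With 30 or 40 deployment scenarios per product, even this trivial cross-product quickly produces a volume of cases that would be tedious to write and maintain by hand, which is part of the appeal.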
Interestingly, the team makes "models" for steps they want to perform, as well as for the system behaviour. This is theoretically questionable but pragmatic: a practical way of, for example, invoking Applitools or waiting for a page load to complete.
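Treating a step as a "model" can be as simple as giving utility actions the same shape as page actions, so a generator can compose them uniformly. A hypothetical sketch (the names are invented; the real steps would invoke Applitools or wait on the browser):

```python
# Hypothetical step "models": utility actions given the same shape as
# page actions so the generator can compose them uniformly.
def wait_for_load(page):
    # Stands in for waiting on page load completion.
    return f"wait-for-load({page})"

def visual_check(page):
    # Stands in for invoking a visual-checking tool such as Applitools.
    return f"visual-check({page})"

def expand_with_steps(model, steps):
    """Interleave each page in the model with the step models,
    producing the full sequence of actions to execute."""
    actions = []
    for page in model:
        actions.append(f"visit({page})")
        for step in steps:
            actions.append(step(page))
    return actions

print(expand_with_steps(["landing", "cart"], [wait_for_load, visual_check]))
```

The theoretical objection is that these are not models of system behaviour at all; the practical payoff is that the machinery for composing and generating from models gets reused unchanged.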
Perhaps more interestingly, there is sufficient confidence in this approach, in this context, that exploratory testing is crowd-sourced, with the team relying heavily on the framework to perform (simulated) mouse, (simulated) keyboard, and visual checks on each component via the model.