In Tariq King's extremely interactive workshop at CAST 2022 we got a quick introduction to Artificial Intelligence and Machine Learning as we played with some toy online systems and trained our own model (using Google's Teachable Machine) for recognising people or objects in the room.
We didn't cover testing these kinds of systems specifically, but we applied testing skills when trying to understand which features the model we'd built of workshop participants was using. What's more discriminating: hair, skin colour, proximity to the camera, background, lighting, clothing, something else?
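One cheap way to probe questions like that is occlusion testing: hide one region of the input at a time and watch how the model's confidence moves. Here's a minimal sketch, assuming a Keras-style `model.predict` (which is what a Teachable Machine image-model export gives you) and a float image in `[0, 1]`:

```python
import numpy as np

def occlusion_map(model, image, patch=32, stride=16, fill=0.5):
    """Slide a grey patch over the image; record how much the model's
    top-class confidence drops when each region is hidden."""
    baseline = model.predict(image[np.newaxis])[0].max()
    h, w = image.shape[:2]
    ys = list(range(0, h - patch + 1, stride))
    xs = list(range(0, w - patch + 1, stride))
    heat = np.zeros((len(ys), len(xs)))
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = fill  # hide one region
            conf = model.predict(occluded[np.newaxis])[0].max()
            heat[i, j] = baseline - conf  # big drop => region mattered
    return heat
```

Regions whose occlusion causes the biggest confidence drops are the ones the model is leaning on, and comparing those regions across participants is one way to chase the hair/skin/background question.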
Tariq also pointed to GANs (Generative Adversarial Networks) as tools that should be of interest to testers, since each network in a GAN seeks to model and then challenge the system it is in contest with.
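To make that contest concrete, here's a toy sketch of the adversarial loop (my own illustration in PyTorch, not something from the workshop): a generator learns to fake samples from a one-dimensional Gaussian while a discriminator learns to call out the fakes, each side effectively testing the other.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
real_data = lambda n: torch.randn(n, 1) * 1.5 + 4.0  # "real" samples: N(4, 1.5)

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1),
                  nn.Sigmoid())                                   # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(5000):
    # Discriminator's move: learn to score real samples 1, fakes 0.
    real, fake = real_data(64), G(torch.randn(64, 8)).detach()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator's move: learn to make the discriminator score fakes 1.
    fake = G(torch.randn(64, 8))
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(G(torch.randn(1000, 8)).mean())  # should have drifted towards 4.0
```

The discriminator only gets better because the generator keeps finding its blind spots, and vice versa, which is why the structure looks so familiar to testers.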
Finally we discussed how AI could be of use to testers right now. Tariq gave an example of tooling trained, using computer vision and reinforcement learning, to navigate login pages in general rather than the specific login page of one company's site.
Now, when the company's site changes a little, the tool can likely still log in. This allows the core function to be tested without distraction from surface detail such as images or the positions of text boxes.
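As a crude illustration of the principle (a Selenium heuristic of my own, not Tariq's computer-vision-and-reinforcement-learning tooling): find the login controls by what they are rather than where they sit, so that small layout changes don't break the flow. The URL and credentials are placeholders.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

def find_login_fields(driver):
    """Locate login controls by role, not by hard-coded page structure."""
    password = driver.find_element(By.CSS_SELECTOR, "input[type='password']")
    # Heuristic: the last text/email input on the page usually sits
    # just above the password field on a login form.
    candidates = driver.find_elements(
        By.CSS_SELECTOR, "input[type='text'], input[type='email']")
    return (candidates[-1] if candidates else None), password

driver = webdriver.Chrome()
driver.get("https://example.com/login")    # placeholder URL
username, password = find_login_fields(driver)
username.send_keys("tester@example.com")   # placeholder credentials
password.send_keys("s3cret")
password.submit()                          # submit the enclosing form
driver.quit()
```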
Of course, that surface detail might still be crucial for a human navigating the site, so this approach is not a magic bullet that can dispose of all testing, and it should be used judiciously. However, it is an alternative to the fail-and-heal approach promoted by, for example, CAST sponsor Mabl.
As testers we may be interested to know when login succeeded in spite of some change, and Tariq says the tooling can give us that information too. Further, once we've accumulated data on the kinds of changes we see, we can extend the model by teaching it which of those changes we care about.
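Here's a sketch of what that teaching step might look like, on the assumption (mine, not from the talk) that each observed change is logged as a simple feature record and hand-labelled; the feature names are illustrative only.

```python
from sklearn.ensemble import RandomForestClassifier

# Each observed change as a feature row:
# [pixels_moved, text_changed, colour_changed, element_removed]
changes = [
    [4,  0, 1, 0],   # button nudged and recoloured
    [0,  1, 0, 0],   # label text reworded
    [0,  0, 0, 1],   # password field removed entirely
    [12, 0, 0, 0],   # general layout shift
]
we_cared = [0, 0, 1, 0]  # hand labels: only the removed field mattered

triage = RandomForestClassifier(random_state=0).fit(changes, we_cared)
print(triage.predict([[0, 0, 0, 1]]))  # a removed element should flag: [1]
```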
The same kind of system could in principle be put to use identifying interesting differences between your site and those of your competitors. What are they doing that you are not? That question could be asked at scale, round the clock.
This is powerful and interesting technology, and I am immediately thinking about how I might exploit it to help me explore the systems I test. But I'm also wondering how much testing I'd want to do of the tooling itself, to feel comfortable that it was modelling behaviours I expected and not those I didn't.
Which is not to be hubristic about my own code or the tools I use currently. For sure they will have bugs, and I definitely cannot model them well enough in my head to understand and predict their behavioural envelope completely either.
To finish, here are some of the other sites we looked at during the workshop:
- https://experiments.withgoogle.com/collection/ai
- https://www.craiyon.com/
- https://thispersondoesnotexist.com/
- https://boredhumans.com/text-to-image.php
Image: Tristan Lombard on Twitter