Last night I attended Using Artificial Intelligence, a webinar hosted by BCS SIGIST, the Special Interest Group in Software Testing of The Chartered Institute for IT. In different ways, the two presentations were both concerned with practical applications of AI technology.
The first speaker, Larissa Suzuki, gave a broad but shallow overview of machine learning systems in production. She started by making the case for ML, notably pointing out that it's not suited for all types of application, before running through barriers to deployment of hardened real-world systems.
Testing was covered a little, with emphasis on testing the data repeatedly: the incoming data at ingestion, again after each stage of processing, again in the context of the models that are built, and once more when the system is live.
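Purely as an illustration of that per-stage checking, here's a minimal sketch in Python; the columns, thresholds, and pandas-based pipeline are my own assumptions, not anything shown in the talk.

```python
# Illustrative only: check the data on the way in, then again after a
# processing stage. Schema and thresholds are invented for the example.
import pandas as pd

def check_ingested(df: pd.DataFrame) -> pd.DataFrame:
    """Checks applied to raw data as it arrives."""
    assert {"user_id", "amount"} <= set(df.columns), "missing expected columns"
    assert df["amount"].notna().all(), "nulls in amount at ingestion"
    return df

def check_features(df: pd.DataFrame) -> pd.DataFrame:
    """Checks applied again after a processing stage, e.g. scaling."""
    assert df["amount_scaled"].between(0, 1).all(), "feature out of range"
    return df

raw = pd.DataFrame({"user_id": [1, 2], "amount": [10.0, 250.0]})
staged = check_ingested(raw)
staged["amount_scaled"] = staged["amount"] / staged["amount"].max()
features = check_features(staged)
```

The same pattern would repeat around model building and in production monitoring, which is the point Larissa was making: the data gets tested at every hand-off, not just once.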
To finish, Larissa described an emerging common pipeline for getting from idea to usable system, one which highlighted how much pure software engineering there needs to be around the box of AI tricks.
In the second half, Adam Leon Smith walked us through three demonstrations of artificial intelligence tooling with potential application for testing.
He showed us EvoSuite (video, code), a library that, unsupervised, creates unit tests to cover a codebase. There's no guarantee that these are tests a human would have written, and Adam noted a bias towards negative cases, but in some sense the tool captures a behavioural snapshot of the code, which could be used to identify later changes.
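EvoSuite itself generates JUnit tests for Java, but the behavioural-snapshot idea can be sketched in Python as a characterization test: record what the code does today, then fail if a later change alters it. The slugify function and its recorded outputs below are hypothetical.

```python
# A characterization (golden-master) test in the EvoSuite spirit:
# the expected values were captured from the code's current behaviour,
# not derived from a specification of what the behaviour should be.
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

SNAPSHOT = {
    "Hello World": "hello-world",
    "  Spaced   Out  ": "spaced-out",
    "": "",
}

def test_behavioural_snapshot():
    for given, recorded in SNAPSHOT.items():
        assert slugify(given) == recorded, f"behaviour changed for {given!r}"
```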
In the next demo (video, code) Adam trained a model on images of magnifying glasses and used it to identify the search icon on Amazon's home page, an approach that might be used to check for the presence of expected icon types without requiring a fixed gold standard. Finally, he showed how synthetic test data could be generated by AI systems, using as an example thispersondoesnotexist.com, which creates photorealistic images of non-existent people.
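For flavour, here's a minimal sketch of how such an icon check might look; the model file, patch size, and score threshold are assumptions of mine, not details from Adam's demo.

```python
# Illustrative only: slide a window over a page screenshot and ask a
# previously trained icon classifier whether any patch looks like a
# magnifying glass. "search_icon.keras" and "homepage.png" are hypothetical.
import numpy as np
from PIL import Image
from tensorflow import keras

PATCH = 32    # classifier input size (assumption)
STRIDE = 16   # sliding-window step (assumption)

model = keras.models.load_model("search_icon.keras")
page = np.asarray(Image.open("homepage.png").convert("RGB"), dtype=np.float32) / 255.0

hits = []
h, w, _ = page.shape
for y in range(0, h - PATCH + 1, STRIDE):
    for x in range(0, w - PATCH + 1, STRIDE):
        patch = page[y:y + PATCH, x:x + PATCH]
        score = float(model.predict(patch[None, ...], verbose=0)[0, 0])
        if score > 0.9:
            hits.append((x, y, score))

# The assertion is only that an icon of the expected type is present
# somewhere, with no pixel-perfect gold-standard image required.
assert hits, "no search icon found on the page"
```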