Last November I gave a talk, Random Exploration of a Chatbot API, at the BCS Testing, Diversity, AI Conference. It was a nice surprise afterwards to be offered a book from their catalogue and I chose Artificial Intelligence and Software Testing by Rex Black, James Davenport, Joanna Olszewska, Jeremias Rößler, Adam Leon Smith, and Jonathon Wright. This week, on a couple of train journeys around East Anglia, I read it and made sketchnotes. As someone not deeply into this field, but who has been experimenting with AI as a testing tool at work, I found the landscape view provided by the book interesting, particularly the lists: of challenges in testing AI, of approaches to testing AI, and of quality aspects to consider when evaluating AI. Despite the hype around the area right now there's much that any competent tester will be familiar with, and skills that translate directly. Where there's likely to be novelty is in the technology, and the technical domain, and the effect of
Yes. It's bound up in something called a practice language. See Collins: Rethinking Expertise, also Tacit and Explicit Knowledge
Note that prescriptive grammar is not the only kind of grammar.
Descriptive grammar is something every speaker has to have wired into their brains in order to use the language. And if it exists, it's never futile to try to figure it out (even if it turns out as ugly as the standard model).
Also, focusing on writing instead of speaking when discussing natural languages outside the NLP domain is a bit short-sighted (see Pinker's talk for some nice definitions).
Anyway, if testing is a language, does it have a universal grammar?