Ben Simo, Computer-Assisted Testing
Ben kicked off CAST 2021 with a brief history lesson, tracing the use of the term checking as a tactic in software testing back to at least Daniel McCracken’s Digital Computer Programming from 1957 and forward to his own recent model. Checking, for him, is a confirmatory activity, focusing on knowns and attempting to demonstrate that what was known yesterday has not changed today.
Checking need not be performed by machine, but it’s a common target for automation because it comes with a straightforward decision function: the previously-known state. In fact, for many this is all that “test automation” is or can be, and numerous regression test frameworks exist to facilitate that kind of work.
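To make that concrete, here is a toy check in Python. This is my own sketch, not anything Ben showed, and the function and values are invented purely for illustration; the decision function is nothing more than a previously-known answer.

```python
# A toy check: confirm that what we knew yesterday still holds today.
# The function and expected value are hypothetical, for illustration only.
def total_price(quantity, unit_price):
    return quantity * unit_price

# Yesterday we established that 3 items at 2.50 cost 7.50; check it still does.
assert total_price(3, 2.50) == 7.50
```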
Ben would, I think, reject both the term and the limited thinking about where computer tooling can be valuable for testers. In his model, testing is an exploratory, investigative activity, focused on unknowns, providing challenges to the system under test and processing its responses in ways that make human assessment of them tractable.
I find myself less bothered about the term, enjoying the model a great deal, and very interested in breaking through the idea that automation, and tooling more generally, has nothing to contribute to exploration.
The remainder of Ben’s talk was a tour de force of ways in which he has built tools that assisted testers in their testing. These were not one-size-fits-all utilities, but were chosen to suit the context in which their need arose.
In one example, the tool asked the tester for the constraints they were interested in imposing on a system under test, generated test data that fitted, ran with that data, looked for anomalies, and collected trace data to aid diagnosis. In a second, a similar starting point resulted in the creation of a test environment.
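Something along these lines, sketched in Python with entirely made-up constraints and a stand-in for the real system, gives a flavour of that first approach; none of the names or rules below come from Ben's tool.

```python
# A sketch of constraint-driven test data generation, not Ben's actual tool:
# the tester supplies constraints, the script generates data that fits, runs
# it against a stand-in system, flags anomalies, and keeps traces for diagnosis.
import random

# Hypothetical constraints a tester might supply: field name -> (min, max).
constraints = {"quantity": (1, 100), "discount_percent": (0, 50)}

def generate_case(constraints):
    """Generate one input that satisfies the supplied range constraints."""
    return {field: random.randint(lo, hi) for field, (lo, hi) in constraints.items()}

def system_under_test(case):
    """Stand-in for the real system; returns a response plus trace data."""
    total = case["quantity"] * (100 - case["discount_percent"]) / 100
    return {"total": total, "trace": f"computed {total} from {case}"}

anomalies = []
for _ in range(1000):
    case = generate_case(constraints)
    response = system_under_test(case)
    # An example anomaly rule: totals should never be negative.
    if response["total"] < 0:
        anomalies.append((case, response["trace"]))  # keep the trace for later

print(f"{len(anomalies)} anomalies found")
```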
For another project, the tool ran the same sets of keystrokes on multiple deployments of a product in different tech stacks. There were no asserts, simply a comparison of the responses. If they differed, there might be a problem, and a human was asked to investigate. As patterns emerged, and permissible differences were identified, the system was enhanced with heuristics that would suppress requests for human inspection.
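Here's a rough sketch of that comparison idea, again not Ben's code: hypothetical regexes describe the kinds of difference we have learned to tolerate, and only unexplained differences are escalated to a human.

```python
# Replay the same inputs against several deployments, diff the responses, and
# suppress differences matching known-permissible patterns before asking a human.
import re

# Hypothetical heuristics: fragments known to differ harmlessly between stacks.
PERMISSIBLE = [
    re.compile(r"\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}"),  # timestamps
    re.compile(r"server-[a-z0-9]+"),                     # host identifiers
]

def normalise(response):
    """Blank out fragments that are allowed to differ."""
    for pattern in PERMISSIBLE:
        response = pattern.sub("<ignored>", response)
    return response

def responses_agree(responses):
    """True if all deployments gave the same response after normalisation."""
    return len({normalise(r) for r in responses.values()}) == 1

# Example: the same keystrokes replayed against two stacks.
responses = {
    "stack_a": "OK 2021-08-12T10:00:01 served by server-a1",
    "stack_b": "OK 2021-08-12T10:00:03 served by server-b7",
}
if responses_agree(responses):
    print("No meaningful difference detected.")
else:
    print("Responses differ; please investigate:", responses)
```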
Other approaches built and explored state models; implemented sophisticated logic in a log aggregator by creating derived columns and querying over them; and harvested production data to drive load tests.
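To give a flavour of the state model idea (my toy example, with a made-up login model rather than anything from the talk), a simple walk can flag transitions that the model says should not happen:

```python
# A toy state model exploration: the states, transitions, and "observed"
# behaviour are all hypothetical; a real tool would drive and observe a system.
import random

# Hypothetical model of a login flow: state -> states that may legally follow.
MODEL = {
    "logged_out": ["logging_in"],
    "logging_in": ["logged_in", "logged_out"],
    "logged_in": ["logged_out"],
}

def observed_next_state(current):
    """Stand-in for observing the real system; here it just picks at random,
    so some of its transitions will violate the model."""
    return random.choice(list(MODEL))

state = "logged_out"
for _ in range(20):
    nxt = observed_next_state(state)
    if nxt not in MODEL[state]:
        print(f"Unexpected transition: {state} -> {nxt}")
    else:
        state = nxt
```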
Even if this kind of material were not right in my wheelhouse (see e.g. Exploratory Tooling or Exploring to the Choir) I'd still see Ben's examples as powerful illustrations of the value of context-driven testing by a skilled practitioner.