I paired with my friend Vernon last week. He mentioned it in a blog post afterwards:
"Watching my colleagues Lisi and James work is like watching wizards cast spells."
He's very kind, and I do like a pointy hat, but there was no sorcery involved, simply intentful exploratory testing.
What do I mean by that?
In this case, I mean that we started with a question and looked for ways to find information that would help us to answer it.
This particular question was wide open because we didn't have a specific oracle: can we find examples where the output of the system under test might be problematic?
What did we do?
CODS: Control, Observe, Decompose, Simplify.
We could use the debugger to trace execution to a few pivotal functions and see what the application was doing (control, observe), but that became tiresome after a while.
So we hacked the source code a little (conceptually, simplify) so that variables of interest were dumped to the console as the application ran. We added tags to the print statements to make the output easy to search for (control) and to differentiate the functions (decompose).
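To give a flavour, sketched in shell because I won't reproduce the application's real source here, and with the tag and variable names invented:

raw='{"id": 42}'                              # stand-in for a variable the real code holds
printf '[CODS:parse_input] raw=%s\n' "$raw"   # the tag names the function the line came from

One tag per function means one search per question later on.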
Then we ran the application.
At the command line we set up a terminal split into three windows. In the first window we had a tool that generates test data, exercises the application, and dumps output and various metrics (control, observe, simplify).
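Something along these lines, although gen_test_data, ./app, and the flags are invented stand-ins for the real tools:

$ gen_test_data --runs 100 > input.txt    # make a batch of test data (hypothetical tool)
$ ./app < input.txt 2>&1 | tee app.log    # exercise the app, keeping its console output
$ wc -l app.log                           # one crude metric: how much it said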
In the second window we used a pipeline of bash utilities to watch the application's console output and search for lines containing the tags we added (observe).
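Assuming the first window tees the console output to a file, as above, that watcher can be as small as:

$ tail -f app.log | grep --line-buffered 'CODS:'   # follow the output, keep only our tagged lines

(The --line-buffered flag stops grep from sitting on matches while we watch.)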
In the third window we used a different pipeline to search the same console output for error messages (observe).
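The error watcher was a similar one-liner; the search terms here are guesses at typical failure markers rather than the exact ones we used:

$ tail -f app.log | grep --line-buffered -iE 'error|exception|fail'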
And?
And then we were in a position to kick off a run — a user journey, if you like — and see what its inputs and outputs were, while also watching in real time for data of interest from inside the application itself.
As we observed behaviours that looked interesting we were able to tweak our runs to focus on them, or take data generated by this temporary test rig and view it in an editor, or parse it for patterns using other bash utilities, or cross-reference it with other data sources.
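As a flavour of that parsing, with the tag, field position, and file name all invented, a frequency count is often enough to surface a pattern worth chasing:

$ grep 'CODS:parse_input' app.log | awk '{print $2}' | sort | uniq -c | sort -rn | head

This pulls out the second field of each tagged line, counts how often each value occurs, and shows the most common ones first.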
Sounds cool!
Yes, and also fun! This is intentful exploratory testing, using automation to help us find places to look for information that guides us in framing new or more focussed questions and hypotheses.
This stuff doesn't come for free though. As Vern said on the day, this approach, and the tools used to apply it, were gathered over time, through repetition, through experimentation, through reflection, and through an open, learning mindset.
That last point is key: a learning mindset. Learning about the problem, about the problem space, about the way we're approaching it, about the tooling we're using to approach it, about ways to test, and, and, and...
To me there are no wizards here, just two perpetual apprentices of their craft.
Image: Filmaffinity