On Friday, in the weekly ensemble I have with some of the medical
practitioners at work, I suggested we take a look at the
first challenge from The Testing Map.
We all spend a lot of time testing, although the others focus more on the domain side than the pure software side. That focus was reflected in the kinds of checks they tended towards: input strings that have obvious human semantics but which, for a seasoned software tester, would probably fall into a single equivalence class.
While exploring some of those strings we stumbled into security testing with the input "I don't know": the apostrophe led the testing challenge to credit us with an attempted SQL injection. From there we were able to talk about script and HTML injection, and that slid into opening the developer tools in our browser and poking around the source, network traffic, cookies, and so on.
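To see why the apostrophe gets flagged like that, here's a minimal sketch assuming the kind of naive string interpolation that makes injection possible; the query, table, and column names are invented for illustration, not taken from the challenge.

```typescript
// Assumed scenario: a server builds its SQL by pasting raw input into the query text.
const answer = "I don't know";

// Interpolating the raw input lets the apostrophe close the SQL string literal early.
const unsafeSql = `SELECT id FROM answers WHERE text = '${answer}'`;
console.log(unsafeSql);
// SELECT id FROM answers WHERE text = 'I don't know'
//                                          ^ the literal ends here; "t know'" dangles
//                                            where attacker-controlled SQL could sit

// The conventional fix is a parameterised query: the SQL text carries only a
// placeholder and the value travels separately (placeholder syntax varies by driver;
// "?" is shown purely for illustration).
const safeSql = "SELECT id FROM answers WHERE text = ?";
const params = [answer];
console.log(safeSql, params);
```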
The consensus when debriefing was that coverage can be improved by looking at the system under test from multiple perspectives. This led to an interesting discussion about bias. One of my colleagues made the point that we risk missing issues because of unknown unknowns, and he's right: we do.
But he also suggested that we can't overcome the unknown unknowns because of our biases towards the things we do know. On this topic, I think he's wrong. For me, we can give ourselves a better chance of spotting the unknown unknowns by exposing ourselves to more data.
For example, opening the developer tools when testing a web application doesn't mean I know there's an issue or that I'll be able to recognise one if it presents itself, but it gives me a chance to make observations and comparisons across time and context and data. If there are ten fields in a form and clicking in one of them provokes a message "Here!" in the browser console, I don't need to be an expert in anything to notice the anomaly and ask whether it's intentional, desirable, or required.
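For illustration, the sort of leftover debug code that could produce a message like that might look something like the sketch below; the field id, the handler, and the "Here!" text are all hypothetical.

```typescript
// Hypothetical leftover debug code: a focus handler on one form field that
// logs to the browser console. Nothing shows in the page itself, but anyone
// with the developer tools open can see the message and start asking questions.
const field = document.querySelector<HTMLInputElement>("#date-of-birth");

field?.addEventListener("focus", () => {
  // A developer's breadcrumb that was never removed.
  console.log("Here!");
});
```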
A large part of exploratory testing is intentionally looking in
different ways to afford ourselves the opportunity to discover things that
might be interesting, finding ways to review the data that is generated,
choosing which observations to pursue, and when, and for how long, and pulling
in other expertise at an appropriate time. These are generic skills: they can be learned, and they're valuable in pretty much any line of work.
Image: https://flic.kr/p/4ufwAb