Testing is like making love... it never lasts as long as you want it to. Testing is like making love... once it's over you're the only one interested in discussing how it went. Testing is like making love... well, I'm sure you get the idea.
But if testing isn't really much like making love, what is it like? Non-testers can imagine that testing software consists of nothing more than trying a couple of examples and then declaring that "it seems to work OK". Finding a simple analogy for the testing process could help to dispel that notion.
Which is where word searches come in. Every year, to show what a great guy I am, I spend as little time as possible thinking up some test-related game for a team meeting in late December and force the team to play along. (And they love me for it. Or, at least, they know I'd sack the first one to protest.) One year we taste-tested stollen, mince pies and panettone; last year it was proofing a bogus Christmas card from QA to the other teams (complete with the Santa James above); and this year I generated a festive word search, supposedly from our Marketing team to our customers, for the team to review.
While they were eating panettone and pretending to enjoy the task until I was out of earshot, I went to make some coffee for them (you see, I really do care) and started thinking about the validity of the word search as an analogy for feature testing. The spec is the list of words; it defines everything that must be present in the feature. The rules of word searches are the implicit specs, the conventions and other internal constraints on software such as the UX guide. The finished grid and the list are the delivered software. Testing is a matter of taking what's been delivered and investigating, for example:
- whether the feature contains everything that the spec said it should - are all the words in the list to be found in the grid in legal configurations?
- whether the feature contains anything that the spec didn't say it should - are there other legal and plausible words in the grid (e.g. words that fit the theme of the word search)?
- whether the feature violates any of the implicit rules - are there any non-letters in the grid?
- whether the feature is what the user would want - do the characters that are not part of the list words spell out profanities?
Just like in real life, you can take different testing approaches and they'll have trade-offs in terms of time and effort vs issues found, say:
- bottom-up - starting with each cell and looking in each direction for each of the spec items
- top-down - taking a spec item and scanning for letters that might be in it
- heuristic - looking for common letter patterns and seeing whether they are part of the intended words
- random - just straight eyeballing and hoping to spot something positive or negative
- automated - writing a tool to identify the spec items in the grid and look for other common words too (sketched below)
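For fun, here's a minimal sketch of that last, automated approach in Python. The grid, words and function names are all invented for illustration; it automates the bottom-up scan (every cell, all eight directions, every spec word) and also checks the implicit "letters only" rule:

```python
# A sketch of the "automated" approach: check a word search grid against
# its spec (the word list) and one implicit rule (letters only).

DIRECTIONS = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
              if (dr, dc) != (0, 0)]  # the eight legal directions

def find_word(grid, word):
    """Return (row, col, (dr, dc)) for the first placement of word, or None."""
    rows, cols = len(grid), len(grid[0])
    for r in range(rows):
        for c in range(cols):
            for dr, dc in DIRECTIONS:
                # Would the word run off the grid in this direction?
                if not (0 <= r + dr * (len(word) - 1) < rows
                        and 0 <= c + dc * (len(word) - 1) < cols):
                    continue
                if all(grid[r + dr * i][c + dc * i] == word[i]
                       for i in range(len(word))):
                    return (r, c, (dr, dc))
    return None

def check(grid, spec_words):
    # Spec check: is every listed word present in a legal configuration?
    missing = [w for w in spec_words if find_word(grid, w) is None]
    # Implicit-rule check: does the grid contain only letters?
    non_letters = [(r, c) for r, row in enumerate(grid)
                   for c, ch in enumerate(row) if not ch.isalpha()]
    return missing, non_letters

if __name__ == "__main__":
    grid = ["SANTA",   # SANTA across the top
            "NELFB",   # ELF hiding in the middle
            "OQWER",
            "WXYZA"]   # SNOW runs down the first column
    missing, non_letters = check(grid, ["SANTA", "ELF", "SNOW"])
    print("missing words:", missing)         # []
    print("non-letter cells:", non_letters)  # []
```

The other questions above are the same scan with a different list: run it against a dictionary of themed words to find plausible extras, or against a deny-list to catch profanities in the leftover letters.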
Again, just like in real life, while you can convince yourself that all of the spec items are present (or not), it's much harder to convince yourself of the acceptability of the whole. (I like Iain's take on testing against spec in Exploring Uncertainty.)
So testing is like a word search, then? Perhaps, but do you have something better?