When new staff join Linguamatics they get a brief intro to each of the groups in the company as part of their induction. I give the one for the Test team and, as many of our joiners haven't worked in software before, I've evolved my spiel into a sky-high view of testing, how we service other parts of the company, and how anyone might take advantage of that service, even if they're not developing software.
This takes a whiteboard and about 10 minutes, and I'll then answer questions for as long as anyone cares to ask. Afterwards we all go our separate ways happy (I hope) that sufficient information was shared for now and that I'm always available for further discussion on that material or anything else.
I mentioned this helicopter perspective to Karo Stoltzenburg when she was thinking about when and how to draw a testing/checking distinction in her Exploratory Testing 101 workshop for TestBash Manchester. I was delighted that she was able to find something from it to use in her session, and also that she produced a picture that looks significantly nicer than any version I've ever scrawled as I talked.
Karo's picture is above, her notes from the whole session are on the Ministry of Testing Club, and below is the kind of thing I typically say to new staff ...
What many people think software testing is, if they've ever even thought for a second about software testing, goes something like this:
- an important person creates a big, thick document (the specification) full of the things a piece of software is supposed to do
- this spec gets given to someone else (the developer) to build the software
- the spec and the software are in turn given to another person (the tester) who checks that all the things listed in the spec are in the software
In this view of the world, the tester takes specification items such as "when X is input, Y is output" and checks the software to see whether Y is indeed produced when X is put in. The result of testing in this kind of worldview probably looks like a big chart of specification items with ticks or crosses against them, and if there are enough crosses in important enough places the software goes through another round of development.
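To make that ticks-and-crosses picture concrete, here's a minimal sketch in Python. The function convert() is entirely made up, standing in for whatever the software under test does, and the spec item is the hypothetical "when X is input, Y is output":

```python
def convert(value):
    # Placeholder standing in for the real product code, so the
    # example runs; in reality this would be the software under test.
    return "Y" if value == "X" else None


def check_spec_item_x_gives_y():
    # The check is a single closed question with a yes/no answer:
    # does inputting X really produce Y?
    return convert("X") == "Y"


if __name__ == "__main__":
    # One more tick (or cross) for the big chart.
    print("tick" if check_spec_item_x_gives_y() else "cross")
```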
While checking against the specification can be an important part of testing, there's much more that can be done. For example, I want my testers to be thinking about input values other than X, I want them to be wondering what other values can give Y, I want them to be exploring situations where there is no X, or where X is clearly invalid, or where X and some other input are incompatible, or what kinds of things the software shouldn't do and under what conditions ...
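Purely as an illustration of those wider questions, here's another hedged sketch against the same made-up convert(): none of these inputs comes from a real specification, they're just the sort of thing a tester might try in order to see what happens and then ask whether it seems reasonable.

```python
def convert(value):
    # The same hypothetical stand-in for the product code as above.
    return "Y" if value == "X" else None


# Inputs the spec never mentions: case variants, whitespace, nothing
# at all, "nearly X", the wrong type ...
awkward_inputs = ["x", " X ", "", None, "XX", 0]

for value in awkward_inputs:
    try:
        # There's no single "right" answer to assert against here; the
        # interesting question is whether the observed behaviour looks
        # reasonable for the product and its users.
        print(f"convert({value!r}) -> {convert(value)!r}")
    except Exception as error:
        print(f"convert({value!r}) raised {error!r}")
```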
That's all good stuff, but there's scope for more. I also want my testers to be wondering what the motivation for building this software is, who the users are, and whether the specification seems appropriate for a software product that meets that need for those users. I'd also expect them to think about whether the project and the team (including themselves) are likely to be able to create the product, given that requirement and specification, in the current context. For example, is there time to do this work, is there resource to do this work, do the team have sufficient tooling and expertise, are there other dependencies, ...?
Even assuming the spec is appropriate, the context is conducive to implementation, and the team are not blocked, you won't be surprised to find that more possibilities exist. I'd like the tester to be alert to factors that might be important but which might not appear in the specification at all. These might include usability, performance, or integration considerations, ...
For me, one of the joys of testing is the intellectual challenge of identifying the places where there might be risk and wondering which of those it makes sense to explore with what priority. Checking the specification can certainly be part of testing, but it's not all that testing is: there's always more.
Image: Jimmy Cricket, Karo Stoltzenburg