One of the reasons that software testing is challenging, both intellectually and practically, is that the information about the state of the system under test is partial. It's part of the testing role to formulate a model (or, more usually, a cloud of overlapping, incomplete and contradictory models) that represents our best view of the system at any given time, and we've developed a collection of monochrome boxes that reflect the idea that access to source code can help make sense of it. But even that doesn't equate to an understanding of the model that the software has when it operates. For example:
- The tester may not follow the source code (completely).
- External libraries may implement a substantial part of the functionality but appear minimally in the source.
- Interactions with other layers, such as the operating system for file operations, will form part of the model without being part of the codebase (see the sketch after this list).
- If the source code is compiled, it may be optimised in ways that contradict the tester's understanding.
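To make that third point concrete, here's a minimal sketch in Python; the function and file are hypothetical, invented purely for illustration. The source reads as a complete file write, but whether the bytes actually reach the disk depends on buffering in Python and in the operating system, neither of which appears in any codebase a tester could inspect.

```python
import os

def save_report(path, text):
    # To a source reader this is simply "write the file".
    with open(path, "w") as f:
        f.write(text)
        # At runtime the bytes may still be sitting in Python's buffer
        # or the OS page cache when write() returns; "data safely on
        # disk" is part of the model without being part of this code.
        f.flush()              # push Python's buffer down to the OS
        os.fsync(f.fileno())   # ask the OS to push its cache to disk
```

Reading the two-line version without the flush and fsync wouldn't tell a tester whether a crash immediately after the call could lose the report; that knowledge lives in the layers below the source.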
An aside. A few weeks ago, during heavy rain, I heard a rapid and repetitive thudding on our flat kitchen roof. I assumed it was a drip and when the rain had stopped I got up and had a look. There were two obvious candidates: a join in the guttering between us and next door and a TV aerial pointing slightly below the horizontal. The weather was dry but I know about soak testing, so I poured a bucket of water over the aerial and another into the guttering, which prompted water droplets to form on the joint and fall in a rhythmic way.
I'm no guttering expert (although as a student I once got mistaken for a tramp; that's a different kind of gutter) but I could see that a clip on a plastic band that applied pressure to the two pipes had cracked, opening up the seal. I squirted some sealant into the joint and forced the clip shut.
It broke.
After cursing for a while, I drilled through the band and the guttering, put a bolt through the hole and tightened a nut onto it. Pouring more water in showed no leak, so I put some grease on the nut and bolt to waterproof them for the future me revisiting the cheap and cheerful repair, and made myself a nice cup of tea.
And the point of this DIY yarn? While I was on the roof it occurred to me that my model of the system I was testing and working with was very close to being the system itself. I could touch or visualise the entire thing easily. Sure, there are levels beyond my comprehension - I don't understand the chemical or physical properties of the materials used to manufacture the guttering, the nut and bolt or the clip - but I have general experience of plastics, metals and so on that covers enough of that to give me what I need.
Even considering the wider systems in which this is a small component, I could initially see that there were multiple candidates for the source of the drip and latterly recognise that when it gets wet the bolt might rust, which would make further maintenance more difficult.
That's not to suggest that all software can be reduced to the complexity of a joint between two half-pipes or that all physical things can be analysed simply by looking and interacting - I wouldn't have a chance with the engine in my car, for example. But it is the case that the more of the underlying thing that can be inspected, the less effort is required to create the initial models and the more time can be spent on refining and testing them.
So I'm going to be giving myself some time to think about what we can do to make the model the software I'm testing has of its state - or, more realistically, the sub-models it has of the bits of state of interest at any given time - more available and useful to the testers and other users.
For the record, I noted down my initial thoughts while I was writing this:
- when reporting derived metrics, the raw data should be available too (see the sketch after this list),
- logging should be as complete as possible or, failing that, complete logging (to some sensible level) should be available,
- timestamps in logs from different components should be in step,
- error and warning messages should be precise, clear and informative,
- similar operations on the model should be similar operations in the view,
- similar structures (semantically and/or physically) should have similar realisations in the product,
- naming conventions should be consistent and transparent from the UI through the variables in the code to the model itself,
- any extra reporting must be trustworthy, and the trust should be economic to establish, or else we'll have an additional test burden.
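To make the first and third of these concrete, here's a minimal sketch in Python; the component name and log field names are mine, purely for illustration. The derived metric is reported alongside the raw samples it was computed from, and the timestamp is taken in UTC so that entries from different components can be lined up.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("checkout")   # hypothetical component name

def report_latency(samples_ms):
    # Report the derived metric and the raw data it came from, so the
    # calculation can be checked rather than taken on trust.
    mean_ms = sum(samples_ms) / len(samples_ms)
    log.info(json.dumps({
        # UTC timestamp, so logs from different components stay in step.
        "ts": datetime.now(timezone.utc).isoformat(),
        "metric": "mean_latency_ms",
        "value": round(mean_ms, 2),
        "raw_samples_ms": samples_ms,
    }))

report_latency([12.1, 15.4, 13.9])
```

Keeping the raw samples next to the mean also speaks to the last point: the extra reporting is cheap to verify, so the trust it needs is economic to establish.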
Image: http://flic.kr/p/bpTUr