"Assume" makes an ass out of u and me, or so the old saw goes. But when we're testing we're always making assumptions. We can't test everything, so we take a view on which areas are least risky and test them least. We're assuming that our risk analysis is reasonable, based on whatever evidence is available. We're up front about it - we may even have been directed to do it - and stakeholders have visibility of it.
Often, though, assumptions aren't so prominent. Perhaps we didn't think they were worth documenting, or perhaps we didn't even know we were making them. The second kind is the more troublesome. We need to be clear in our own minds what we think we are doing and what information we think we are going to get from it.
For instance, if we're testing a new feature and we think it affects components X, Y and Z, we need to be aware that we are restricting our test space, and we need to state that assumption up front, because otherwise no-one will have any reason to tell us that the feature is critical in component W too. Other people will generally assume we know what we're doing, and that we know what they know. (You'll do it yourself too - another assumption to watch out for.)
Or let's say we're running some profiling experiments that generate a lot of data, so we decide to write it to the shared "big disk". Are we thinking about whether that's a network disk, whether other people are accessing it at the same time, what its RAID level is, whether it's NFS or ZFS? Are we thinking about whether any of those things affect the validity of the result we're trying to obtain - for example, whether our setup matches the one the previous figures were gathered on?
If we don't think about and share what is implicit in our testing, we'll end up the ass in our assumptions - u and me.
Image: Juan Gnecco / FreeDigitalPhotos.net