I'm not talking about deliberately entering junk content into applications, configurations, data and the like. Nor am I talking about random clicking in a system under test, or about engineering corner cases by artificially restricting disk space, RAM or some other resource, or any of the other legitimate test vectors that benefit from experimenting with the available parameters.
I'm talking about sympathetic testing - or even just plain usage - in an untidy way, to try to increase my test coverage in passing. Here are some examples:
- If your application is in the cloud, maybe don't log out of it when you've finished your current task. Look for odd effects next time, perhaps after you've moved between networks or not accessed the system for a few days.
- If you start an application daemon or service at the console, don't close it immediately, but leave the console open while the system runs. Check in on it now and again as you move between tasks and look for exceptions, unexpected errors and so on.
- If you have an instance of your application open already and want to try something, occasionally open a second instance rather than reusing the first, and watch for effects due to cross-contamination between them.
- At the end of a session, don't close down cleanly. Instead, leave the application in the middle of an interaction - with a dialog open, say - and see what happens when you come back to it.
- If you're testing a web-based application, don't always use the same bookmark to get there. Use your browser history, type into the URL bar and pick up an old URL from autocomplete, or paste a URL from an old email, to find a new starting point and see what the server does with it.
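The "leave the console open" idea can be as simple as starting the service in the foreground, capturing its output, and scanning what has accumulated whenever you check back in. A minimal sketch, where `./myservice` and the log contents are hypothetical stand-ins for your own system:

```shell
# Hypothetical: run your service in the foreground, keeping its output
# visible in the console and mirrored to a file you can grep later:
#   ./myservice --foreground 2>&1 | tee service.log

# Simulate some accumulated output for the sake of a runnable example:
printf 'INFO started\nWARN slow query\nERROR connection reset\n' > service.log

# When you check in between tasks, count the suspicious lines:
grep -cE 'WARN|ERROR' service.log   # prints 2 for the sample log above
```

Nothing here is sophisticated, and that's the point: the cost of leaving the console open and occasionally grepping the log is close to zero, but it can surface exceptions you'd never see in a clean setup-run-teardown cycle.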
People talk about setup, test, teardown sequences, and in some cases that's exactly what's required: you may need to understand your starting context as precisely as possible so that, when running a sequence of test ideas, you can return to that context each time. But your users won't be running inside a sterile laboratory, so make sure you don't always do so either.