
Showing posts from November, 2012

Lancet Then Lance It

Short arms and deep pockets, that's me. Not that I'm a total skinflint (although I'll admit to thrifty) but I do like to try to get good value out of everything that I do. Sometimes this can be planned and sometimes I get it by exploiting existing resources. With test automation, one way to get value is to write code that can do double duty: as a surgical tool, crafting data and scenarios for those very specific checks, and, by simply multiplying it up, as a long stick to whack the application with. For example, I might think about writing test code to run these ways, as a second-order priority:

parallel: Look to run multiple instances of the tests in parallel without them interfering with one another. Write data to unique locations per instance of the test to ensure this. Check that any data that the suites are dependent on is not altered in the execution of the suite.

serial: Look to run the same suite multiple times against the same application in series, perhaps
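The parallel case above can be sketched in a few lines. This is a minimal illustration, not any particular framework's API: the `unique_workspace` and `checksum` helpers are hypothetical names, showing one way each test instance might get its own data location, and how shared fixture data could be fingerprinted to confirm a run didn't alter it.

```python
import hashlib
import tempfile
import uuid
from pathlib import Path

def unique_workspace(base: Path) -> Path:
    """Give each parallel test instance its own directory so runs can't collide."""
    ws = base / f"run-{uuid.uuid4().hex}"
    ws.mkdir(parents=True)
    return ws

def checksum(path: Path) -> str:
    """Fingerprint shared fixture data, to compare before and after a suite runs."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

# Two parallel instances get distinct sandboxes to write into...
base = Path(tempfile.mkdtemp())
a, b = unique_workspace(base), unique_workspace(base)

# ...and a fixture both depend on can be verified as untouched afterwards.
fixture = base / "fixture.csv"
fixture.write_text("id,value\n1,42\n")
before = checksum(fixture)
# ... suite executes here, reading the fixture but writing only to its sandbox ...
after = checksum(fixture)
```

The same suite, pointed at the same application with a fresh workspace each time, gives the serial case for free.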

Brain Food

You're reading a blog about testing (cheers!). I read them too, and in addition I try to find cheap ways to expose myself to material outside of the area as well. I'm looking for easy routes into content I might be interested in, that might be relevant to me or the technology I use or that my company develops, teach me something, provide a useful resource, make a connection, spark a new thought or sometimes just make me chuckle. An efficient and productive way to do this, for me, has been to find a few trusted guides, guides that report regularly, with reliable quality and the breadth that I'm looking for, who consistently point me at content I wouldn't otherwise have come across. Here's a couple: If I only read one blog in a day, it'll be Four Short Links . One of the posts this week led me to Ray Dalio's Principles  (PDF)  in which he enumerates a couple of hundred rules that he applies to life and, particularly, management. These are extracted from

Testing, 1, 2

The idea that rapid review of potentially large volumes of data can be facilitated by the right kind of visualisation is reasonably common these days. Charting has come on in leaps and bounds in recent times and blink testing attaches a name to the idea that we can quickly scan even relatively raw data for anomalies. Even at a basic level we've probably all flicked back and forth between browser windows or, in the old days, overlaid two printouts and held them up to a bright light to look for differences. But this isn't the only way that data can be reported. Or the only way that data can be reviewed. A Geiger counter is effectively a testing probe that reports audibly and leaves the operator's senses of sight and touch free for other tasks, such as operating the tool or analysing additional stimuli. Humans have a well-developed capacity for distinguishing small variations in pitch, tone, duration and so on. In most software applications sound is not key and so we
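The audible-probe idea can be sketched very simply: map a metric stream onto pitch, so that an anomaly becomes a jump you could hear without looking. This is an illustrative toy, not a real sonification library; the `value_to_pitch` helper and the mapping ranges are assumptions, and it only computes frequencies rather than playing audio.

```python
def value_to_pitch(value, lo=0.0, hi=100.0, f_min=220.0, f_max=880.0):
    """Linearly map a reading in [lo, hi] to a frequency in [f_min, f_max] Hz."""
    clamped = max(lo, min(hi, value))
    return f_min + (clamped - lo) / (hi - lo) * (f_max - f_min)

readings = [50, 51, 49, 50, 95, 50]  # one spike hiding in steady data
pitches = [value_to_pitch(v) for v in readings]

# A spike in the data is a large pitch interval between successive tones,
# obvious to the ear even if the raw numbers scroll past unseen.
jumps = [abs(b - a) for a, b in zip(pitches, pitches[1:])]
```

Fed to a tone generator, the steady readings would hum around one note while the spike leaps most of an octave and back.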

The J in Irk

One of the risks associated with being a tester is getting a jaded view of your application. For instance, spending a lot of time in the product for testing purposes can expose you disproportionately to the issues that have been triaged away to the future in the expectation that customers won't encounter them (often, for now). It's easy to take the skewed exposure and obtain a skewed perspective. Alternatively, it's also undesirable, if understandable, that, when looking for errors and finding them, and then retesting their fixes, and then reretesting their refixes, aspects of the application, the process, your teammates, your version control system and the like will begin to grate. You should do everything you can to keep a lid on these kinds of frustrations, and carefully choose your moment to bring them up. It's right to care about the AUT but it's not right, or productive, to let your gripes out at every opportunity. One of the strengths of the best tester