Posts

Showing posts from August, 2013

Ask a Stupid Equestrian

I soon decided, when asked to contribute the first A day in the life... column for our new internal company newsletter, that my daily grind in words wasn't very interesting. So I wrote this instead, based heavily on Iain McCowatt's excellent blog post Spec Checking and Bug Blindness (which is part of my team's recommended reading), used here with his kind permission. One day. So this horse walked into a pub. "Why the long face?" asked the chap behind the bar. The horse died a little inside and then said "I've been testing software all day." "Testing?" the barman chortled, "isn't that just making sure the thing does what it's supposed to?" The horse bridled at that and trotted away to an empty table where it picked up a pile of beer mats. Back at the bar it arranged three of them into a trefoil, each mat overlapping both of the others. It looked the barman in the eye. "You can think of it this ...

Bonfire of the Qualities

For the young, the neophyte, there is the enviable clarity of vision, the easy bipartition of almost any argument, the shiny guillotine of truth that demarcates those two polar opposites, right and wrong. For the old, the more experienced, there's the enviable broad field of vision, the easy fragmentation of almost any argument, the dirty Brownian motion of compromise and pragmatism that blurs the distinction between those two already diaphanous, hazy notions, right and wrong. As it is in life, so it is in testing. A significant challenge, I've found, in test management, is to explain why we're going to go ahead despite some issue. However much I might sympathise with the desire not to, however strongly the argument against it is felt, however clear the issue is thought to be, however strongly the argument is being put, if not addressing it now is the right thing to do, I need to say so without giving the impression that I view quality as some kind of Guy Fawkes dummy, ...

Them's the Breaks

After not really thinking about it for a long time, the other week I bumped into a handful of Twitter profiles claiming that their tester owner broke software, and soon after happened across a couple of related discussions (1, 2). Conversations about this old chestnut (see also certification, whether testers should be able to code, testing as art vs science etc) tend to be around whether or not finding issues can be classed as breaking the software or merely as exposing existing flaws. But probably even those who sit at the yeah-smash-it! end of the spectrum would admit that they've found a bug or two that didn't cause the software to perform the dying fly, and maybe even agree that, occasionally, finding the esoteric corner case with a severe outcome may not be as useful as finding half a dozen minor UX issues that improve product workflow for a core user story without changing the basic capabilities or robustness of the software. Even as a young buck (although to be ...