
Posts

Showing posts from November, 2011

Tooled Up

You are a tool. Yes, you read that right: you are a tool. I'm a tool. We're all tools, and we're selected for a job if we're the right tool. Or if there's nobody else available. Or our manager has a dark streak in their nature. (And which one doesn't?) So what kind of tool are you? A hammer? But is everything a nail to you? A chisel? But when there's no carving, you'll be opening tins of paint. A Swiss Army knife? Jack-of-all-tests and owner of none. A slide rule? Pity no-one uses you, except for twanging on their desks. A calculator? Try not to smirk at the slide rule. Blunt and rusty? Sure, you'll get there. If you don't fall apart first. Shiny and sharp? Don't cut yourself on that bleeding edge. You're probably all of the above to some extent (you're a Dremel). And that's OK. The key thing is to remember that you aren't carved in stone (you're an axe), and that it's probably best for your team that you…

Shall We Ask the Magic 8-Ball?

Identifying a technology need is usually pretty easy - your team will complain at every opportunity, however tangential, about how some application is too complicated or is not powerful enough or has a major missing feature or doesn't integrate with other applications or you can't search it or it's too slow or it uses different conventions to the other tools or there was something better at their last job or they just plain don't like it. You'll usually agree. And you'll usually want to wait for a (non-existent, and you know it) better time to think about it, because introducing a new technology can be time-consuming, hard work and risky. Eventually events will overtake you. When that happens, I start by drawing up a list of application-specific requirements, prioritised of course, and then add this basic set of parameters that I want to compare across any candidate tools: user community: is it active? how is the tool viewed? support: forums, bug databases…
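The excerpt cuts off part-way through that list of parameters, but scoring candidates against prioritised criteria lends itself to a simple weighted matrix. A minimal Perl sketch, with weights, parameters and scores all invented purely for illustration:

    use strict;
    use warnings;

    # priorities for each comparison parameter (all values invented)
    my %weight = (
        'user community' => 3,
        'support'        => 3,
        'integration'    => 2,
        'speed'          => 1,
    );

    # scores out of 5 for each candidate tool (also invented)
    my %score = (
        'Tool A' => { 'user community' => 4, 'support' => 2, 'integration' => 5, 'speed' => 3 },
        'Tool B' => { 'user community' => 3, 'support' => 4, 'integration' => 2, 'speed' => 5 },
    );

    # weighted total for one candidate
    sub total {
        my ($tool) = @_;
        my $sum = 0;
        $sum += $weight{$_} * $score{$tool}{$_} for keys %weight;
        return $sum;
    }

    # print the candidates, best first
    printf "%s: %d\n", $_, total($_)
        for sort { total($b) <=> total($a) } keys %score;

The numbers matter less than being forced to write the priorities down before anyone falls in love with a particular candidate.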

I'm no Developer but..

.. I can still be happy about a little script I wrote to sanity-check the sort order of a list of files returned by a server. I didn't want to hard-code all the test data, in order to reduce maintenance and to benefit from a bit of extra coverage over multiple executions. So I started with a list of file names with potentially interesting alpha-sort properties, e.g. a, aa, a0, a1, etc., then shuffled that randomly to generate two arrays. The first is creation order. The position of a file name in the second governs the file size. I then use the same arrays to verify the results that come back from the server when I ask for the enumerations of sorted files. Here's a pseudocode extract of the Perl script:

    # shuffle comes from the standard List::Util module
    use List::Util qw(shuffle);

    # create array of file names
    my @files = qw/ 00 01 1 1.1 1.2 2 20 A Z a a0 a1 aa z /;

    # randomly shuffle to govern creation and size order
    my @file_creation_order = shuffle @files;
    my @file_size_order     = shuffle @files;

    # create files, 1 per second for timestamp differences…
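The excerpt stops at the file-creation comment. A sketch of how the rest of the script might go; create_file() is a local helper written here, and get_sorted_names() stands in for whatever the real server API was, so both are assumptions rather than the author's code:

    use strict;
    use warnings;
    use List::Util qw(shuffle first);
    use Test::More tests => 3;

    my @files = qw/ 00 01 1 1.1 1.2 2 20 A Z a a0 a1 aa z /;
    my @file_creation_order = shuffle @files;
    my @file_size_order     = shuffle @files;

    # position in @file_size_order governs the file's size in bytes
    sub size_rank {
        my ($name) = @_;
        return 1 + first { $file_size_order[$_] eq $name } 0 .. $#file_size_order;
    }

    # local helper: write a file of the given size
    sub create_file {
        my ($name, $size) = @_;
        open my $fh, '>', $name or die "open $name: $!";
        print $fh 'x' x $size;
        close $fh;
    }

    # create files, 1 per second for timestamp differences
    for my $name (@file_creation_order) {
        create_file($name, size_rank($name));
        sleep 1;
    }

    # expected enumerations, derived from the same arrays
    my @by_name = sort @files;
    my @by_date = @file_creation_order;
    my @by_size = sort { size_rank($a) <=> size_rank($b) } @files;

    # compare with what the server returns; get_sorted_names() is
    # hypothetical and would call the real server API
    is_deeply [ get_sorted_names('name') ], \@by_name, 'sorted by name';
    is_deeply [ get_sorted_names('date') ], \@by_date, 'sorted by date';
    is_deeply [ get_sorted_names('size') ], \@by_size, 'sorted by size';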

The Power of Fancy Plain Text

In The Power of Plain Text, Catherine Powell wrote that we should send plain text in emails so that it can be easily re-used. And she's right. But you can go further. My company uses MediaWiki for internal documentation, so I write emails in plain text with wiki markup (e.g. * for bullets) to make later pasting into the wiki straightforward. In fact, I write all of my plain text notes and bug reports in that format too, for the same reason. I use a wiki markup extension for Emacs and wiki2xhtml to render the MediaWiki content as HTML for local viewing or easy printing. Plain text might be Cinderella these days, but it can still go to the ball.
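For example, an email drafted that way stays perfectly readable as plain text and pastes into the wiki unchanged. An invented sample, using standard MediaWiki markup:

    == Build 1234 smoke test ==
    Result: '''pass''', with two caveats:
    * export to PDF is slow (around 30s) on the large sample document
    * search index rebuild logged a warning, noted on [[Known Issues]]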

Dark Matters

We all know about Black Box Testing and we've all heard the debates about it versus White Box Testing and we all once wondered idly whether there's such a thing as Grey Box Testing and weren't particularly surprised, or bothered, when we found out that there is. Wherever you fall on this greyscale spectrum (and the right answer is wherever you think the best result will be for a given piece of work) you'll definitely have had days, or weeks, when you're Black Hole Testing. Those are the projects where progress and then time, and then you, are sucked into a heavy and dark place from which no amount of effort can get you over the event horizon. When you're in the middle of Black Hole Testing, your boss will doubtless be exhorting you to work harder, longer, faster, better, cleverer and weekends. His or her targets were the first things vacuumed up by the void, and they've now dropped out of an Einstein-Rosen Bridge several months in the future…

Boiling Point

Boil it down to concise and precise. Say the key stuff first, contextual stuff later. Use formatting (e.g. emphasis, bullets and tables) only when it aids clarity. Do it in your bug reports, especially, but it can apply to every communication you have with your colleagues and customers. They will notice and they will thank you for it. You will be pestered less for clarification and further information. You, your team and your company will function that little bit better each time.
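An invented bug report summary in that spirit, key stuff first and context later:

    Title: Export to PDF crashes the app when the document contains footnotes
    Impact: any user exporting footnoted documents; no workaround found
    Steps: open the attached footnotes sample, then File > Export > PDF
    Observed: application exits with no error dialog; crash log attached
    Context: build 1234 on Windows 7; build 1230 exports the same file cleanly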

The Appliance of Art

We're recruiting Senior Testers at the moment and one of the recent candidates said several times during interview that testing is a science. This is not a new topic in testing and his stance on it is not universally shared. Try The Software Entomologist or IM Testy or Randy Rice or consider the title of Myers' classic, The Art of Software Testing. For what it's worth, I probably agree that testing is a science. Primarily, the basic methodology of testers is also the basic methodology of scientists: formulate a model of the subject and then experiment to verify whether the model is a good fit with reality. It's indisputable that some testers have a nose for finding bugs, a gut feeling about risky areas, something in their bones that keeps them worrying away at a seemingly minor issue until they expose the major flaw, an intuition about where or how or when to poke the application under test in just the right way to cause it to break, or break in a spectacular…

Out of The Mouths of Babes

My youngest daughter is just starting to talk but it's hard to understand her when every word sounds like every other word and you have to repeat it and correct it and point at things to work out what she's just said. She's also starting to sing and, while she knows loads of songs, she doesn't know loads of lyrics so she just makes the closest sound she can to something like the tune. But, unlike her talking, the context gets her through and it's mostly easy enough to pick up the song and then you can sing along and over time she gets closer and closer to the words. When I'm trying to implement something, if I do it bit by bit, designing each component to completion before the next, it's hard to get right. I have to repeat things and correct things and when I try to demo it to people, they can't work out what it is I'm trying to do. If I try to do it end-to-end, not worrying too much about completeness but just trying to get one path from…

Here Come The Warm Tests

Brian Eno famously developed a deck of cards to help himself think freely under pressure. His insight was that when you're in a stressful situation, or you're working too hard, or you've run out of ideas for solving a problem, your patterns of thinking become rigid and constrained. Each card contains one sentence designed to steer thinking in unexpected ways. He calls these Oblique Strategies and examples include "Use an old idea", "State the problem in words as clearly as possible", "What would your closest friend do?", "Try faking it!" and "Work at a different speed". Hmm. I wonder whether any testers ever get stressed about deadlines or requirements or build stability or the number of defects they're seeing or the massive number of regressions that "easy and safe" one-line change caused or the number of permutations that new dialog with 20 check boxes is going to have or the number of questions they can always think of when they start to think about…
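The excerpt is cut off before any testing-specific cards appear, but drawing a random prompt is trivial to automate. A tiny Perl sketch seeded with the five Eno examples quoted above; a tester's deck would swap in its own prompts:

    use strict;
    use warnings;

    # the example strategies quoted above; replace with tester-specific prompts
    my @cards = (
        'Use an old idea',
        'State the problem in words as clearly as possible',
        'What would your closest friend do?',
        'Try faking it!',
        'Work at a different speed',
    );

    # draw one card at random when thinking has gone rigid
    print $cards[ rand @cards ], "\n";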

Seems To Isn't Does

I've got a paddock full of hobby horses and I'll jump onto one at the slightest provocation. One of the sleekest, through constant exercise and a healthy diet of suppressed anger, is the one where somebody tells me that something seems to work. It really gets my goat, to mix ungulate metaphors. Don't tell me that it seems to work. "Seems to" isn't "does". And in any case "does work" is only an approximation of the kind of answer I'll want most of the time. Don't tell me that it seems to work. What that means to me is that you don't know whether the thing does what it's supposed to but you prodded it, probably with noddy data, and the behaviour you observed wasn't completely contrary to your expectations, such as they were. Don't tell me that it seems to work. I'll just think that you were looking over the developer's shoulder while they ran through the same single test case they've used to develop…