Posts

Showing posts from December, 2011

A Gradual Decline into Disorder

I like to listen to podcasts on my walk to work and I try to interleave testing stuff with general science and technology. The other day a chap from Cambridge University was talking about entropy and, more particularly, the idea that the natural state of things is to drift towards disorder. Entropy: "Historically, the concept of entropy evolved in order to explain why some processes (permitted by conservation laws) occur spontaneously while their time reversals (also permitted by conservation laws) do not; systems tend to progress in the direction of increasing entropy." In an ironic reversal of its naturally confused state, my brain spontaneously organised a couple of previously distinct notions (entropy in the natural world and the state of a code base) and started churning out ideas: Is the development of a piece of code over time an analogue of entropy in the universe? Could we say that as more commits are made, the codebase becomes more fragmented and any orig...
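A toy sketch of the analogy (mine, not the podcast's): if we treat the spread of commit activity across files as a distribution, Shannon entropy gives one crude "disorder" score - activity smeared evenly over many files scores higher than activity focused in a few. The commit history below is hypothetical illustrative data.

```python
# Toy sketch: Shannon entropy of commit activity across files.
# The commit history here is hypothetical illustrative data.
from collections import Counter
from math import log2

commits = [
    {"core.py"},
    {"core.py", "util.py"},
    {"ui.py"},
    {"core.py", "ui.py", "util.py"},
]

touches = Counter(f for commit in commits for f in commit)
total = sum(touches.values())
# Entropy is maximal when every file is touched equally often.
entropy = -sum((n / total) * log2(n / total) for n in touches.values())
print(f"{entropy:.2f} bits of 'disorder' across {len(touches)} files")
```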

The Ass in the Assumptions

"Assume, and make an ass out of u and me" is how the old saw goes. But when we're testing, we're always making assumptions. We can't test everything, so we take a view on the least risky areas and test them least. We're assuming that our risk analysis is reasonable, based on whatever evidence is available. We're up front about it - we may even have been directed to do it - and stakeholders have visibility of it. However, assumptions often aren't prominent. This may be because we didn't think it was worth documenting them, or because we didn't even know we were making them. The second set is the more troublesome. We need to be clear in our own minds what we think it is we are doing and what information we think we are going to get from it. For instance, if we're testing a new feature and we think it affects components X, Y and Z, we need to be aware that what we're doing is restricting our test space and we need to state that assumption up...
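One cheap way to stop assumptions going undocumented is to make them first-class in the test code itself. A minimal sketch, assuming a home-grown decorator and registry rather than any particular test framework:

```python
# Minimal sketch: record assumptions alongside the tests that rely on
# them, so they are stated up front and visible to reviewers. The
# decorator and registry are hypothetical, not tied to any framework.
ASSUMPTIONS = []

def assumes(text):
    """Attach an explicit, reportable assumption to a test."""
    def wrap(fn):
        ASSUMPTIONS.append((fn.__name__, text))
        return fn
    return wrap

@assumes("the new feature only affects components X, Y and Z")
def test_feature_in_x_y_z():
    pass  # the deliberately restricted test space

# Report the assumptions so stakeholders have visibility of them.
for name, text in ASSUMPTIONS:
    print(f"{name}: ASSUMES {text}")
```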

The U In User

I ask my test team to be proxy users. They can and do report issues over, above, outside, around, across and regardless of any spec, convention or any other consideration if they feel that customers will find it an issue. To help to keep the team in step with customer thinking, all QA engineers are watchers on support traffic. (This has other benefits too, but that's for another time.) When you're doing this, especially in exploratory work, you have to be careful not to confuse your user with your self. You just won't have a single model user and, even if you did, the chances of it or any particular user having your own set of behaviours and prejudices are small (although obviously your behaviours and prejudices are the optimal set of behaviours and prejudices to have). So, if you're navigating the product and you like to use short cuts, don't just use short cuts. Take the slow route with mouse clicks too because some of your users will. Hover over all the ...
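In UI automation terms, that means driving the same scenario by more than one route. A hedged sketch with Selenium - the URL, page and element locators are hypothetical placeholders, not from any real product:

```python
# Sketch: exercise the same action via the tester's fast route (keyboard
# shortcut) and the slower route some users take (mouse clicks through a
# menu). The URL and element IDs are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

def save_via_shortcut(driver):
    driver.find_element(By.TAG_NAME, "body").send_keys(Keys.CONTROL, "s")

def save_via_menu(driver):
    driver.find_element(By.ID, "file-menu").click()
    driver.find_element(By.ID, "save-item").click()

for route in (save_via_shortcut, save_via_menu):
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.test/editor")  # hypothetical app under test
        route(driver)
        assert "Saved" in driver.find_element(By.ID, "status").text
    finally:
        driver.quit()
```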

Every Second Counts

You're busy and you'd like to squeeze some extra time out of the day without spending more time in the office? You should look for ways to make small repetitive tasks more efficient. I'm not talking about test automation (although that's certainly something you should be open to and looking for) but the kinds of things you do all day, every day, probably without even thinking about them. You almost certainly swap between applications under test, test harnesses, email clients, editors, command lines, word processors, spreadsheets, bug trackers, browsers and so on tens or hundreds of times a day. If you're a mouse jockey you probably spend a few seconds mousing to the task bar, clicking on the next application, mousing back up and clicking in the application to get focus. Did you know that Alt-Tab throws up a quick task switcher? You probably have to edit and run scripts at the command line. Do you find yourself repeatedly opening an editor, editing a script, c...
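The edit-run loop itself is a candidate for shaving: rather than switching windows to re-run a script after every save, a small watcher can re-run it for you. A minimal sketch using only the standard library; the polling interval and default script name are arbitrary choices of mine:

```python
# Sketch: re-run a script every time it is saved, trimming the
# edit / switch window / re-run cycle down to edit-and-save.
import os
import subprocess
import sys
import time

script = sys.argv[1] if len(sys.argv) > 1 else "myscript.py"  # hypothetical default
last_mtime = 0.0
while True:
    mtime = os.path.getmtime(script)
    if mtime != last_mtime:  # the file was saved since we last ran it
        last_mtime = mtime
        print(f"--- running {script} ---")
        subprocess.run([sys.executable, script])
    time.sleep(0.5)  # cheap polling; inotify-style watchers also work
```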

Sympathy for the Dev Team

We had to update a mature regression test suite recently. The task was effectively moving a set of tests from a source data set that was no longer supported to its replacement. For product reasons this wasn't a trivial undertaking, but we could anticipate most of the problems and decide in advance on a strategy that would make the migration as straightforward as possible. Unfortunately, as the source had been in the test system for several years, dependencies had built up around it. Dependencies that were reasonable, pragmatic choices at the time, or forced short-cuts to meet a deadline, or just plain wrong - but dependencies nonetheless. Of course, as soon as we retired the old source data, a selection of apparently unrelated tests started falling over. Because this is the real world, our remedial action has only removed the dependencies in some cases and just made the suites run again in others. We're filing bug tickets fo...
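One defensive pattern (my sketch, not what we actually did) is to give a shared data set a single, named point of indirection, so retiring it is a one-line change rather than an archaeology project. The names and paths below are hypothetical:

```python
# Sketch: tests depend on a logical name, not a concrete data set, so
# retiring the old source means re-pointing one entry instead of hunting
# down every suite that quietly grew a dependency on it.
DATA_SETS = {
    "reference_corpus": "/data/corpus_2011",  # was /data/corpus_2007
}

def data_set(name):
    """Resolve a logical data-set name to its current concrete location."""
    return DATA_SETS[name]

def test_search_returns_hits():
    corpus = data_set("reference_corpus")
    # ... run the search against `corpus` and assert on the results ...
    assert corpus
```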