

Showing posts from July, 2012

Testing Generally

I sometimes consciously split the functionality I'm testing into two parts: general: behaviour that is the same, or similar, regardless of where it appears, how it is invoked and so on; and specific: behaviour that differs according to function, context, time, data types etc. I'll tend to do this more on larger projects, when the areas are new to me or to the product, when they're complex, when I think the test framework will be complex, when the specific is heavily dependent for its delivery on the general, or perhaps when the specific details are certain to change but the general will be stable. I'll look to implement automation that concentrates first on general functionality and self-consistency, and that will serve as a backstop when I move on to the more specific material. To speed things up, to get wider coverage easily, and to avoid dependencies, I'll try to avoid crafting new test data by looking for data already in the company that can be reused…
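As a minimal sketch of the kind of general backstop test this describes, assuming a hypothetical encode/decode pair standing in for the real product: the property is checked the same way whatever the input is, so data already lying around can be reused rather than hand-crafted.

```python
import json

def encode(record):
    # Hypothetical system under test: serialise a record to text
    return json.dumps(record, sort_keys=True)

def decode(text):
    # Hypothetical inverse operation
    return json.loads(text)

# Reused data: in practice this might be harvested from existing files,
# logs or databases rather than crafted specially for the test.
REUSED_RECORDS = [
    {"id": 1, "name": "alpha"},
    {"id": 2, "name": "beta", "tags": ["x", "y"]},
    {},
]

def test_round_trip_is_identity():
    # General behaviour: decode(encode(x)) == x, regardless of x's content
    for record in REUSED_RECORDS:
        assert decode(encode(record)) == record

if __name__ == "__main__":
    test_round_trip_is_identity()
    print("ok")
```

The specific tests, which depend on what each record actually means to the product, can come later; this general check keeps running underneath them.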

A Clavicle Education

Co-location is intrinsic to some software development and it can also have social benefits, building an esprit de corps and smoothing out the kinds of communication issues that time zones and typing often cause. But, for me, there's another, softer reason why co-location is advantageous: I learn stuff in passing from the natural interactions I have in the course of a working day. When I go to a colleague's desk to ask about some functionality and they pull up source files for inspection, I'm looking at the text, but I'm also interested in the editor they're using, the powerful ways it lets them search and replace, and the fact that it has a plug-in for fancy diffing that I wasn't aware of and can use myself next time. Sitting with a developer as they write code is a welcome insight into the mindset of someone who really knows the nuts, bolts, screws, rivets, nails and other fixings, when my skills, relatively speaking, extend to being able to tell a brad…

Mock the Afflicted

The concept of test doubles is well established in unit testing, with mocking probably the most familiar. The idea is that you fake enough of an API to permit unit tests to run against it. You can control the way that the mock API responds, tailor your test coverage, and avoid external executables, dependencies on other code and so on when running your tests. It can be useful to apply similar concepts at the component level too. For example: for diagnosis and investigation of misbehaviour in complex systems you can replace components in a workflow. I like this especially when the problem behaviour is hard to reproduce naturally. Part of our core product is a server that calls out to other software for some tasks and returns data back to a client. Replacing a server-side executable with something that generates a specific output (e.g. returning an error response to the server, or bad data to the client) or behaviour (e.g. taking so long to respond that the server is forced to time out)…
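A minimal sketch of that component-level replacement, under assumptions: the real server-side executable is imagined to read a request on stdin and write a response on stdout, and the mode names here are invented for illustration, not taken from any real tool.

```python
#!/usr/bin/env python3
# Hypothetical stand-in for a server-side executable. Dropped in place of
# the real tool, it can force specific outputs or behaviours on demand.
import sys
import time

def respond(mode, request):
    """Return (exit_code, response_text) for the chosen failure mode."""
    if mode == "error":
        # Exercise the server's error-handling path
        return 1, "simulated failure"
    if mode == "slow":
        # Sleep past the server's timeout to force its timeout handling
        time.sleep(60)
    if mode == "garbage":
        # Pass bad data through to the client
        return 0, "!!not-the-expected-format!!"
    return 0, "ok"

if __name__ == "__main__":
    mode = sys.argv[1] if len(sys.argv) > 1 else "ok"
    code, text = respond(mode, sys.stdin.read())
    print(text)
    sys.exit(code)
```

Because the double is driven by a simple mode flag, a hard-to-reproduce misbehaviour becomes repeatable on every run, which is exactly what diagnosis needs.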

The Elephant in the Fume

I read my daughter a story last night, about an elephant that thinks she's a mouse. It seems reasonable: her big book of knowledge says mice can be grey with big ears and skinny tails. Note to new testers: specifications are seldom sufficient. Think broadly. She moves in with a mouse family but it doesn't go well. Luckily, Granny Mouse has the wisdom of age and experience, works out what's gone wrong and takes Nelly to the zoo to be with her own kind. Unfortunately, one of the mice then reads Nelly's book and gets the idea he's an elephant. Note to old-hand testers: spread your insight through the team. A few times recently I've found myself talking to less-experienced colleagues and realising that I've segued from the mouse to the elephant, assuming they were following. They weren't, and the discussion got confused. We started off on a small, acute, practical problem of right now and I expanded out to the big, theoretical, potential solution…