

Showing posts from November, 2016

Mum's the Word

A few weeks ago I put out an appeal for resources for testers who are pulled into live support situations: Looking for blogs, books, videos or other advice for testers pulled into real-time customer support, e.g. helping diagnose issues #testing — James Thomas (@qahiccupps) October 28, 2016 One suggestion I received was The Mom Test by Rob Fitzpatrick, a book intended to help entrepreneurs or sales folk to efficiently validate ideas by engagement with an appropriate target market segment. And perhaps that doesn't sound directly relevant to testers? But it's front-loaded with advice for framing information-gathering questions in a way which attempts not to bias the answers ("This book is specifically about how to properly talk to customers and learn from them"). And that might be relevant, right? The conceit of the name, I'm pleased to say, is not that mums are stupid and have to be talked down to. Rather, the insight is that "Your mom will lie to y

Cambridge Lean Coffee

This month's Lean Coffee was hosted by Abcam. Here are some brief, aggregated comments and questions on topics covered by the group I was in.

Suggest techniques for identifying and managing risk on an integration project.
- Consider the risk in your product, risk in third-party products, risk in the integration
- Consider what kinds of risk your stakeholders care about, and to whom (e.g. risk to the bottom line, customer data, sales, team morale ...)
- ... your risk-assessment and mitigation strategies may be different for each
- Consider mitigating risk in your own product, or in those you are integrating with
- Consider hazards and harms
- Hazards are things that pose some kind of risk (objects and behaviours, e.g. a delete button, and corruption of a database)
- Harms are the effects those hazards might have (e.g. deleting unexpected content, and serving incomplete results)
- Consider probabilities and impacts of each harm, to provide a way to compare them
- Advocate for the r

A Mess of Fun

In The Dots I referenced How To Make Sense of Any Mess by Abby Covert. It's a book about information architecture for non-information architects, one lesson per page, each page easily digestible on its own, each page informed by the context on either side. As a tester, I find that there's a lot here that intersects with the way I've come to view the world and how it works and how I work with and within it. I thought it would be interesting to take a slice through the book by noting down phrases and sentences that I found thought-provoking as I went. So, what's below is information from the book, selected and arranged by one reader, and so it is also information about that reader. Mess: a situation where the interactions between people and information are confusing or full of difficulties. (p. 169) Messes are made of information and people. (p. 11) Information is whatever is conveyed or represented by a particular arrangement or sequence of things. (p. 19)

The Dots

One of the questions that we asked ourselves at CEWT 3 was what we were going to do with the things we'd discovered during the workshop. How would, could, should we attempt to share any insights we'd had, and with whom? One of the answers I gave was that Karo and I would present our talks at Team Eating, the regular Linguamatics brown-bag lunch get-together. And this week we did that, to an audience of testers and non-testers from across the company. The talks were well-received and the questions and comments were interesting. One of them came from Rog, our UX Specialist. I presented a slide which showed how testing, for me, is not linear or strictly hierarchical, and it doesn't necessarily proceed in a planned way from start to finish, and it can involve people and objects and information outside of the software itself. Testing can be gloriously messy, I probably said: His comment was (considerably paraphrased) that that's how design feels to him. We sp

Something of Note

The Cambridge Tester meetup last week was a workshop on note-taking for testers by Neil Younger and Karo Stoltzenburg. An initial presentation, which included brief introductions to techniques and tools that facilitate note-taking in various ways (Cornell, mind map, Rapid Reporter, SBTM), was followed by a testing exercise in which we were encouraged to try taking notes in a way we hadn't used before. (I tried the Cornell method.) What I particularly look for in meetups is information, inspiration, and the stimulation of ideas. And I wasn't disappointed in this one. Here are some assorted thoughts. I wonder how much of my note-taking is me and how much is me in my context? ... and how much I would change were I to move somewhere else, or do a different job at Linguamatics ... given that I already know that I have evolved note-taking to suit particular tasks over time ... further, I already know that I use different note-taking approaches in different conte

The Anatomy of a Definition of Testing

At CEWT 3 I offered a definition of testing up for discussion. This is it: Testing is the pursuit of actual or potential incongruity As I said there, I was trying to capture something of the openness, the expansiveness of what testing is for me: there is no specific technique; it is not limited to the software; it doesn't have to be linear; there don't need to be requirements or expectations; the same actions can contribute to multiple paths of investigation at the same time; it can apply at many levels and those levels can be distinct or overlapping in space and time. And these are a selection of the comments and questions that it prompted before, during and after the event, loosely grouped:

Helicopter view
- it is sufficiently open that people could buy into it, and read into it, particularly non-testers.
- it's accurate and to the point.
- it has the feel of Weinberg's definition of a problem.
- it sounds profound but I'm not sure whether there is a


CEWT is the Cambridge Exploratory Workshop on Testing, a peer discussion event on ideas in and around software testing. The third CEWT, held a week or so ago, had the topic Why do we Test, and What is Testing Anyway? With six speakers and 12 participants in total, there was scope for a variety of viewpoints and perspectives to be voiced - and we heard them - but I'll pull out just three particular themes in my reflection on the event.

Who
Lee Hawkins viewed testing through the eyes of different players in the wider software development industry, and suggested aspects of what testing could be to them. For tools vendors or commercial conference organisers, testing is an activity from which money can be made; for financial officers, testing is an expense, something to be balanced against its return and other costs; for some managers and developers and even testers, testing is something to be automated and forgotten. James Coombes also considered a range of actors, but

Testing All the Way Down, and Other Directions

This is a prettied-up version of the notes I based my CEWT #3 talk on. Explore It! by Elisabeth Hendrickson is a classic book on exploratory testing that we read - and enjoyed - in the Test Team book club at Linguamatics a few months ago. Intriguingly, to me, although the core focus of the book is exploration, I found myself over and again drawn back to a definition given early on (p. 6): Tested = Checked + Explored where, to elaborate (p. 5): Checking [is testing] that you design in advance to check that the implementation behaves as intended under supported configurations and conditions. Exploratory Testing [is] simultaneously designing and executing tests to learn about the system, using your insights from the last experiment to inform the next. And both of these aspects are necessary for testing to have been performed (p. 4-5): ... you need a test strategy that answers two core questions: 1. Does the software behave as intended under the conditions it's supposed t

Testical Debt

So the other day, while listening to Testing in the Pub with Keith Klain, as it happens, a thought that made me chuckle popped into my head. And when, an hour or two later, I was still chuckling, I tweeted it. Testical Debt: the #testing that is prioritised out of a cycle and then later kicks you in the ... well, later really hurts you. — James Thomas (@qahiccupps) October 25, 2016 The tweet format can be a sweet format because the detail is left to the reader's imagination. But I wanted to add a couple of notes. The term is a pun on technical debt: In this metaphor, doing things the quick and dirty way sets us up with a technical debt, which is similar to a financial debt. Like a financial debt, the technical debt incurs interest payments, which come in the form of the extra effort that we have to do in future development because of the quick and dirty design choice. And, while the tweet is funny because we've all felt that kind of pain due to that kind of debt