

Showing posts from November, 2015

Happy Not to Know

The current issue of The Guardian's Weekend magazine includes the transcription of a conversation between the authors Marlon James and Jeanette Winterson. This extract struck a chord: MJ: What I find, particularly with young writers and readers, is that they don't want complicated feelings. JW: But they're young. And I feel sympathy with that. I'm happy to not know what I think about stuff; I'm happy to change my mind. But it's relatively recently that I've been able to apply that to feelings. I used to like to know what I felt. I didn't want those feelings to be complicated or muddled or clashing. I've been young (yeah, really, I had hair and everything) and I feel like these days I've made the transition they're talking about in writing, reading, feelings, work and life. And while I see that I also referred to age when I wrote about something similar a couple of years ago, I don't believe the opposition here need be about that, although the expe…

Cambridge Lean Coffee

Yesterday's Lean Coffee was hosted by Jagex. Here's a brief note on the topics that made it to discussion in the group that I was in. Automated testing. A big topic, but mostly restricted this time to the question of screenshot comparison for web testing. Experience reports say it's fragile. Understanding what you want to achieve with it is crucial because maintenance costs will likely be high. Looking to test at the lowest level possible, for the smallest testable element possible, can probably reduce the number of screenshots you will want to take. For example, to check that a background image is visible for a page, you might check at a lower level that the image is served, and assume that browsers are reliable enough to render it, rather than taking a screenshot of the whole page, which includes much more than the simple background image. Why go to a testing conference? It builds your confidence as a tester to find that other people think similar things, m…
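As a minimal sketch of that lower-level check idea: rather than comparing screenshots, ask the server whether the image is actually being served. The URL here is a stand-in for whatever asset your page references, and the decision rule is deliberately separated from the network call so it can be exercised on its own.

```python
from urllib.request import urlopen


def looks_like_served_image(status: int, content_type: str) -> bool:
    """The decision rule: did the server return an image successfully?"""
    return status == 200 and content_type.startswith("image/")


def image_is_served(url: str) -> bool:
    """Fetch the URL and apply the decision rule (makes a network call)."""
    try:
        with urlopen(url, timeout=10) as response:
            return looks_like_served_image(
                response.status, response.headers.get("Content-Type", "")
            )
    except OSError:
        return False


# Hypothetical usage, substituting your application's asset path:
# assert image_is_served("https://example.com/static/background.png")
```

A check like this is cheaper to maintain than a pixel comparison, at the cost of trusting the browser to do its part.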

Means Testing

I've found it interesting to read the recent flurry of thought on the testing/checking question, thoughts referring back to Testing and Checking Refined, by James Bach and Michael Bolton, in which the following definitions for the terms are offered: Testing is the process of evaluating a product by learning about it through exploration and experimentation, which includes to some degree: questioning, study, modelling, observation, inference, etc. Checking is the process of making evaluations by applying algorithmic decision rules to specific observations of a product. The definitions are accompanied by a glossary which attempts to clarify some of the other terms used. These chaps really do sweat the semantics, but if I could be so presumptuous as to gloss the distinction I might say: testing is an activity which requires thought; checking is simply a rigid comparison of expectation to observation. There has been much misunderstanding about the relationship between the t…
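To make that gloss concrete, here's what a check looks like when written down: an algorithmic rule applied to an observation, with no thought required at the point of execution. The page title is a hypothetical expectation invented for illustration; the thinking — deciding that this title, on this page, matters — happened before the check was written.

```python
EXPECTED_TITLE = "Welcome - Example App"  # hypothetical expectation


def check_title(observed_title: str) -> bool:
    """A 'check' in the Bach/Bolton sense: a rigid, algorithmic
    comparison of a specific observation against a stated expectation.
    It returns a verdict; it cannot notice anything it wasn't told to."""
    return observed_title == EXPECTED_TITLE
```

Everything around the check — choosing it, interpreting its failures, wondering what it doesn't cover — is where the testing lives.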

Telling Remarks

The last Cambridge Tester Meetup was a Show and Tell: "Bring whatever you found valuable or interesting recently in your testing." I imagine Karo will blog about it in detail, so I'll just briefly mention the couple of things I took with me. I showed my Kindle and that it's currently got Jerry Weinberg's Tester's Library and Laurent Bossavit's Leprechauns of Software Engineering on it. I described how much I enjoy Weinberg's writing and how much I get from it. I talked about how Bossavit's book is both a deconstruction of some factoids around software development and a guide to critical thinking. I said that I now read more software-related material than I do anything else, and I enjoy it for the most part. But I stop reading anything I'm not enjoying. I read to learn, to make connections, to expose myself to new ideas. I concentrate on the content because it's research and I want to try to understand and retain it. I showed my p…

Can Dour

My face rests just north of miserable. (In a previous blog post I once called it the growler.) Sadly for me, from that starting point just a little puzzlement, thought, irritation or even too much milk in my tea can turn it into an apparent mask of rage. I usually remain blissfully unaware at the time, despite the fact that friends, family and colleagues have all told me about it. Given this, you might think it ironic that my EuroSTAR 2015 talk was about making people laugh, about the analogy I perceive between joke-making and testing, and how I exercise my testing muscles when I'm not testing by exploratory joking. Or perhaps that simply makes you chuckle. Here are the slides: I enjoyed presenting, was delighted with the reaction, and then a little surprised to find myself instinctively asking everyone who spoke to me afterwards whether they felt there was something practical that they could take from it. Dour? For sure, but also a doer. Image: Twitter

The Antics in Semantics

In his EuroSTAR 2015 talk, Iain McCowatt described an ongoing project which involves establishing testing principles for his teams, across projects. In a large organisation in a heavily-regulated environment such as his, this is a really significant challenge, but he described an interesting, pragmatic approach based on Simon Sinek's Golden Circle. While discussing the principles that he and his team arrived at, he said something slowly and with great emphasis: We really sweated the semantics. The principles are important. They are directly tied to, and defined to support, the purpose of testing for his company. They are the foundation for all decisions that will be made about testing in the company. As you might expect given this, he was also careful to define what he means by a principle. For the record, this is it: Principles are heuristics and shared values that guide our thinking about how to test and organize our testing. I enjoyed Michael Bolton's talk, No M…

Weinberg Woz Ere

Jerry Weinberg oozed out of many of the talks at EuroSTAR 2015. He was most visible as the hero at the heart of Kristoffer Nordstrom's keynote, but numerous other talks that I attended, and conversations that I had, directly or indirectly, knowingly or unknowingly, as quote or in spirit, offered some variant of this famous line: No matter what the problem is, it's always a people problem. The observation that in testing we need to remember that we deal with people was a seam running through the conference. A seam from which value can be extracted. A seam containing testing ore. And probably also testing lore, shared around by these speakers, these self-declared conversationalists, coffee cup evangelists, shoe leatherers. These kinds of testers do not view testing as just being heads-down in the product trying to break things (as valuable as that can be). They are out and talking to people, they are recognising (actual or potential) problems outside of the software…

Testing is Simple (and Complicated)

In his keynote at EuroSTAR 2015, Rikard Edgren said many things that resonated with me. This was the one that rang out loudest: Testing is simple: you understand what is important and then you test it. Followed almost immediately by Testing is complicated. Testing as recursion. A simple statement hiding deep complexity. An elegant surface belying the turbulence underneath. This is so beautiful. It put me in mind of fractals such as the Mandelbrot set, where a benign-looking equation, if exercised, generates never-ending, self-similar, ever-finer detail. Searching for related insight, I see that Adam Knight has arrived in a similar place from a different direction. (And be sure to read the comments there for a salutary caution against shallow analogy.) Edit: In the comments on How Models Change, Michael Bolton describes a model of testing as fractal, and Adam later revisited his ideas and spoke about them to the Cambridge Tester Meetup.
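The benign-looking equation behind the Mandelbrot set really is this small: iterate z → z² + c from z = 0, and ask whether z stays bounded. A rough sketch of that membership test, using the conventional escape radius of 2 and a finite iteration cap as an approximation:

```python
def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    """Approximate Mandelbrot membership: iterate z -> z*z + c from z = 0.
    If |z| ever exceeds 2 the orbit escapes and c is not in the set;
    if it stays bounded for max_iter steps we treat c as a member."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True
```

Two lines of arithmetic, yet the boundary between "in" and "out" is infinitely intricate — which is the shape of Edgren's point: a simple rule, endlessly complicated in its consequences.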