

Showing posts from July, 2025

Projects I've Bean On

It was my wedding anniversary recently. The picture at the top is the front of the card I got for my wife. Yeah, I know. Somehow she still loves me. I asked my family which bean of the couple they thought I was and everyone chose the one on the left, including me. Surprised, I showed the picture to my work colleagues and they also unanimously went for the left-hand bean. Why? Here are some of the reasons I was given:

- the legs
- it's spilling its drink, like a man would
- the light reflecting off the top of the head
- the man always stands on the left in wedding pictures
- it looks like you

I don't want to go deep into this, especially that last point, so I'll just observe that despite strong surface agreement there was significant hidden misalignment in motivation. Like so many projects I've bean on.

The Best Testing I Could

Maaret Pyhäjärvi posted the quote above on LinkedIn a few weeks ago. It speaks strongly to me, so I asked Maaret if she'd written more (because she's written a lot) on this specific point. She hasn't, but the sentence keeps coming back into my head and I'd like to understand why, so I thought I'd try to write down what I take from it. I think it's easy to skim-read it as some kind of definition of exploratory testing, but that would be a mistake in my eyes. Testing by Exploring summarises how I felt last time I went into the definition in any depth and, for me, Maaret's quote is concerned with the why but says nothing of the what or how. But let's say we have a shared definition of exploratory testing: would I make this statement this baldly and generally? No, I probably would not. Why? First, it's written in very personal terms ("my time", "the best testing I could") and, second, as a contex...

LLEWT 2025

I attended LLEWT 2025 at the weekend. LLEWT is a peer conference hosted by Chris Chant, Joep Schuurkes, and Elizabeth Zagroba in Llandegfan on the island of Anglesey, North Wales. This year's theme was "Rules and constraints to ensure better quality: Think of things like WIP limits, zero bug policies, trunk-based development, not allowing any form of interprocess communication except through service interfaces that are externalizable, or just firing all your testers so the devs have to step up. (Yes, not all of these are a good idea all of the time.) Some terms related to this theme are forcing functions, poka-yoke, and behavior-shaping constraints. Basically we're looking for any rule or constraint you put in place to get to better quality. (Some systems thinking might be required.)" The format of LLEWT encourages proposals for experience reports on the theme, takes feedback on the proposals, an...

Real vs Clear

I've been working on an application that will orchestrate data from multiple services. As the developers add clients for those services, they have been writing integration tests and, naturally, many of them use mocked data. Mostly the data consists of non-trivial JSON response bodies full of domain-specific terminology captured during interactions with the other services. Consequently, many of our tests reflect this complexity and domain-specificity by asserting on those data structures and particular terms. This is functionally fine, but problematic for readability because test intent can be hidden in a mass of incomprehensible word salad. Again, this is usually fine for the author when writing the test because the intent is front of mind, but it's problematic for other readers, including the author later. I have been vocal about this drawback and today one of my colleagues asked me to summarise my prescription for it. Without thinking I said this, and I ...
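The excerpt cuts off before the prescription, but a minimal sketch of the tension it describes might look like the following. It assumes a Python/pytest-style integration test against a mocked JSON response; the domain, field names, and helper function are invented for illustration, not taken from the post.

```python
# Hypothetical mocked response captured from another service: full of
# domain-specific terminology that has little to do with the test's intent.
MOCKED_RESPONSE = {
    "policyRef": "QZ-88231-B",
    "underwritingTranche": "retrocessional",
    "claimsBordereaux": [
        {"cedant": "Acme Re", "lossRatio": 0.42, "status": "OPEN"},
        {"cedant": "Umbrella Mutual", "lossRatio": 1.17, "status": "OPEN"},
    ],
}


def open_claims_over_threshold(response, threshold):
    """Invented client code under test: return open claims whose loss ratio
    exceeds the threshold."""
    return [
        claim
        for claim in response["claimsBordereaux"]
        if claim["status"] == "OPEN" and claim["lossRatio"] > threshold
    ]


def test_real_but_unclear():
    # Functionally fine, but the intent is buried in the word salad:
    # a reader has to reverse-engineer why this exact structure is expected.
    result = open_claims_over_threshold(MOCKED_RESPONSE, 1.0)
    assert result == [
        {"cedant": "Umbrella Mutual", "lossRatio": 1.17, "status": "OPEN"}
    ]


def test_clearer_intent():
    # The same behaviour, but the assertions say what actually matters:
    # only claims whose loss ratio exceeds the threshold come back.
    result = open_claims_over_threshold(MOCKED_RESPONSE, threshold=1.0)
    assert [claim["cedant"] for claim in result] == ["Umbrella Mutual"]
    assert all(claim["lossRatio"] > 1.0 for claim in result)
```

Both tests pass against the same helper; the difference is only in how much of the reader's effort goes into decoding the mock rather than understanding the check.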