I attended LLEWT 2025 at the weekend. LLEWT is a peer conference hosted by Chris Chant, Joep Schuurkes, and Elizabeth Zagroba in Llandegfan on the island of Anglesey, North Wales. This year's theme was "Rules and constraints to ensure better quality": think of things like WIP limits, zero bug policies, trunk-based development, not allowing any form of interprocess communication except through service interfaces that are externalizable, or just firing all your testers so the devs have to step up. (Yes, not all of these are a good idea all of the time.) Some terms related to this theme are forcing functions, poka-yoke, and behavior-shaping constraints. Basically we're looking for any rule or constraint you put in place to get to better quality. (Some systems thinking might be required.) The format of LLEWT encourages proposals for experience reports on the theme, takes feedback on the proposals, an...
I've been working on an application that will orchestrate data from multiple services. As the developers add clients for those services, they have been writing integration tests and, naturally, many of them use mocked data. Mostly the data consists of non-trivial JSON response bodies full of domain-specific terminology captured during interactions with the other services. Consequently, many of our tests reflect this complexity and domain-specificity by asserting on those data structures and particular terms. This is functionally fine, but problematic for readability because test intent can be hidden in a mass of incomprehensible word salad. Again, this is usually fine for the author when writing the test because the intent is front of mind, but it's problematic for other readers, including the author later. I have been vocal about this drawback and today one of my colleagues asked me to summarise my prescription for it. Without thinking I said this, and I ...
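To make the word-salad problem concrete, here's a minimal sketch of the kind of thing I mean. The service, JSON shape, field names, and helper functions are all hypothetical, invented for illustration; the point is the contrast between asserting directly on a captured response body and asserting through names that state the business fact being checked.

```python
# Hypothetical mocked response body, dense with domain terminology,
# of the sort captured from a real interaction with another service.
MOCK_POLICY_RESPONSE = {
    "policyHolder": {"riskBand": "B2", "underwritingClass": "NON-SMOKER-PLUS"},
    "coverages": [
        {"code": "TPD-ACC", "sumInsured": 250000},
        {"code": "CI-STD", "sumInsured": 100000},
    ],
}


def test_opaque():
    # Functionally fine, but the intent (which business fact matters?)
    # is buried in keys, indices, and jargon.
    assert MOCK_POLICY_RESPONSE["policyHolder"]["underwritingClass"] == "NON-SMOKER-PLUS"
    assert MOCK_POLICY_RESPONSE["coverages"][0]["sumInsured"] == 250000


def has_preferred_underwriting(policy):
    """True when the holder is in the preferred non-smoker class."""
    return policy["policyHolder"]["underwritingClass"] == "NON-SMOKER-PLUS"


def sum_insured_for(policy, coverage_code):
    """Sum insured for the named coverage, or None if it is absent."""
    for coverage in policy["coverages"]:
        if coverage["code"] == coverage_code:
            return coverage["sumInsured"]
    return None


def test_readable():
    # Same checks, but the helper names carry the intent for later readers.
    assert has_preferred_underwriting(MOCK_POLICY_RESPONSE)
    assert sum_insured_for(MOCK_POLICY_RESPONSE, "TPD-ACC") == 250000
```

The helpers cost a few extra lines per test file, but they name the domain concept once, so a reader who isn't steeped in the other service's vocabulary can still see what each test is really asserting.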