Tuesday, January 15, 2019

The Value in Values

The testers at Linguamatics decided to explore the adoption of a set of team values and this short series of posts describes how we got to them through extended and open discussion.

If you find the posts read like one of those "what I did in the holidays" essays you used to be forced to write at school then I'll have achieved my aim. I don't have a recipe to be followed here, only the story of what we did, in the order we did it, with a little commentary and hindsight.
  • Introduction
  • Why?
  • Teasing Them Out
  • Living With Them
  • Reflection

Our team provides testing services to other teams in the company, in their contexts. That means we cover a selection of products, domains, and technologies across several development groups, operations, professional services projects, our internal compliance process, and more.

In terms of methodology, we are in permanent Scrum teams, we join time-bounded projects set up to implement a particular feature or satisfy a particular customer need, and we work day-to-day with groups whose priorities and resources are subject to change at very short notice.

In spite of the varied nature of our assignments it's historically been our desire to maintain strong team bonds and an information-sharing culture and so we've engineered some formal and informal opportunities to do that.

Amongst other things, each week we have a catch-up with some kind of presentation (such as a feature, a tool, an approach), we have a daily stand up (roughly: prefer outcomes over outputs, share issues, ask for data or help), and we have a tradition of optional, opportunistic, 5-10 minute overviews on topics that are potentially interesting right now but too deep for stand up.

We also have a regular team retrospective in which we allow ourselves to discuss pretty much anything about the way we work. It tends to stay out of project specifics — because they'll be discussed within the projects — but recent topics have included dedicating time to shortening the run time of a particular test suite to enable developers to get faster feedback from it, creating a specific type of virtual machine for us to share, and reviewing how we schedule work.

At the start of 2018, a retro topic that I proposed after hearing Keith Klain speak at Quality Jam 2017 was voted up. In his talk, Keith said that one of the things he likes to see in a team is a shared understanding of the important factors that frame how they work. Based on that, I asked: should we establish a set of team values, principles, or a mission statement?

The resulting discussion generated enthusiasm. And questions, naturally. They included:
  • What do we want to create: a defined mission? principles? values?
  • ... and how do these things relate to one another?
  • It shouldn't be too low-level; it should apply across teams, projects, and so on.
  • It shouldn't be restrictive or prescriptive; there should be flexibility.
  • It should be a framework for decision-making, not a decision-maker.
  • Do we really need anything different to the company values?
  • Do we want it to change the way we work, or encapsulate the way we work?
  • Do we want others in the company to see it?
  • ... and might it change how others see us?

None of us had ever tried to externalise group values before so we began by researching what others had done. Here's a few examples from within the testing space:

Some of these were published after we started so didn't have as much chance to influence what we did. Iain McCowatt's Principles Not Rules was inspiring to me, but is unavailable as I write this. It's such strong material that I've left the links in the list above in the hope that it'll come back. Small comfort: I saw his talk on the same topic at EuroSTAR 2015 and a handful of my notes are here.

Outside of testing, in development and more generally, we looked at pieces like these:

Closer to home, we observed that our company has some useful data to contribute: our corporate values published on the internal wiki, and a set of informal values that are regularly called out verbally at all-hands meetings.

Finally, we looked to see whether values are encoded implicitly in our tester job adverts, which include lines like these:
  • We strive to provide relevant information to stakeholders and we're flexible about how we do it.
  • We use and we eagerly solicit peer review, we’re open to new ideas, and we perform regular retrospectives to help us improve ourselves and our work.
  • Our company respects what we do, and we’re a core part of our company’s work and culture.
  • Linguamatics is active in the local testing community, regularly hosting meetups and Lean Coffee.
  • We have regular in-house training in testing and other skills.
  • If you get the job you will be expected to
  • ... take responsibility for your work,
  • ... apply intelligence and judgement at all times,
  • ... be able to justify your position and be prepared to discuss alternatives,
  • ... look for ways to improve yourself, your work, the team and the company.

To summarise how we started down this road, then:
  • We wondered if we should think about making our implicit shared values explicit.
  • We discussed it, and decided that we'd give it a go.
  • We did some research to see what was out there, and what we already had.

In the next few posts I'll describe how we moved from this point to a set of values that we can agree on as a team.
Image: https://flic.kr/p/oGMUQ

Wednesday, January 2, 2019

Sweet Fifteen

What is the right number of tests? Which tester hasn't been asked that question many times in one form or another? When will the testing be done? Can you test to make sure this works? How much effort would it be to test that? Can you show that performance has improved? We need to shorten the run time of the automated tests, can you remove some? How many test cases are passing?

What is the right number of tests? According to Matalan, I found out over the Christmas holiday, the sweet spot appears to be fifteen:
We check our garments at least 15 times to ensure they meet your expectations on quality. 

Fifteen. It'd be easy to scoff, wouldn't it? Testing is never done, testing can never be complete, testing doesn't ensure anything, testing is brainwork, testing is an art, I tell you, it's an art!

Now, don't get me wrong: I love the theory, the philosophy, the abstract. I can be as up my own backside about testing as the next person. (And I am. I cite this blog as evidence.) But I also recognise that we work in a world where rubber, and risk, is constantly hitting the road. We are at the sharp end. The decisions we make under the constraints we have at any given time can matter. We also sometimes need to be able to provide genuine answers to questions like the ones at the top, when they're asked genuinely.

So I don't scoff (these days). I take the jarring statements and questions as an opportunity for a thought experiment. For example: what might be meant by the claim that Matalan are making? What real-world conditions could motivate the need to make such a claim? What kind of evidence could be used to back the claim up, were it ever challenged, and to what extent does that matter?

Which is why, while stalking round the store as my daughters hunted for new jumpers and a DVD to while away a couple of in-law hours that afternoon (after a family vote we ended up with The Pirates! In an Adventure with Scientists!) I found myself asking questions like these:
  • is the claim about every type of garment, regardless of its complexity? A sock gets the same attention as a three-piece suit?
  • is this a claim about some garments, some types of garment, every instance of a garment?
  • what are "our garments"? Those made by Matalan, those sold by Matalan, something else?
  • is it the same fifteen tests every time?
  • what even is a test of a garment? Are all tests equal? At all stages of manufacture, delivery, display, ...?
  • whose expectations are being satisfied?
  • who is likely to read this poster, on the outside of a store in a small out-of-town estate?
  • where else is the claim being made?
  • how is satisfaction being judged?
  • what is meant by quality? And how is it measured?
  • is the poster addressing a business need? Maybe potential customers are put off by perceptions of low quality?
  • is fifteen a marketing number based on data? Maybe in focus groups, people feel more confident with fifteen than fourteen or sixteen?
  • is fifteen, or perhaps the wording or phrasing, based on psychological research? Is the advert tuned to achieve its aim?
  • could a plausible number really be as low as fifteen? Surely hundreds of checks are made during design, prototyping, trials, ...
  • is this advert itself part of some A/B test? Are others seeing a different claim elsewhere?

Yes, yes, you say, very smart and all that, but what exactly does this kind of blathering achieve?

Fair point. For me, it serves as a reminder to stay humble, and also to think outside of the rails on which my thoughts might naturally run. There could be a bunch of reasons, a stack of context, assumptions, data, and belief behind a statement. Just because it doesn't fit you or me, given what we know, in the milliseconds it can take to form a reaction, doesn't mean it doesn't have a justification. There can be both sport and learning in taking the time to consider that.

I don't always remember, of course, and when I catch myself failing I think of Bob Marshall's definition of an idiot:
Anyone who is just trying to meet their needs, in the best way they know how, where their way makes little or no sense to us.
which I interpret as a call for "empathy first."

So, if you managed to endure what has turned out to be essentially a stream of consciousness this far down the page, and are right now wondering why on earth I bothered, let me just say there's at least fifteen reasons ...

Tuesday, December 25, 2018

Itch That Time Again

Just past the seven-year anniversary of Hiccupps, and taking stock once more, I find that I averaged around a post a week over the last 12 months. 50 posts a year was my target when I started the blog back in 2011 and initially it was motivational: I'd find out if blogging could suit me, and me it, only by blogging, and I was prepared to put aside time to gather that data for a year.

You can judge whether I suit blogging (for your needs, at this time, naturally) but my judgement (for myself, over the years) is that writing this blog suits me. I've retained the loose goal of weekly postings not because there's anything intrinsically positive about that volume of material, nor because it guarantees anything about the content, but because it continues to stimulate me to find a time box to write in and writing regularly brings me better writing, deeper thinking, and happiness.

In these annual retrospective posts I've generally plucked out a few pieces that brought or bring me pleasure. This year is no exception, and here's a handful:

  • RIP Jerry Reminiscing about Jerry Weinberg, whose work has so significantly touched mine.
  • Exploring it! The cycle of skill refinement and reinforcement, inside and outside of testing. This is essentially zooming in on a couple of the Heuristics for Working I also blogged about, based on long-term reflection on how I and others get stuff done, lead, explore, and manage ourselves.
  • Tufte A series of posts which lists the quotes that stood out for me when I read five of Edward Tufte's books on the presentation of information.
  • Beginning Sketchnoting I spoke and wrote about how I've approached sketchnoting, and why.
  • Hard To Test Notes from SoftTest, Dublin, where I did my first keynote, When Support Calls, based on my Ministry of Testing ebook.

Here's to a happy new year.

Sunday, December 23, 2018

Merry Cryptmas

It's become an annual tradition in our team to run a Testing Can Be Fun session at Christmas. In them, we'll do a group activity that exercises our testing muscles in a context outside of our usual work, have a laugh, and eat something sweet.

The first one was back in 2009 and saw us evaluating panettone, stollen, and mince pies as candidates for integration with our forthcoming Christmas Dinner product (or some other thin pretext to scarf multiple portions of cake) and it just carried on.

Why Testing Can Be Fun? To be honest, this far away in time, I forget why I chose the name. I mean, no-one's saying we don't have fun the rest of the year. At least, not to my face...

Aaaaaanyway, last week we got together with a big box of traditional British Christmas biscuits and Decrypto. It's a team game where the objective is to communicate codes securely to your team mates and to intercept and decrypt the codes transmitted by the opposing team.

Although it sounds like it might have something to say about security, there's little emphasis on that side of things and, instead, it seems to reward an interesting mixture of pattern matching, lateral thinking, and reasoning. It's also got great retro computing graphics all over it.

We managed to fit a short intro and the first game into around 35 minutes. Subsequent games are much faster, once the basic concepts (keywords, codes, clues, the "Encryptor") and the turn order are more familiar. Recommended.
Images: Amazon, Richmonds

Thursday, December 13, 2018

Vote Karo!

UKSTAR are running their Meetup Hero competition again this year and I nominated Karo Stoltzenburg with this somewhat embarrassing gush:

Cambridge has had a really active tester community in the last few years: an evening meetup, a Software Testing Clinic, the Cambridge Exploratory Workshop on Testing and a morning Lean Coffee. Karo runs the first two, has been ever-present at CEWT and a regular at Lean Coffee. And, in case that wasn't enough, in our team at work she's initiated a book club, a series of "What I don't know about X" sharing sessions, and brought in guests to speak at Team Eating, our brown bag lunches. Pretty much, if there's something happening with testers in Cambridge you can expect to find Karo there.

At the evening meetup she's given local testers the chance to be inspired by great speakers such as Anne-Marie Charrett, Adam Knight, and Neil Studd; to practice speaking in front of a friendly audience; to share testing stories and tips at show and tell nights; to learn Riskstorming with the creator of TestSphere, Beren Van Daele; to get stuck into (and stuck by) puzzles and games; to learn about mental health; or just have a quiet chat with some fellow explorers of the testing space down the pub. When someone new turns up, Karo will be the first to welcome them and bring them into the group.

In the Clinic her expertise and personality enhance the syllabus and the room. She is encouraging and good-humoured, always willing to offer her experience, but also extremely welcoming of contributions from others, and unstintingly enthusiastic, even on the night when she turned up and found that she was unexpectedly running things without her co-host! I have seen first-hand how thoroughly she prepares and the care she takes with the organisation to make sure that everyone's needs are considered and catered for and I'm staggered by the effort she puts in. 

When she isn't running things, she's a great participant and her contributions at CEWT have been reliably thoughtful and considered. At CEWT #6 recently, she gave an inspirational talk about diversity in test teams and the risks of testers letting ourselves sit in a box defined by others. Given that, it's perhaps unfair of me to put her in a box, but I think there's one that fits her: Meetup Hero. 

Vote Karo!

P.S. All of the nominees are great and I'll still be your mate if you vote for one of them, but really I'd prefer it if you'd just nip over and vote for Karo. Cheers.

P.P.S. Full disclosure: Karo is on my team and previously nominated me for this award (which was also embarrassing) but neither of those are the reasons I've nominated her.

Sunday, November 25, 2018

Talking Shop

It can be tempting to confuse training with learning, with skill acquisition, or with the ability to recognise situations in which training material could be used. Attending a workshop is, in general, unlikely to make you an expert in a thing, equip you to apply the thing in your real world context, or even necessarily make you aware that the thing could be applied. Attendees would do well to remember it (particularly when sending me a CV!) and their managers would do even better.

I'm an attendee and a manager: do I do better?

I hope so. In the test team at Linguamatics we spend our training budget on the same kinds of things that your teams probably do: books, conferences, courses, subscriptions, workshops and occasionally something different like an internal conference or escape room. Crucially, as both manager and attendee, I try hard not to mistake having and doing for knowing and being confident in practice.

It's important to me, as a manager, to participate in training and to demonstrate ways that I think it can be a productive experience: training shouldn't be something that's simply done to others. From the attendee side, training isn't about just turning up, listening up, and getting skilled up. Training, I've found, rewards a positive mindset and active participation rather than passive attention and the sense it has to be got over with.

Training is an opportunity to step outside the bunker and the usual mindset, to get exposed to new perspectives or tools or ways of working. It's a place to inspire, challenge, and compare notes. It's often a place to get a broad view of some field, rather than a deep one, and to identify things that might be useful to follow up on later.

Providing training sessions is one way that, as a company, we can show that we care about our employees, and making an effort with our training is one way that I can show that I care about my team mates. We organise in-house workshops for the whole team to do together, at work and inside regular working hours. These are the topics we've covered in the last five years:

  • Experimentation and Diagnosis: a workshop on design and interpretation of experiments (James Lyndsay)
  • Think on Your Feet: strategies for reporting, particularly when put on the spot (Illumine)
  • A Rapid Introduction to Rapid Software Testing: highlights from RST in one day (Michael Bolton)
  • Workplace Assertiveness: remaining calm and getting your point across whatever the situation (Soft Skills)
  • Web Testing 101: introduction to HTTP, REST, proxies, and related testing tools (Alan Richardson)

Quite apart from exposure to those topics, bringing training to work has other advantages. I don't underestimate the value of team-based exercises in building esprit de corps, encouraging collaboration, and promoting empathy through shared experience. I also want to be sensitive to my teams' personal situations where, for example, family commitments can make travel to outside events difficult.

From a practical perspective, whole-team training can be financially worthwhile; it tends to be lower cost per person than the same content at an external location, there's usually more opportunity to customise it, and questions about your specific context are easier to ask and have answered. It's also a convenient way for me to satisfy my personal goal of providing a training opportunity to everyone on the team every year.

But still there's the question of internalising the material, practising it, finding ways that it can work for an individual, team, and ultimately company. (Or finding that it doesn't.) Again, we probably do the same kinds of things that you do: those attending conferences might reinforce their learning and understanding by sharing aspects of their experience back to the team; those with subscriptions to resources like the Ministry of Testing Dojo often summarise articles or organise lunchtime video watching; as a team, after a workshop, we might each verbalise something that we felt was a valuable takeaway to the rest of the group.

Afterwards, taking the training into our own context can be challenging. When work needs to be done, it's not always easy to find time and opportunity to practise, particularly in a way in which it feels safe to fail or just take longer while unfamiliarity is worked through. There's an often-quoted (and also widely-disputed) idea that 10,000 hours of practice are required to become an expert in something. The truth of the claim doesn't matter much to me — I rarely need to be ninja level at anything — but my own experience dictates that without any practice there's little likelihood of any improvement.

I try to pick an aspect of the training that I think could be valuable to me and apply it pretty much everywhere there is a chance to. This way I learn about the tool or approach, my aptitude for it, my reaction to it, the applicability of it in different contexts, and its inapplicability in different contexts. I wrote about eagerly using the Express-Listen-Field loop in conversations after our assertiveness training last year. This year, after Alan Richardson's training, I focused on making bookmarklets and now have a handful, largely as efficiency tools, which I've shared back to the team. They are not pretty, but they are functional, they have cemented the idea in my head, and they are delivering benefit to me now.
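For the curious: a bookmarklet is just a JavaScript snippet, URL-encoded behind a javascript: scheme, saved as a browser bookmark. A minimal sketch of how one can be packaged follows; the helper function and the example snippet are mine, not from the training or the post:

```python
from urllib.parse import quote

def make_bookmarklet(js_source: str) -> str:
    """Wrap a JavaScript snippet in an IIFE and URL-encode it so it
    can be pasted into a bookmark's URL field."""
    wrapped = f"(function(){{{js_source}}})();"
    # Keep characters that are safe in a URL; encode the rest.
    return "javascript:" + quote(wrapped, safe="()!*'-._~")

# A hypothetical efficiency tool: send the current page to the W3C validator.
snippet = ("window.open('https://validator.w3.org/nu/?doc='"
           "+encodeURIComponent(location.href));")
print(make_bookmarklet(snippet))
```

Dragging the printed string into the bookmarks bar (or creating a bookmark with it as the URL) gives a one-click tool.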

Pretty much every training session I've ever attended has some kind of key points summary at the end, so it seems appropriate to finish with something similar here.

  • care to find quality training to offer your teams, and attend it
  • don't confuse attendance with expertise and experience
  • demonstrate ways in which value can be taken 

  • take a positive mindset into it
  • be alert for things that you can take out of it
  • seek to experiment with those things quickly and regularly afterwards

Naturally, if any of that sounded interesting simply reading it is insufficient to extract its value to you.
Image: https://flic.kr/p/dq4qcX

Tuesday, October 30, 2018

Hard to Test

I attended SoftTest 2018 last week and really enjoyed it. The vibe was good, the people were friendly, and the low-price, one-day format is right up my street. I've already posted sketchnotes and the slides from my own talk so here's a few words about the presentations that I saw.

Conor Fitzgerald talked about the benefits of learning about other industries, disciplines and domains. Part career history — from "confirmatory checker" to exploratory tester — and part a list of resources he's found useful over the years, he covered how studying business and economics grew his testing perspectives; how heuristics, oracles, and a ready supply of questions help testers cover both breadth and depth; how burnishing your social skills helps to make and maintain interpersonal relationships (examples: don't blame, assume goodwill, be kind); and how explicit modelling and data gathering and analysis can guide and drive discovery and understanding.

To create a high-performing team first create a culture in which the team can operate in comfort and safety, that's Marco Foley's message. Based on Daniel Pink's concept of Motivation 3.0 he defined Management 3.0 and contrasted it with Management 2.0 — a carrot and stick culture where managers dictate the work and then reward or punish the workers. Management 3.0 is about intrinsic enjoyment. As in Maslow's Hierarchy of Needs, the basic idea is that once baseline requirements (such as salary) are met, people are motivated by the intrinsic enjoyment of a task; they seek an environment in which autonomy, mastery, and purpose (again due to Daniel Pink) are present, an environment in which they are free to do the right tasks at the right time. A manager can help to facilitate this by providing opportunities for failure to take place safely and to be viewed as learning, so as to encourage more trying and hence more success. (Note that although the name is the same, Marco's content hasn't come from Jurgen Appelo's Management 3.0.)

This was my quote of the day:
If it's hard to test, it won't be tested
Rob Meaney said it, in a talk on Testability where he described his CODS mnemonic: Controllability, Observability, Decomposability, Simplicity. Systems which have these properties designed in are likely to be easier to test and Rob walked us through the application of CODS to a system he'd been working on in which the team built a new component alongside an existing monolith (decomposability, simplicity) with extra logging (observability) and an interface which let it be tested independently of the rest of the system (controllability).  An earlier version of Rob's talk is available at the Ministry of Testing Dojo.
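As a sketch of what designing for CODS might look like in miniature (this example is mine, not from Rob's talk): a component that does one small job (Simplicity), lives independently of any monolith (Decomposability), takes its external dependency as a parameter so a test can substitute it (Controllability), and logs what it did (Observability):

```python
import logging
from typing import Callable, Optional

class PriceQuoter:
    """A small, self-contained component: Simplicity and Decomposability."""

    def __init__(self, rate_lookup: Callable[[str], float],
                 logger: Optional[logging.Logger] = None):
        # The rate lookup is injected rather than hard-wired, giving a
        # seam that tests can control (Controllability).
        self.rate_lookup = rate_lookup
        self.log = logger or logging.getLogger("quoter")

    def quote(self, amount_gbp: float, currency: str) -> float:
        rate = self.rate_lookup(currency)
        result = round(amount_gbp * rate, 2)
        # Every quote is logged so behaviour is inspectable (Observability).
        self.log.info("quote: %.2f GBP -> %.2f %s (rate %.4f)",
                      amount_gbp, result, currency, rate)
        return result

# In a test, no live rate service is needed: supply a canned rate.
quoter = PriceQuoter(rate_lookup=lambda ccy: 1.25)
assert quoter.quote(100.0, "USD") == 125.0
```

The same seam that makes the test cheap also makes failures easier to localise in production.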

How can we testers show our value when there's no tangible "product" for others to observe as an outcome? Mags Dineen asked the question and then answered it: align ourselves with business needs and priorities. Of course, it's on the business side to be clear about them, and then back to us to make the effort to understand them and the intent behind them. Once present and grasped, they can become oracles for choosing a direction of travel and bringing others along with us. While we're about it, let's try to get the idea that quality is owned by the team not just the testers to be more widely understood. We can do that by being collaborative and open, by being canny about how we influence things (for example, consider the benefits of influencing an influencer), and collecting and exploiting data about our products, processes, and ourselves.

Claire Barry-Murphy and Darryn Downey described a team re-organisation where the existing process was known broken: slow, lots of work in progress (WIP), impenetrable specs, and lengthy standups. They embraced BDD, introduced WIP limits, added lane policies to their Kanban, used 3 amigo reviews, and wrote stories in a spec by example fashion. One of the points that stuck in my mind was that developers "drive" the tickets across the board, pairing along the way. This is very much not handovers, but rather ownership and purpose.

The closing keynote was Gwen Diagram's observations on how traditional management is broken and her personal prescription for fixing it. It was nice to see themes from talks earlier in the day reprised, and fun for it to be laced with anecdotes and bonus swearing. There was a serious message too, though; Gwen's management medicine tastes something like this: treat people like people, and don't talk down to them; work together for solutions; lead, don't manage; provide motivation; remove fear; give feedback; remember that everyone can be excellent; aim for happy employees.
Image: Discogs