Team Values: Reflection

The testers at Linguamatics decided to explore the adoption of a set of team values and this short series of posts describes how we got to them through extended and open discussion.

If you find the posts read like one of those "what I did in the holidays" essays you used to be forced to write at school then I'll have achieved my aim. I don't have a recipe to be followed here, only the story of what we did, in the order we did it, with a little commentary and hindsight.
--00--
We took our own sweet time to arrive at a set of team values. That's okay with me. It felt like a good pace for us given all of the variables: the ongoing work, the people, the novelty, and the uncertainty of the outcome. We took small steps, with space for review, with clarity on the next stage, and with consensus.

We've now been living with our values for a few months and I thought it might be interesting to wonder whether the questions we asked ourselves back at the start were helpful, and whether we resolved them. They were:
  • Why are we interested in doing this?
  • What do we want to achieve with it?
  • How will we go about achieving it?

For me, those questions framed the whole arc of our approach. The order feels logical for this kind of endeavour and provided us with a grounding that we could look back to if we needed it. That's not to say that the Why and What we created were complete, nor that we wouldn't change them. No-one with any experience and sense of reality about the world would expect perfection up front ... would they?

Interestingly, the Why and What were relatively easy compared to the How. We quickly established good arguments for a set of values, and also the kinds of actions that we felt would motivate those values. Working out how to achieve it took a lot more effort and was, it seems to me, separated into two distinct parts: what mechanisms will we use to get to a set of values? and what should we do with our set of values to get the benefits we want? We managed to evolve an approach for the first question, with a lot of conversation along the way. We're still in the middle of the second.

Did we achieve our aims for the values, then? I think we did, in the sense that the values we generated do feel like they cover the areas we wanted, namely to:
  • Encourage
  • Emphasise
  • Empower
  • Explain

We are encouraging approaches that focus on risk and value; we are empowering ourselves to find ways (and time) to improve, and to feel safe to speak and act; we are setting ourselves up to explain what we're doing and why; and we are emphasising that all of these things are, to us, productive and valuable ways to go about our business.

An important question: could we have achieved these aims in a different way? There is no doubt that we could. This was one path through a massive space of possible approaches and we negotiated it reasonably cautiously. It felt crucial to me that we didn't lose anyone en route if these were to be shared values, but getting unanimity from a group of fifteen people is not always easy.

Related: could this approach have achieved different values? Again, there is no doubt about it. A different set of people, in a different mood, on a different day, with a different facilitator or facilitation could easily have taken another route. And I think that it's possible that we'll revisit and change the values ourselves.

As I've said, we didn't rush into these values. It was my intuition at the beginning that slow and steady, with plenty of scope and time for checking and chewing over, was the way to go. The elapsed period was around 8 or 9 months but at any given point we didn't require much time from each of us and, when we did commit time, we were committed to the task.

I don't recall ever stating it explicitly, but one of my high-level goals was to try to keep the mechanics of the process out of the process as much as possible. When we got together to discuss this stuff, I wanted us to be talking about the values much more frequently than about how we would arrive at them. And how we arrived at them was incrementally.

I might say that we were feeling our way into a process. You might say that we were (and particularly I was) making it up with each step. It wouldn't take much persuasion for me to agree that was a fair description too. But it wasn't a dictatorship: we talked about the process over the content at times, for example to decide that attendance would be optional and to time-box meetings at 30 minutes.

In a story of values, it'd be good to wonder whether this effort was valuable. Instinctively, I do feel that it was but I don't have an objective measure to judge it by. Subjectively, I think that we understand each other and our attitude to our work better. I get the feeling that, despite our differences — and believe me, we are very different people — we share a lot of outlook, desire to help the company craft great products, and care to do a good job inside business constraints.

It occurred to me at several points along the way, and again while writing up, that there's a strong analogy to the software development business here. In this case, our team took the role of stakeholders, implementers, and end users. Needing to state what we wanted, having to arbitrate in conversations between stakeholders, agreeing to relax requirements in the name of producing a thing, actually producing a thing, showing a thing to stakeholders, having a thing criticised by users ... all of these are roles performed by people we have to deal with all the time. And it doesn't hurt us to walk in their shoes from time to time.

On a personal level, in writing this I wondered whether there were in fact two stories, or perhaps three, intertwined: the values we got, the way we got to those values, the way that I wanted to participate in getting us to the values.

I briefly wondered about trying to disentangle them, but found that it read better to me with my commentary combined into the conversation itself. If I had to pull a few guiding principles out, I'd say these were core to me:
  • Not getting bogged down in process: as long as we were talking productively, we continued.
  • Accepting that we could leave some aspects ambiguous until it became critical to resolve them.
  • Giving everyone a chance to be heard, and to feel that they were being listened to.

I have been on this kind of journey before to some extent, as a solo traveller. I have worked through what my management principles are, and encouraged the line managers in the Test team to do the same. I've also organised a cross-company sharing of management principles where many of the line managers presented three things that they felt were fundamental to how they work.

Also, a year or two ago I found myself working through a definition of testing which involved many entertaining hours pondering semantics and my own perspectives on my work. The cut-and-thrust of that propose/challenge/respond dynamic, happening with two instances of me inside my own head, gives me enormous amounts of enjoyment. But it's not to everyone's taste and perhaps not so suited to a large group.

On the other hand, the benefit that I have accrued from having a definition is something that I thought, and hoped, could be more generally applicable: as a reference point, an anchor, for decision making. I ask myself: should I be testing now? If the answer is yes, then I can see whether or not the activity qualifies as testing for me. If it doesn't then I have to consider my priorities. If it does, then I can wonder whether it's the right testing.

We can do the same thing as a team, using our values as a motivator, as an envelope, as a guide. But in all these cases, it must be heuristic — there may be circumstances in which we choose to do something else, but at least we'll expect to have taken a deliberate decision to do that, and understand why.

I have found that reference to our values in conversations about priorities or how to react to circumstances can be extremely helpful. They might not tell us precisely what to do, but do give us guidance in the direction we should be looking to take. As I've repeatedly said (and it bears repetition) the conversations around what we're about were immensely informative and pleasurable devices for building a shared understanding, team spirit, and empathy.

I'd like to thank: the Test team at Linguamatics for making and participating so deeply in such an interesting project, and crafting this set of tools for ourselves; Daniel Karlsson and Keith Klain who were kind enough to give me some of their time and background in what they were trying to achieve in their own similar projects.
Image: https://flic.kr/p/oGMUQ
