
Team Values: Teasing Them Out

The testers at Linguamatics decided to explore the adoption of a set of team values and this short series of posts describes how we got to them through extended and open discussion.

If you find the posts read like one of those "what I did in the holidays" essays you used to be forced to write at school then I'll have achieved my aim. I don't have a recipe to be followed here, only the story of what we did, in the order we did it, with a little commentary and hindsight.
--00--
    So far we've thought of a load of reasons that a team might want values, and loosely grouped them. That's the Why. Next step: the What.

    In a second meeting following the same lightly-facilitated format as the first, we asked ourselves what kinds of values might be good fits to each of our four motivational categories: encourage, emphasise, empower, and explain.

    We were flexible about what constituted a value, accepting philosophies, actions, outcomes, or whatever. We aimed to get the ideas out without too much filtering and, again, we used a mind map to document our balanced, open discussion.

    It was sometimes hard to fit suggestions into our categories, sometimes because the categories overlapped and sometimes because none of them seemed appropriate. We didn't sweat it; so long as a suggestion was recorded it could go into any of the four, or into a fifth, Other, that we added as we went. These are a few examples:
    • Encourage: communication, integrity
    • Emphasise: expertise, risk-driven
    • Empower: continuous improvement, safety
    • Explain: transparency, possible outcomes
    • Other: openness, non-judgemental

    We mostly didn't record detailed justifications for assigning a value to a category. At the time that felt right: did we really want to break the flow of the conversation or increase the paperwork burden? (My answer: no.)

    A consequence of this is that I can no longer tell why, for instance, we didn't make openness and transparency a single entry. Maybe there was a good reason, maybe it was an oversight, or maybe I transcribed things incorrectly. Regardless, what we had was good enough to move forwards in the moment, and with momentum.

    Following another period of reflection we gathered again to choose the subset of the values which would become our focus. We had generated 40 or 50 and I judged that a simple discussion would not easily get us to consensus. So I created a spreadsheet listing all of the values and we split into small groups to rank them, expecting that the top-ranked values would encode those that we felt most strongly about, for whatever reason.

    Interestingly, and painfully for me, each of the groups attacked this task with a unique twist:
    • some provided a strict ranking that represented their combined opinion
    • some scored individually, which meant multiple values could share a rank
    • some simply selected the values they felt were important, without any scoring at all
    • some chose a few values and some chose many
    • some merged values together and scored the combination, and the merged values were not necessarily the same across groups

    The effort in that session was deep and the conversations were sometimes quite intense. We'd hoped to combine the results on the day but, due to the complexities of the varied ranking schemes, we didn't even try. So, instead, we shared all of the results back to each other, with a little commentary from each group, and ended the meeting.

    For me, ranking at this point was productive, but in retrospect I think it would have been easier to propose, or even impose, a more specific ranking scheme. Of course, that might have necessitated a meta conversation about ranking, which itself could have been difficult to resolve.

    Systemic discrepancies aside, there was real, countable, actionable data generated in those ranking conversations and, to do justice to it, I invested time in looking for ways to aggregate it fairly. In the end I presented several different normalisation strategies back to the team.

    The simplest merely counted the number of times a value was ranked at all, while more complex attempts scaled everyone's rankings to have a total "score" of 1, with weighting for merged items. I also created a graphic which showed visually which group had ranked which values, and which values had been merged.
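    To make those two strategies concrete, here's a minimal sketch in Python. The group names, values, ranks, and the rank-to-score conversion are all invented for illustration; the real spreadsheet had 40 or 50 values and the groups' schemes varied more than this.

```python
# Sketch of two aggregation strategies over per-group rankings.
# Assumed toy data: value -> rank (1 = most important); a merged
# entry is represented as a tuple of value names sharing one rank.
from collections import defaultdict

group_rankings = {
    "group_a": {"communication": 1, "risk-driven": 2, "expertise": 3},
    "group_b": {"risk-driven": 1, ("openness", "transparency"): 2},
    "group_c": {"collaboration": 1, "communication": 2},
}

def expand(entry):
    """Treat a merged entry as its constituent values."""
    return entry if isinstance(entry, tuple) else (entry,)

# Strategy 1: simply count how many groups ranked each value at all.
counts = defaultdict(int)
for rankings in group_rankings.values():
    for entry in rankings:
        for value in expand(entry):
            counts[value] += 1

# Strategy 2: scale each group's scores to a total of 1, splitting
# the weight of a merged entry evenly between its constituents.
scores = defaultdict(float)
for rankings in group_rankings.values():
    raw = {entry: 1 / rank for entry, rank in rankings.items()}
    total = sum(raw.values())
    for entry, score in raw.items():
        values = expand(entry)
        for value in values:
            scores[value] += (score / total) / len(values)

for value, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{value:15s} count={counts[value]} score={score:.2f}")
```

    Because each group's scores sum to 1, no group dominates just by ranking more values, which was the point of normalising in the first place.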

    This visualisation was helpful because it clearly showed clusters in our categorisations and in our choice of values. The numerical analyses were interesting because they turned out to be quite consistent about the smaller set of values that scored well. They were:
    • Risk-driven
    • Collaboration
    • Communication
    • Experimentation
    • Non-judgemental
    • Continuous improvement
    • Expertise

    After agreeing that the analysis was reasonable we decided to take this set of seven values and begin to develop a non-shallow understanding of what we meant by them.

    The same groups as before each came up with short definitions for a value or two and we debated them together. Lesson learned: nuance is everything. My role here reverted to light facilitation: recording outcomes, keeping time, and aiming to ensure that all voices had an opportunity to be heard.

    It was delightful to witness the conversations we had across the weeks. Even if we had eventually failed to arrive at a set of values, the dialogue, and the engagement with which we had it (most of our team attended most of the meetings), felt like a valuable way to share perspectives and build empathy with one another.

    Outside of the meeting cycle I composed revised definitions from my notes, each with associated heuristics for their use that had emerged from the debate. Here's an example:

    Risk-driven: Use relevant data to try to identify the important risks, their likelihoods, and their impacts, and deal with them in a reasonable priority order.
    • Collect data from multiple sources, including stakeholders, the context into which the product will be placed, changes to the software, ourselves, ...
    • Consider risks to the company reputation, the bottom line, the product, the customer, the customer data, ...
    • Strive not to do this work in isolation from others.
    • Recognise that we won't always find all of the risks, and we won't always assess those we do identify correctly.
    • Be alert to the possibility that our analysis can change based on new data.

    In the next review we agreed that we'd captured reasonably well something that the group as a whole could agree with. Yay! We then tried and failed to reduce the set further. Boo! But pragmatism reigned and we decided that we'd live with the large set for a while before thinking again.

    Stopping seemed like a good choice to me. Despite the enthusiasm and continued strong level of participation across the team, I was wary of us becoming fatigued with the effort. Now felt like a good time to put the values into use, gauge our comfort with them and then iterate on the basis of experience.

    In the next post I'll talk about that experience and how it resulted in us cutting our list down to three core values.
    Image: https://flic.kr/p/oGMUQ
