
Team Values: Teasing Them Out





The testers at Linguamatics decided to explore the adoption of a set of team values and this short series of posts describes how we got to them through extended and open discussion.

If you find the posts read like one of those "what I did in the holidays" essays you used to be forced to write at school then I'll have achieved my aim. I don't have a recipe to be followed here, only the story of what we did, in the order we did it, with a little commentary and hindsight.
--00--
So far we've thought of a load of reasons that a team might want values, and loosely grouped them. That's the Why. Next step: the What.

In a second meeting following the same lightly-facilitated format as the first, we asked ourselves what kinds of values might be good fits to each of our four motivational categories: encourage, emphasise, empower, and explain.

We were flexible about what constituted a value, accepting philosophies, actions, outcomes, or whatever. We aimed to get the ideas out without too much filtering and, again, we used a mind map to document our balanced, open discussion.

It was sometimes hard to fit suggestions into our categories, either because the categories overlapped or because none of them seemed appropriate. We didn't sweat it; so long as a suggestion was recorded it could go into any of the four, or into a fifth, Other, that we added as we went. Here are a few examples:
  • Encourage: communication, integrity
  • Emphasise: expertise, risk-driven
  • Empower: continuous improvement, safety
  • Explain: transparency, possible outcomes
  • Other: openness, non-judgemental

We mostly didn't record detailed justifications for assigning a value to a category. At the time that felt right: did we really want to break the flow of the conversation or increase the paperwork burden? (My answer: no.)

A consequence of this is that I can no longer tell why, for instance, we didn't make openness and transparency a single entry. Maybe there was a good reason, maybe it was an oversight, or maybe I transcribed things incorrectly. Regardless, what we had was good enough to move forwards in the moment, and with momentum.

Following another period of reflection we gathered again to choose the subset of the values which would become our focus. We had generated 40 or 50 values and I judged that a simple discussion would not easily get us to consensus. So I created a spreadsheet listing all of the values and we split into small groups to rank them, expecting that the top-ranked values would encode those that we felt most strongly about, for whatever reason.

Interestingly, and painfully for me, each of the groups attacked this task with a unique twist:
  • some provided a strict ranking that represented their combined opinion
  • some scored individually, which meant multiple values could share a ranking
  • some simply selected the values they felt were important, without any scoring at all
  • some chose a few values and some chose many
  • some merged values together and scored the combination, and the merged values were not necessarily the same across groups.

The effort in that session was deep and the conversations were sometimes quite intense. We'd hoped to combine the results on the day but, due to the complexities of the varied ranking schemes, we didn't even try. So, instead, we shared all of the results back to each other, with a little commentary from each group, and ended the meeting.

For me, ranking at this point was productive, but in retrospect I think it would have been easier to propose, or even impose, a more specific ranking scheme. Of course, that might have necessitated a meta-conversation about ranking, which could itself have been difficult to resolve.

Systemic discrepancies aside, there was real, countable, actionable data generated in those ranking conversations and, to do justice to it, I invested time in looking for ways to aggregate it fairly. In the end I presented several different normalisation strategies back to the team.

The simplest merely counted the number of times a value was ranked at all, while more complex attempts scaled everyone's rankings to have a total "score" of 1, with weighting for merged items. I also created a graphic which showed visually which group had ranked which values, and which values had been merged.
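To make those two strategies concrete, here's a small sketch in Python. The data, function names, and scores are all made up for illustration — this is not our actual spreadsheet — and a merged entry is represented here as a tuple of value names, which is just one way to model it.

```python
from collections import defaultdict

def count_mentions(submissions):
    """Simplest strategy: count how many groups ranked a value at all."""
    counts = defaultdict(int)
    for submission in submissions:
        for entry in submission:
            # A merged entry is a tuple of value names; count each component.
            for value in (entry if isinstance(entry, tuple) else (entry,)):
                counts[value] += 1
    return dict(counts)

def normalise_scores(submissions):
    """Scale each group's scores to sum to 1, splitting a merged entry's
    score equally across its component values."""
    totals = defaultdict(float)
    for submission in submissions:
        group_total = sum(submission.values())
        for entry, score in submission.items():
            values = entry if isinstance(entry, tuple) else (entry,)
            for value in values:
                totals[value] += (score / group_total) / len(values)
    return dict(totals)

# Made-up data: two groups, the second of which merged two values.
groups = [
    {"Risk-driven": 3, "Collaboration": 2, "Expertise": 1},
    {("Communication", "Collaboration"): 4, "Risk-driven": 2},
]
print(count_mentions(groups))
print(normalise_scores(groups))
```

The point of normalising is that every group contributes the same total weight regardless of how many values it chose or how generously it scored, so a group that scored twenty values doesn't drown out a group that picked five.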

This visualisation was helpful because it clearly showed clusters in our categorisations and in our choice of values. The numerical analyses were interesting because they turned out to be quite consistent about the smaller set of values that scored well. They were:
  • Risk-driven
  • Collaboration
  • Communication
  • Experimentation
  • Non-judgemental
  • Continuous improvement
  • Expertise

After agreeing that the analysis was reasonable we decided to take this set of seven values and begin to develop a non-shallow understanding of what we meant by them.

The same groups as before each came up with short definitions for a value or two and we debated them together. Lesson learned: nuance is everything. My role here reverted to light facilitation: recording outcomes, keeping time, and aiming to ensure that all voices had an opportunity to be heard.

It was delightful to witness the conversations we had across the weeks. Even if we had eventually failed to arrive at a set of values, the dialogue, and the engagement with which we had it (most of our team attended most of the meetings), felt like a valuable way to share perspectives and build empathy with one another.

Outside of the meeting cycle I composed revised definitions from my notes, each with associated heuristics for their use that had emerged from the debate. Here's an example:
Risk-driven: Use relevant data to try to identify the important risks, their likelihoods, and their impacts, and deal with them in a reasonable priority order.
  • Collect data from multiple sources, including stakeholders, the context into which the product will be placed, changes to the software, ourselves, ...
  • Consider risks to the company reputation, the bottom line, the product, the customer, the customer data, ...
  • Strive not to do this work in isolation from others.
  • Recognise that we won't always find all of the risks, and we won't always assess those we do identify correctly.
  • Be alert to the possibility that our analysis can change based on new data.

In the next review we agreed that we'd captured reasonably well something that the group as a whole could agree with. Yay! We then tried and failed to reduce the set further. Boo! But pragmatism reigned and we decided that we'd live with the large set for a while before thinking again.

Stopping seemed like a good choice to me. Despite the enthusiasm and continued strong level of participation across the team, I was wary of us becoming fatigued with the effort. Now felt like a good time to put the values into use, gauge our comfort with them and then iterate on the basis of experience.

In the next post I'll talk about that experience and how it resulted in us cutting our list down to three core values.
Image: https://flic.kr/p/oGMUQ
