
Granularity Familiarity

 

I saw Maaret Pyhäjärvi's post on LinkedIn the other day. This line chimed strongly with me:
Sense of lack of time. Someone asked how to have joy of discovery when feeling always pinched with time. We have in many cases lost control over time, and I have done work I have not necessarily appreciated on making time flexible - always seeing there is a next day and having no schedules and small slices of work.
I feel like I do my work at various granularities across multiple dimensions and so, to begin to get this idea straight in my head, I tried to list some of them.

It was harder than I thought it would be because so much of this is instinctive, intuitive, and in the moment. Given that, here goes draft 0.1. Hopefully I'll begin to feel more familiar with the idea and be able to revise the model later on.

Scope

Parcel of work. My team's practices are reasonably common, I think. Tasks are portioned into Jira tickets which progress out of a backlog through various states of refinement, into states of being worked on, and then into the state of being done. Perhaps less common is that there is no explicit state of being tested.

My testing, and other work, can happen anywhere from sub-ticket, through the ticket itself, across multiple tickets, or outside of any ticket. Additionally, I don't test the work in every ticket.

Learning. Some tasks offer the chance to grow my knowledge and sometimes the constraints mean that learning is not a priority. Given free choice, though, I like to sit at the upper end of the learning scale, picking up what I can whenever I can. This means that I'll try new ways to accomplish understood tasks and take on tasks that I don't know how to do, just for the learning opportunity.

Breadth and depth. The direction I take an investigation varies greatly. I might see that some big new project is coming up and dedicate a few hours to understanding the domain it sits in before there is even a ticket. I think of this as a broad, landscaping, task. I want to know the big pieces, how they relate to one another, the shapes of existing solutions, the terminology that's typically used, and so on.

Deep work could be done while trying to track down, for example, an intermittent customer issue in production. There are typically many variables to consider and they might interact so I'll be looking at multiple techniques and repeated experiments to get an idea of what's going on. 

In contrast, sometimes shallow and narrow testing, in the form of a code review, will be sufficient.

Tech stack. The main service my team works on consumes components made by other teams and is a dependency for components owned by other teams. I look for ways to test at all of the levels in that stack. When we make changes in our service, I might test only against it, up the stack, down the stack, or end-to-end with observation wherever I think it's interesting and possible.

People

Collaborators. I don't do all of this work by myself. I do work alone, but I also spend profitable and enjoyable time pairing with others, in small groups, in my team and cross-team.

Impact. Where will the outcome of this work be experienced? Is it just for me, a colleague, my team, the group my team sits in, or somewhere else in the company? I can think about the work I am doing and see where else I think it makes sense to share.

At Ada we have Community Days every couple of weeks and they are a convenient way to share by broadcasting, although the location of the impact is hard to predict that way. I use my personal Confluence page to keep a list of internal presentations I've done so they can have the same kind of impact at other timescales.

I think about the work I've done and try to find people or teams that I can make contact with to share it. I talk about my work to colleagues, looking for people they recommend might like to hear about it. This isn't egotistical. It's about finding like-minded people who care to do good work in the same space as me.

Relationship building. I build relationships with others. I ask for help from others. I make myself available to help others. Actually, I actively look for ways in which I can help others. I keep an eye open for things that others do that could be helpful to me. I share results that I have found with people I think will be interested. Different activities give the opportunity to develop relationships to different extents, from simply making first contact, through adding a new layer, to making a radical change.

I joined the coffee chat rota in my first week at work and am now, 18 months later, starting to meet people for the second time. Some of them I have not spoken to since the first virtual cuppa, but others have been valuable contacts when I've needed something or vice versa.

You might say that a coffee chat is not work, and you'd probably be right. However, it is a small relationship investment, a small time investment, a low-risk investment with a potential longer-term payoff.

Time

Stake. What kind of gamble am I prepared to take on this piece of work? The stakes can be effort, annoyance or inconvenience to others, cashing in credit earned, IOUs, time spent, opportunity cost, delay, and so on. I wonder how I can reduce the stakes and with what kind of trade-offs.

Level of effort. I can spend from no time, to a few minutes, hours, days, or even weeks testing a thing. Depending on how you care to group activities, I have had single investigations that have lasted months.

To payoff. I start tasks that I expect will pay off now, this sprint, in the medium or long term, or that are only spikes or proofs-of-concept that I understand might not pay off at all. All actions are gambles and I tend to take smaller, more incremental steps (reduced scope, less time, lower risk) to reduce my risk when the potential reward is less well understood.

A task that pays off now might be some basic checking of a small fix that went through today, where I already understand the issues. At the other end of the spectrum, I've been working for over a year on weekly ensemble testing with a group of medical doctors in my company. There's no set agenda for this; we bring a topic to the Friday afternoon sessions and then work on it together.

I am sharing my knowledge, approaches, insights, tooling, connections and so on. How, when, and where this will bring benefit is very hard to judge. The participants all find value in it, so we keep going.

vs Ticket status. I can test while a ticket is in the backlog, in development, in review, and in production. Where a ticket is in its life-cycle is not the only factor in whether I am looking at the area.

Reflections

This is a first cut, and I have thoughts that I don't know what to do with. Here are some:
  • When I start, I don't always know what granularity I will work at.
  • The work evolves in these dimensions (and probably others) as I learn about it.
  • When I externalise the idea of relationship-building I feel like it might be perceived as manipulative. Perhaps I can find a better way to describe it.
  • I am not very granular when it comes to visibility. I will pretty much always talk about what I'm trying and, because I'm an inveterate note keeper, there will be notes in public.
  • I typically track my work as threads. Any given investigation may be more than one thread.
  • Some work impacts on people and process more than product. I'm not sure the model reflects that.
  • How do I get to work this way when others feel the tyranny of tickets? I believe I have built the trust of the teams I work in.
