
Collect, Arrange, and Slice


Last month I started thinking about slicing, my instinctive approach to looking for perspectives on a problem, an opinion, an observation, or anything else. This time around I've got an example to talk about.

On Fridays I ensemble with a group of medical quality engineers and medical knowledge engineers. We learn from each other, about testing and about the domain. On and off recently we've looked at a project of theirs which aims to understand better what work they do, how they do it, and why it's that way, and then write it up for internal and external consumption.

In one early session, with a wider group in the company, there was an extremely open and exciting conversation about what should be covered in this effort. 

It was the kind of discussion that greenfield projects often have, before scope is nailed down, where the world seems ripe with possibility, no difficulties have been identified, and there is no talk of who will take responsibility for the implementation.

Partway through I was asked to facilitate so I shared a mind map I'd begun to make while people were talking. I then opened the map for everyone to add their own ideas, encouraging them to actually do it, and reassuring them that when we don't understand the scope it can be helpful to get down as much as possible that seems plausible and then choose.

That data was deep and broad but lacked organisation and cohesion, so after the meeting I spent a while arranging it. Mostly this consisted of taking similar ideas and clustering them, and adding categories and sub-categories as I felt they made sense. 

I did this on a copy of the original map to make comparison possible, help tell the story of what will be a medium-term project, and give an informal audit trail.

In the next session, a few days later, a group of three of us looked for a way to extract some concrete scoping proposals from the sorted mind map. This is a slicing problem: where do you put the knife? Categorisations help at this point, because they provide natural edges to cut along. 

We had created a category containing possible audiences for the report or reports. It consisted of three entries, which seemed like a manageable number of slices, so we started there. 

For each audience type my colleagues provided a couple of areas of interest, and we annotated the map to show which nodes would be relevant to each topic, producing six slices through our data.
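The kind of slicing we did by hand in that session can be sketched in code. This is a minimal illustration, not anything we actually built: the node ideas, audience names, and the `slice_by` helper are all hypothetical, standing in for the annotated mind map described above.

```python
from collections import defaultdict

# Hypothetical mind-map nodes: each carries the audiences it was
# annotated as relevant to. A node can belong to several audiences.
nodes = [
    {"idea": "risk catalogue",      "audiences": {"internal", "regulators"}},
    {"idea": "testing heuristics",  "audiences": {"internal"}},
    {"idea": "process overview",    "audiences": {"internal", "customers"}},
    {"idea": "compliance evidence", "audiences": {"regulators"}},
]

def slice_by(nodes, key):
    """Group node ideas by each value of a set-valued attribute.

    The categorisation provides the 'edges to cut along': one slice
    per distinct value, with nodes appearing in every slice they
    are annotated for.
    """
    slices = defaultdict(list)
    for node in nodes:
        for value in node[key]:
            slices[value].append(node["idea"])
    return dict(slices)

slices = slice_by(nodes, "audiences")
# e.g. slices["regulators"] -> ["risk catalogue", "compliance evidence"]
```

The point of the sketch is only that a categorised collection makes slicing cheap: once the arrangement exists, extracting a view per audience is a simple filter rather than a fresh analysis.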

In my previous post, trying to think about how I do this, I ended with these words:

... collect, arrange, slice. I don't know quite what I mean by it yet ...

I'm still not sure they are the right terms, or everything that I do, but you can see that structure in the work we did on this project: collect by casting the net wide, arrange into structures that expose some potentially useful seams, slice down the seams.

I feel like this is obvious but perhaps that's just because it's what I've learned to do. I'm in reasonable company thinking about it, though. Hillel Wayne wrote Collecting and Curating Material is Good and we Should do it More a few days after my last post. 

He breaks things down slightly differently to cover his research process but there's clear similarity:

  • Collection: gathering material that’s out there and putting it in one place.
  • Curation: identifying which gathered material is useful for knowledge-building.
  • Analysis: taking the curated material, breaking them down, and studying what they’re "saying".
  • Synthesis: taking the analytic information and processing it into an overall idea.

Note that on our project as I've described it so far we're not doing "the work" yet. Rather, we're trying to choose which work to do and how to do it, in a proportionate way.

Although in the retelling it feels linear, this process is exploratory. At each step we took a perspective and tried it, judging how far to go before switching to another approach. The skill, as with exploratory testing, is to try something, observe something, conclude something, and repeat for as long as is reasonable given the constraints. There's no general formula for that.

Collect, arrange, slice itself looks linear when written but don't be fooled. You can collect a little, arrange a little, collect some more, slice some, see whether that seems productive, rearrange, slice again, collect again. Or anything else: whatever seems like it will best serve your mission.

Writing all this down has moved my thinking along a little and I've collected more stuff for later arrangement:

  • Writing blog posts is often collect, arrange, slice.
  • In the case of this post, I rewrote it three times until I was happy with the slice.
  • Fieldstones involve collection and arrangement.
  • Arrange the work to get to a conclusion (of some kind) in the time available.
  • This may mean renegotiating the mission along the way.
  • Perhaps this is slicing the mission.
  • Slices can be thicker or thinner. Breadth and depth.
  • Might a slice create a pivot point, like in a pivot table?
  • The process is recursive, naturally.
  • We are trying to get to a point where there is something that can be attacked directly.
  • Mnemonics like SFIDPOT are pre-sliced arrangements.
  • They can be helpful to bootstrap collection ...
  • ... or to just abbreviate the whole process if time is important.
  • Wide reading or collaboration with others of diverse expertise expands the collection and arrangement possibilities.
  • It's OK for there to be contradictory data in the collection and arrangement ...
  • ... we're not necessarily trying to make a single model of a domain ...
  • ... we're trying to make a helpful model of our thoughts about the problem.

That'll do for today.
Image: https://flic.kr/p/qYXP6A
