A Remote Possibility

 

Last year, in the days before SARS-CoV-2, I wrote a guide to peer conferences for the Association for Software Testing. It didn't mention running a peer conference remotely. This year, I found myself setting up a peer conference between AST and the BCS Special Interest Group in Software Testing, one which had to be run remotely.

Much of the guide still holds, and in some respects organisational concerns are simplified without travel, accommodation, and catering to worry about. But — and it's a Sir Mix-a-Lot-sized but — we've all done enough calls now to understand how hard it is to get the vibe right during a lengthy video meeting with more than a couple of participants.

So what did we consider, what did we do, and how did it work out?

On the purely logistical front, we had a few decisions to make. AST and BCS have worldwide memberships so choosing a time that didn't disadvantage some members was impossible. In the end, we ran a one-day conference, on a Sunday, from 3pm to 10pm BST. If we'd run across multiple days we could potentially have changed the hours each day to spread the pain around. However, as with in-person conferences, we were sensitive to the tension between giving enough space to explore the topic and excluding those with other important demands on their time.

We decided to keep the number of participants reasonably low, conscious of the fact that it can be easy to zone out as numbers increase. The flipside of this is the risk that we might end up with too small a group to have a varied discussion. On this occasion our drop-out rate was 20% (about normal) and I didn't feel that the level of conversation was lacking. Side note: we asked everyone to keep their cameras on as much as possible to give us all a sense of being together and able to see, as well as hear, reactions.

The structure of this peer conference was LAWST-style: several presentations, each followed by an "open season" discussion. It's usual in these kinds of events for the first couple of discussions to take a disproportionate amount of time as many general topics are aired. For our conference, we decided to timebox presentations at 10 minutes and open season at 35, which meant we could easily stick to a schedule with 10-minute breaks every hour — something we felt was important for health reasons and to keep energy levels high — and be sure to get more than a couple of presentations in. We scheduled a long break at around half-time and we shared the schedule at the start of the day so that all participants knew what was coming.
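
For the curious, here's how that arithmetic stacks up as a rough timetable sketch in Python. The date is invented for illustration, and the long half-time break isn't modelled:

    from datetime import datetime, timedelta

    # One hourly slot: 10 min presentation + 35 min open season + 10 min
    # break = 55 minutes, leaving 5 minutes of slack to absorb overruns.
    # The date is invented; times are BST.
    SLOT = [("Presentation", 10), ("Open season", 35), ("Break", 10)]

    t = datetime(2020, 6, 1, 15, 0)    # 3pm start
    end = datetime(2020, 6, 1, 22, 0)  # 10pm finish
    while t < end:
        for activity, minutes in SLOT:
            print(f"{t:%H:%M}  {activity} ({minutes} min)")
            t += timedelta(minutes=minutes)
        t += timedelta(minutes=5)  # per-hour slack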

As it wasn't going to be possible for everyone to present we needed a way to choose presentations. I circulated abstracts a few days before the conference and set up a Google Doc for dot voting. In retrospect, I probably over-engineered the doc a little by asking people to drag images when it would have been simple and just as functional to have them type "X" against the talks they wanted to see. 
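
Had we gone that plainer route, tallying the votes would have been trivial too. A minimal sketch, with invented talks and votes:

    # Each entry: a talk title with one "X" per vote against it.
    # Talks and vote counts are invented for illustration.
    lines = [
        "Testing in Production: X X X X",
        "Remote Pairing: X X",
        "Quality Metrics: X X X",
    ]

    tallies = {}
    for line in lines:
        title, _, marks = line.partition(":")
        tallies[title.strip()] = marks.count("X")

    # Print the talks, most popular first.
    for title, n in sorted(tallies.items(), key=lambda kv: -kv[1]):
        print(f"{title}: {n} vote(s)")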

Finally on the logistical side, we anticipated that some kind of administrative communication channel for the organisers would be needed. In the real world a quick glance, gesture, or note slid over the table would all be possible. In the virtual world we felt we needed something specific that we could be watching all the time, so we set one up in Slack (see below). Ultimately we hardly used it but I'd still have one next time just in case it was needed.

Which brings us to the software we used. In advance we thought our requirements included these things:

  • video conferencing software that could stay on all day
  • the ability to have global, multi-user, and 1-1 chat
  • multiple channels for chat
  • threads in chat
  • the host able to mute and unmute participants
  • participants able to share their screens
  • participants able to see all other participants at all times
  • breakout rooms

Zoom satisfied many of these requirements, is familiar to most of us these days, and was readily available, so was a straightforward choice. What it didn't give us was the flexibility we wanted around chat but all of those gaps were filled by another familiar tool, Slack.

As it happened, the only listed feature we didn't use was breakout rooms. Our intention was to set them up during breaks but in the moment we never felt the need. Some side conversation happened in Slack and I think we mostly regarded the breaks as welcome relief from our keyboards.

The facilitator, Paul Holland, didn't mute anyone as I recall, but he did unmute people a couple of times. This may have been helped by agreeing on general microphone etiquette: the presenter's mic would be up throughout open season but everyone else would mute unless their comment was live.

The final, and crucial, component that we considered was facilitation. It's traditional for AST events to manage discussion in open season with K-cards, where participants hold up coloured cards to show that they want to contribute to the discussion, and how:

  • Green: I have a new thread. 
  • Yellow: I want to say something on the current thread.
  • Red: I must speak now (on topic or admin).
  • Purple: I think this conversation has gone down a rat hole.

We did wonder about trying to use physical cards over video but felt that it would be too hard for the facilitator to monitor and also difficult for the participants to know they'd been seen. 

So instead we decided to experiment with electronic cards and Slack threads. It quickly evolved into this (there's a sketch of how the conventions map onto Slack's API after the list):

  • We had a dedicated Slack channel for open season.
  • We had the convention of using a different coloured icon for each of green, yellow, and red K-cards
  • ... and we documented and updated the conventions as we went.
  • At the start of each presentation we placed a prominent comment into the channel to separate it from previous threads.
  • During the presentation and open season, participants added green cards with a brief comment into the channel.
  • During open season, the facilitator made one of the threads current by commenting into it with a traffic light icon and a note, "Active thread"
  • ... and while that thread was live, participants dropped yellow cards into the thread.
  • The facilitator picked comments to be live and invited the commenter to speak
  • ... and conversation continued in the thread until all comments were addressed.
  • The facilitator then picked a new thread from the channel and started again.
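
We drove all of this by hand in the Slack UI but, to make the mechanics concrete, here's a rough sketch of how the conventions map onto Slack's API using the slack_sdk Python client. The token, channel name, and message text are all invented:

    from slack_sdk import WebClient

    # All invented: token, channel, and message text. On the day we did
    # this manually in the Slack UI; this sketch just shows the mapping.
    client = WebClient(token="xoxb-...")
    channel = "#open-season"

    # A prominent separator comment marks the start of a presentation.
    client.chat_postMessage(channel=channel,
                            text="=== Presentation: <title> ===")

    # A green card is a new top-level comment proposing a thread.
    green = client.chat_postMessage(channel=channel,
                                    text=":green_heart: <new thread topic>")

    # The facilitator makes that thread current by replying into it ...
    client.chat_postMessage(channel=channel, thread_ts=green["ts"],
                            text=":vertical_traffic_light: Active thread")

    # ... and yellow cards are replies dropped into the active thread.
    client.chat_postMessage(channel=channel, thread_ts=green["ts"],
                            text=":yellow_heart: <comment on current thread>")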

A couple of emergent behaviours were interesting and really improved things:

  • We started off intending to use the words "new", "same", and "NOW!" for the K-cards, but participants quickly switched to icons.
  • We didn't ask for a note with a card, but it felt very natural to put one.
  • We initially asked participants to publish thread comments into the main channel too, but it was too noisy.
  • We found that some comments were made into the thread without cards. These were generally interesting asides that didn't merit conversation but increased the discussion's bandwidth.
  • We saw that side conversations took place inside the thread, again without cards, to explore some points of mutual interest to a few participants.
  • We started putting references and links to related material in a general channel rather than with the threads.

Paul's facilitation really helped with these aspects; he noted when people were trying things and suggested that we follow some of the patterns generally.

Although we had an icon for the red card we didn't need it on the day. We didn't define a rat hole card at all, though Eric Proegler managed to improvise one.

The conference went really well, with great conversation, room for everyone to make their points, and a real buzz from the participants. The thought we put into the organisation was well worth it, but I loved how adaptable we were on the day too.

When I do this again I will be happy to use Slack threads for K-cards. I'd also like to find a way to introduce side conversations or breakout discussions, but I'd want a model that didn't dampen any of the vibe and momentum built up in the conversation.

The participants at this peer conference were Lalitkumar Bhamare, Fiona Charles, Janet Gregory, Paul Holland, Nicola Martin, Eric Proegler, Huib Schoots, Adam Leon Smith, James Thomas, and Amit Wertheimer. Thank you to Adam Leon Smith, Eric Proegler, and Paul Holland for help with the organisation.
