Open Testing with Confluence

 
I am a believer in open-notebook testing. I make my work visible to anyone who wants to look at it, while it is in progress.

Why? Well, I dislike information silos, publishing keeps me thoughtful about my work and my standards high, and sometimes someone will spot something I've missed or mistaken.

But I also want my testing to be friction-free. In this context that has two aspects: I need to be able to (a) record and (b) share what I'm doing with as little impact on my work as I can manage.

I've written before about the way I take notes in a text editor using a simple markup language. In my previous job I ran a little script on my testing notes and pasted the output straight into the Mediawiki instance we used.

Unfortunately, in my new job we use Confluence. Also unfortunately, I found that support for even its own markup language was unreliable and so I had to find a new route.

What I've iterated my way to over the last four months is, again, a simple markup language and a script, but this time the markup is based on Markdown and the script uploads the notes itself, along with images, attachments, and labels.

Here's a snippet to illustrate the kinds of things I do:

    ## Annotations

    I've used the WIP! annotation already, but I have others:

    OK! Yes, this worked!

    FAIL! No, this didn't work.

    ?? Question, or something to come back to investigate.

    !! Problem, or surprising finding.

    TODO! Another task, maybe in the testing or in the notes.

And here's how it renders in Confluence:
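Roughly, each annotation becomes a coloured status lozenge at the start of its line. To give a flavour of the kind of conversion involved, here is a minimal sketch that maps the annotations onto Confluence's standard status macro. The macro markup is genuine Confluence storage format; the colour choices and the convert_annotations helper are illustrative only, not lifted from my actual script.

```python
# Map each annotation onto a Confluence "status" macro (storage format).
# The annotation set is the one above; the colours are illustrative guesses.
STATUS = {
    "WIP!":  ("Yellow", "WIP"),
    "OK!":   ("Green",  "OK"),
    "FAIL!": ("Red",    "FAIL"),
    "??":    ("Blue",   "??"),
    "!!":    ("Red",    "!!"),
    "TODO!": ("Grey",   "TODO"),
}

MACRO = ('<ac:structured-macro ac:name="status">'
         '<ac:parameter ac:name="colour">{colour}</ac:parameter>'
         '<ac:parameter ac:name="title">{title}</ac:parameter>'
         '</ac:structured-macro>')

def convert_annotations(line: str) -> str:
    """Replace an annotation at the start of a line with a status macro."""
    for token, (colour, title) in STATUS.items():
        if line.startswith(token):
            return MACRO.format(colour=colour, title=title) + line[len(token):]
    return line

# e.g. convert_annotations("OK! Yes, this worked!")
```

A full converter would also have to deal with headings, links, and images, but each piece can be as simple as this.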

When I'm working I'll make a directory for a new task, create a file for my notes, and start writing as I test. A default file will usually have the following:

  • A date stamp in the title so that when the pages are published I can easily see when they're from.
  • First section is Mission, so it's clear what the work is attempting to do.
  • Next section is Summary: a high-level perspective for stakeholders on the activities, results, risks, and next steps. I'll mark this work in progress until I'm done testing. There's a sketch of such a file below.
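Put together, a freshly created notes file might start out like this. It's only a sketch: the heading markup is my approximation of the format shown in the snippet above, and the title and date are placeholders.

```
# New Task Notes, YYYY-MM-DD

## Mission

What this piece of testing is attempting to do, and for whom.

## Summary

WIP! A high-level view of activities, results, risks, and next steps.
```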

As I work, I'll Cmd-Tab into the text editor (I'm using VS Code at the moment) to pop in a note or take screenshots that I'll drop next to the file for later upload.

Periodically, I'll upload the notes to Confluence so that what I've got so far is visible, and so that I can reference it in e.g. Jira tickets or Slack conversations.
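Publishing like this needs only a couple of calls to Confluence's REST API. Here's a minimal sketch of a page upload plus a screenshot attachment, assuming the standard /rest/api/content endpoints; the base URL, credentials, and function names are placeholders rather than my actual setup:

```python
import requests

BASE = "https://example.atlassian.net/wiki"  # placeholder Confluence URL
AUTH = ("me@example.com", "api-token")       # placeholder credentials

def publish(title: str, storage_html: str, space_key: str) -> str:
    """Create a page from already-converted storage-format HTML; return its id."""
    payload = {
        "type": "page",
        "title": title,
        "space": {"key": space_key},
        "body": {"storage": {"value": storage_html,
                             "representation": "storage"}},
    }
    response = requests.post(f"{BASE}/rest/api/content", json=payload, auth=AUTH)
    response.raise_for_status()
    return response.json()["id"]

def attach(page_id: str, path: str) -> None:
    """Attach a local file, such as a screenshot, to the page."""
    with open(path, "rb") as f:
        response = requests.post(
            f"{BASE}/rest/api/content/{page_id}/child/attachment",
            headers={"X-Atlassian-Token": "no-check"},  # required for uploads
            files={"file": f},
            auth=AUTH,
        )
    response.raise_for_status()
```

Re-uploading needs a little more care than this, because updating an existing Confluence page means incrementing its version number, but that's the shape of it.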

This process is not static. I alter it as my needs alter. For example, this week I changed the markup that I use for inserting links because I found it too easy to make a mistake. Next week I might change it again, because the new markup is close to Markdown's table notation and I can see that causing confusion.

You might think that you couldn't possibly write a tool like mine. Well, you might be surprised at how dumb my script is. I have bludgeoned my way to making it work with lots of trial and error, and I don't care that it's not beautiful or efficient. It has been well worth what it's cost me.

What value have I got from it? To start with, it fulfils my philosophical requirements: it is easy for me to record and share with very low friction. I make notes in an environment tuned very specifically for my needs but share in an environment tuned for the general good. Also, it has saved me person-years' worth of frustration with editing in Confluence.

The value is not just to me. Others like my testing notes and find them useful. Not just the people I'm working with, either: those who are searching in Confluence can come across them too. I have recently added the ability to put labels into my text file and have them respected by Confluence, so now my notes can also be automatically added to groups of related pages.
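Respecting labels is a one-call addition against the same REST API. Another sketch, reusing the placeholder base URL and credentials from the upload example:

```python
import requests

BASE = "https://example.atlassian.net/wiki"  # placeholder, as before
AUTH = ("me@example.com", "api-token")       # placeholder, as before

def add_labels(page_id: str, labels: list[str]) -> None:
    """Apply labels parsed from the notes file to the published page."""
    payload = [{"prefix": "global", "name": name} for name in labels]
    response = requests.post(f"{BASE}/rest/api/content/{page_id}/label",
                             json=payload, auth=AUTH)
    response.raise_for_status()
```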

To be clear, though, while the tooling is helpful, being able to take the right notes at the right cost at the right time and right level for the right people is a skill. I've spent a long time working on that and expect to continue doing so while refining the tooling to reduce whatever friction I encounter. 

Here's the full demo page I made for this post, and a zip of the script and the source documents that it was created from:


Highlighting: Pinetools

