
Farewell AST

After four years, three of them as Vice President, I'm standing down from the board of the Association for Software Testing.

Let me say up front that I am an unapologetic romantic about my craft. (And, yeah, I called it a craft. Sue me.) I believe in what AST stands for, its mission, and in context-driven testing, so it's been an absolute privilege to be involved in running the organisation. It's also been fun, and full of difficult situations and choices, and hard work on top of family life and a day job.

There was also the small matter of the global Covid pandemic to deal with. The immediate impact was on CAST, our annual conference and in some ways the beating heart of the AST. We had to variously cancel, reschedule, and move CAST online, and we are still experiencing the after-effects as we organise the 2023 in-person event.

So why am I leaving? Well, first, I'm not leaving the organisation, only the board. I am a life member and I hope to remain an active one. But, second, I have decided not to stand for re-election because I'm tired and my family situation has become more complicated recently. I can't commit the time and energy it takes to do the kind of job I want to do.

And what kind of job would that be? I reflected on the question while making my decision, and also a couple of years ago when I was wondering whether to stand for a second term.

When I joined the board I saw that we needed to strengthen our operations in some key areas to remain functional: membership management, internal operations, and value proposition. These are fundamental to any member organisation, particularly one staffed by volunteers. 

Given that, I have focussed on foundational changes; things that I hope will help AST to operate with reduced friction, and to demonstrate who we are, what we do, and why, to existing and potential members. 

I've listed some of that work at the end here and in retrospect it looks like a lot. Frankly, it is a lot and I'm very proud of my contributions. But it's also not just my work. I definitely brought energy and the skills to analyse, advocate for, plan, co-ordinate, and implement some projects, but my colleagues on the board and volunteer members also have those kinds of skills and provided all kinds of input, feedback, assistance, and support along the way too.

Not only that, but the AST board is run on a consensus basis. We talk about what we're trying to do and why, and what our options are, and come to a group decision. My sense is that this gives the group greater cohesion and helps us all to feel involved despite the physical distances between us. 

There are also significant areas I didn't touch directly, such as CAST and BBST training, that others were taking care of in parallel. And I don't want to give the impression that AST is a basket case. It's not, but it does have a legacy orgbase with all of the things-are-the-way-they-are-because-they-got-that-way quirks that you'd expect.

I'm not so vain as to think that the work I did is done done. I reckon AST has a more stable, workable underpinning than before I joined, but I'm looking forward to seeing how the testers who take the organisation forward can reinforce, replace, extend, and build on top of it.

I wish good luck to all the continuing board members, and to the candidates in the current elections. I hope the next board has the same level of enjoyment and satisfaction that I've had doing this job.

Oh yeah, there's one other thing I got from AST that I'm exceptionally grateful for: being introduced to tater tots at a Tex-Mex joint in Atlanta during CAST. Man, I adore those crunchy potato love bombs and, after I've had a break, if anyone is interested in forming the Association for Scarfing Tots give me a shout.

--00--

This is a list of stuff I've done at AST that I made while I was thinking about this post.

Membership Management

It may seem obvious to say that members are the lifeblood of a membership organisation. It bears repeating, though, and it motivated a lot of my work. When I joined the board we didn't have much visibility of who our members were and who they had been. I built and incrementally improved semi-automated regular reports that have helped us to understand that better.

This led to the revision of our membership offering, from a one-price-fits-all approach to pay-what-you-can-afford with multiple price points for the same member benefits. A significant part of the effort on this project was first modelling, and later monitoring, the kinds of benefits we hoped for, such as increased membership from countries with lower average incomes or testers with less-senior role titles.

It's hard enough proposing potential approaches, trying to understand how they might work relative to one another and the existing system, and negotiating agreement from the board. Add on top of that the problems of migrating to the chosen system and keeping all relevant parties informed along the way and you'll see that this was a big undertaking.

To make things worse, the software we were using for membership management had been ancient for a long time and wasn't serving us well, but the mammoth task of researching alternatives, choosing one, and then migrating to it had been postponed repeatedly. I took it on, and now we have a functional and more modern system for managing membership, subscriptions, and events.

Organisation

Being nearly 20 years old, AST has had time to accumulate a lot of cruft anywhere that we store stuff. We've all seen this kind of thing, probably on a wiki or file share at work or searching for docs for some tool online. It's kind of bearable until it reaches a critical mass where it's hard to find what you need, where multiple approaches to the same task have been used in different places, and where it's clear that whoever was here before you didn't try to clean up. I cleaned up our Google drive.

Our web site had similar problems and I worked on that too. I used what analytics we had to try to optimise pages that were being hit most frequently and some basic SEO to try to promote the pages we'd prefer to be hit more often. 

I ran a volunteer project to remove the worst of the mess as well. We deleted over half of the pages, rationalised the site structure, standardised the layout of related pages, and made some templates for pages that we create on a regular basis. We discovered a bunch of unexpected problems along the way, such as broken spam filtering plug-ins on our contact forms, which we fixed as well.

As a remote-first organisation, and one in which the personnel changes regularly, and one in which some tasks only happen annually, it can be hard to find opportunities to share knowledge about how we do things. This is one of the causes of the cruft problem. 

To help overcome that, I wrote a lot of runbooks. I was very mindful that maintaining the runbooks shouldn't itself become a heavy burden, so I tried hard to keep them to necessary and sufficient information, presented clearly. I also organised the runbooks for discoverability, using Google Drive symlinks to place them in a central location and in relevant folders for their task.

I've demonstrated and encouraged more use of the data we have for decision-making. Over the last two or three years we have developed a very rich financial model of our CAST conferences which is helping us to make more informed choices when it comes to costs and ticket pricing. Building this model and tuning it for sustainability is part of our one- and three-year goals, a simple planning and prioritisation framework we introduced at my suggestion.

One of the perennial problems of volunteer organisations is the discrepancy between the number of good ideas and the time, energy, and resources to implement them. Having a process that explicitly nominates staggered goals, and reviewing our activities against them regularly, has helped to keep our focus on what we consider important.

Value Proposition

To help AST explain its value to potential members I researched the kinds of things that other, similar, membership organisations claim. This led to a wide-ranging analysis of what we felt AST could claim and to what extent, what we felt that we couldn't claim and didn't want to, and what we couldn't currently claim but would like to. We were able to plan initiatives, such as initially writing a clearer set of membership benefits, based on the outcome of that work.

A pipeline of incoming members is one concern, but we also need to find ways to show existing members that we are here and doing worthwhile things. I think AST suffers from not making the most of the good work that it does. For example, we might run an interesting webinar but advertise it minimally or late, not tell members that it happened or share the key take-aways, and not make it available on our YouTube channel despite having the recording.

I tried to help change that culture by writing material for our monthly newsletter and prompting for it to be used, showing how Hootsuite could be invoked relatively easily to improve publicity reach, fixing permissions issues with our YouTube channel, and documenting how to add videos with minimal effort.

We also don't share or get the most from what we've done in the past. I implemented automation with Zapier to fetch an item from our archives and post it on social media every day. I also kicked off a volunteer project, which is still running, to create a GitHub repo and retrospectively collect presentations from CAST conferences and AST webinars in it.

I've attempted to set up some ways in which long-term value can be created too. I wrote an e-book, Peers Exchanging Ideas, about running peer conferences, published on AST's GitHub repo. We used it for, and updated it after, a joint peer conference with BCS SIG which also produced a paper, Should the Public Care About Software Testing.

I came up with the idea for a crowd-sourced book, Navigating the World as a Context-Driven Tester, and collaborated with a long-term AST member, Lee Hawkins, to work out the details about how we'd like it to run. Lee's done a tremendous job over the last two years, and we've had contributions from over 60 testers from within and outside AST.

I set up and ran Steel Yourselves, an innovative webinar format in which testers challenge themselves to argue for a case they disagree with. I experimented with publicity materials and promotion too, introducing Canva for flyers and some simple configuration for emails to attendees before and after.
Image: https://flic.kr/p/kyG3nr
