CAST 2022 Recap

 

Tl;dr: it was brilliant being at CAST 2022, the conference of the Association for Software Testing, in San Diego last week.

Even if I didn't get out on the water like Tina, it was brilliant to be at an in-person conference again, with the time and space and atmosphere and context to really talk to people about the presentations, about themselves, about their work, and about all the software testing things.

It was also brilliant to be at a conference with the theme hands-on, set up to actually be hands-on. We wanted attendees to leave with a set of tools that they'd at least held in their own hands and felt the weight and balance of.

Day one kicked off with three extended tutorials from experienced practitioners. Dawn Haynes gave us A Survey of Test Design Ideas which covered test design, heuristics, sampling, checklists, and other aids to generating possible areas to test.

For those in a rush to get going with automation against web sites, Boris Wrubel had Test Automation 101 for Coders. His abstract said that he would "set up a test automation framework with Selenium and Cucumber in less than one hour" but when I spoke to him just beforehand he reckoned he could actually do it inside 30 minutes (!) using downloadable project archetypes.

The third tutorial, and the one I attended (I'll link to more detailed notes for all the sessions I was in), was Usability for Everyone from Cordellia Yokum. In five hours we covered an enormous amount of ground, motivating the consideration of accessibility concerns (which includes search engine optimisation if you're looking for a business case), talking about the various standards that exist in the space, and looking at tooling for helping to assess accessibility.

The learning was given time to sink in at the conference reception where, over finger food and drinks, everybody got a chance to chat and build relationships. I love that the speakers were there enjoying the relaxed atmosphere too.  

Then, as the late afternoon blurred into the evening, we had dinner followed by a testathon.

A testathon? Yes, a testathon! 

Groups formed into teams to explore a website for an hour or so and then present their findings. Reports were judged by the tutorial presenters and the conference chair, and credit was given for how well the report described the state of the product, its risk analysis, and its test coverage, and for presentation style.

Some great reports were given and $1000 of prize money was shared out, including $500 to the winning team, Twan's Swans (pictured), and then it was on to games night.

The second day was a mixture of workshops and track talks bookended by keynotes, again all tied together by the hands-on theme and all interesting. There's something to be said for single-track conferences: you don't have the difficult decisions to make!

Before proceedings started officially, if you were up early, you could have Lean Coffee over breakfast. I was (thank you transatlantic travel) and I did (and it was great fun)!

First of the talks was from Cindy Lawless, who had a whole room ensemble testing the classic buggy triangle before talking us through how to (and how not to) write a test report about the work we'd just done. No artefact counts and no dumb charts. Instead, include a high-level summary, description of coverage, the strategy used, and risks found and remaining! Making Test Cases Suck Less, indeed!
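If you haven't met the triangle exercise before, it's usually some variant of the following: a function classifies a triangle from its three side lengths, and testers probe it for gaps. This sketch is mine, not the code from Cindy's session:

```python
def classify_triangle(a, b, c):
    """Return 'equilateral', 'isosceles', 'scalene', or 'invalid'."""
    sides = sorted([a, b, c])
    # Reject non-positive sides and anything failing the triangle inequality.
    if sides[0] <= 0 or sides[0] + sides[1] <= sides[2]:
        return "invalid"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# A few of the probes an ensemble might try:
print(classify_triangle(3, 3, 3))  # equilateral
print(classify_triangle(2, 3, 4))  # scalene
print(classify_triangle(1, 1, 3))  # invalid: 1 + 1 < 3
```

The fun of the exercise is less in the happy path and more in the edge cases — zero and negative lengths, degenerate "flat" triangles, non-numeric inputs — and then in reporting what you found clearly.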

The first of the day's workshops were next. In one room, we had It's All About the Money, Performance Testing and Infrastructure Costs with Twan Koot. In the other, Ben Simo presented Testing Without Requirements.

Twan asked participants to think harder about costs as they build and run their tests. The tests monitor performance, so why not also monitor how much you're paying, and take action to save money? He gave some lessons in how that can be accomplished.

Meanwhile, Ben described how his thinking on requirements has evolved over the years. Where he once expected requirements up front, he now tests first to help understand what the constraints on possible requirements could be. After the tour through his backstory we explored an application using FEW HICCUPPS as a guide, and then discussed what we'd found.

After lunch, Sergio Riveros used exercises to illustrate why we should be building accessible, inclusive applications, and also why compromises will always need to be made. At the same time, Amber Vanderburg, Innovation Ninja, was describing strategies she uses to help boost creativity and collaboration in her teams: keep people informed and create safe spaces to have constructive conversations about how everyone feels, and why.

Another pair of workshops next, and another tricky choice. I picked Tariq King's Hands-On with AI for Testing and Testing AI but I could easily have gone for Breadth, Depth, and Prioritization: Planning and Presenting Test Coverage with Eric Proegler and Stephanie Dukes.

Eric and Stephanie asked the participants to model test coverage of an application and then split people into groups to prioritise their testing according to different criteria such as regression, exploration, maintenance, documentation, automation, time, and cost. In parallel, Tariq had his session training ML models and then applying testing thinking to try to understand and fool them. Tons of energy and humour too, as you'd expect from him.

If it sounds like a long day, it didn't feel like it. There was still plenty of energy around the conference for the last two track talks from Curtis Pettit and Dawn Haynes.

It's important to know your role at work and Curtis helped the testers in his session to score themselves on tester-relevant characteristics with Critical Role: Filling Out Your Character Sheet. At the same time, Dawn was euthanising an old campaigner in Death to Test Cases! She asked if we used them, when, how, and why. Then we each wrote a case, came back to the group to talk about weaknesses and potential value, and then wrote a better version.

Last but not least, Eric Proegler stepped up for his second stint of the day, and the closing keynote, Anyone Can Performance Test?!? Hands-on to the end, with a few clicks, and using public tools, Eric got the whole conference running and interpreting basic performance tests on his site as he told the story of the performance testing profession.
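The core of a first performance test like Eric's is just collecting request timings with whatever tool you have and then summarising them sensibly. A minimal sketch of the summarising step, assuming the timings are already collected (the function name and choice of statistics here are mine, not Eric's):

```python
import statistics

def summarise_timings(timings_ms):
    """Basic stats a first performance test report might include."""
    ordered = sorted(timings_ms)
    # Index of the 95th percentile in a simple nearest-rank style.
    p95_index = max(0, int(len(ordered) * 0.95) - 1)
    return {
        "min": ordered[0],
        "median": statistics.median(ordered),
        "p95": ordered[p95_index],
        "max": ordered[-1],
    }

# One slow outlier (400 ms) in otherwise steady responses:
timings = [120, 130, 125, 400, 118, 122, 128, 119, 121, 126]
print(summarise_timings(timings))
```

Even this much is enough to start the conversations that matter: the median tells you what's typical, and the gap between p95 and max points at the outliers worth investigating.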

And that was it. We had a brilliant time and now Tina can move all her tickets to Done. 

See you there next year?
Images: Chris Kenst, Tariq King, Tariq King, Tristan Lombard, Joel Montvelisky, Joel Montvelisky, Tina Toucan, Tina Toucan.
