CAST 2022 Recap

Tl;dr: it was brilliant being at CAST 2022, the conference of the Association for Software Testing, in San Diego last week.

Even if I didn't get out on the water like Tina, it was brilliant to be at an in-person conference again, with the time and space and atmosphere and context to really talk to people about the presentations, about themselves, about their work, and about all the software testing things.

It was also brilliant to be at a conference with the theme hands-on, set up to actually be hands-on. We wanted attendees to leave with a set of tools that they'd at least held in their own hands and felt the weight and balance of.

Day one kicked off with three extended tutorials from experienced practitioners. Dawn Haynes gave us A Survey of Test Design Ideas, which covered test design, heuristics, sampling, checklists, and other aids to generating possible areas to test.

For those in a rush to get going with automation against web sites, Boris Wrubel had Test Automation 101 for Coders. His abstract said that he would "set up a test automation framework with Selenium and Cucumber in less than one hour" but when I spoke to him just beforehand he reckoned he could actually do it inside 30 minutes (!) using downloadable project archetypes.
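If that sounds too good to be true, it helps to know that the Cucumber project publishes a Maven archetype which generates a ready-to-run skeleton project from a single command. A minimal sketch, with illustrative version and naming (check the Cucumber docs for current values):

    mvn archetype:generate \
      -DarchetypeGroupId=io.cucumber \
      -DarchetypeArtifactId=cucumber-archetype \
      -DarchetypeVersion=7.14.0 \
      -DgroupId=example -DartifactId=example-tests \
      -Dpackage=example \
      -DinteractiveMode=false

After that it's a matter of adding a Selenium dependency and writing your first feature file.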

The third tutorial, and the one I attended (I'll link to more detailed notes for all the sessions I was in), was Usability for Everyone from Cordellia Yokum. In five hours we covered an enormous amount of ground, motivating the consideration of accessibility concerns (which includes search engine optimisation if you're looking for a business case), talking about the various standards that exist in the space, and looking at tooling for helping to assess accessibility.
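To give a flavour of what such tooling automates, here's a toy Python sketch of my own (not from the tutorial) for one of the simplest accessibility checks, images with no alt attribute. Real tools such as axe or WAVE check far more than this:

    import requests
    from bs4 import BeautifulSoup

    def images_missing_alt(url):
        # An empty alt="" is legitimate for decorative images,
        # so only flag images where the attribute is missing entirely.
        html = requests.get(url, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        return [img.get("src") for img in soup.find_all("img")
                if not img.has_attr("alt")]

    for src in images_missing_alt("https://example.com/"):
        print("missing alt text:", src)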

The learning was given time to sink in at the conference reception where, over finger food and drinks, everybody got a chance to chat and build relationships. I love that the speakers were there enjoying the relaxed atmosphere too.  

Then, as the late afternoon blurred into the evening, we had dinner followed by a testathon.

A testathon? Yes, a testathon! 

Attendees formed teams to explore a website for an hour or so and then present their findings. Reports were judged by the tutorial presenters and the conference chair, with credit given for how well they described the state of the product, the risk analysis, and the test coverage, and for presentation style.

Some great reports were given, $1000 of prize money was shared out, including $500 to the winning team, Twan's Swans (pictured), and then it was on to games night.

The second day was a mixture of workshops and track talks bookended by keynotes, again all tied together by the hands-on theme and all interesting. There's something to be said for single-track conferences: you don't have the difficult decisions to make!

Before proceedings started officially, if you were up early, you could have Lean Coffee over breakfast. I was (thank you transatlantic travel) and I did (and it was great fun)!

First of the talks was Cindy Lawless, who had a whole room ensemble testing the classic buggy triangle before talking us through how to (and how not to) write a test report about the work we'd just done. No artefact counts and no dumb charts. Instead, include a high-level summary, a description of coverage, the strategy used, and risks found and remaining! Making Test Cases Suck Less, indeed!

The first pair of the day's workshops came next. In one room, we had It's All About the Money: Performance Testing and Infrastructure Costs with Twan Koot. In the other, Ben Simo on Testing Without Requirements.

Twan asked participants to think harder about costs as they build and run their tests. The tests already monitor performance, so why not also monitor how much you're paying, and take action to save money? He gave some lessons in how that can be accomplished.
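I can't reproduce Twan's material, but the underlying idea is simple enough to sketch for myself: treat cost as just another metric that a test run reports, and gate on it like you would latency. A hypothetical Python example, where every rate and threshold is made up:

    # Hypothetical rates and volumes for a load test run.
    INSTANCE_HOURLY_RATE = 0.34     # made-up price per instance-hour
    INSTANCES = 4
    RUN_HOURS = 1.5
    REQUESTS_SERVED = 250_000

    run_cost = INSTANCE_HOURLY_RATE * INSTANCES * RUN_HOURS
    cost_per_1k = run_cost / (REQUESTS_SERVED / 1000)

    print(f"run cost: ${run_cost:.2f}, per 1k requests: ${cost_per_1k:.4f}")

    # Fail the run if cost regresses, just like a latency threshold.
    assert cost_per_1k < 0.01, "cost per 1k requests regressed"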

Meanwhile, Ben described how his thinking on requirements has evolved over the years: where once he expected requirements up front, he'll now test first to help understand what the constraints on possible requirements could be. After the tour through his backstory we explored an application using FEW HICCUPPS as a guide, and then discussed what we'd found.

After lunch, Sergio Riveros used exercises to illustrate why we should be building accessible, inclusive applications, and also why compromises will always need to be made. At the same time, Amber Vanderburg, Innovation Ninja, was describing strategies she uses to help boost creativity and collaboration in her teams: keep people informed and create safe spaces to have constructive conversations about how everyone feels, and why.

Another pair of workshops next, and another tricky choice. I picked Tariq King's Hands-On with AI for Testing and Testing AI but I could easily have gone for Breadth, Depth, and Prioritization: Planning and Presenting Test Coverage with Eric Proegler and Stephanie Dukes.

Eric and Stephanie asked the participants to model test coverage of an application and then split people into groups to prioritise their testing according to different criteria such as regression, exploration, maintenance, documentation, automation, time, and cost. In parallel, Tariq had his session training ML models and then applying testing thinking to try to understand and fool them. Tons of energy and humour too, as you'd expect from him.
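By way of illustration (mine, not Tariq's materials), the fooling half of that can be as simple as nudging an input on a toy model until the prediction flips:

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression

    X, y = load_iris(return_X_y=True)
    model = LogisticRegression(max_iter=1000).fit(X, y)

    sample = X[0].copy()                 # starts correctly classified
    original = model.predict([sample])[0]

    for step in range(1, 200):
        sample[2] += 0.1                 # keep growing the petal length
        if model.predict([sample])[0] != original:
            print(f"prediction flipped after {step} steps")
            break

Testing thinking kicks in when you ask which of these flips are reasonable and which reveal a model that's learned something fragile.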

If it sounds like a long day, it didn't feel like it. There was still plenty of energy around the conference for the last two track talks from Curtis Pettit and Dawn Haynes.

It's important to know your role at work and Curtis helped the testers in his session to score themselves on tester-relevant characteristics with Critical Role: Filling Out Your Character Sheet. At the same time, Dawn was euthanising an old campaigner in Death to Test Cases! She asked if we used them, when, how, and why. Then we each wrote a case, came back to the group to talk about weaknesses and potential value, and then wrote a better version.

Last but not least, Eric Proegler stepped up for his second stint of the day, and the closing keynote, Anyone Can Performance Test?!? Hands-on to the end, with a few clicks, and using public tools, Eric got the whole conference running and interpreting basic performance tests on his site as he told the story of the performance testing profession.
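The public tools do the clever parts, but the kernel of a basic performance test is small enough to sketch. A toy, single-user version in Python (the URL is a placeholder):

    import statistics
    import time

    import requests

    URL = "https://example.com/"        # placeholder target
    timings = []

    for _ in range(50):
        start = time.perf_counter()
        requests.get(URL, timeout=10)
        timings.append((time.perf_counter() - start) * 1000)   # ms

    deciles = statistics.quantiles(timings, n=10)
    print(f"median: {deciles[4]:.0f} ms, "
          f"p90: {deciles[8]:.0f} ms, max: {max(timings):.0f} ms")

Real load testing adds concurrency, ramp-up, and think time, but interpreting percentiles like these is where everyone starts.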

And that was it. We had a brilliant time and now Tina can move all her tickets to Done. 

See you there next year?
Images: Chris Kenst, Tariq King, Tariq King, Tristan Lombard, Joel Montvelisky, Joel Montvelisky, Tina Toucan, Tina Toucan.
