

Showing posts with the label Sketchnotes

I Wish I Could Sprechen Sie Deutsch

I'm slowly learning German ... for fun, believe it or not. To limit my time commitment I've been mostly studying with lessons on apps or in person, supplemented by YouTube, ChatGPT, DeepL, blogs, an ancient CD-based box set that my well-meaning parents bought in a charity shop, subtitled German-language films and ... Peppa Wutz. I've pushed ahead in some areas, for example by completing Babbel lessons all the way to B2 even though I'm nowhere near speaking at that level, and starting again at A1 even though I'm well past it. Being exposed to the advanced content can give some useful context, and redoing the basics can cement understanding, reinforce learning, and make connections that were missed the first time around. What I wasn't doing was rote learning verb conjugations, grammatical structures, or vocab lists. And that was fine for a long time, at least until I got good enough to have simple conversations...

LLEWT 2025

I attended LLEWT 2025 at the weekend. LLEWT is a peer conference hosted by Chris Chant, Joep Schuurkes, and Elizabeth Zagroba in Llandegfan on the island of Anglesey, North Wales. This year's theme was Rules and constraints to ensure better quality: Think of things like WIP limits, zero bug policies, trunk-based development, not allowing any form of interprocess communication except through service interfaces that are externalizable, or just firing all your testers so the devs have to step up. (Yes, not all of these are a good idea all of the time.) Some terms related to this theme are forcing functions, poka-yoke, and behavior-shaping constraints. Basically we're looking for any rule or constraint you put in place to get to better quality. (Some systems thinking might be required.) The format of LLEWT encourages proposals for experience reports on the theme, takes feedback on the proposals, an...

LLEWT 2024

This weekend I was at LLEWT 2024, a peer conference on Anglesey, North Wales, discussing communication. Given the day jobs of the participants, it was no surprise that the experience reports and the conversations that followed them mostly focussed on software development contexts. Notes from my presentation are in Express, Listen, and Field. I made sketchnotes (below) for each presentation and a mindmap (above) to try to summarise the whole. Without much reflection yet, I guess I would pull these common high-level threads from the day: There are multiple reasons that communication fails ... like, duh! ... but having multiple strategies for framing a message can help ... and having multiple tactics for delivering a message can help too. Understanding what you want from an interaction is key ... so setting the context to make that more likely is wise ... which might mean meta-conversation, being transparent, or changing your approach...

Testing (AI) is Testing

Last November I gave a talk, Random Exploration of a Chatbot API, at the BCS Testing, Diversity, AI Conference. It was a nice surprise afterwards to be offered a book from their catalogue and I chose Artificial Intelligence and Software Testing by Rex Black, James Davenport, Joanna Olszewska, Jeremias Rößler, Adam Leon Smith, and Jonathon Wright. This week, on a couple of train journeys around East Anglia, I read it and made sketchnotes. As someone not deeply into this field, but who has been experimenting with AI as a testing tool at work, I found the landscape view provided by the book interesting, particularly the lists: of challenges in testing AI, of approaches to testing AI, and of quality aspects to consider when evaluating AI. Despite the hype around the area right now there's much that any competent tester will be familiar with, and skills that translate directly. Where there's likely to be novelty is in the technology, and the technical domain, and the eff...

BCS: Testing, AI, and Diversity

I spoke at the BCS Software Testing Specialist Group's Testing, AI, and Diversity Conference yesterday. That's me at the top, holding an imaginary marrow. There'll be videos of all of the talks online shortly but in the meantime, here are my sketchnotes:
- Sam De Silva, An overview of the draft EU AI Regulation.
- Adam Leon Smith, Software Testing Standards - why do we need them and when are they useful?
- Alan Giles, Giving ‘The User’ A Face - Accessibility Testing Using Personas.
- Andy Shaw, Mental Health, Testing and Me.
- Deborah Reid, Accessibility 101.
- Laveena Ramchandani, Testing Data Science Models.
- Jonathon Wright, Shift Right into the Metaverse with Digital Twin Testing.

Image: Jonathon Wright on Twitter

What a Performance

At CAST 2022 Eric Proegler delivered a keynote speech packed with information, audience participation, and humour. Anyone Can Performance Test?!? he said, and then showed a roomful of sceptical testers that he wasn't lying by having us hammer the WOPR web site while he monitored the back end. In a second round we gathered stats from our browsers' dev tools and shared the findings into a Google sheet. From that we were able to start seeing some potential patterns in the speed at which pages were loading on the client side for different kinds of devices and networks. We also got a hint that those accessing the server at different times might be experiencing different performance. Eric then introduced us to WebPageTest where, for free, you can request a device, a connection type, a geographical location, and a URL, push a button, and get back data on page load times, accessibility scores, and other metrics for your chosen combination. He continued to monitor W...
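The pooled dev-tools numbers lend themselves to very simple analysis. Here's a minimal sketch of the kind of grouping we did in the shared sheet, using invented figures rather than the data from the session:

```python
from statistics import median

# Hypothetical sample rows of the kind we pasted into the shared sheet:
# (device, network, page load time in ms). All figures are invented.
samples = [
    ("laptop", "wifi", 820),
    ("laptop", "wifi", 910),
    ("phone",  "4g",   1640),
    ("phone",  "4g",   1550),
    ("phone",  "wifi", 1100),
]

def median_load_by_group(rows):
    """Group load times by (device, network) and return a median per group."""
    groups = {}
    for device, network, ms in rows:
        groups.setdefault((device, network), []).append(ms)
    return {key: median(times) for key, times in groups.items()}

# e.g. median_load_by_group(samples)[("phone", "4g")] == 1595
```

A median per (device, network) pair is enough to hint at client-side patterns without being skewed by the odd outlier.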

Listener at Work

That's me sketchnoting at CAST 2022, taken by Pradeep Soundararajan. He tweeted the image and got a reply that made me smile: My grin was wide for a few reasons: First, there is no way I'd flatter myself with that title. Second, this is not really work; being at CAST is an absolute pleasure. Third, I actually redrew those notes shortly afterwards because I'd only filled half the page when the talk ended. I'll forgive myself the last point. I'd thought it was another 90-minute session when in fact it was only 45. Doh! But that highlights an interesting thing about sketchnoting: how to determine what to put on the page, and where, and when. Several people at the conference asked me about that and I said the same kinds of things I wrote in Beginning Sketchnoting a few years ago. My basic approach hasn't changed very much at all. I have a small repertoire of images and I draw them shabbily with the biros I have to hand. What has changed since 2018 is my c...

The How of the Why

At CAST 2022 Amber Vanderburg told us that when you want to make a change, particularly something innovative, start with the Why. Why? So that everyone involved can understand the motivation and direction of travel. I've learned that myself over the years too, from Simon Sinek, and it has the added benefit that conversations about the merits of the Why can be had before any implementation starts. Of course, being honest about what the Why is is important too, if you care to maintain trust over time. Amber suggested a few things that help to navigate the change once it's underway. The How of the Why, if you like:
- proactive conversation: some kind of formal structure, some way for all voices to be heard, perhaps with a time box to stop things dragging on.
- awareness of fundamental attribution error: where someone's apparent motive (e.g. antagonism) may not be their actual motive (e.g. sharing potential alternatives).
- strength and weakness alignment: being clear with one ano...

You Wish!

We got a slice of Ben Simo's backstory in his CAST 2022 workshop, Testing Without Requirements, last week. It was a trip through working environments in which requirements had varying levels of detail, through Ben's evolution in thinking around the kinds of constraints they imposed on his work, and into a model of the testing landscape which recognises the dynamic nature of what is known and unknown during testing. A couple of takeaways:
- Stated requirements are not all of the requirements ... and we can test that.
- Firm requirements are not always available ... but we can test to solidify them.
- The requirements that matter meet a human need ... and we can test to learn what they are.

I guess I could summarise the presentation as: Can we capture all and only the user wishes? You wish! In the interactive section of the workshop we used the FEW HICCUPPS mnemonic to help us to identify unstated r...

Your Recipe for Suck Less

In Cindy Lawless's keynote at CAST 2022 we spent a few minutes testing a simple application as a group and then collaborated on a test report. The report was the kind of thing a team lead might be expected to provide to management before a software release in a traditional software shop but, with Cindy's guidance, we avoided dumb artefact counts, meaningless charts, and level-of-effort analysis rather than risk analysis. Cindy's report recipe is straightforward: summary, strategy, coverage, risks, bugs and other concerns. She gave a nice summary of how this can be a simple and clean Slack message rather than the cumbersome slide deck that is often requested, and copy-pasted, although we did build a deck in the session. I think there are a handful of key points to take away:
- Management won't read it (all). Make the important stuff concise, clear, and prominent.
- The important stuff is what's important to them. What could affect business value?
- Don't let th...

Access all Areas

I attended Cordellia Yokum's Usability for everyone: Are you excluding some users from accessing your website? workshop at CAST 2022. As I can't do justice to a packed and interactive five-hour session in a short post like this, I'm going to simply drop bullet lists and links from the notes I took here.

Accessibility
- Not just about disability
- Auditory, cognitive, neurological, physical, visual
- US: one in four have some disability
- Small screens, elderly users, slow internet, poor lighting conditions, etc
- SEO benefits from being able to access content
- Accessibility is not a project. Like Quality, improve it incrementally and continually

Assistive technologies
- Screen magnifiers (e.g. web browser zoom)
- Text readers (e.g. for blind and partially-sighted users, and those with ADHD or dyslexia)
- Speech recognition software
- Head pointers, motion tracking (often for paraplegic users)
- Single switch entry (e.g. sip and puff mouth operation; keyboards with "quick buttons")

POUR
- Perceivable...

Red Testing Hood

Angie Jones, The Build That Cried Broken

Like the boy who cried wolf, the build that’s repeatedly red for no good reason will not be trusted. Worse, it will likely end up in a persistent red state as people stop looking at the regular failures. Eventually something important will also fail ... and no-one will notice. At CAST 2021, Angie told us this and other stories about a team she was on, one who found themselves in that bad place but used their wits to escape and lived happily ever after. Well, perhaps that’s an exaggeration: they tolerated living in a short-term bearable state where a reliable kernel of tests supported development and deployment and a set of flaky tests were worked on to the side. Separating off the flakes was a good move but it was supported by others, including assigning a team member to investigate all failures and setting limits on the percentage of tests t...

AI Needs Testers

Tariq King, Towards Better Software: How Testers Can Revolutionize AI and Machine Learning

Tariq issued a powerful call to arms in his talk at CAST 2021. Yes, AI/ML is new technology and, yes, it is solving new problems, and so, yes, testers may find themselves out of their comfort zones in some respect. But there are parallels between machine learning and testing that should give testers confidence that they have something valuable to contribute right now: mapping inputs to outcomes for black boxes, partitioning a possibility space, and data wrangling. Learning is part and parcel of testing, so why think we can’t learn what’s needed for working with AI and ML systems? And don’t forget that testers come preloaded with universally-valuable skills such as exploring, questioning, and assessing. Without some kind of challenge, Tariq says, AI is going to continue powering the ...

Laugh Don't Cry

Laurie Sirois, Quality Isn’t Funny

Why didn’t we find this bug in testing? Without a sense of humour, hearing that kind of question repeatedly could bring people to tears. At CAST 2021 Laurie Sirois encouraged us to deploy laughter to defuse those tense situations, improve our relationships, and grow other people’s perceptions of us. As a manager in particular, lightening the mood can do wonders for a team’s morale, creativity, and sense of safety. Care needs to be taken over the style and timing of the humour used. Sarcasm and inside jokes might work well with trusted peers but may not be appropriate when delivering feedback. Even self-deprecating humour can make others uncomfortable in the wrong context. Sounds challenging? Don't worry. If you’re not a natural stand-up simply smiling more frequently is a good start and it turns out that sharing surprising insights (the aha!) can...

Scale Model

Greg Sypolt, Building a Better Tomorrow with Model-Based Testing

As he was telling us at CAST 2021, Greg’s team have built a model-based testing system and integrated it into continuous integration infrastructure which has scaled to be capable of exhaustively exercising the 30 or 40 deployment scenarios that each of their products supports. The models bring advantages such as describing system behaviour in reusable chunks that are independent of implementation details, making maintenance straightforward, and broadening coverage. Sounds great, and is, but it comes at a price. Getting buy-in for this kind of approach — from both management and the team — can be tricky and there’s a lot of up-front effort, unfamiliar concepts and technology, and steep learning curves. The models Greg needs can be quite simple because each product is basically a linear navigation through a seque...
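To make the model-based idea concrete, here's a minimal sketch: a tiny, hand-invented state model, independent of any implementation, from which a linear test sequence is derived by walking the transitions. It illustrates the general technique, not Greg's actual system:

```python
# A hand-invented state model: each state maps to (action, next state) pairs.
# States and actions are illustrative only, not a real deployment model.
model = {
    "start":      [("open installer",  "installing")],
    "installing": [("accept defaults", "configured")],
    "configured": [("launch product",  "running")],
    "running":    [],  # terminal state
}

def walk(model, state="start"):
    """Derive a test sequence by following the (here, linear) model."""
    steps = []
    while model[state]:
        action, state = model[state][0]  # linear model: one outgoing transition
        steps.append(action)
    return steps

# walk(model) == ["open installer", "accept defaults", "launch product"]
```

Because the actions are named in the model rather than coded against an implementation, changing the product's flow means editing the model, not every test, which is where the maintenance advantage comes from.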

Testing Hats Can Be White

Rajni Hatti, Ethical Hacking for Testers

Testers should not feel excluded from exploring security concerns just because specialists are available or tooling (such as the ZAP scanner) is running in continuous integration. Why? Rajni gave three reasons at CAST 2021:
- Testers tend to have a big-picture perspective and so perhaps ideas about where there might be vulnerabilities outside of standard attack vectors.
- Testers are more likely to be involved in the design of features and so able to ask security questions or influence the priority of security in development backlogs.
- Security is a process not a product, and so regular review throughout the cycle is desirable versus throwing a build over the wall to some other team.

Naturally, there is opportunity cost associated with additional security testing, so th...

Don't Just Check, Please

Ben Simo, Computer-Assisted Testing

Ben kicked off CAST 2021 with a brief history lesson, tracing the use of the term checking as a tactic in software testing back to at least Daniel McCracken’s Digital Computer Programming from 1957 and through into his own recent model. Checking for him is a confirmatory activity, focusing on knowns and attempting to demonstrate that what was known yesterday has not changed today. Checking need not be performed by machine but it’s a common target for automation because it comes with a straightforward decision function: the previously-known state. In fact, this is for many all of what “test automation” is or can be, and numerous regression test frameworks exist to facilitate that kind of work. Ben would, I think, reject both the term and the limited thinking about where computer tooling can be valuable for testers. In his...
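Checking in this sense can be sketched in a few lines: the decision function is just a comparison against a previously-known state. The function and values below are invented for illustration:

```python
# The previously-known good state, recorded on some earlier run.
# The values here are invented for illustration.
KNOWN_GOOD = {"status": "ok", "total": 42}

def current_output():
    """Stand-in for querying the system under test today."""
    return {"status": "ok", "total": 42}

def check():
    """A confirmatory check: pass only if nothing has changed since the
    known state was recorded."""
    return current_output() == KNOWN_GOOD
```

The check says nothing about whether KNOWN_GOOD was ever actually good, or about anything outside it, which is exactly the limitation of purely confirmatory automation.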

Context Above All

The other night I attended We Need To Talk About Testing, a panel discussion featuring Cassandra Leung, Alaine Miller, Richard Bradshaw, and Rob Meaney, hosted by Codecraft. With an audience of software crafters rather than testers, it made sense that the conversation was guided through a set of greatest hits topics including automation, the need for humans in testing, confirmatory vs exploratory, testability, the testing triangle, and observability. The panelists spoke eloquently about all of those things from positions that demonstrated both expertise and experience, and a sense of humour. I couldn't help thinking about Brian's mum when Richard urged us not to drive testing with the automation pyramid: it's not a strategy, it's a triangle. I was familiar with the majority of the content but I do enjoy listening to knowledgeable people speaking on a subject they care about. One of the things I particularly like abou...

Fix The Right Bugs

Earlier this year I did the Black Box Software Testing course in Bug Advocacy with the Association for Software Testing, and loved it. That and other BBST courses are run in collaboration by both AST and Altom, and four members of the Altom team (Oana Casapu, Denisa Nistor, Raluca Popa, and Ru Cindrea) recently did a webinar, Bug Advocacy in the Time of Agile and Automation, on the RIMGEN mnemonic in the context of hard-to-reproduce bugs sitting around in a team's backlog. Cem Kaner says that our mission as testers includes getting the right bugs off the backlog and fixed. The webinar described how we can work towards that by thinking about how to Reproduce, Isolate, Maximise, Generalise, and Externalise the issue, then reporting what we found using a Neutral tone.

Capping it Off

I'm lucky that my current role at Ada Health gives me, and the rest of the staff, a fortnightly community day for sharing and learning. I've done my, erm, share of sharing, but today I took advantage of the learning on offer to attend a workshop on our approach to making medical terminology accessible to non-experts, a presentation on how we manage our medical knowledgebase, another on the single sign-on infrastructure we're using in our customer integrations, and a riskstorming workshop using TestSphere to assess an air fryer. So that would have been a great day by itself, but I, erm, capped it off by attending Capgemini's TestJam event, to see the keynotes by Janet Gregory and Lisi Hocke. Janet talked about holistic testing, or the kinds of critical review, discovery, and mitigation activities that can take place at any point in the software development (and deployment, and release) life cycle. The foundation for all of this is good communication and relationship...