
A CEWT Aspiration

Testing Ideas. The topic was deliberately ambiguous, not only because it reflects the situations in which we frequently work, but also to provoke and admit a wide range of discussion at the first Cambridge Exploratory Workshop on Testing. And while we got that, we also got some common themes. From my perspective, three stood out:
  • personas
  • analogy
  • timing
There's a wealth of thought on exploiting personas to guide testing (and even design) by, for instance, defining a set of user profiles that represent important parts of a product's user or customer base and then trying to put yourself in their mindset, have their concerns, and use the product the way they would to accomplish the aims that they would have. The use of related tools like de Bono's Six Thinking Hats has also been well covered in testing.

Across the presentations and discussion we got some interesting thoughts on what other kinds of personas could provoke ideas too:
  • putting yourself in the position of your colleagues; testing in the style of others
  • putting yourself in the position of some aspect of your own personality
  • putting yourself in the position of the software
and also on being aware of potential limitations of aspects of an individual's own persona. For example, it was suggested that perhaps those testers who like to plunge in and critique might need to exercise caution when testing a new idea (perhaps one suggested in a meeting) because of the risk of being seen as too confrontational or negative (whether or not that is the intent), alienating whoever suggested it, and maybe losing or holding back something worthwhile. Testers who prefer to wait and reflect might be seen as allowing ideas to grow.

I wonder whether personas are a way of cutting across testing heuristics we're already familiar with such as SFDPOT and HICCUPPS. Maybe we can part-define certain kinds of users as being particularly interested in some of the areas identified by those mnemonics. For example, choosing to test like a marketing colleague might focus on product claims and (apparent) satisfaction of current user desires but care less about how data flows around the system.

Karo's idea of putting yourself in the position of the software is one I found really interesting. It feels related to the notion of offers that James Lyndsay's recent improv/testing workshop suggested. In that context, the software interacts with the tester by making offers (click on this button, enter text into this field, retrieve some data ...) and, in this one, the software is not only offering but additionally an agent with its own needs. The potential for a perspective change when you're blocked for ideas seems huge.

We talked about when and whether it is important to consciously adopt a persona, and then more broadly about the advantages of consciously adopting any approach that you might feel you already just do naturally or is common sense. Once you're aware of it as a technique, you can choose to apply it in certain circumstances, and can draw inspiration from knowing it's in your toolbox when you're after a new angle of attack. If you only have access to something unconsciously, then you're always waiting to see whether you happen across it. Which isn't to say that you shouldn't exploit the stuff that just happens, but try to watch how you do what you do - and how others do it.

My own presentation was on analogy and gave a specific example - joking and testing - that I've been exploring for a while. Analogies are incomplete and so are heuristic, but they offer the potential for great value in bootstrapping, building, and exploring models of the system under test.

Gabrielle made analogies between particular life skills and experiences and testing. For her, the risks associated with riding a motorbike, and the things she does to mitigate those risks, map well onto risk in the domain of testing; similarly for softer skills, such as handling interactions with the various managers she's had at the charity shop where she volunteers.

Both Mike and Liz's talks included the question of where and when testing takes place in projects they've worked on. Giving testers permission to apply themselves in the design phase of a project, and getting buy-in from others on the project, was flagged up. Testing the gaps between stories and testing for gaps between stories seemed to be a couple of places where this is generally going to be relatively uncontroversial and could show how testers can add value.

We touched on the fact that ideas are sometimes kept deliberately ambiguous in order to keep all the stakeholders on board with a project. Each can feel that the thing being built will do what they want while it is still only an idea couched sufficiently vaguely.

The problem of when to try to squash the space of possibilities into a specific actuality was thought potentially difficult, and links back to the point above about exercising caution when testing ideas lest the progenitor of the ideas become disillusioned. It was suggested that presenting evidence of the current status and letting the stakeholders themselves recognise the way the wind is blowing might be a useful approach.

The timeliness of an idea and how that affects the traction it gets was something Neil talked about. He gave an example of a simple utility to collect logs and other trace files from multiple locations on a machine after a failure was observed. It was created by a tester who saw a need and taught themselves to code just enough to implement a solution.

The solution was shared with colleagues, who taught themselves enough code to modify and extend it and so on. It not only saved time, but its existence improved the skill set of the team as they inspired one another to do more with it.  Management saw the tool and asked for the test team to develop it for inclusion in the product for collecting the same kind of data on the customer side.
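Neil didn't show the tool itself, but a first cut of something like it needn't be much code, which is part of why a tester teaching themselves just enough to build it is so plausible. Here's a hedged sketch; the directory names, file patterns, and archive naming are invented for illustration:

```python
# A minimal sketch of a failure-triage collector: gather log and trace files
# from several locations into one timestamped zip archive for later analysis.
# The source directories and glob patterns are illustrative assumptions.
import zipfile
from datetime import datetime
from pathlib import Path

def collect_logs(sources, patterns=("*.log", "*.trace"), out_dir="."):
    """Copy files matching the patterns from each source directory into a zip."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    archive = Path(out_dir) / f"diagnostics-{stamp}.zip"
    with zipfile.ZipFile(archive, "w") as zf:
        for src in map(Path, sources):
            for pattern in patterns:
                for f in src.rglob(pattern):
                    # Prefix entries with the source dir name to avoid clashes.
                    zf.write(f, arcname=f"{src.name}/{f.relative_to(src)}")
    return archive
```

After a failure is observed, someone might run, say, `collect_logs(["/var/log/myapp", "/tmp/myapp-traces"])` and attach the resulting archive to a bug report.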

An idea nurtured can bring unexpected value to unexpected people from unexpected places along the way. Neil emphasised this by talking about how he's taken the local Lean Coffee meetup format into his own team meetings and got positive results.

There were stacks of things we didn't cover much but might have with more time or had the discussion gone different ways. This is just a flavour of them:
  • opportunity costs of ideas. Building a utility, learning to code etc are useful but what else wasn't done as a result?
  • sharing ideas. Persuading others that your ideas are worth pursuing. Persuading others that you're no longer convinced by an idea.
  • ownership of ideas. Who owns them? Who gets credit for them? How much does it matter?
  • meta-ideas. Trying to analyse where your ideas come from and looking for ways to get ideas from other places.
  • how to choose between ideas. Often the problem isn't coming up with ideas, it's a surfeit of them.
  • prototypes and pretotypes. Getting a physical thing in front of people can elicit more, and more useful, responses than describing the idea of the thing.
It's a tenet of Lateral Thinking that so long as the end result idea is valid in some way, the route to it doesn't matter so much. One of the things that motivate me to be involved in this kind of workshop and the other local meetups and to blog is the increasing realisation that idea begets idea begets idea begets idea begets idea (you get the idea) and even though everything along that chain might not be useful to me, I can often end up somewhere that is.

The act of having those ideas, making those associations, creates an environment in which having further ideas is easier. And more ideas means, on average, more good ideas. And I think having a local workshop was a good idea. Let's try and do it again.
Image: https://flic.kr/p/7cKrAC
