
Software Sisyphus


The Association for Software Testing is crowd-sourcing a book, Navigating the World as a Context-Driven Tester, which aims to provide responses to common questions and statements about testing from a context-driven perspective.

It's being edited by Lee Hawkins, who is posing questions on Twitter, LinkedIn, Mastodon, Slack, and the AST mailing list and then collating the replies, focusing on practice over theory.

I've decided to contribute by answering briefly, and without a lot of editing or crafting, by imagining that I'm speaking to someone in software development who's acting in good faith, cares about their work and mine, but doesn't have much visibility of what testing can be.

Perhaps you'd like to join me?

 --00--

"How can I possibly test 'all the stuff' every iteration?"

Whoa! There's a lot to unpack there, so let me break it down a little:

  1. who is suggesting that "all the stuff" needs to be tested?
  2. why are they suggesting it?
  3. what do they mean by "all"?
  4. what do they mean by "stuff"?
  5. why are you on the hook for this task?

OK, to summarise your answers then: you're the tester on the team, and it's your product owner and the developers who think this, but you agree with them (1) because ... you are the tester on the team (2, 5). No-one is specific about "all", but you understand it to mean that the others do not want to do any testing themselves (3). "Stuff" is similar, and it's your responsibility to work it out (4).

I don't think this is a healthy or sustainable situation and, while it's easy to say that it needs to change, I know that making a change can be difficult, particularly if you don't have an ally on the team.

But the thing that worries me the most is that you think you should test all the stuff, yet you can't say what you mean by "all the stuff". Your first ally should be yourself.

So let me propose a way that you could think about your role. Up front, I'll note that contexts differ, so it might not fit yours perfectly, although I think it works reasonably well in general.

Remember that it's never possible to test "all the stuff" because there are so many variables involved in running any piece of software anywhere that there's always another test that could be performed. What is possible is to decide what the important stuff to test is, given what we know about stakeholder concerns, risks to business value, the time available, the software itself, and other relevant factors.

This changes over time. At the iteration level, it changes because the software is being developed. But other things can change too: perhaps the stakeholders change their minds, or a deadline moves closer, or the infrastructure your product runs on is upgraded, or an information gap is identified, or ...

Noticing these changes is not necessarily trivial, but consciously looking for them and building a network that will share them are both generally productive ways to increase your chances of spotting them.

Once changes are identified, the task becomes working out an appropriate amount of time and effort to spend reviewing them. Sometimes that will mean not looking at them at all. It's also important to think about how to look at them and what kind of outcome is desired from that activity.

For example, one time the requirement might be a broad landscape view of some new feature, time-boxed at a couple of days, with a verbal report to the team about the risks identified. On another occasion it might be a quick, very tightly-focused investigation into combinations of input values, with the goal of extending the coverage of an existing parameterised unit test. Different people will be better suited to different types of task, and multiple pairs of eyes will likely be better than one.
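
To make that second task concrete, here's a minimal sketch of the kind of parameterised unit test whose coverage grows by adding input combinations. I'm assuming Python and pytest, and the function under test is invented purely for illustration; none of it comes from a real project:

```python
import pytest

# Hypothetical function under test; the name and behaviour are
# invented for the sake of the example.
def discount(price: float, is_member: bool) -> float:
    """Members get 10% off; prices never go below zero."""
    rate = 0.9 if is_member else 1.0
    return max(0.0, round(price * rate, 2))

# Each tuple is one combination of input values plus its expected
# result, so extending coverage is a matter of appending rows.
@pytest.mark.parametrize("price,is_member,expected", [
    (100.00, False, 100.00),
    (100.00, True, 90.00),
    (0.00, True, 0.00),
    (-5.00, False, 0.00),  # a combination found during the investigation
])
def test_discount(price, is_member, expected):
    assert discount(price, is_member) == expected
```

The investigation's output maps neatly onto the test: each interesting combination it turns up becomes one cheap new row.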

Talking of automation, keep an eye on the big picture too. If there are time-consuming, repetitive testing tasks that are mechanical and boring to do, they're likely to be done badly or not at all. Look for ways to subcontract that work to test suites and free a human up to do something they're better suited to.
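
To give a flavour of what that subcontracting might look like, here's a minimal sketch of a repetitive manual check handed to a suite: confirming that a set of endpoints still respond after each build. Again I'm assuming pytest, and the service, URL, and paths are all made up:

```python
import pytest
import requests

# Hypothetical service; the URL and endpoint paths are placeholders.
BASE_URL = "http://localhost:8080"

# A mechanical, boring check that a human will rush or skip, but
# a suite will happily repeat on every run.
@pytest.mark.parametrize("path", ["/health", "/definitions", "/status"])
def test_endpoint_responds(path):
    response = requests.get(BASE_URL + path, timeout=5)
    assert response.status_code == 200
```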

Communication is key for teams to cohere. I think it starts with self-communication: understand what you are trying to achieve in a piece of work, and why, and what is out of scope. This will help to keep focus when working and show others that you are someone who thinks about what you are doing.

If you can find a version of that view of a testing role that you feel comfortable with, then you will be in a better place to interact with others about your work. You'll be able to suggest that someone else picks up the task of checking that bug fix; that you'd like to pair with someone to review the coverage of this test suite and see whether it can be extended to remove a day's manual effort at the end of each sprint; or that it would be a good idea to get together as a team to think about edge cases before coding the next feature, so that more robust testing can be done during development.

It probably won't be easy, by the sounds of your situation, but it's certainly easier than the Sisyphean task of testing everything all the time. It'll be more fun too.
Image: Bing Image Creator
