
Why Do They Test Software?


My friend Rachel Kibler asked me the other day, "do you have a blog post about why we test software?" and I was surprised to find that, despite having touched on the topic many times, I haven't.

So then I thought I'd write one.

And then I thought it might be fun to crowdsource, so I asked in the Association for Software Testing members' Slack, on LinkedIn, and on Twitter for reasons, one sentence each.

And it was fun! 

Here are the varied answers, a couple lightly edited, with thanks to everyone who contributed.

Edit: I did a bit of analysis of the responses in Reasons to be Cheerful, Part 2.

--00--

Software is complicated, and the people that use it are even worse. — Andy Hird

Because there is what software does, what people say it does, and what other people want it to do, and those are often not the same. — Andy Hird

Because someone asked/told us to — Lee Hawkins

To learn, and identify risks — Louise Perold

sometimes: reducing the risk of harming people — Ilari Henrik Aegerter

since part of software is a complex system: To reveal unknown unknowns — Ilari Henrik Aegerter

but unfortunately also: as a masochistic self-medication practice — Ilari Henrik Aegerter

my definition: “Testing is the art of finding out what software can do, where it fails to do what it claims, and what else the product does that might be surprising” — Ilari Henrik Aegerter

We test software because between what business wants and what engineers deliver, a lot of information gets lost/filtered/unexplored. And it is important to find that information and bring it to the table for everybody to know what to do next — Lalit Bhamare

I took this one from a list of Software Testing Myths: "Testing is a measure of quality. The number of defects you find indicates the quality of the product." — Dusty Juhl

Testing is important because of risks we know about and risks we uncover during the activity. — Rachel Kibler

Testing is funtastic. — Aleksandar Simic

Living for testing, testing for living. — Aleksandar Simic

It depends on what I'm testing at the moment.  Lately I test to ensure we are releasing the product/feature that our company wanted to release, and that users will enjoy. — Joel Montvelisky

For me testing has always been a service; as such, the most important thing is to fulfill the needs we were brought in to provide. — Joel Montvelisky

If I needed to come up with a general umbrella reason for my testing… it would need to be around reducing the risk of disappointing / harming the people who will eventually work with our product  — Joel Montvelisky

It costs less than not testing. (in terms of reputation, hotfixes, etc) — Amit Wertheimer

It provides some ease-of-mind to the decision takers, and sleeping well is valuable. — Amit Wertheimer

In both cases, it's not always true, and if so - we should not test. If there's a way to gain enough confidence to sleep well, or have a way to avoid problems without testing, we should definitely explore it. — Amit Wertheimer

I work in testing because, in college, while I did well in my programming classes I wasn't the top of my class, whereas I was the top of my software testing classes. Since virtually no-one else even had testing classes, I could be a rock star there. — Curtis Pettit

I stay in testing because it's more fun, I'm still better at it, and I can avoid most of the non-programming problems that devs have, fighting with builds, monitoring tools, etc., while still writing as much code as I like. — Curtis Pettit

Because we prefer most of the feedback on our software to be deliberate feedback. Deliberate as in: influence and/or control over the what/when/how/... allows it to be more timely, more information-rich, more actionable. — Joep Schuurkes

We test software, to help make design decisions. — The Full Snack Tester (Ben Dowen)

We test software, to gain evidence through observation that helps us make judgements about software quality. — The Full Snack Tester (Ben Dowen)

We test software, so we can identify friction and misbehaviours before our users. — The Full Snack Tester (Ben Dowen)

I test for compliance to organizational and regulatory expectations — Perze Ababa

tests help me follow and document where the data flows and what the system does to each data whenever there’s a handoff — Perze Ababa

To be less embarrassed after release. — Lena (Pejgan) Wiberg

To reduce the risk for at least some lawsuits. — Lena (Pejgan) Wiberg

To be able to sleep better at night — Lena (Pejgan) Wiberg

Because it’s really fun, like detective work — Lena (Pejgan) Wiberg

We test, to learn the difference, if any, from how we expect software to behave and how it actually behaves in operation. — The Full Snack Tester (Ben Dowen)

We test software, to investigate potential risks and understand if our mitigation and avoidance of those risks are working. — The Full Snack Tester (Ben Dowen)

I test to understand the product as it exists today. — Chris Kenst

The necessity of a project to test software? To find its problems. Finding no problems by a certain pattern is also a valid result, just less likely to happen. — ☮️🕊️☯️📢Sebastian, Life Tester [Sebastian linked to a Michael Bolton thread on the topic]

I test mainly to have confidence for refactoring and extension. — Benjamin Bischoff

We test software because we want to learn as much as we can about it and we are specifically keen to find out if there are any potential problems associated with it. — David Högberg

To paraphrase @NicolaLindgren: We test software to be able to affect the perception of the product’s quality. — David Högberg

We test only because the risk of not testing is deemed too high a price to pay — Stu C

I test my code to gain confidence that what it does in reality matches my expectations. — Samuel Nitsche

I do it often as automation to have a signal for unexpected change in the future. — Samuel Nitsche

I do it to document the intentions I had when writing the software. — Samuel Nitsche

To learn something that we want to know — jonhussey

We test to reduce uncertainty — Declan O'Riordan

We test software to find out if there are differences between the product we’ve got, the product we think we have, and the product we want. — Michael Bolton

Whenever clever people try to do clever things, there's an element of risk. Someone might not understand exactly what the customer and the business want—and the customers or the business might not even know for sure. — Michael Bolton

Programmers might commit errors in implementing the code; smart people make mistakes too. — Michael Bolton

Finding those problems is important when things are serious. When health, life, safety, money, opportunity, reputation — individual and social values — could be at risk. — Michael Bolton [Michael elaborated further in his answer]

To discover the weird and wonderful quirks of the software in question! — Martin Pihl

Testing is a recognition of the fact that 'to err is human' ... The timing of, and degree to which we test, is determined by the potential risk of the change, combined with the impact of what might happen if we don't test. — Parshotam Toora

Entropy. There are an infinite number of ways in which software, that is part of a complex system, can fail. There are only a few ways in which it can succeed. Testing is one approach to find out how it will/can/may fail so you can address that issue appropriately. — Dennis de Booij

We test everything, don’t we? New electronics, relationships, software, habits.. Sort of finding something that you put a value or you will. 😄 — Yasemin Bostancı

We test software to gain confidence that what we are actually doing is what we expected to do as a team. — Trisha Chetani

We test the software because we as a company do not want our customers to find the same issue. — Trisha Chetani

We test software because the company brand name and image are at stake. We want to convey that our software application requires lower maintenance cost and hence results in more accurate, consistent, and reliable software. — Trisha Chetani

We test software because users can do what they want to do. — Trisha Chetani

We test software because the company can reduce the cost that comes from people finding a lot of issues in production (or any other environment) and not being able to use the software in later stages of the development cycle. — Trisha Chetani

We test software because the company can gain customer satisfaction by producing a quality release of each software version. — Trisha Chetani

We test the software because it enhances the software development process and, by giving us confidence, makes it easier for the company to add or remove features. — Trisha Chetani

One of those big questions: “why we develop software”. To serve humanity and us in a symbiotic relationship towards improvement as a race. (Too prophetic for a Sunday 😅). — Robin Gupta

To uncover product and project risk. — Ravi Malayappan

To empirically find out more information about the product. — Ravi Malayappan

In regulatory environments you simply have to do it because the government said so :)! — Ravi Malayappan

Because we want to know where we are with the product to help us make decisions about what to do next with the product. — Pavel Šaman

To increase the quality, "hopefully" reaching a 5-star product. — Anees Nasry

Emphasize the "SAFE" feeling of the customer while using the product. — Anees Nasry

Expand my perspective of how people (Product Owners, Developers, clients,...) think in different domains. — Anees Nasry

As it's one of life's important hacks (in my humble opinion) — Anees Nasry

Because those involved in software development are wonderful and human and therefore very naturally fallible. — Andrew Kelly

Not being perfect carries risk so we test to discover, investigate and manage that risk. — Andrew Kelly

If you feel pride in the product you want to make sure your end users get the best experience possible. — Georg Neumann

Because developers aren't good testers. :) — Maninder Singh
Image: https://flic.kr/p/4YeBmg

Comments

Andrew Burrows said…
No one quoted Bruce Eckel so I will, I think this classic captures the simple truth behind why we test anything. "If it's not tested, it's broken".
Unknown said…
This article was curated as a part of the #51st Issue of Software Testing Notes Newsletter.
https://softwaretestingnotes.substack.com/p/issue-51-software-testing-notes
zx12bob said…
So, most testers have no idea why they spend their lives working at testing things.
