
More TCs, Vicar?


Several times in recent months I've found myself being asked how many test cases there are in automated suites at Linguamatics. On each occasion I have had to admit that I don't know and, in fact, I'm not especially motivated to calculate that particular metric.

Depending on how the conversations went, I've asked in return what my questioner thinks we can understand about the quality, coverage, relevance, value and so on of testing by knowing the number of cases. (And let's leave vocabulary aside for now.)

Are you a counter? Try this scenario: you wrote sum, a program that adds two numbers, and have asked me to test it...
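
For concreteness, suppose sum is nothing fancier than this (a hypothetical sketch; the real program's source doesn't matter for what follows):

#!/bin/bash
# sum: print the sum of its two integer arguments.
echo $(($1 + $2))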

Sure! I can test that for you. To do it, I'll write a test case generator. Here it is:
#!/bin/bash

# Emit a runnable checking script containing $1 test cases for ./sum.
# Case i asserts that sum(i, i) prints 2i.
echo "#!/bin/bash"
for i in `seq 1 $1`
do
    s=$(($i + $i))
    echo "r=\`./sum $i $i\`; if [ \$r -eq $s ]; then echo \"OK\"; else echo \"NO\"; fi"
done
A somewhat rudimentary framework? Absolutely, but it'll do for this exercise. Let's have it make three test cases:
$ sh checker.sh 3 > run.sh
$ more run.sh
#!/bin/bash
r=`./sum 1 1`; if [ $r -eq 2 ]; then echo "OK"; else echo "NO"; fi
r=`./sum 2 2`; if [ $r -eq 4 ]; then echo "OK"; else echo "NO"; fi
r=`./sum 3 3`; if [ $r -eq 6 ]; then echo "OK"; else echo "NO"; fi

And run them:
$ sh run.sh
OK
OK
OK
Yeah, it passes! #GoDev!

What's that? You say there's loads of numbers in the world and three isn't very big in comparison. Well, pray tell, how many cases do you want? Ten? A hundred? A thousand? A round thousand, yeah!
$ sh checker.sh 1000 > run.sh
$ sh run.sh
OK
...
OK
Now we're really sure your program works, right? Right?

Perhaps you feel uncomfortable about something I did? Say why in the comments. #GoTesting!
Image: https://flic.kr/p/amnCN7
Syntax highlighting: http://markup.su/highlighter

Comments

  1. Hi James,

    Thanks for this interesting post. I agree with most of what you're saying, and specifically take the point that we can never test everything and will likely never know when to stop.

    I do have two thoughts about the information the number of test cases can provide us:

    1) The number of test cases is not an indication of quality (at least not on its own). However, it could act as a measure of the maintainability of your suite: the more test cases you have, the greater the effort needed to maintain them.

    2) I think your example is an oversimplification of the problem (and I do understand that it's on purpose). Depending on the algorithm under test, the number of test cases needed can vary. We've seen plenty of cases where one value works and another doesn't. If the algorithm is complicated enough, several test cases that do the same thing with different values are required. The number of test cases is not a direct indication of the way we test, but it can sometimes be a (very) crude estimate. For example, if you have only one test case for an algorithm that computes the volume of a 3D shape, clearly you have a problem with the way you test it.

    Thanks,

    Gilad

I can easily, trivially, write an A+B program that will be "correct" for the first million or billion test cases and then fail on the next case. Even Charles Babbage knew that, for gosh sakes. The sheer number of test cases tells nothing about the program, but it may tell a lot about the tester.
    https://leanpub.com/perfectsoftware
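
    To illustrate the kind of program this comment describes, here's a hypothetical sum that passes the generator's first million cases and fails thereafter (a sketch, not anything from the linked book):

    #!/bin/bash
    # Correct while the inputs stay at or below a million; silently off by
    # one beyond that. checker.sh 1000000 would pass; case 1000001 would not.
    if [ $1 -le 1000000 ]; then
        echo $(($1 + $2))
    else
        echo $(($1 + $2 + 1))
    fi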

  3. Ha, lovely.

    I wonder, though - have you encountered a useful measurement for estimating the "completeness" of a test suite? The most I've heard of are some forms of code coverage (requirement coverage is a bit tricky with all of those implicit requirements), but I get the feeling that code coverage is a negative-only indicator: it may mean something when it's not 100% (something like "look over there! you may have missed an interesting test!"), but having it at 100% means next to nothing.

    @Gilad - Your first point is interesting, but it seems more like an excuse to cut down the number of tests than a real issue. I am assuming that you are referring to unit tests (system tests, and even most integration tests, should not be affected by maintenance and refactoring), and I strongly believe that if you write unit tests that test the intended functionality of the code instead of its initial structure, refactoring won't be as painful even with a fairly large number of tests. Obviously there is no point in running tests that don't add value to the suite, but if you came up with a thousand valid and valuable test cases, I don't think refactoring should scare you as much - assuming that you test functionality and not functions.

    As I have been a bit vague about testing functionality vs testing functions, perhaps an example is in order. Let's take the "sum" function and look at the following implementation:
    int sum(int a, int b) {
        if (a % 2 == 0 && b % 2 == 0) return sumEvenNumbers(a, b);
        else if (a % 2 == 0 || b % 2 == 0) return sumEvenAndOdd(a, b);
        else return sumOddNumbers(a, b);
    }

    Had you tested functionality, you would have your test cases calling only "sum", exercising odds and evens and zeros as you saw fit.
    Had you tested functions, you would have created tests for "sumEvenNumbers" and "sumOddNumbers", which would take quite a while to remove if you ever chose to refactor sum to simply return "a + b".
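
    To put that in the shell style of the post, functionality-level checks might look like this (hypothetical cases that call only ./sum):

    #!/bin/bash
    # These checks exercise only sum's public face, mixing evens, odds,
    # and zero; they survive any refactoring of sum's internals.
    r=`./sum 2 4`; if [ $r -eq 6 ]; then echo "OK"; else echo "NO"; fi  # even + even
    r=`./sum 2 3`; if [ $r -eq 5 ]; then echo "OK"; else echo "NO"; fi  # even + odd
    r=`./sum 3 5`; if [ $r -eq 8 ]; then echo "OK"; else echo "NO"; fi  # odd + odd
    r=`./sum 0 7`; if [ $r -eq 7 ]; then echo "OK"; else echo "NO"; fi  # zero operand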


    For your second point - I don't necessarily agree (again). Yes, having a single test case for a complex functionality is a "smell" that you should look into, but what if you are using an external library to calculate the volume of that shape? Your test case would be checking only that there are no visible library conflicts and that you can invoke the functionality. Even for complex algorithms that you write yourself, there may be cases where you will not need more than one test case. For instance, if 3dVolume() takes too long to run (let's say 10 minutes), I may prefer to run only a single test case but have it randomly choose the input, so that over time I get better coverage while not holding up the line with long tests - something like the sketch below. Those are rare cases, but not every smell says "you clearly have a problem"; it only says "you probably have a problem".
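
    A hypothetical sketch of that single randomised case, using sum as a stand-in for the slow 3dVolume() (bash, for $RANDOM):

    #!/bin/bash
    # One test case per run, with fresh random inputs each time; coverage
    # accumulates across runs without making any single run slower.
    a=$RANDOM
    b=$RANDOM
    s=$(($a + $b))
    r=`./sum $a $b`
    if [ $r -eq $s ]; then echo "OK ($a + $b)"; else echo "NO ($a + $b)"; fi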

  4. @James,

    Sorry, I was too cumbersome in the way I presented things. I had two points:

    a) I agree that looking at the number of test cases is not a good way (at all!) to measure the quality of the test suite.
    b) I think that going to the other extreme, not looking at the number of test cases at all, might lead to a situation where important information is disregarded, such as the maintainability of the test suite or indications of problems.

    My second example from my previous post was about (b). The possible use of an external library (or a complicated algorithm that can be tested with one test case) is a valid point, but I was trying to give a specific example of a specific case where the number of test cases does give extra value, thereby suggesting that completely ignoring the number of test cases can lead to a loss of information.

    Thanks,

    Gilad

