Thursday, December 3, 2015

More TCs, Vicar?

Several times in recent months I've found myself being asked how many test cases there are in automated suites at Linguamatics. On each occasion I have had to admit that I don't know and, in fact, I'm not particularly motivated to calculate that particular metric.

Depending on how the conversations went, I've asked in return what my questioner thinks we can understand about the quality, coverage, relevance, value and so on of testing by knowing the number of cases. (And let's leave vocabulary aside for now.)

Are you a counter? Try this scenario: you wrote sum, a program that adds two numbers, and have asked me to test it...

Sure! I can test that for you. To do it, I'll write a test case generator. Here it is:

#!/bin/bash
# emit one test case per value 1..$1, each checking ./sum i i against i+i
for i in `seq 1 $1`
do
    s=$(($i + $i))
    echo "r=\`./sum $i $i\`; if [ \$r -eq $s ]; then echo \"OK\"; else echo \"NO\"; fi"
done
A somewhat rudimentary framework? Absolutely, but it'll do for this exercise. Let's have it make three test cases:
$ sh gen.sh 3 > cases.sh
$ more cases.sh
r=`./sum 1 1`; if [ $r -eq 2 ]; then echo "OK"; else echo "NO"; fi
r=`./sum 2 2`; if [ $r -eq 4 ]; then echo "OK"; else echo "NO"; fi
r=`./sum 3 3`; if [ $r -eq 6 ]; then echo "OK"; else echo "NO"; fi

And run them:
$ sh cases.sh
OK
OK
OK
Yeah, it passes! #GoDev!

What's that? You say there's loads of numbers in the world and three isn't very big in comparison. Well, pray tell, how many cases do you want? Ten? A hundred? A thousand? A round thousand, yeah!
$ sh gen.sh 1000 > cases.sh
$ sh cases.sh
Now we're really sure your program works, right? Right?

Perhaps you feel uncomfortable about something I did? Say why in the comments. #GoTesting!
  1. Hi James,

    Thanks for this interesting post. I agree with most of what you're saying, and specifically take the point that we can never test everything and will likely never know when to stop.

    I do have two thoughts about the information the number of test cases can provide us:

    1) The number of test cases is not an indication of quality (at least not on its own). However, it could act as a measure of the maintainability of your suite. More specifically, the more test cases you have, the more effort you need to maintain them.

    2) I think your example is an oversimplification of the problem (and I do understand that it's deliberate). Depending on the algorithm under test, the number of test cases needed can vary. We've seen plenty of cases where one value works and another doesn't. If the algorithm is complicated enough, several test cases that do the same thing with different values are required. The number of test cases is not a direct indication of the way we test, but sometimes it can be a (very) crude estimate. For example, if you have only one test case for an algorithm that computes the volume of a 3D shape, you clearly have a problem with the way you test it.



  2. I can easily, trivially, write an A+B program that will be "correct" for the first million or billion test cases and then fail on the next case. Even Charles Babbage knew that, for gosh sakes. The sheer number of test cases tells nothing about the program, but it may tell a lot about the tester.
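The commenter's point is easy to make concrete. A hypothetical sketch (the threshold and names are mine): an adder that is correct for every pair of small inputs and wrong beyond some bound would sail through any generated suite whose values stay under it.

```shell
# A "sum" that is right for every input up to a million and wrong after.
# A generated suite like the one in the post, capped at 1000 cases,
# never reaches the failing region.
sum() {
  if [ "$1" -le 1000000 ] && [ "$2" -le 1000000 ]; then
    echo $(($1 + $2))
  else
    echo 0
  fi
}

sum 3 4          # 7, correct
sum 2000000 1    # 0: wrong, and invisible to the suite
```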

  3. Ha, lovely.

    I wonder, though - have you encountered a useful measurement for estimating the "completeness" of a test suite? The most I've heard of are some forms of code coverage (as requirement coverage is a bit tricky with all of those implicit requirements), but I'm getting the feeling that code coverage is a negative-only indicator - it may mean something when it's not 100% (something like "look over there! you may have missed an interesting test!"), but having it at 100% means next to nothing.

    @Gilad - Your first point is interesting, but it seems more like an excuse to cut down the number of tests than a real issue. I am assuming that you are referring to unit tests (as system tests, and even most integration tests, should not be affected by maintenance and refactoring), and I strongly believe that if you write unit tests that test the intended functionality of the code instead of its initial structure, refactoring won't be as painful even with a fairly large number of tests. Obviously, there is no point in running tests that don't add value to the suite, but if you came up with a thousand valid & valuable test cases, I don't think the refactoring should scare you as much - assuming that you test functionality and not functions.

    As I have been a bit vague about testing functionality vs testing functions, perhaps an example is in order:
    let's take the "sum" function, and look at the following implementation:
    sum(int a, int b){
        if (a%2==0 && b%2==0) return sumEvenNumbers(a,b);
        else if (a%2==0 || b%2==0) return sumEvenAndOdd(a,b);
        else return sumOddNumbers(a,b);
    }

    Had you tested functionality, you would have your test cases calling only "sum", exercising odds and evens and zeros as you saw fit.
    Had you tested functions, you would have created tests for "sumEvenNumbers" and "sumOddNumbers" - which would take quite a while to remove if you ever chose to refactor sum to simply return "a+b".
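    The distinction can be sketched in the post's own shell idiom (the stand-in sum and the chosen values are illustrative): functionality-level checks call only sum, so they keep their meaning whether or not the even/odd helpers exist.

```shell
# Stand-in for the implementation under test; the checks below would be
# unchanged whether sum dispatches to helpers or simply returns a+b.
sum() { echo $(($1 + $2)); }

check() {
  r=$(sum $1 $2)
  if [ "$r" -eq $3 ]; then echo "OK"; else echo "NO"; fi
}

check 2 4 6   # even + even
check 3 5 8   # odd + odd
check 2 3 5   # even + odd
check 0 7 7   # zero
```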

    For your second point - I don't necessarily agree (again). Yes, having a single test case for a complex functionality is a "smell" that you should look into - but what if you are using an external library to calculate the volume of that shape? Your test case would be checking only that there are no visible library conflicts and that you can invoke the functionality. Even for complex algorithms that you write yourself, there may be cases where you will not need more than one test case. For instance, if 3dVolume() takes too long to run (let's say 10 minutes), I may prefer to run only a single test case but have it randomly choose its input, so over time I get better coverage without holding up the line with long tests. Those are rare cases, but not every smell is "you clearly have a problem"; sometimes it's only "you probably have a problem".
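    The randomised single-case idea might look like this sketch (the slow 3dVolume() is stood in by sum so the script runs; $RANDOM is a bash-ism): one case per run, different input each time, so repeated runs accumulate coverage without a long fixed suite.

```shell
# Stand-in for the slow function under test.
sum() { echo $(($1 + $2)); }

# One randomly chosen case per run; rerunning the suite samples new inputs.
a=$((RANDOM % 1000))
b=$((RANDOM % 1000))
r=$(sum $a $b)
if [ "$r" -eq $((a + b)) ]; then echo "OK"; else echo "NO ($a $b)"; fi
```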

  4. @James,

    Sorry, I was too cumbersome with the way I presented things. I had 2 points:

    a) I agree that looking at the number of test cases is not a good way (at all!) to measure the quality of the test suite.
    b) I think that going to the other extreme, not looking at the number of test cases at all, might lead to a situation where important information is disregarded, such as the maintainability of the test suite or problem indications.

    My second example from my previous post was about (b). The possible use of an external library (or a complicated algorithm that can be tested with one test case) is a valid point, but I was trying to give a specific example of a specific case where the number of test cases does give extra value, thereby suggesting that completely ignoring the number of test cases can lead to a loss of information.