We all like a bargain and, if there were such a thing, we'd tuck into the free lunches with gusto too. But bargains need to be viewed with caution, and a free lunch is usually only about half right - it probably is lunch.
Automated testing is often thought to be a bargain, or even a free lunch, but even limited experience will have shown you that it's not something for nothing and not always cheap, especially in terms of maintenance. Before you start on an automation project, think carefully about whether regression tests are appropriate, what you hope to achieve, how, and with what resources, and attempt some level of cost/benefit analysis (using whatever data, estimates or gut feeling you have available) to try to maximise its value.
But existing infrastructure can have unexploited value. For example, it is sometimes possible to double up regression test suites and test something other than what was originally intended.
In my company we have both a traditional software development team and a resource development team that produces plug-ins, such as configuration files for processing different source data formats. A software regression test will usually use a static set of data inputs with a variable executable.
The harness will essentially run the executable on the inputs to generate results, then diff these against the golds (the expected results) to produce the report. Any change needs investigation to see whether it's intended or accidental.
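In sketch form, that might look something like this - a minimal Python illustration with invented paths and names, rather than our actual harness:

```python
import filecmp
import subprocess
from pathlib import Path

INPUTS = Path("suite/inputs")     # static set of data inputs
GOLDS = Path("suite/golds")       # expected results
RESULTS = Path("suite/results")   # regenerated on every run

def run_suite(executable: str, golds: Path = GOLDS) -> list[str]:
    """Run the executable over each input, diff the results against
    the golds and return the names of any tests whose output changed."""
    RESULTS.mkdir(exist_ok=True)
    changed = []
    for case in sorted(INPUTS.iterdir()):
        result = RESULTS / case.name
        with result.open("w") as out:
            subprocess.run([executable, str(case)], stdout=out, check=True)
        if not filecmp.cmp(result, golds / case.name, shallow=False):
            changed.append(case.name)   # intended or accidental?
    return changed

if __name__ == "__main__":
    # The executable is the variable: point the harness at the new build.
    print(run_suite("./latest_build/processor"))
```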
However, we can use exactly the same machinery to provide a regression test for the resource development team by making the configuration files variable too - perhaps checking the latest revision out of version control before running.
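Something like this, reusing the run_suite sketch from above (the git layout, and the assumption that the executable picks up its plug-in configs from suite/configs, are mine for illustration):

```python
import subprocess

def run_resource_suite(executable: str) -> list[str]:
    # Make the configs a variable too: take the latest revision rather
    # than a pinned, known-good set. (The executable is assumed to read
    # its plug-in configs from suite/configs.)
    subprocess.run(["git", "-C", "suite/configs", "pull"], check=True)
    # Then it's exactly the same machinery as before.
    return run_suite(executable)
```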
Now we have two variables in the suite, but because we're already running it with a single variable (the executable), we can easily tell which changes are due to the executable: any changes that appear only in the second suite are due to the configuration files (or, in rare cases, a bad interaction between the new files and the new build). The cost of all this? Checking out the latest configs and one more run of the suite in your overnight regression set.
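That attribution could be expressed as a hypothetical helper over the changed-test sets from the two overnight runs:

```python
def attribute_changes(exe_suite: set[str], config_suite: set[str]):
    """Split changed tests between the two overnight runs: the first
    varied only the executable, the second the configs as well."""
    due_to_executable = exe_suite
    # Changes that appear only when the configs vary are down to the
    # configuration files (or, rarely, a bad interaction between the
    # new files and the new build).
    due_to_configs = config_suite - exe_suite
    return due_to_executable, due_to_configs
```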
The same harness can be used with a static executable. When the resource development team are creating materials for a previous release, being able to run against an unchanging branch build is valuable. But remember that you need to be sure your golds are relevant to that build. (You do branch your test code base in sync with the product, don't you?)
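Continuing the sketch, pinning both the build and the golds to the same branch might look like this (branch names and layout are invented):

```python
BRANCH = "release-2.1"

# Static executable: an unchanging branch build, with golds branched
# in sync so they're relevant to that build.
changed = run_suite(
    executable=f"builds/{BRANCH}/processor",
    golds=Path(f"suite/{BRANCH}/golds"),
)
```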
Another way to exploit existing infrastructure arises when you have two product components in a producer-consumer relationship: the first creates a resource and the second uses it. If you have a regression suite that validates the consumer, you can reuse it in the test suite for the producer.
By configuring the suites to consume or produce the "same" resource (i.e. one that will be equivalent if nothing in the software has changed), we can re-run the consumption test suite on a dynamically-produced resource to get feedback on whether the resource creation has been successful - although note that this only tests the dimensions that the consumer cares about.
If this shows changes, we may have accidentally broken the producer. There's more value still: once any changes are validated, the resource becomes the next static resource for the consumption suite, which saves us having to regenerate it manually.
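A sketch of that pipeline, where the producer command, the wrapper script and the paths are all assumptions rather than real components:

```python
import shutil
import subprocess
from pathlib import Path

STATIC_RESOURCE = Path("suite/resource")        # known-good static copy
FRESH_RESOURCE = Path("suite/resource.fresh")   # built by the new producer

def run_consumer_suite(resource: Path) -> bool:
    """Run the existing consumption suite against a given resource;
    return True if it reported no changes."""
    proc = subprocess.run(["./run_consumer_suite.sh", str(resource)])
    return proc.returncode == 0

def test_producer() -> None:
    # 1. Let the new producer build the resource dynamically.
    subprocess.run(["./producer", "--out", str(FRESH_RESOURCE)], check=True)
    # 2. Re-run the consumption suite on it. Changes here suggest we've
    #    broken the producer, at least along the dimensions the consumer
    #    cares about.
    clean = run_consumer_suite(FRESH_RESOURCE)
    # 3. Once clean (or once any changes are validated as intended), the
    #    fresh resource becomes the next static resource for the
    #    consumption suite - no manual regeneration needed.
    if clean:
        shutil.copytree(FRESH_RESOURCE, STATIC_RESOURCE, dirs_exist_ok=True)
```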
Note that this is different from test suites that rely on the output from another suite for their primary input. While that might be a valid approach (e.g. because it is expensive to generate the data), it creates a dependency that you might prefer not to have and timings may become an issue. For instance, if you're running all your suites nightly and the precursor suite takes twice as long as usual, any downstream suite that depends on its output being in place by a certain time may fail for lack of data.
You might never get your lunch paid for, but you could cover the cost of the starters if you always look for value: plan your test suites, cost them, build them efficiently and for extensibility when you can, run them as often as makes sense and look out for special offers like buy one, get one free.