Recently, I needed to quickly explore an aspect of the behaviour of an application that takes a couple of text file inputs and produces standard output.
To get a feel for the task I set up one console with an editor open on two files (1.txt and 2.txt) and another console in which I ran the application this way:
$ more 1.txt; more 2.txt; diff -b 1.txt 2.txt
a b c d e f
a b c
d e f
1c1,2
< a b c d e f
---
> a b c
> d e f
$ more 1.txt; more 2.txt; diff -b 1.txt 2.txt
a b c d e f
a b c d e f
$ more 1.txt; more 2.txt; diff -b 1.txt 2.txt
a b c d e f
a b cd e f
1c1
< a b c d e f
---
> a b cd e f

As you can see, I have a single command line that dumps both the inputs and the output. (And diff was not the actual application I was testing!)
After each run I changed some aspect of the inputs in the first console, then pressed up-arrow and Enter in the second console to repeat the command.
What am I achieving here? I have a simple runner, a record of my experiments, and an easy visual comparison across the whole set. It's quick to set up and in each iteration I'm in the experiment rather than in the infrastructure of the experiment.
I could have, for example, created a ton of files and run them in some kind of scripted harness or laboriously by hand. But I was short of time and I wanted to spend the time I had on exploring - on responding to what I'd observed - and not on managing data or investing in stuff I wasn't sure would be valuable yet.
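For contrast, here is a sketch of what that heavier, scripted alternative might have looked like: a directory of cases, each holding its own pair of input files, run through the application in a loop. The layout and names are hypothetical, and diff stands in for the application under test, just as it does in the transcript above.

```shell
# Hypothetical scripted harness (the option I chose not to build).
# Each cases/<name>/ directory holds a pair of inputs for one experiment.
set -eu

mkdir -p cases/case1
printf 'a b c d e f\n' > cases/case1/1.txt
printf 'a b c\nd e f\n' > cases/case1/2.txt

for dir in cases/*/; do
    echo "== $dir"
    # Dump both inputs, then the output, as in the one-liner.
    more "$dir/1.txt"
    more "$dir/2.txt"
    diff -b "$dir/1.txt" "$dir/2.txt" || true   # diff exits 1 on differences
done
```

Useful once the cases have stabilised and are worth keeping, but in an exploratory session every minute spent on this scaffolding is a minute not spent observing and responding.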
I still hear and see too much about manual and automated testing for my comfort. Is what I did here manual testing? Is it automation? Could a "manual tester" really not get their head around something like this? Could an "automation tester" really not stoop so low as to use something this unsophisticated?
Bottom line for me: there's a tool that is at my disposal to serve my needs at appropriate cost, with appropriate trade-offs, and in appropriate situations. Why wouldn't I use it?
Syntax highlighting: http://markup.su/highlighter