Sunday, March 28, 2021

Exploratory Tooling

Last week I started a new job. The team I've joined owns a back-end service and, along with all the usual onboarding process, inevitable IT hassles, and necessary context-gathering, one of my goals for my first week was to get a local instance of it running and explore the API.

Which I did.

Getting the service running was mostly about ensuring the right tools and dependencies were available on my machine. Fortunately the team has wiki checklists for that stuff, and my colleagues were extremely helpful when something was missing, out of date, or needed an extra configuration tweak.

My early exploration of the service was boosted by having ReDoc documentation for the endpoints and a Postman collection of example requests against them. I was able to send requests, inspect responses, compare both to the doc, and then make adjustments to see what effects they had.

If that's testing of any kind, it's probably what I call pathetic testing:

  There's this mental image I sometimes have: I'm exploring the product by running my fingers over it gently. Just the lightest of touches. Nothing that should cause any stress. In fact, I might hardly be using it for any real work at all. It's not testing yet really; not even sympathetic testing although you might call it pathetic testing because of itself it's unlikely to find issues.

One of the functions offered by the service is a low-latency search endpoint which enables autocompletion on the client side. You know the kind of thing: as the user types, suggestions appear for them to choose from.

The doc for this is fine at a high level. I was interested to understand the behaviour at a lower level but found Postman (with my level of expertise) required too many actions between requests and made comparison across requests difficult. The feedback loop was too long for me.

So I wrote a tool. And if that sounds impressive, don't be fooled: I was standing on the shoulders of giants.

What does my tool do? It's a shell script that runs curl to search for a term with a set of parameters, pipes the response through jq to parse the hits out of the JSON payload, and then uses sed to remove quoting. Here's the bare bones of it:

curl --location $URL | jq '.results | @csv' | sed 's/[\"\\]//g'
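To make the stages concrete, here is the same pipeline run over a canned payload instead of a live response. The payload shape, including the results field name, is invented for illustration; the real service's JSON will differ.

```shell
# A canned response standing in for the service's JSON payload;
# the "results" field name is an assumption for illustration.
payload='{"results":["term","terms","termstore"]}'

# jq extracts the results array and renders it as one CSV line,
# then sed strips the quotes and backslashes that jq leaves behind.
echo "$payload" | jq '.results | @csv' | sed 's/[\"\\]//g'
# prints: term,terms,termstore
```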
A series of runs might look like this, if I'm exploring consistency of results as I vary the search term starting with "te":
$ search x y te
$ search x y ter
$ search x y term
$ search x y terms
$ search x y termst
$ search x y terms
$ search x y term

Or perhaps like this, if I'm exploring the parameters:
$ search x y test
$ search A y test
$ search A B test
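For the curious, the wrapper invoked in those runs might look something like the sketch below. Everything service-specific in it is an assumption: the host, the path, and the query parameter names are placeholders rather than the real API.

```shell
#!/bin/sh
# search: sketch of the wrapper. Usage: search <mode> <locale> <term>
# The host, path, and parameter names (mode, locale, q) are invented
# for illustration; substitute the real service's details.
HOST="${SEARCH_HOST:-http://localhost:8080}"

build_url() {
    # URL assembly split out so it can be inspected on its own
    echo "$HOST/search?mode=$1&locale=$2&q=$3"
}

# Only fire a request when a term was actually given
if [ -n "${3:-}" ]; then
    curl --silent --location "$(build_url "$1" "$2" "$3")" \
        | jq '.results | @csv' \
        | sed 's/[\"\\]//g'
fi
```

Saved as search somewhere on the PATH, each experiment then costs one short command.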

There is nothing clever going on here technically, but I get a major benefit practically: I have abstracted away everything that I don't need, so that my next test can be composed and run with minimal friction. I can quickly cycle through variations and compare this and previous experiments easily. Feedback loop tightened.

Actually, when I said "I have abstracted away everything that I don't need" what I really meant was "I have a very specific mission here, which is to look at how search terms and parameters affect search results. Because I'm on that mission, I choose not to view all of the other data returned by the server on each of my requests. I may miss something interesting by doing that but I accept the trade-off".

That aside, there are numerous things that I could do with this tool now that I have it. For example:

  • Write a script with a list of search terms in it, search for each of them in turn, collect the results, and write them to a file that I could analyse in, say, Excel.
  • Point it at a production server as well, and compare my test environment to production in every call.
  • Launch hundreds of these searches at the same time from another script as a simple-minded stress test.
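The first of those could be sketched as a driver along these lines, with a stub standing in for the real wrapper so that the sketch runs on its own; the terms and the CSV layout are just examples:

```shell
#!/bin/sh
# Stub in place of the real search wrapper, so this sketch runs
# standalone; replace it with (or source) the actual script.
search() { echo "result-for-$3"; }

OUT=results.csv
: > "$OUT"                       # start a fresh results file
for term in te ter term terms; do
    # x and y stand in for whatever parameter values are under test
    printf '%s,%s\n' "$term" "$(search x y "$term")" >> "$OUT"
done
```

Each row pairs a term with the hits it produced, ready to load into a spreadsheet for comparison.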

Or I could just throw it away because it has served its purpose: facilitating familiarisation with a feature of our service at low cost and high speed, initiating source code inspection and conversations, and along the way helping me to find a few inconsistencies that I can feed back to the team.


  1. Love this example and use of a test tool. Fits a specific mission, fulfils its purpose, and then you move on.

  2. Nice example! I would have had it running in Postman no probs ;)