
Having a Test.blast()

Last week I attended a meetup on API testing with Mark Winteringham. In it, he talked through some HTTP and REST basics, introduced us to Postman by making requests against his Restful Booker bed and breakfast application, and encouraged us to enter the Test.bash() 2022 API Challenge which was about to close.

The challenge is to make a 20-minute video for use at the Ministry of Testing's October Test.bash() showing the use of automation to check that it's possible to create a room using the Restful Booker API.

I talk and write about exploring with automation a lot (next time is 14th October 2022, for an Association for Software Testing webinar) and I thought it might be interesting to show that I am not a great developer and spend plenty of time Googling, copy-pasting, and introducing and removing typos.

So I did and my video is now available in the Ministry of Testing Dojo. The script I hacked during the video is up on GitHub.

My code is not beautiful. But then my mission was not to write beautiful code, it was to make a first-pass implementation with the side-aim of using features of VS Code's Python extension. And I achieved that.

Interestingly, the challenge didn't set out how or where this code is to be used. That makes a difference to what I might do next. If I needed to plumb it into some existing project I could treat my attempt as a spike and code it more cleanly in whatever style the project uses. 

If I wanted to re-use pieces in some more exhaustive coverage of the API that I was building, I might factor out some helper functions such as getting the authentication token and making a client session object.
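
For illustration, those helpers might look something like this. It's only a sketch: the base URL, endpoint path, credentials, and cookie name here are my assumptions, not necessarily what the application actually uses.

import requests

BASE_URL = "https://automationintesting.online"  # assumed deployment; adjust to taste

def get_auth_token(username="admin", password="password"):
    # Hypothetical endpoint and credentials: log in and pull the auth
    # token out of the response cookies.
    response = requests.post(f"{BASE_URL}/auth/login",
                             json={"username": username, "password": password})
    response.raise_for_status()
    return response.cookies.get("token")

def make_session(token):
    # A requests.Session that sends the token cookie on every call, so
    # the rest of the script doesn't have to think about authentication.
    session = requests.Session()
    session.cookies.set("token", token)
    return session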

If I wanted to exploit automation to explore the room API I might leave it as it is for now, and hack new bits in as I had questions that I'd like to answer.

Let's take that last example. What kind of questions might I have that this crummy tool could help with? Well, I wondered whether the application could cope with multiple creation requests.

To begin with, I modified my script to loop and create 10 rooms, then loop and check that the server returns them all. This is multiple_rooms.py, and you can see that it's pretty similar to the original.
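
In outline, reusing the hypothetical helpers from the sketch above, it does something like this. The room payload fields, response shape, and status code are also assumptions:

import datetime
import random

session = make_session(get_auth_token())

# One timestamp per run, one random suffix per room name.
stamp = datetime.datetime.now().isoformat()
names = [f"{stamp}__{random.randint(100000, 999999)}" for _ in range(10)]

# Create the rooms one at a time ...
for name in names:
    response = session.post(f"{BASE_URL}/room/",
                            json={"roomName": name, "type": "Single",
                                  "accessible": True, "roomPrice": 100,
                                  "features": []})
    assert response.status_code == 201, f"creation failed for {name}"

# ... then check that the server lists them all.
listed = {room["roomName"]
          for room in session.get(f"{BASE_URL}/room/").json()["rooms"]}
for name in names:
    assert name in listed, f"{name} missing from the room list"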

Next I wondered whether I could create rooms in parallel rather than sequentially. I had to hack the code about a little to accomplish this, although you can see that the bones of it are still create a token, create rooms, check rooms: multiple_rooms_parallel.py.

To facilitate the parallelism, I had to write a function to create a room (make_a_room) which is called on a list of room names. 
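
A minimal sketch of that shape, carrying on from the previous fragments and using a thread pool from Python's standard library (endpoints and payloads assumed, as before):

from concurrent.futures import ThreadPoolExecutor

def make_a_room(name):
    # One request per call. A shared Session isn't guaranteed to be
    # thread-safe, so the token cookie is passed explicitly each time.
    response = requests.post(f"{BASE_URL}/room/",
                             cookies={"token": token},
                             json={"roomName": name, "type": "Single",
                                   "accessible": True, "roomPrice": 100,
                                   "features": []})
    return name, response.status_code

token = get_auth_token()
stamp = datetime.datetime.now().isoformat()
names = [f"{stamp}__{random.randint(100000, 999999)}" for _ in range(10)]

# Fire the creation requests in parallel rather than sequentially.
with ThreadPoolExecutor(max_workers=10) as pool:
    for name, status in pool.map(make_a_room, names):
        print(name, status)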

While I was playing with that function I noticed that checking for created rooms seemed to be non-deterministic. Sometimes all the rooms would be found, other times only some, and occasionally none. In the Restful Booker web interface all of them appeared to be present, so I traced the script execution in the Python debugger and found that the server did not always return all of the rooms the script had created. After a short delay, though, the list was fully populated.

To expose what looks like an interesting issue more clearly (maybe for a bug report and fix-checker) I made a second function (check_rooms_exist) which fetches the list of rooms the server claims are there and checks it against the list of rooms the test code thinks it created. I removed the assertions in this function to make success and failure easier to see in bulk.
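
Something along these lines, with the response shape again an assumption:

def check_rooms_exist(names):
    # Fetch the names the server claims exist and print True/False for
    # each name the test code thinks it created -- no assertions, so one
    # missing room doesn't stop the rest of the report.
    response = requests.get(f"{BASE_URL}/room/")
    listed = {room["roomName"] for room in response.json()["rooms"]}
    for name in names:
        print(name, name in listed)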

The script calls this checking function twice. Here's a typical run:

---- attempt 1 ----
2022-10-02T09:37:02.953068__884680 False
2022-10-02T09:37:02.953068__236820 True
2022-10-02T09:37:02.953068__774290 False
---- attempt 2 ----
2022-10-02T09:37:02.953068__884680 True
2022-10-02T09:37:02.953068__236820 True
2022-10-02T09:37:02.953068__774290 True

The first column shows the room names, composed of a timestamp (shared by all names in a run) and a random number (one per name in the run). The second column shows whether or not the name exists in the list of rooms returned by the server.

In attempt 1, the first time check_rooms_exist is called, only one room apparently exists (True), although during the room creation calls the server said that all three were created.

Then, in attempt 2 a second or so later, when the function is called again, the server says all of them are there.

Is this a problem? Maybe yes. Or no. From a user perspective perhaps it's good enough: if the room they just created occasionally doesn't appear in the list of rooms, refreshing the page is likely to show it.

But the script could now become a tool for testing tolerance of the observed behaviour. Let's say we're not prepared for the user to need to refresh twice, and we assume this would take two seconds. Well, we can modify the script to create ever larger numbers of rooms and put a two-second delay between the two attempts to retrieve the list of rooms.
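
A sketch of that experiment, building on the earlier fragments. The room counts here are arbitrary:

import time

for count in [10, 50, 100, 500]:
    stamp = datetime.datetime.now().isoformat()
    names = [f"{stamp}__{random.randint(100000, 999999)}" for _ in range(count)]

    # Create the rooms in parallel, as before.
    with ThreadPoolExecutor(max_workers=20) as pool:
        list(pool.map(make_a_room, names))  # consume the iterator to surface errors

    print(f"---- attempt 1 ({count} rooms) ----")
    check_rooms_exist(names)

    time.sleep(2)  # the delay we've decided a user would tolerate

    print(f"---- attempt 2 ({count} rooms) ----")
    check_rooms_exist(names)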

When we start to see False in the attempt 2 list, we will have an idea about the load required to go past acceptable performance and can decide how often it's likely to be seen in practice and whether we need to attempt to fix it.

We could experiment in other dimensions too. For example, would longer names, or descriptions, or more metadata make a difference to the delay? Would duplicate room names be accepted in parallel? What happens if we try to delete rooms before the server will admit that they are present?

The code for this parallel script is still reasonably ugly. But that's OK: it's an experimental tool, not a production artefact. It's automation that can help us to explore, to ask questions, and then to answer them. I make and use tools like this all the time.

Thanks to Mark and Ministry of Testing. Playing around with this was a Test.blast(). I learned a bit more Python, and it was great to meet Mark in person after being interviewed by him for Testers' Island Discs (tracks, links) a couple of years ago.