Testability? Shhh!


Last weekend I was talking Testability at DEWT 9. Across the two days I accumulated nodes on the mind map above from the presentations, the facilitated discussions, and the informal conversations. As you can see it's a bit dense (!) and I'll perhaps try to rationalise or reorganise it later on. For now, though, this post is a brain dump based on the map and a few other notes. 

--00--

Testability is the ease with which the product permits itself to be tested (its intrinsic testability)
  • ... but also factors outside of the product that enable or threaten its testing (so extrinsic)
  • ... meaning that people, knowledge, opportunity, time, environments, and so on can be part of testability.

Desired outcomes of increasing testability might include
  • easier, more efficient testing and learning
  • better feedback, risk identification and mitigation
  • building trust and respect across project teams

The term testability can be unhelpful to non-testers
  • ... and also to testers (!)
  • ... and so consider casting testability conversations in terms of outcomes
  • ... and business value.

Actions that might be taken with the intention of increasing testability include
  • changing the product (e.g. logging, control functions)
  • collecting information (e.g. stakeholder requirements, user desires)
  • applying tooling (e.g. for deployment, log analysis)
  • acquiring expertise (e.g. for the customer domain, your own product range)
  • obtaining more time (e.g. by moving deadlines, cutting low priority tasks)
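To make the first of those actions concrete, here's a minimal sketch of an intrinsic testability change. The service, names, and promotion rule are all hypothetical; the point is only to show the shape of the change: a decision that was previously invisible gets logged, and a dependency on the system clock becomes a control function that tests can drive.

```python
import logging
from datetime import datetime, timezone

logger = logging.getLogger("discounts")

class DiscountService:
    """Hypothetical product component with two small testability changes:
    structured logging of each decision, and an injectable clock (a control
    function) so date-sensitive behaviour can be tested on any day."""

    def __init__(self, clock=lambda: datetime.now(timezone.utc)):
        self._clock = clock  # tests can pass a fake clock here

    def discount_for(self, order_total):
        now = self._clock()
        # Hypothetical rule: 10% off at weekends (weekday() is 5 or 6)
        rate = 0.10 if now.weekday() >= 5 else 0.0
        logger.info("discount decision: total=%s at=%s rate=%s",
                    order_total, now.isoformat(), rate)
        return order_total * rate

# A test can now control time instead of waiting for a weekend:
fixed_saturday = datetime(2024, 6, 1, 12, 0, tzinfo=timezone.utc)
service = DiscountService(clock=lambda: fixed_saturday)
assert service.discount_for(100) == 10.0
```

Neither change alters what the product does for its users, but both lower the cost of asking questions of it, which is the kind of trade-off the bullets above are gesturing at.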

Situations in which testability might be decreased include
  • unintended side-effects of an attempt to increase testability (e.g. introducing bugs, wasting time on useless tools)
  • losing motivation (e.g. because of poor working conditions, ill health)
  • being asked to test to an unreasonable standard (e.g. to great depth in little time, in unhelpful circumstances)
  • recognising that the existing test strategy misses important risks (e.g. there are previously unknown dependencies)

Blockers or challenges to testability might include
  • only talking to other testers about it
  • an inability to persuade others why it would be valuable
  • bad testing
  • previous failures

When requests for testability are denied, consider
  • getting creative
  • going under the radar
  • finding allies
  • the main mission

It might be appropriate to sacrifice testability when
  • adding it would risk the health, safety, or commitment of the team, product, or company
  • trading one kind of testability against another (e.g. adding dependencies vs increasing coverage)
  • no-one would use the information that it would bring
  • another term will be more acceptable and help to achieve the same goal
  • another action for the same or lower costs will achieve the same goal
  • a business argument cannot be made for it (this point may summarise all of the above)

Intrinsic changes for testability are features
  • ... and should be reviewed alongside other requested product changes
  • ... in whatever process is appropriate in context (of the product and the proposed change)
  • ... by whoever is empowered to make those kinds of decisions.

Extrinsic changes for testability are less easily typed
  • ... but should still be reviewed by appropriate people
  • ... to an appropriate level
  • ... in a process relevant to the context.

Unsurprisingly, there are some things that I want to think about more ...
If attributes of testers contribute to testability, then it's possible to increase or decrease testability by changing the tester. Logically I am comfortable with that, but emotionally I find the language tough to accept and would like to find a different way to express it.

I found intrinsic and extrinsic testability a useful distinction but too coarse because, for instance, I haven't distinguished factors that influence testability and factors that require testability.

Although it was easy to agree on intrinsic testability, there was less agreement on the existence or extent of extrinsic testability and no clear boundary on extrinsic testability for those who are prepared to accept it. I'm not sure that matters for practical purposes but definitional questions interest me.

There was consensus that we should shhh! ourselves on testability and instead describe testability issues in terms of their impact on business value. Unfortunately, characterising business value is not itself an easy task.


--00--

The participants at DEWT 9 were: Adina Moldovan, Andrei Contan, Ard Kramer, Bart Knaack, Beren van Daele, Elizabeth Zagroba, Huib Schoots, James Thomas, Jean-Paul Varwijk, Jeroen Schutter, Jeroen van Seeter, Joep Schuurkes, Joris Meerts, Maaike Brinkhof, Peter Schrijver, Philip Hoeben, Ruud Cox, Zeger van Hese.

Material from DEWT is jointly owned (but not necessarily agreed with) by all of the participants. Any mistakes or distortions in the representation of it here are mine.

Thank you to the Association for Software Testing for helping to fund the event through their grant programme.

Other material referenced or posted during the conference:

Comments

  1. Nice outline.

    It would help to see specific examples. Especially good to see examples of testability of applications (as opposed to infrastructure).

    Also worth pointing out that testability is not the new shiny. Testability is a great tool/aspect. You still need the hard work of testing. It might be good to highlight that in the outcomes. Testability *improves* risk identification; *improves* building trust.

  2. Hi Nilanjan, perhaps Rob Meaney's Tale of Testability could give you some practical examples? I saw a version of it at SoftTest and it made me realise that (in the service of testing) I was doing the same things that he was calling out in his CODS model.

    My notes on that are in Hard to Test

  3. Awesome James, very good recap of the weekend
