Wednesday, July 28, 2021

Mass Testing

The Association for Software Testing is crowd-sourcing a book, Navigating the World as a Context-Driven Tester, which aims to provide responses to common questions and statements about testing from a context-driven perspective.

It's being edited by Lee Hawkins, who is posing questions on Twitter, LinkedIn, Slack, and the AST mailing list and then collating the replies, focusing on practice over theory.

I've decided to contribute by answering briefly, and without a lot of editing or crafting, by imagining that I'm speaking to someone in software development who's acting in good faith, cares about their work and mine, but doesn't have much visibility of what testing can be.

Perhaps you'd like to join me?

--00--

"Do more test cases mean better test coverage?"

Simply, no. Less simply, depending on the assumptions you care to make, perhaps.

The terms test case and test coverage are loaded, so let's talk about a somewhat analogous problem:

Does more bulletproof glass mean the Popemobile is better protected?

I find it helpful to turn this kind of yes/no question into an exploration: 

Under what circumstances could more bulletproof glass mean the Popemobile is better protected?

This helps me to think of challenges, caveats, and clarifications. To give a few examples:

  • Better protected than what?
  • Better protected from what?
  • Better protected, judged how?
  • Is there any existing bulletproof glass?
  • Where is the existing bulletproof glass?
  • How protective is the existing glass?
  • ... against what kinds of projectiles?
  • ... propelled how?
  • What other kinds of bulletproof glass are available and how protective are they?
  • Where would it be possible to put additional bulletproof glass if we had some?
  • ... and where is the Popemobile susceptible to damage from bullets?
  • ... and which areas at risk are not covered?
  • What threats are we trying to protect the Pope from?
  • Would any amount of bulletproof glass protect the Pope from those threats?
  • Are there any ways other than glass to mitigate the risks of those threats?

Then there are other questions, ones that matter in general because we don't have infinite resources. For example:

  • Does the Pope need better protection?
  • If so, do we need to change the Popemobile to achieve it?
  • At what cost?
  • At what opportunity cost?

So, could I think of circumstances in which more test cases mean better test coverage? Yes. Do those circumstances hold generally? Not a chance in Hell.
Image: The Independent

Sunday, July 25, 2021

Open Testing with Confluence

 
I am a believer in open-notebook testing. I make my work visible to anyone who wants to look at it, while it is in progress.

Why? Well, I dislike information silos, publishing keeps me thoughtful about my work and my standards high, and sometimes someone will spot something I've missed or mistaken.

But I also want my testing to be friction-free. In this context that has two aspects: I need to be able to (a) record and (b) share what I'm doing with as little impact on my work as I can manage.

I've written before about the way I take notes in a text editor using a simple markup language. In my previous job I ran a little script on my testing notes and pasted the output straight into the MediaWiki instance we used.

Unfortunately, in my new job we use Confluence. Also unfortunately, I found that support for even its own markup language was unreliable and so I had to find a new route.

What I've iterated my way to over the last four months is, again, a simple markup language and a script, but this time the markup is based on Markdown and the script itself uploads the notes, along with images, attachments, and labels.

Here's a snippet to illustrate the kinds of things I do:

## Annotations

I've used the WIP! annotation already, but I have others:

OK! Yes, this worked!

FAIL! No, this didn't work.

?? Question, or something to come back to investigate.

!! Problem, or surprising finding.

TODO! Another task, maybe in the testing or in the notes.

And here's how it renders in Confluence:

When I'm working I'll make a directory for a new task, create a file for my notes, and start writing as I test. A default file will usually have the following:

  • A date stamp in the title so that when the pages are published I can easily see when they're from.
  • The first section is Mission, so it's clear what the work is attempting to do.
  • The next section is Summary: a high-level perspective for stakeholders on the activities, results, risks, and next steps. I'll mark this work in progress until I'm done testing.
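For illustration, a skeleton file might start something like this (the details are invented for the example):

# 2021-07-25 Search Filters

## Mission

Explore the new search filters for functional and usability problems.

## Summary

WIP! Covered single filters so far; no blockers found yet.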

As I work, I'll Cmd-Tab into the text editor (I'm using VS Code at the moment) to pop in a note or take screenshots that I'll drop next to the file for later upload.

Periodically, I'll upload the notes to Confluence so that what I've got so far is visible, and so that I can reference it in e.g. Jira tickets or Slack conversations.
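To give a flavour of what the upload involves, here's a minimal sketch in Python of creating a page through Confluence's REST API. It is not my actual script, and the base URL, credentials, and space key are placeholders:

import requests

CONFLUENCE = "https://example.atlassian.net/wiki"  # placeholder base URL
AUTH = ("me@example.com", "api-token")             # placeholder credentials

def upload_page(title, html_body, space_key="TEST"):
    # Create a page whose body is Confluence "storage" format (XHTML).
    payload = {
        "type": "page",
        "title": title,
        "space": {"key": space_key},
        "body": {"storage": {"value": html_body,
                             "representation": "storage"}},
    }
    response = requests.post(CONFLUENCE + "/rest/api/content",
                             json=payload, auth=AUTH)
    response.raise_for_status()
    return response.json()["id"]  # page id, needed for labels, attachments

A real script also has to convert the notes from markup into that storage format first, and re-uploading an existing page is a PUT to the same endpoint with an incremented version number.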

This process is not static. I alter it as my needs alter. For example, this week I changed the markup that I use for inserting links because I found it too easy to make a mistake. Next week I might change it again because it's now confusably close to Markdown's table notation.

You might think that you couldn't possibly write a tool like mine. Well, you might be surprised at how dumb my script is. I have bludgeoned my way to making it work with lots of trial and error and I don't care that it's not beautiful or efficient. It is valuable for what it's cost me.

What value have I got from it? To start with, it fulfils my philosophical requirements: it is easy for me to record and share with very low friction. I make notes in an environment tuned very specifically for my needs but share in an environment tuned for the general good. Also, it has saved me person-years' worth of frustration with editing in Confluence.

The value is not just to me. Others like my testing notes and find them useful. Not just the people I'm working with, either: anyone searching in Confluence can come across them too. I have recently added the ability to put labels into my text file and have them respected by Confluence, so now my notes can also be automatically added to groups of related pages.
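In API terms, the labels are one more call once the page exists, something like this (reusing the placeholder constants from the sketch above):

def add_labels(page_id, labels):
    # Attach labels to an existing page; "global" is the standard prefix.
    payload = [{"prefix": "global", "name": name} for name in labels]
    response = requests.post(CONFLUENCE + "/rest/api/content/"
                             + page_id + "/label",
                             json=payload, auth=AUTH)
    response.raise_for_status()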

To be clear, though, while the tooling is helpful, being able to take the right notes at the right cost at the right time and right level for the right people is a skill. I've spent a long time working on that and expect to continue doing so while refining the tooling to reduce whatever friction I encounter. 

Here's the full demo page I made for this post, and a zip of the script and the source documents that it was created from:


Highlighting: Pinetools

Sunday, July 18, 2021

The Future of Testing? GRRrrr!

I really enjoy lean coffee conversations. I am energised by the rapid-fire topic switches and love it when I get exposure to multiple perspectives. The time-boxed aspect of the activity is a turn-off for some, keeping things relatively shallow, but it's one of the great advantages of the format for me: I know that I'm only investing a certain amount of my time, and it's usually small enough that the return is worth it.

All of which is background context for my experience at the first Cambridge Tester Meetup event for over a year, an online lean coffee, this week. I died a little inside when I saw The Future of Testing ticket placed on the board. I felt the pain of internal necrosis spreading as it was voted up. I winced while my pre-frontal cortex withered into a tiny blackened stump at the precise second that DevOps, automation, and quality champions were tossed into the discussion.

The? Future? Of? Testing? GRRrrr!

For as long as we are building things that matter for people that matter and care about the outcomes it's going to be necessary to think about problems and potential problems, about risks — to who, of what, and when — and prioritise mitigating activities. This is testing.

Sure, the people we're doing it for will change.

Sure, the technologies with which we build the things will change.

Sure, the contexts that all of this takes place in will change.

So testers might need to change.

So testing approaches might need to change.

So testing tooling might need to change.

But the thinking, the critical thinking, the systems thinking, the testing, that is not going away.

Don't believe me? Take ten testers from ten different contexts and compare what they do today. Why will the degree of similarity and difference be any different tomorrow?
Image: Discogs

P.S. Apologies to the others in my group if you felt I went off on one on this topic.

Thursday, July 15, 2021

Cambridge Lean Coffee


This morning I was delighted to see the long-awaited return of the Cambridge Tester Meetup in the form of an online Lean Coffee! Here are a few aggregated notes from the conversation in the group I was in.

Onboarding of testers in Covid land

From the company's side:

  • Looking for tips for getting testers up and running remotely.
  • Structured introduction plan, including people, tools, and relevant docs.
  • Encouraging the new team member to complement that with finding their own path.
  • Remember that it's hard being new and remote.
  • Make sure that time to learn and to use the product is available.
  • Try to find some task that can make them feel productive, an early win.
  • Have a buddy with priority and time to answer questions.
  • Set expectations on both sides.

From the onboarder's side:

  • Be prepared to ask questions.
  • Self-motivation is really important.
  • Set some goals for yourself (to get that achievement hit).
  • Be prepared for it to be harder.
  • Be brave (even if you don't feel it).

The future of testing

  • Moving towards more hands-on in other areas, e.g. DevOps, automation, or Quality Champion.
  • Good testing or quality comes from the team supported by a quality professional.
  • The basics of testing itself don't change: delivering business value by seeking problems or potential problems, uncovering risks (of what, to who), helping stakeholders to get the information they need to prioritise them.
  • The implementation of that testing might change, e.g. by different people or with different practices.
  • The testing work will always be there, so long as we are building things meant to fulfil people's needs.
  • Just look at the people on this call. We all call ourselves testers but we do different work in different ways even now.

Quality metrics that are ACTUALLY useful

  • Looking for metrics that represent quality for a team's delivery.
  • The team selected a bunch of metrics but would like one that represents "maintainability."
  • That's a hard concept to put a number on.
  • Perhaps take something that's easy to get hold of, from CI.
  • Perhaps something that a tool can generate, e.g. cyclomatic complexity (see the sketch after this list).
  • What is the impact you want to change or course-correct?
  • Can you form the metric in terms of that impact rather than some property of the code?
  • Perhaps keep records for a couple of months in a spreadsheet?
  • ... if any developer feels impacted by maintainability issues, record where and some measure of the badness or cost,
  • ... then review the data after a couple of months and see whether metrics suggest themselves.
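On the cyclomatic complexity suggestion above: such numbers are cheap to generate before committing to a metric. A sketch in Python, assuming the radon library is installed and module_under_test.py stands in for real code:

from radon.complexity import cc_visit

# Score every function, method, and class in a (hypothetical) source file.
with open("module_under_test.py") as f:
    source = f.read()

for block in cc_visit(source):
    print(block.name, block.complexity)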

Remote line management - top tips

  • A new line manager has never met any of his reports. Tips please!
  • Openly using a structure, to reduce uncertainty about what's going to happen.
  • Be transparent about newness, and experiments you'll do with approaches.
  • Use the mute button and let your reports talk.
  • Be prepared for it to take longer to get to know someone.
  • All the traditional stuff is still important.

Saturday, July 10, 2021

Are We Doing Well?


Elisabeth Hendrickson was on The Confident Commit podcast recently talking about systems and flow, and her new Curious Duck project. Towards the end she was asked a question about individuals, teams, and judging success. Her answer was simply super:

The team has to be the agent of work. There are many reasons for this but it's a key tenet for me in leading any organisation. If the team is the agent of work then what that means is the individuals absolutely contribute to it and deserve to have growth paths and career paths and be rewarded for their contributions, etc. So individuals matter very much.

However if somebody decides to go on a four-week vacation to Bali, work cannot stop on the whatever-it-was, they can't be the single point of failure. So if you have the team as the agent of work, the team can swarm on things, the team can have a set of working agreements internally for how they're going to accomplish things. There's plenty of space for the individual but that is a key tenet.

How does the team know that they are doing well? Getting to that point where you actually can deliver value is absolutely essential. 

It can feel like we're going really really fast: "we're doing all this work, we've delivered all these things on our feature branch, we haven't merged them. We've delivered all these things, isn't this great?" 

No, because nobody can get any value from that. The value is locked up in this feature branch and there's more work to do before you can actually ship it.

So I go all the way back to, I think it was, Ron Jeffries who talked about running tested features. You know that you're doing well when you can point to a way in which the external world is somehow different, that we have delivered real business value; we have running, tested features as a result of the work that the team is delivering. 

That is the best measure I know of success. Of course it has to be also tempered with "at a sustainable pace," so: 

A continuous stream of value, at a sustainable pace, and while adapting to the changing needs of the business. 

If you have all of those components then the team is doing well. And if the final result isn't meeting the business's needs then that's probably not the team not doing well. That's probably something else about product-market fit or strategic intent within the business.
Image: https://flic.kr/p/ActK3Z


Sunday, July 4, 2021

Be a Quality Detector

 

I've just finished reading Thinking in Systems: A Primer by Donella Meadows. It's not a new book but I'd managed to be unaware of it until recently when Marianne Bellotti mentioned it on her podcast, Marianne Writes a Programming Language.

Bellotti is writing a programming language (duh!) for modelling system behaviour using the concepts of stocks and flows, inspired by Meadows' book. The image at the top gives an idea of how such models can work, notably making explicit the feedback relationships for resources in the system, and allowing modellers to reason about the system state under different conditions over time.
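As a toy illustration of the idea in Python (mine, not Bellotti's language or Meadows' notation): the stock is a backlog of open bugs, the inflow is a steady discovery rate, and the outflow is a fix rate that rises with the size of the backlog:

# Stock: open bugs. Flows: bugs found per week in, bugs fixed per week out.
stock = 10.0   # the stock's starting level
inflow = 10.0  # new bugs found per week

for week in range(1, 13):
    # Balancing feedback: the bigger the backlog, the more of the team's
    # week goes on fixing, so the outflow rises with the stock.
    outflow = 0.2 * stock
    stock += inflow - outflow
    print(f"week {week}: {stock:.1f} open bugs")

Even this bathtub-sized model shows behaviour over time: the backlog climbs and then levels off where the balancing outflow matches the inflow, which is the kind of dynamic such models are built to expose.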

I have been aware of systems models similar to this since I first saw Diagrams of Effects in Quality Software Management Volume 1: Systems Thinking by Jerry Weinberg. Weinberg's book, and other reading I was doing early in my career as a tester, inspired me to look deeply at the systemic context in which the software I'm working on sits and to be explicit about the models of it that I'm creating.


Architecture diagrams are one common, and often useful, way to view a system graphically but they tend to miss out a couple of important factors: external, particularly non-technical, influences and system dynamics. Diagrams of Effects can remedy that, although I have always found them tricky to create, wondering both where to begin and where to stop.

When I heard Bellotti talk about systems models with two core types, stocks and flows, I was intrigued. Could this be a way for me to simplify the creation of more formal systems models? The answer turns out to be both yes and no. The conceptualisation is simple, for sure, but the struggle to construct a useful formal model after reading Thinking in Systems is still real.

I shouldn't be surprised: part of the challenge of making a model, however formal or informal, is deciding what to include in it, and at what granularity. A significant part of that is down to your intended use of the model and the kind of insights you hope to gain by making it. Somewhat meta, this places the model itself as an entity in the system that includes you, the creator, the audience of the model, the constraints on your use of it, and so on.

I'm not going to review the content of Thinking in Systems here. If what I've said above sounds interesting, this handful of related links gives some background:

What I will do here is pull out a few quotes from the book that spoke to me about systems and models for testing and at work:

Be a quality detector. Be a walking, noisy Geiger counter that registers the presence or absence of quality. (p. 176)

A system is an interconnected set of elements that is coherently organized in a way that achieves something. If you look at that definition closely for a minute, you can see that a system must consist of three kinds of things: elements, interconnections, and a function or purpose.  (p. 11)

A system’s function or purpose is not necessarily spoken, written, or expressed explicitly, except through the operation of the system. (p. 14)

An important function of almost every system is to ensure its own perpetuation. (p. 15)

Because resilience may not be obvious without a whole-system view, people often sacrifice resilience for stability, or for productivity, or for some other more immediately recognizable system property. (p. 77)

Large organizations of all kinds, from corporations to governments, lose their resilience simply because the feedback mechanisms by which they sense and respond to their environment have to travel through too many layers of delay and distortion. (p. 78)

System structure is the source of system behavior. System behavior reveals itself as a series of events over time. (p. 89)

Nonlinearities are important not only because they confound our expectations about the relationship between action and response. They are even more important because they change the relative strengths of feedback loops. They can flip a system from one mode of behavior to another. (p. 92)

There are no separate systems. The world is a continuum. Where to draw a boundary around a system depends on the purpose of the discussion—the questions we want to ask. (p. 97)

At any given time, the input that is most important to a system is the one that is most limiting. (p. 101)

Insight comes not only from recognizing which factor is limiting, but from seeing that growth itself depletes or enhances limits and therefore changes what is limiting. (p. 102)

Policy resistance comes from the bounded rationalities of the actors in a system, each with his or her (or “its” in the case of an institution) own goals. Each actor monitors the state of the system with regard to some important variable—income or prices or housing or drugs or investment—and compares that state with his, her, or its goal. If there is a discrepancy, each actor does something to correct the situation. Usually the greater the discrepancy between the goal and the actual situation, the more emphatic the action will be. (p. 113)

The most effective way of dealing with policy resistance is to find a way of aligning the various goals of the subsystems, usually by providing an overarching goal that allows all actors to break out of their bounded rationality. (p. 115)

Rule beating is usually a response of the lower levels in a hierarchy to overrigid, deleterious, unworkable, or ill-defined rules from above. (p. 137)

Systems, like the three wishes in the traditional fairy tale, have a terrible tendency to produce exactly and only what you ask them to produce. Be careful what you ask them to produce. (p. 138)

[confusing effort with result is one] of the most common mistakes in designing systems around the wrong goal. (p. 139)

Listen to any discussion, in your family or a committee meeting at work or among the pundits in the media, and watch people leap to solutions, usually solutions in “predict, control, or impose your will” mode, without having paid any attention to what the system is doing and why it’s doing it. (p. 171)

Images: Mental Pivot, Weinberg

Wednesday, June 16, 2021

All Testing is not Context-Driven


The Association for Software Testing is crowd-sourcing a book, Navigating the World as a Context-Driven Tester, which aims to provide responses to common questions and statements about testing from a context-driven perspective.

It's being edited by Lee Hawkins, who is posing questions on Twitter, LinkedIn, Slack, and the AST mailing list and then collating the replies, focusing on practice over theory.

I've decided to contribute by answering briefly, and without a lot of editing or crafting, by imagining that I'm speaking to someone in software development who's acting in good faith, cares about their work and mine, but doesn't have much visibility of what testing can be.

Perhaps you'd like to join me?

--00--

"Isn't all testing context-driven?"

As we exist in a context, it's impossible to do anything that isn't influenced by that context to some extent. Some people take this to mean that all testing is driven by context.

But driven is the key term here. Existing in a context is not the same as consciously choosing how to take account of it, watch for changes, and adapt accordingly.

What does this mean in practice? On a project, a context-driven tester might seek to understand what the project aims are, who needs to benefit, in what ways, and why the approach proposed is thought to be appropriate. They could wonder whether the personnel, timeframe, and tooling are likely to deliver the desired outcome, and if not why not. They will look for ways that they can contribute to the project's success under whatever constraints it has and with whatever skills they have.

Testers who are taking a context-driven approach will recognise that every participant's perspective will differ to some extent, not least because we are each an integral part of our own contexts. They will try to account for that by, for instance, repeated observation, applying heuristics, finding similar examples from their experience, calling on their expertise, looking for collaboration, making regular attempts to synchronise with the people who matter on the project, and aligning work with current goals and restrictions.

Do you think all testers take this kind of intentional, empathic, approach to their testing? No? Then I think you know the answer to your question.