Saturday, September 18, 2021

RIP Clive Sinclair


Sliding doors, naturally, but it feels like the Sinclair ZX Spectrum 16k I got as a combined birthday and Christmas present when I was a boy was significant in where I've ended up.

I recall with fondness the tedium-expectation opposition of typing in BASIC programs from printouts and then debugging them, only to find that the monster was a letter M, you were an asterisk, and collision detection was a concept the author had only a passing grasp of.

I have nightmares about trying and failing to install several sets of RAM chips to upgrade the machine to 48k and instead ending up with a wobbly and unreliable external RAM pack. I mourn the times we had to take the whole computer back to the shop for repairs.

I regret spending my hard-earned paper round money on a Brother printer and then spending my hard-won free time trying to work out how to get it to print reliably, or at all. 

I can still feel the covers of the thick ring-bound manuals, introducing me to BASIC and helping me to write my own programs. It was magical when I realised there was an assembly language world beyond BASIC and that I could PEEK and POKE values directly into the heart of the computer!

Of course I read the monthly magazines religiously, and I played the games, played the games, played the games, ...

In retrospect, that was an amazing introduction to the pleasures and frustrations of computers and software, to the possibilities and the failures, to the often stark differences between desire and reality. It spurred my imagination and helped me to dream.


Thank you Clive Sinclair.
Image: Wikipedia

Friday, September 10, 2021

69.3%, OK?


The Association for Software Testing is crowd-sourcing a book, Navigating the World as a Context-Driven Tester, which aims to provide responses to common questions and statements about testing from a context-driven perspective.

It's being edited by Lee Hawkins who is posing questions on Twitter, LinkedIn, Slack, and the AST mailing list and then collating the replies, focusing on practice over theory.

I've decided to contribute by answering briefly, and without a lot of editing or crafting, by imagining that I'm speaking to someone in software development who's acting in good faith, cares about their work and mine, but doesn't have much visibility of what testing can be.

Perhaps you'd like to join me?

--00--

"What percentage of our test cases are automated?"

There's a lot wrapped up in that question, particularly when it's a metric for monitoring the state of testing.

It's not the first time I've been asked either. In my experience, it comes when someone has latched onto automating test cases because (a) they've heard of it, (b) test cases are countable, and (c) they have been tasked with providing a management-acceptable figure for the "Testing" value in a PowerPoint deck of several hundred slides mailed monthly to a large number of people who will not look at it.

If that sounds cynical ... well, I suppose it is. But any cynicism over this particular measure doesn't mean I'm not interested in understanding your need and trying to help you get something that fulfils it. Can we talk about what you're after and why?

We can? Great!

I'll start. Some of the issues I have with the question as it stands are:

  • it seems to be perceived as a measure of our testing
  • such a number would say nothing about the value of the testing done
  • the definition of a test case is moot
  • ... and, whatever they are, test cases are only a part of our testing
  • there's an implicit assumption that more automation is better
  • ... but automation comes with its own risks
  • ... and, whatever automation means, automated test cases are only a part of our test automation

If I look at how we test, and what we might call test cases, I can think of three ways I could answer your question right now (there's a rough sketch of the arithmetic after this list):

  1. We don't have test cases in the sense I think the question intends. All of our ongoing testing is exploratory and, while we might document the results of the testing with automation, there is no sense in which a manual or scripted test case existed and was then automated. We score 0%.
  2. For the purposes of this exercise, I would be prepared to describe each assertion in our regression test suites as a test case. As they would be our only test cases, all of them are automated. 100%!
  3. OK, we do have some items in a test case management system. These are historical release-time checks that (mostly) people outside the test team run through before we ship. I like to think of them more as checklists or jumping off points, but I'm realistic and know that some of my colleagues simply want to follow steps. Relative to the number of "automated test cases" there are few of them, but if we include them in our calculation we'd bring the score down to, say, 99%.
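
To make that concrete, here's a minimal sketch of the arithmetic in Python. The counts are invented for illustration; the only thing that changes between the three cases is the definition of a test case:

    # Hypothetical counts, for illustration only.
    automated_assertions = 1500  # assertions in our regression suites
    manual_checks = 15           # release-time checks in the test case management system

    # Definition 1: no scripted test cases ever existed to be automated.
    print("Definition 1: 0%")

    # Definition 2: every assertion is a test case, and all are automated.
    total = automated_assertions
    print(f"Definition 2: {automated_assertions / total:.1%}")  # 100.0%

    # Definition 3: count the manual release-time checks as test cases too.
    total = automated_assertions + manual_checks
    print(f"Definition 3: {automated_assertions / total:.1%}")  # 99.0%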

Those answers don't seem very satisfactory to either of us, do they?

To me, at the very best, this kind of metric covers a small slice of what we do and the assumptions underlying it are very questionable. To you, the metric matters less than having some plausible number, representing how well the testing is going, to include in that monster PowerPoint deck.

I have some thoughts on that too:

  • testing, for me, is knowledge work and so notoriously hard to measure in simple numbers
  • testing does not exist in isolation from other product development activities
  • good testing can be done without the creation of artefacts such as test cases
  • metrics imposed without conversation and justification are likely to be viewed with suspicion
  • metrics are likely to be gamed when (perceived to be) used as a target, or to judge
  • starting with a list of artefacts (test cases, bug tickets, etc.) is cart-before-horse
  • ... it's much better to ask first what you want to measure and why

So, for example, is the desire to measure customer satisfaction with the product? Is it to measure the testing contribution to that? Is it to see where time is being spent on certain kinds of activities that the business wants to stop? Is it to look for bottlenecks? Or something else?

If we do agree on some metrics, how can we reassure testers that they are not being judged, and that they should not pervert their working practices just to make the numbers look good?

We'll need something more than glib words.  Imagine you were told your performance would be judged on how many emails you sent. How would you react? Would you scoff at it but send more emails anyway? Would you send emails instead of having conversations? Would you care about the potential detrimental effects to you, others, the business? How could someone convince you to behave differently?

Finally, is there a real desire from you to look into sensible metrics with good intent and to act on the findings?

If so, then I will do all that I can to assist in getting something that is justifiable, that has explicit caveats, that is equitable, that is transparent, that acknowledges the messiness involved in its collection, that can be derived efficiently from data that we have, that sits within agreed error margins, and that reflects the work we're doing.

If not, then I'll ask you what kind of number will pass the cursory level of inspection that we both know it will receive, and I'll simply give you that: let's say 69.3%, OK?

Sunday, August 29, 2021

Fail Here or Fail There

The First Law of Systems-Survival, according to John Gall, is this:

A SYSTEM THAT IGNORES FEEDBACK HAS ALREADY BEGUN THE PROCESS OF TERMINAL INSTABILITY

Laws are all-caps in Systemantics. Not just laws, but also theorems, axioms, and corollaries. There are many of them so here's another (location 2393-2394):

JUST CALLING IT “FEEDBACK” DOESN’T MEAN THAT IT HAS ACTUALLY FED BACK

There was a point when I realised, as the capitalised aphorisms rolled by, that I was sinking into the warm and sweetly-scented comforting foamy bathwater of confirmatory bias. Seen, seen, seen! Tick, tick, tick!

I took the opportunity to let myself know that I'd been caught in the act, and that I needed to get out of the tub and start to challenge the content. 

Intervening at that moment was congruent: I was in a context where I would accept it and was prepared to change because of it. Of course, I enjoyed the deep irony of nodding along with Gall when he talked about that too (2456-2458):

Feedback is likely to cause trouble if it is either too slow or too prompt. It must be adjusted to the response rhythms of the system as well as to the tempo of the actual events — a double restriction.

So what might I challenge?

  • The lack of data to back up claims.
  • The overwhelming landslide of those upper-case one-liners.
  • The desert-dry commentary that weaves dangerously along the line between sniper sharp-shooting and sniping foot-shooting.

Looking around, I see that other readers have made similar observations. Cristiano Rastelli's review notes that everyone else he has spoken to about the book thought it was bullshit and simply stopped reading.

But I liked it despite its flaws. As a collection of practical, cynical, and even pathological heuristics it's a useful reminder of the power of systems to do their own thing, to be realistic about the extent to which we can influence them, and to note again that all complex systems run broken:

IF IT DOESN’T FAIL HERE, IT WILL FAIL THERE (1562-1563)

I've pulled out a few quotes on topics that speak to my experience, including:

  • iterate systems into existence
  • seek the minimum necessary process
  • restrict only what must be restricted
  • align with natural tendencies if you can
  • make small changes where possible
  • pause to observe the effects of changes, emergent and intended, local and remote
  • remain humble about what you understand and the extent to which you understand it

--00-- 

Systemantics ... is almost a form of Guerilla Theater. It is the collection of pragmatic insights snatched from painful contact with the burning issues and ongoing problems of the day. (463-464)

NEW SYSTEMS MEAN NEW PROBLEMS (504-505)

COMPLEX SYSTEMS EXHIBIT UNEXPECTED BEHAVIOR (631-632)

Most people would like to think of themselves as anticipating all contingencies. (652-653)

A LARGE SYSTEM, PRODUCED BY EXPANDING THE DIMENSIONS OF A SMALLER SYSTEM, DOES NOT BEHAVE LIKE THE SMALLER SYSTEM (686-688)

SYSTEMS TEND TO OPPOSE THEIR OWN PROPER FUNCTIONS (703-704)

THE GHOST OF THE OLD SYSTEM CONTINUES TO HAUNT THE NEW (847-848)

In general, the larger and more complex the System, the less the resemblance between a particular function and the name it bears. (886-887)

THE SYSTEM ITSELF DOES NOT DO WHAT IT SAYS IT IS DOING (902-903)

A SYSTEM IS NO BETTER THAN ITS SENSORY ORGANS (1011-1012)

THE BIGGER THE SYSTEM, THE NARROWER AND MORE SPECIALIZED THE INTERFACE WITH INDIVIDUALS (1033-1035)

THE END RESULT OF EXTREME COMPETITION IS BIZARRENESS (1199-1200)

BIG SYSTEMS EITHER WORK ON THEIR OWN OR THEY DON’T. IF THEY DON’T, YOU CAN’T MAKE THEM (1257-1258)

Even today, the futility of Pushing On The System is widely unappreciated. (1273-1274)

THE MODE OF FAILURE OF A COMPLEX SYSTEM CANNOT ORDINARILY BE DETERMINED FROM ITS STRUCTURE (1472-1473)

The problem of evaluating “success” or “failure” as applied to large Systems is compounded by the difficulty of finding proper criteria for such evaluation. (1366-1367)

The idea that Bugs will disappear as components become increasingly reliable is, of course, merely wishful thinking. (1557-1557)

ONE DOES NOT KNOW ALL THE EXPECTED EFFECTS OF KNOWN BUGS (1587-1589)

NEW STRUCTURE IMPLIES NEW FUNCTIONS (1645-1646)

The designers had built a machine with that capability, but they knew not what they had wrought. . . until Experience demonstrated it to them. (1642-1644)

AS SYSTEMS GROW IN SIZE AND COMPLEXITY, THEY TEND TO LOSE BASIC FUNCTIONS (1663-1664)

THE MEANING OF A COMMUNICATION IS THE BEHAVIOR THAT RESULTS (1879-1880)

We must not assume that a message sent will automatically go to a central Thinking Brain, there to be intelligently processed, routed to the appropriate sub-centers, and responded to. (1992-1994)

The student proficient in the Creative Tack asks such questions as: What can I do right now and succeed at it? For which problem do my current resources promise an elegant solution? (2170-2172)

DO IT WITHOUT A NEW SYSTEM IF YOU CAN (2186-2186)

AVOID UNNECESSARY SYSTEMS (SYSTEMS SHOULD NOT BE MULTIPLIED UNNECESSARILY) (2187-2189)

LOOSE SYSTEMS LAST LONGER AND FUNCTION BETTER (2264-2265)

A System represents someone’s solution to a Problem. The System itself does not solve Problems. Yet, whenever a particular problem is puzzling enough to be considered a Capital-P Problem, people rush in to design Systems which, they hope, will solve that Problem. (2521-2525)

How many features of the present System, and at what level, are to be corrected at once? If more than three, the plan is grandiose and will fail. (2584-2586)

IF IT’S WORTH DOING AT ALL, IT’S WORTH DOING POORLY (2603-2604)

IN ORDER TO BE EFFECTIVE, AN INTERVENTION MUST INTRODUCE A CHANGE AT THE CORRECT LOGICAL LEVEL (2689-2690)

It seems clear enough that changing actors does not improve the dialogue of a play, nor can it influence the outcome. Punishing the actors is equally ineffective. Control of such matters lies at the level of the script, not at the level of the actors. In general, and as a minimal requirement: (2686-2689)

Exploratory behavior constitutes a series of probes, each of which elicits a piece of behavior from the System. The accumulation of those pieces of behavior allows the rat (or person) eventually to obtain a perspective as to the range of behaviors the System is capable of exhibiting in response to typical probing behaviors. (2714-2716)

ALWAYS ACT SO AS TO INCREASE YOUR OPTIONS (2724-2724)

THE SYSTEM IS ALTERED BY THE PROBE USED TO TEST IT [...] THE PROBE IS ALTERED ALSO (2774-2775)

All reference locations are from the Kindle edition.
Image: Wikipedia

Sunday, August 15, 2021

See Plus Plus


Susan Finley's webinar for the Association for Software Testing, Making the Invisible Visible, has just been made available on YouTube.  

It starts in the middle of an ongoing war room situation with a large group crammed around a small table, surrounded by empty coffee cups and the crumbs of long-consumed sandwiches. They are breathing stale air, they have perspiration on their brows, there is a production incident in full flow and they have no confidence that they understand the causes, the potential fixes, or even each other.

The conversation repeats and circles and spirals back on itself until someone jumps up, sketches out a set of boxes and arrows on the whiteboard, and anchors the conversation on a model of the system. Suddenly there is shared context and understanding. That picture, Susan says, truly is worth a thousand words.

The rest of the talk is focused on strategies and practices for reporting on the state of software, and this is very firmly distinguished from the state of the testing of the software. The latter informs the former, but is not a proxy for it.

Susan favours visual representations of relevant dimensions of the product (for example, architecture, sequencing, data flow) overlaid with relevant information (risk, churn, test coverage, and so on). Relevance is key here: what message needs to be communicated, to whom, to achieve what? What kind of diagram could help to achieve that?
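
As a minimal sketch of that kind of overlay, assuming invented component names, churn counts, and coverage figures, something like this Python and matplotlib snippet puts two dimensions onto one picture:

    import matplotlib.pyplot as plt

    components = ["API", "Auth", "Billing", "Reports"]
    churn = [12, 3, 25, 7]            # hypothetical commits touching each component
    coverage = [0.8, 0.95, 0.4, 0.6]  # hypothetical test coverage per component

    # Bar height shows churn; colour flags components whose coverage
    # falls below an (arbitrary) 50% threshold.
    fig, ax = plt.subplots()
    bars = ax.bar(components, churn,
                  color=["red" if c < 0.5 else "green" for c in coverage])
    for bar, cov in zip(bars, coverage):
        ax.text(bar.get_x() + bar.get_width() / 2, bar.get_height(),
                f"{cov:.0%}", ha="center", va="bottom")
    ax.set_ylabel("churn (commits)")
    ax.set_title("Churn by component, coloured by test coverage")
    plt.show()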

Of course, there are constraints on what data is available, what could be available, and the costs of gathering, manipulating, and presenting it. Following Arthur Ashe, she suggests starting now, with whatever you have, and doing what you can. 

For example,  take existing diagrams from elsewhere in your organisation and repurpose them. In fact, consider this a tactic because they may already be familiar to your target audience.

Be aware that, although you are likely to find technical diagrams easier to come by, technical risk may not be the most important thing when you need to talk to senior executives. In front of that audience, if you can put any kind of dollar values on your risk diagrams — costs, losses, wastage — particularly if they are coloured red, you are almost certainly going to get more engagement.

Finally, don't sweat too much about maintaining the diagrams. Susan's view is that once they have served their purpose they have little value beyond historical record, and perhaps as a starting point for the next visualisation.

The Ideal Test Plan

A colleague pinged me the other day, asking about an "ideal test plan" and wondering whether I could suggest something.

Not without a bit more information, I said.

OK, they said.

Who needs the plan, for what purpose? I asked.

Their response: it's for internal use, to improve documentation, and provide a standard structure.

We work in a medical context and have strict compliance requirements, so I wondered aloud whether the plan is needed for audit, or to show to customers?

It's not, they replied, it's just for the team.

Smiling now, I stopped asking questions and delivered the good news that I had what they were looking for.

Yes? they asked, in anticipation.

Naturally I paused for dramatic effect and to enhance the appearance of deep wisdom, before saying: the ideal plan is one that works for you.

Which is great and all that, but not heavy on practical advice.

--00--

I am currently running a project at the Association for Software Testing and there is a plan for it. In fact, there are plans.

1. I have a Google doc, shared with the rest of the AST Board, in which I've stated the Why of the project, in one sentence, at the top. The Why is the outcome we seek, independent of the way we might choose to achieve it. It's backed up by a few words about how we arrived at this need. 

The rest of the doc is a time-stamped log of the research I did, decisions we made, decisions still to make, proposals to choose from, and actions I took. This is the plan as a goal and the evolution of our thinking.

2. I have a Google sheet (anonymised version here) in which I have summarised the current state and the specific state we have decided to go for, based on the conversations around (1). I have done this as a pair of adjacent tables where I have highlighted the most important changes as a visual diff, and provided a bullet list of key consequences of the changes. 

It is structured deliberately for ease of consumption in order that my colleagues can review, compare to their understanding of our agreed approach, and approve or object. This is also the plan, the specific implementation we have decided we will aim for.

3. In the same Google sheet I have another table, this one with four columns: Action, Where, Notes, Status. Each row is a task I'll need to do to move from where we are to where we want to be. It lists the task, the place it needs to be done, references to decisions or resources, and whether it's To Do, Doing, or Done. 

I'm a fan of conditional formatting, so that final column is automatically either red, orange, or green depending on the status. This too is the plan, the list of jobs I'll carry out and where we are with that.
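
For a flavour of what I mean, here's a minimal sketch in Python. The rows are made up, and the status-to-colour mapping stands in for the conditional formatting rule:

    # Hypothetical tasks in the Action, Where, Notes, Status format.
    STATUS_COLOURS = {"To Do": "red", "Doing": "orange", "Done": "green"}

    tasks = [
        ("Draft announcement", "Mailing list", "See decision log", "Done"),
        ("Update membership page", "Website", "Needs agreed wording", "Doing"),
        ("Archive old records", "Google Drive", "Blocked on export", "To Do"),
    ]

    for action, where, notes, status in tasks:
        print(f"{action:24} | {where:12} | {notes:20} | {status} ({STATUS_COLOURS[status]})")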

Is it the ideal plan (or plans)? Well, it is (or they are) doing what I need: providing visibility of what we're trying to do, how, when, where, and why.

--00--

Plans can and do take many forms. Off the top of my head I have examples of all of these in play right now: runbooks, checklists, Trello cards, kanban boards, Jira tickets, to-do lists, mind maps, and unrefined bundles of open-ended what-ifs in my head.

The plan format(s) for the AST project I talked about were chosen in the order you see them listed, tactically, to suit the need that I had at the point I wanted to commit to sharing them. I am not omniscient. If at some point what I'd got wasn't serving my need I would have chosen a different way.

The plan-as-goal (1) is a format that I find to be a useful default for longish projects with unknowns. It's similar to the way that I record test notes for pieces of work that run for a while, perhaps over days, in a single thread.

The side-by-side comparison in the plan-as-implementation (2) evolved from a table that another board member created to help us weigh options during one of our discussions. With hindsight, and only a little reformatting and decoration, I realised that I could succinctly summarise the plan using it, so I did.

A set of tasks in the plan-as-list (3) is not remotely original, but I enjoy the way that listing them explicitly helps me to see connections and ordering and grouping; enables others to spot things I've missed; and helps me not to forget something when I've had to pause the work for a while.

--00--

The different formats of plan that I've mentioned have pros and cons. The kinds of factors I might consider when choosing a format include:

  • what is the purpose of this plan (for me)?
  • who needs to see this plan?
  • what is the purpose of this plan (for others)?
  • what granularity am I working at?
  • what kind of information do I want to record?
  • what timespan will it be needed for?
  • will it be a living plan or a static one?
  • will I be recording ongoing status in it?
  • will I be collaborating in it, or am I the sole author with others as consumers?
  • will the plan be re-used?
  • what external constraints are there on this plan?
  • what proportion of time available is worth investing in planning for this project?

The lower the level, the more likely I am to go for a list, tickets, or other artefacts that represent work items; at a higher level I'll reach for a textual, tabular, or graphical representation.

For plans that need to be shared and collaborated on, I'll invest time in scaffolding such as a README, a Background section with references to earlier work, or naming schemes that help to track the ideas or tasks in play easily.

Using specialist tools for ongoing status monitoring will sometimes be appropriate. Other times it makes sense to keep status with other versions of the plan. The tools you have access to and are familiar with are an important aspect of that decision.

When re-use is likely — such as in runbooks for repeated tasks — I'll try to strip the plan down to something that can be easily followed and make it a checklist. I'll try to flag actions and commentary differently to reduce the cognitive load on the reader.

If the plan is likely to be long-lived or needed as a record for posterity I will take care to put in dates, cross-references to related work, and decision points. Context that seems prominent now will not be so prominent in a few weeks.

--00--

If I had written down a plan for this post (which I didn't) I can tell you that it would not have included section dividers. In over 500 posts on Hiccupps, I have never used them this way before.

The up-front plan, to the extent that there was one, was to get out of my head the thoughts that had been swimming around in there since my colleague asked me for that ideal test plan. The dividers were initially a way to separate blobs of thought in the text file I used for drafting. During the writing I decided they served a useful purpose in breaking the text up, signalling that a different angle was being considered. So I left them in. A tactical decision.

That's not to say there was no planning. In the moment, as I had ideas spurred from the ideas I was trying to get down, I would add notes to lists in the sections that were evolving. Sometimes I created a new section. Sometimes I merged existing sections. Or deleted them. Contextual decisions, based on what I knew at that time. I was more likely to delete as I got further into the work. In this writing, and in general, I will tend to preserve more ideas in my plan the less well-formed it is, even if they are relegated to some kind of 'just in case' bucket.

And, again, this talk of tactics and context is not to reject up front planning. My experience and instinct is that I tend towards strategic, why-based, big-picture plans before I start, and to refining tactically as I go. You can see precisely this trajectory in the AST project I described.

This preference is not remotely original, although if you've ever worked anywhere that plans anything you will have seen that it's by no means universal.

--00--

To finish, then, back to that ideal test plan. 

I recently joined the Exploratory Testing Slack instance set up by Maaret Pyhäjärvi and Ru Cindrea. One of the discussions in there last week was about planning exploratory testing and the use of chartering.

The term is somewhat loaded, not distinguishing well between the act of generating a specific set of tasks to be performed up front, choosing a piece of work to perform, and creating artefacts that describe the work performed.

I recognise all of these concepts but I realise that I have avoided the use of charter terminology in my own testing practice. Instead, I will try to understand what we're testing and why and what the constraints on that are. Having established the context, I'll try to identify what might be important to look at and for, I'll find a way to choose between those things, and then I'll pick something from the set to work on. 

I might use a simple list, a mind map, or even a spreadsheet to represent the test ideas, constraints, and so on. I will default to using a text file for the notes I take during the testing itself.

Interestingly, the testing is a microcosm of the whole. I start by being explicit about my mission; I like Elisabeth Hendrickson's template: explore X using Y to achieve Z. It encapsulates the goal, the constraints, and the why. Then I work tactically and in context.

All of which means that I can now probably give a better answer than the glib one I tossed out at the beginning of this stream of consciousness.

The ideal way is that I plan my work, but I plan it in a context-driven way, taking into account whatever constraints exist, and using whatever representations, approaches, and tools are either required or serve my purpose. I want to understand the current strategy, but be free to change the tactics as the situation demands, and also to be able to question the strategy at any point. Seeking context-free perfection is a dead end.

Image: https://flic.kr/p/HCMQ8B

Saturday, July 31, 2021

Vote Testers!

 

The Association for Software Testing board elections are happening shortly. Terms are two years long but staggered so that half of the board is up for re-election every year. I've just finished my first term and I've decided to stand again, for a few reasons.

First, and accuse me of whatever cheesiness you like here, I truly believe in the AST's ethical code and mission:

The Association for Software Testing is dedicated to advancing the understanding of the science and practice of software testing according to Context-Driven principles.

It's important to me that I remain a practitioner of testing, and I roll my expertise and experience into the context when I practice. 

Second, and with another helping from the cheeseboard, the AST is emotionally significant to me. When I started testing it was the organisation that Lessons Learned in Software Testing led me to, and its Black Box Software Testing courses helped to culture and mould my intuitions about testing and how I wanted to test.

Third, and onto the coffee now for the business talk, I feel like I have unfinished work that I want to complete.

Next week there are hustings and the candidates have been asked to provide answers to a handful of questions. I thought I'd publish mine here as I did for a similar set after the 2019 elections.

Diversity

How do you intend to promote diversity within the AST? How could AST promote diversity, of all kinds, within our own organization and within the wider testing and technology communities?

All AST members, including the board, sign up to a code of ethics  which requires us to respect the diversity of cultures, not to discriminate on the basis of race, sex, religion, age, disability, national origin or other factors, and to consider whether the results of our work will be used in socially responsible ways.

When I answered the same question two years ago I said I would "encourage AST to seek out people who do not already engage with it and find ways to help them to engage. This might be by making AST relevant to them, by moving AST geographically closer to them, by making AST financially accessible to them, or something else."  

I think I have been able to work towards that.

Black Box Software Testing

Please share your vision for the future of the AST's BBST program.

The recent move to partner with Altom for the provision of the Community Track of Black Box Software Testing is wonderful for multiple reasons. It brings back together the two major forks of BBST, it clears up the confusing relationship between them, and it provides shared resources for the teaching and maintenance of the materials. 

On top of that, it preserves everything that's great about BBST: the depth, breadth, and quality of the material; the small cohort approach; and the dedicated and knowledgeable instructors. In the immediate future I'd like to see us continue doing everything we can to help that to work well.

In the longer term there have been conversations around broadening the education offerings from the AST to include, for example, something on exploratory testing or automation. We recently ran a course in partnership with Rob Sabourin and I think more of those kinds of partnerships could work for us, because the cost of creating, running, and maintaining courses is high and we have limited resources.

History

What do you think the AST board has historically done well, and what do you think needs to change?

With only a small number of volunteer staff, I think the board has done extraordinarily well to keep the organisation running.

CAST, our conference, is a significant part of what AST is and is known for, and not having CAST in 2020 was a major setback for us. We redirected our CAST energy into more, smaller events, and we've put something on every month instead, experimenting with different formats like Fika, a super-short and sweet mini-CAST, and even next week's online hustings for the 2021 elections.

A significant challenge for AST has been that it can seem dry, dated, and distant, and the value of being a member can be unclear. In some respects we shoot ourselves in the foot because, while we are a member organisation, our mission is to promote testing in the world and we've tended to err towards making everything we have available to anyone who is interested.

However, despite that difficulty, I support the policy. I want the AST to make a difference to the world generally, to engage with people outside of our organisation, and to grow the testing craft everywhere.

Future of AST

If you are elected to serve on the board, what is your vision for the future of AST and what do you hope to accomplish as part of the board?

At the first AST board meeting I attended I presented a paper called Why Be A Member of the AST? In it, I compared the value we offered, how we communicated our value and where, and I started thinking about what our various activities cost us versus the value they brought to us and our membership. 

Together we then worked on being explicit about what the AST stands for, what we think our members want from us, and what we want from our members. 

Those activities naturally exposed places that we thought we could do better, but they also helped us to prioritise activities to improve on them.

For the last two years I've been working through tasks from that list, incrementally improving what we do and how we do it.

The next major task for me is to make the cost of entry to AST more equitable.

For the AST more generally: I think there is a place in the world for a non-profit testing organisation advocating for high-quality, ethical, context-driven testing. We need to find better ways to help our tribe to find us, better ways to communicate what we stand for, why we think it is relevant and valuable, and why it's worth helping us to achieve it.

Conflicts of Interest

Please describe any current initiatives you participate in that might affect your ability to serve on the AST board, and serve the AST membership.

I don't have any.

Support

In what ways have you supported the mission of AST?

In lots of ways! I've mentioned some of the things I've done while on the board already. Here are a few examples of things I've done outside of AST that I think align with our mission:

  • I blog weekly (for the last ten years) from a context-driven perspective.
  • I am the organiser of the Cambridge Exploratory Workshop on Testing.
  • I speak about testing at work, in the local community, and at conferences.
  • I strive to demonstrate good work, done ethically, according to context-driven principles.
  • I invest time in helping others to test.
  • I invest time in helping myself to improve my testing.

Image: Wikipedia

Wednesday, July 28, 2021

Mass Testing

The Association for Software Testing is crowd-sourcing a book, Navigating the World as a Context-Driven Tester, which aims to provide responses to common questions and statements about testing from a context-driven perspective.

It's being edited by Lee Hawkins who is posing questions on Twitter, LinkedIn, Slack, and the AST mailing list and then collating the replies, focusing on practice over theory.

I've decided to contribute by answering briefly, and without a lot of editing or crafting, by imagining that I'm speaking to someone in software development who's acting in good faith, cares about their work and mine, but doesn't have much visibility of what testing can be.

Perhaps you'd like to join me?

--00--

"Do more test cases mean better test coverage?"

Simply, no. Less simply, depending on the assumptions you care to make, perhaps.

The terms test case and test coverage are loaded, so let's talk about a somewhat analogous problem:

Does more bulletproof glass mean the Popemobile is better protected?

I find it helpful to turn this kind of yes/no question into an exploration: 

Under what circumstances could more bulletproof glass mean the Popemobile is better protected?

This helps me to think of challenges, caveats, and clarifications. To give a few examples:

  • Better protected than what?
  • Better protected from what?
  • Better protected, judged how?
  • Is there any existing bulletproof glass?
  • Where is the existing bulletproof glass?
  • How protective is the existing glass?
  • ... against what kinds of projectiles?
  • ... propelled how?
  • What other kinds of bulletproof glass are available and how protective are they?
  • Where would it be possible to put additional bulletproof glass if we had some?
  • ... and where is the Popemobile susceptible to damage from bullets?
  • ... and which areas at risk are not covered?
  • What threats are we trying to protect the Pope from?
  • Would any amount of bulletproof glass protect the Pope from those threats?
  • Are there any ways other than glass to mitigate the risks of those threats?

Then there are other questions, ones that matter in general because we don't have infinite resources. For example:

  • Does the Pope need better protection?
  • If so, do we need to change the Popemobile to achieve it?
  • At what cost?
  • At what opportunity cost?

So, could I think of circumstances in which more test cases mean better test coverage? Yes. Do those circumstances hold generally? Not a chance in Hell.
Image: The Independent