Quality != Quality


Anne-Marie Charrett delivered a beta version of her Testbash Manchester keynote at the Cambridge Tester meetup this week. Her core message was that quality and testing are not the same thing:
  • there are non-testing aspects of software development that contribute to product quality
  • there are non-product aspects of quality which should be considered in software development.

A theme of the talk was that customer benefit could be threatened by the second of these: factors such as code hygiene, speed of delivery, and time to recover after a failure in production. Testers, and others in software development, were urged to reframe their view of quality to encompass these kinds of activities. A Venn diagram represented it like this:


Interesting, but it didn't quite hang together for me. I slept on it.

In the morning, I found myself thinking that what Anne-Marie was trying to visualise really had two notions of quality, and they were not the same. Perhaps she could move from a two-way to a three-way relationship between product quality (features, performance, usability, and so on), production quality (the non-product stuff around producing the software), and customer benefit. (Although I prefer business value to customer benefit, because the business might sometimes prefer things that don't give value to the customer.)

Here's how I've tried to sketch that:


The sweet spot is work that improves the way the software is produced, improves the software and adds value for the business. For example, changing from a product implemented in two languages to a product implemented in a single language could enhance in-product consistency and performance, simplify toolchains, and reduce IDE licensing costs.

But the tripartite division gives other potentially interesting intersections too. There's the traditional new feature which drives an increase in sales (product quality/business value), and then there are situations like moving from weekly to daily drops of a core component to internal teams, which removes wasted time on their side (production quality/business value).

Anne-Marie asked for feedback on her talk so I pinged her a few notes along with a sketch of my idea and she incorporated some of it into the keynote.


Which is gratifying and all that, but while my model might be considered an iterative improvement on hers, it's not without its own quirks. The intersection I haven't mentioned yet (production quality/product quality) could be encountered when ancient build servers are replaced, enabling newer libraries to be used in the product but adding (at least, at that time) no value.

The caveat, "at that time", is interesting because it reflects the idea that there's a granularity effect at play. The example just given, at a certain temporal granularity, adds no value. But once new features that build on the new library are implemented, value is added. Zoom out to a wider time perspective and the action of updating the build server can sit in the sweet spot.

There are other ways in which this model is fuzzy: in a continuous deployment world, the boundary in the pipeline between the product and the production of the product becomes harder to define. Also, there's no good way to represent stuff that's actively detrimental to business value.

And there are ways in which our viewpoint (biased towards the technical) can distort the relative importance of our interests too. Remember that business value can be generated without any involvement from the development staff: dropping the price of the product might drive sales and increase overall revenue.

Your perspective on the model alters the value of the model. Quality may not be whatever you think it is. Stay humble.
Images: Nick Pass and Dan Billing (via Twitter)

Edit: This post blew up a bit on Hacker News. The views expressed there on what quality is or isn't, the ease with which quality can be achieved, and the distinction between quality as subjective and quality as objective are interesting to see. Anne-Marie used Weinberg's definition of quality in her talk and I recently wrestled with that in In Two Minds.

Comments

  1. It's so helpful to me to see ways to visualize quality, and this captures so many aspects. Thank you for sharing it!

    1. Cheers, Lisa.

      I've had a few goes at understanding what I think quality might be. (Or more accurately, understanding what I don't understand.)

  2. Thanks to Lisa for the slides.
    And thanks to James for the post. You are once again fueling my willingness to write about basic things in testing :-).
    Here is how I describe the things to "young testers". https://lazytesterua.blogspot.com/2017/10/expectations-requirements-reality.html

  3. I don't think the quality vs testing meme is presented as it is in this blog post. Usually quality is pitted *against* testing, i.e., quality is claimed to be better than testing. In fact, these aren't either/or.

    In the software industry, Bach-Bolton type testing is almost unknown. The quality vs testing meme is used to justify that.

    Discussions on quality vs testing would be welcome, *if* there was a strong understanding of testing. No good tester would argue against overall quality improvement. However, in most cases, quality is offered as a (better) choice. That is incorrect.


