Saturday, May 18, 2019

The Process is Personal is Political


This year I've read three books that follow projects from the perspective of individual contributors, managers, and the business. They are The Goal, The Phoenix Project, and The Soul of a New Machine.

The Goal is perhaps the best known. It's a pedagogical novel in which a manufacturing plant manager is given three months to turn his failing plant around. He stumbles across a mentor who, with well-chosen questions and challenges, exposes the fallacies of the traditional production models being followed and suggests new ways to analyse the business. The philosophy at its heart is the Theory of Constraints, where a constraint is anything that gets in the way of the business goal, and the aim, as described by a couple of its key players, is "not so much to reduce costs but to increase throughput". (p. 298, Kindle)


The Phoenix Project is also fictional but this time set in the world of IT, Ops, Dev, and Test. Again, the main protagonist is under pressure to change outcomes in a complex system, again finds a mentor and, again, the Theory of Constraints is central. In fact, the authors freely admit that the book is "an homage to The Goal, hoping to show that the same principles ... could be used to improve technology work." (p. 341)

The Soul of a New Machine sees journalist Tracy Kidder given access to a product team at the Data General Corporation during the development of the MV/8000 computer (codenamed Eagle) in the late 1970s. It predates the other two books, and has no obvious aim beyond the telling of a good story through the varied lenses of a selection of its protagonists.

For those who've been around implementation, deployment, and maintenance projects of any size or complexity over any length of time there will be many moments of empathy, sympathy, and antipathy in these three works. As a tester, I particularly enjoyed reading about debugging the prototype Eagle: "Veres ... tells Holberger [that they] ran 921 passes [of the test suite] last night, with only 30 failures. And Holberger makes a face. In this context, 921 is a vast number. It means that any given instruction in the diagnostic program may have been executed millions of times. Against 921 passes, 30 failures is a very small number. It tells them the machine is failing only once in a great while — and that's bad news ..." (p. 194)


There's a bigger picture here, though. Crudely, the first two books are about processes and their underpinnings while the third is about people and their interactions. The two are symbiotic and interact intimately: process is made by people, and people both follow imposed process and instigate emergent process. Understanding both people and process is crucial in the workplace.

People and process are both also subject to politics, and understanding that is important too. In The Phoenix Project, as improvements are attempted in one group, turf wars and ass-covering activity break out around the place. In The Soul of a New Machine, the Eagle project can only exist because of the experienced under-the-radar manoeuvrings of the group manager, Tom West. "We're building what I thought we could get away with," he says early on. (p. 31)

I'd recommend all three of these books. Why? Well, the recognisable episodes and personalities are great fun in a Big Chief I-Spy kind of way, but the higher value for me came from the opportunity to use someone else's eyes to view them. And then, naturally, to reflect on that and try to apply it to my own contexts.

The editions I read:
  • The Soul of a New Machine, Tracy Kidder (Avon) 1981
  • The Goal, Eliyahu M. Goldratt, 30th Anniversary Edition (North River Press) 2014, on Kindle
  • The Phoenix Project 5th Anniversary Edition, Gene Kim, Kevin Behr, George Spafford (IT Revolution) 2018
Thanks to the Dev manager for the loan of The Soul of a New Machine.
Images: Amazon and AbeBooks.

Saturday, May 11, 2019

A Doug Report


A couple of years ago our book club at work read On Testing Non-testable Programs. At the time I idly wondered whether I could use it to make an ontology of oracles but didn't pursue the idea beyond finding that Doug Hoffman had already provided a classification scheme back in 1998: A Taxonomy for Test Oracles.

The other week he presented a webinar, The Often Overlooked Test Oracle, which built on some of that earlier work, outlining a range of oracle types with pros, cons, and other notes.

I like theory like this, theory that can guide my practice. I'm also still practising my sketchnotes, so I tried to capture the set of oracles he covered in a sketch. I'll just leave it here.

Monday, May 6, 2019

Bear Testing



I got my dad a Bear Grylls book, How to Stay Alive: The Ultimate Survival Guide for Any Situation, last Christmas. He keeps it in the toilet — in case of emergencies, presumably.

Flicking through the bright orange volume at the weekend, a section on being lost caught my eye. Having spent a proportion of the last four weeks struggling to understand a performance testing experiment, I am currently very familiar with that sense of bearings going AWOL.

Grylls is largely concerned with physical endurance in extreme environments. Clearly, the stakes are somewhat different for me, at my desk, wrestling with several databases, a few servers, a set of VMs and their hypervisors, numerous configuration options, and the code I've written to co-ordinate, log, and analyse the various actions and interactions.

Yet, still, Grylls' advice says something relevant to me:
Millions of people get lost every year ... Their critical faculties grow impaired and they become less able to make smart choices ...  When that happens, it's almost always because they've made one simple mistake: they didn't stop as soon as they realized they were lost.
It's natural human instinct to keep going. We don't like going backwards ... We talk ourselves into thinking that we're going the right way ... Instead we need to be rigorous about not fooling ourselves. We need to swallow our pride.
  • Stop: don't make a bad situation worse by pushing blindly on and getting more lost.
  • Think: your brain is your best survival tool, so control it and use it to think logically.
  • Observe: if you have a map, look for big, obvious features that you can't mistake for anything else in order to orientate yourself. 
  • Plan: have a definite strategy which will force you to think things through clearly and, crucially, keep your morale up. 
We've all sometimes dug deep pits for ourselves and then kept digging, mislaid our sense of perspective, amped up our sense of pride, broken out of time boxes, and continued with one more desperate, unfounded experiment in the vain hope that it'll miraculously clear things up.

Likewise, we all know that we should take a step back, zoom out, defocus. I like the bluntness here though: we should simply STOP. (If I'm honest, I also like the recursivity of the S in STOP being stop.)

So I'm already nodding in recognition, and smiling, when he closes with this beauty: "Nothing is more dispiriting than not knowing what you're doing or where you are going." Quite.

Friday, April 26, 2019

Diploadocus Testing


Eric Proegler was in town last week and I asked him if he'd be up for re-running his recent TestBash talk, Continuous Performance Testing, at our office.

Good news! He was, and he told us the story of how a lumbering testing dinosaur, watching the agile asteroid crash into its world, could become a nimble testing ninja, slotting valuable and targeted performance tests confidently into modern continuous integration.

Bad news! I continued my adventures in sketchnoting and they're not the greatest.

Friday, April 12, 2019

Order, Order!


"Do you have generic strategies for the prioritisation of tasks?" When Iuliana Silvăşan asked the question I realised that I probably did, but I'd never tried to enumerate them. So we went to a whiteboard for 20 minutes, and we talked and sketched, and then afterwards I wrote brief notes, and then we reviewed them, and now here they are...

We think these factors are reasonably generally applicable to prioritisation of tasks:
  • Risk
  • Value
  • Cost
  • Importance
  • Urgency
  • Time
  • Goals
  • Commitments

Yes, they are generic and, yes, they will overlap. That's life.

The last three perhaps merit a little additional explanation. Time is a compound factor and covers things like resource availability, dependency planning, and scheduling problems which could be split out if that helps you. Goals cover things like experience you want to get, skills you want to practice, or people you want to work with. This might not be a primary factor, but could help you to choose between otherwise similar priorities. Commitments are things already on the schedule with some level of expectation that they'll be delivered. That thing you promised to Bill last week is a commitment.

We think this method is a handle that can be cranked to generate task priorities:
  • Put each of the factors as columns in a table.
  • If you know some are not relevant, don't use them.
  • If you have context-specific factors, add them.
  • Put the tasks to be prioritised as rows. 
  • Use data where possible, and gut where not, to score each of the factors for each of the tasks. 
  • Unless there's a good reason not to, prefer simple numerical scoring (e.g. 1, 2, 3 for small, medium, large).
  • Try to have a consistent scoring scheme, e.g. low score for something more desirable/easier/better to do sooner.
  • Don't agonise over the scores.
  • When you're done, add a final column which combines the scores (e.g. simple addition as a starting point).
  • Sort your table by the scores. 
  • Your scores are your prioritisation.
  • The prioritisation you have created may not fit your intuition.
  • If so, wonder why.
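The cranking of the handle above can be sketched in a few lines of Python. The tasks, factors, and scores here are hypothetical examples, not taken from any real project:

```python
# A minimal sketch of the scoring table: factors as columns, tasks as
# rows, simple 1-3 scores, combined by addition, sorted ascending
# (lower total = more desirable, per the consistent scheme above).

factors = ["risk", "value", "cost", "urgency"]

# Hypothetical tasks with a score per factor: 1 = small, 2 = medium, 3 = large.
tasks = {
    "Fix login bug":    {"risk": 1, "value": 1, "cost": 1, "urgency": 1},
    "Refactor reports": {"risk": 2, "value": 2, "cost": 3, "urgency": 3},
    "Update docs":      {"risk": 3, "value": 2, "cost": 1, "urgency": 2},
}

def prioritise(tasks, factors):
    """Combine each task's factor scores by simple addition and sort ascending."""
    totals = {name: sum(scores[f] for f in factors)
              for name, scores in tasks.items()}
    return sorted(totals.items(), key=lambda item: item[1])

for name, total in prioritise(tasks, factors):
    print(f"{total:2d}  {name}")
```

A spreadsheet does the same job, of course; the point is only that the mechanism is mechanical, which is what frees you to spend your attention on the scores rather than the arithmetic.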

We think these are some possible reasons why:
  • You weren't right in your scoring. The table can help you to see this. Simply review the numbers. Do any look wrong now you have them all?
  • You weren't consistent in your scoring. The table can help you to see this too. Sort by each factor in turn.
  • You need to weight factors in the overall score. Perhaps the downside of a delay is really big so the urgency factor needs to dominate the overall score. 
  • You have factors that correlate. This is essentially also a weighting issue, and you can always remove a column if you think it is serving no particular value in the analysis.
  • You have missed an important factor. The order you have feels wrong. What factor should be here but isn't?
  • Your intuition is wrong. Perhaps you have uncovered a bias? Well done!
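The weighting idea can be sketched in the same way. Again, the weights, tasks, and scores are made-up examples: here urgency is given three times the weight of the other factors, so a very urgent task can outrank an otherwise cheaper one:

```python
# A sketch of weighting one factor in the combined score, assuming the
# same hypothetical 1-3 scoring scheme as above.

weights = {"risk": 1, "value": 1, "cost": 1, "urgency": 3}  # urgency dominates

tasks = {
    "Fix login bug":    {"risk": 1, "value": 1, "cost": 1, "urgency": 3},
    "Refactor reports": {"risk": 2, "value": 2, "cost": 3, "urgency": 1},
}

def weighted_total(scores):
    """Multiply each factor's score by its weight before summing."""
    return sum(weights[f] * scores[f] for f in weights)

# Sort ascending by weighted total, as with the unweighted table.
for name, scores in sorted(tasks.items(), key=lambda kv: weighted_total(kv[1])):
    print(weighted_total(scores), name)
```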

Once you've got an idea why your intuition and the prioritisation you have don't match, update the table and rescore.

We think a couple more factors are relevant, but in a different way to the others:
  • Pragmatism
  • Politics
Pragmatism says that you should spend a proportionate amount of time on prioritising. In general you might also want to ask whether this is the right list of tasks to be prioritising at all, but that's not for now.

Politics says that there may be reasons outside of reason which determine the work that gets done, who does it, and when. If you suspect that, then perhaps you should do something else ahead of prioritising these tasks.
Image: https://flic.kr/p/debvm

Tuesday, April 9, 2019

Seeing Essence


George Dinwiddie recently delivered a webinar, Distilling the Essence, on the topic of crafting good examples of acceptance criteria. From the blurb:
When creating our scenarios, we want to fully describe the desired functionality, but not over-describe it ... Which details should we leave visible, and which should we hide? ... [We will see how we can] distil our scenarios down to their essence and remove distracting incidental details.

I loved it and, naturally, wondered whether I could distil the essence of it. Here goes:
  • Not just examples, but essential examples.
  • An essential example promotes shared understanding, testability, clarity of intent.
  • Remove incidental details; they obscure the important.
  • Highlight essential details; they have the most communicative value.
  • Essential details don't change with user interface, workflow, implementation technology.
  • To help: name scenarios, abstract specifics, note explicit rules, conditions, and boundaries.
  • Bottom-up is OK; you can iterate from the specific to the essence.
  • Don't extract too much; totally generic is probably worse than too specific.

If that seems short, the webinar itself is admirably only about 15 minutes long, and that's mostly George giving worked examples of the approach.
Image: https://flic.kr/p/7JCEQD

Sunday, March 31, 2019

Cash Money


I laughed out loud when I saw this sign on the merch stand at The Johnny Cash Roadshow gig in Bury St Edmunds yesterday.

At face value, it's straightforward, right? You can only pay with good, old-fashioned, money.

But the Roadshow are a tribute act and so also play Cash only.

Except they don't: they covered Wildwood Flower (June Carter) and Flowers on the Wall (Statler Brothers).

But those artists have strong connections to Johnny Cash so, yeah, still Cash only.

Except the band also played an original song by their own Clive John.

But it was a song he was asked to write, "in the style of Johnny Cash," so just about Cash only.

Except that several of Clive's CDs, also containing original material, are available on the merch stand.

But he's Johnny Cash in the band so perhaps still tenuously Cash only?

Except nothing on the table is actually by Johnny Cash.

[Repeat to fade.]

I strongly believe that there's value, as a tester, in being able to find alternative interpretations. The key thing, in my experience, is deciding how much time and effort to spend looking for them, which ones are likely to expose a relevant risk, and when to bring them up with the relevant people.