Thursday, August 18, 2016

Fail Over

In another happy accident, I ended up with a bunch of podcasts on failure to listen to in the same week. (Success!) Here are a few quotes I particularly enjoyed.

In Failing Gracefully on the BBC World Service, David Mindell from MIT recalls the early days of NASA's Project Apollo:
The engineers said "oh it's going to have two buttons. That's the whole interface. Take Me To The Moon, that's one button, and Take Me Home is the other button" [but] by the time they landed on the moon it was a very rich interactive system ...
The ultimate goal of new technology should not be full automation. Rather, the ultimate goal should be complete cooperation with the human: trusted, transparent collaboration ... we've learned that [full autonomy] is dangerous, it's failure-prone, it's brittle, it's not going to get us to where we need to go.
And NASA has had some high-profile failures. In another episode in the same series of programmes, Faster, Better, Cheaper, presenter Kevin Fong concludes:
In complex systems, failure is inevitable. It needs to be learned from but more importantly it needs to become a conscious part of everything that you do.
Which fits nicely with Richard Cook's paper, How Complex Systems Fail, from which I'll extract this gem:
... all practitioner actions are actually gambles, that is, acts that take place in the face of uncertain outcomes. The degree of uncertainty may change from moment to moment. That practitioner actions are gambles appears clear after accidents; in general, post hoc analysis regards these gambles as poor ones. But the converse: that successful outcomes are also the result of gambles; is not widely appreciated. 
In the TED Radio Hour podcast, Failure is an Option, Astro Teller of X, Google's "moonshot factory", takes Fong's suggestion to heart. His approach is to encourage failure, to deliberately seek out the weak points in any idea and abort when they're discovered:
... I've reframed what I think of as real failure. I think of real failure as the point at which you know what you're working on is the wrong thing to be working on or that you're working on it in the wrong way. You can't call the work up to the moment where you figure out that you're doing the wrong thing failing. That's called learning.
He elaborates in his full TED talk, When A Project Fails, Should The Workers Get A Bonus?:
If there's an Achilles heel in one of our projects we want to know it right now not way down the road ... Enthusiastic skepticism is not the enemy of boundless optimism. It's optimism's perfect partner.
And that's music to this tester's ears.

Thursday, August 11, 2016

Understanding Testing Understanding

Andrew Morton tweeted at me the other day, asking whether being able to make a joke about something shows that you understand it.
I ran an on-the-spot thought experiment, trying to find a counterexample to the assertion "In order to make a joke about something you have to understand it."

I thought of a few things that I don't pretend to understand, such as special relativity, and tried to make a joke out of one of them. Which I did - a quip playing on special relativity and special relatives - and so I think I can safely say that the assertion doesn't hold.
Now this isn't a side-splitting, snot-shower-inducing, self-suffocating-with-laughter kind of a joke. But it is a joke, and the humour comes from the resolution of the cognitive dissonance that it sets up: the idea that special relativity could have anything to do with special relatives. (As such, for anyone who doesn't know that the two things are unrelated, this joke doesn't work.)

And I think that set-up is a key point with respect to Andrew's question: if I want to deliberately set up a joke then I need to be aware of the potential for that dissonance. That's roughly what I tweeted back.
Reading my reply back now, I'm still comfortable with that initial analysis, although I have more thoughts that I intentionally left off the Twitter thread. Thoughts like:
  • What do we mean by understand in this context?
  • I don't understand special relativity in depth, but I have an idea about roughly what it is. Does that invalidate my thought experiment?
  • What about the other direction: does understanding something enable you to make a joke about it?
  • What constitutes a joke?
  • Do we mean a joke that makes someone laugh?
  • If so, who?
  • Or is it enough for the author to assert that it's a joke?
  • ...
All things it might be illuminating to pursue at some point. But the thought that I've been coming back to since tweeting that quick reply is this: in my EuroSTAR 2015 talk, Your Testing is a Joke, I made an analogy between joking and testing. So what happens if we recast Andrew's original in terms of testing?
Does being able to test something show that you understand it?
And now the questions start again...
Image: https://flic.kr/p/i6Zqba

Tuesday, August 2, 2016

Know What?


I regularly listen to the Rationally Speaking podcast hosted by Julia Galef. Last week she talked to James Evans about Meta Knowledge, and here are a couple of quotes I particularly enjoyed.

When discussing machine learning approaches to discovering structure in data and how that can change what we learn and how we learn it:
James: In some sense, these automated approaches to analysis also allow us to reveal our biases to ourselves and to some degree, overcome them. 
Julia: Interesting. Wouldn't there still be biases built into the way that we set up the algorithms that are mining data? 
James: When you have more data, you can have weaker models.
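That last line deserves unpacking, and here's my take rather than the podcast's: with only a little data you have to lean on a strong model - a big built-in assumption, and so a place for bias to hide - whereas with lots of data an almost assumption-free model can find the structure on its own. A toy sketch in Python, every detail of which is invented for the illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def true_fn(x):
    # the hidden structure both models are trying to recover
    return np.sin(3 * x)

def knn_predict(x_train, y_train, x_query, k=5):
    # a "weak" model: no assumed shape, just average the k nearest neighbours
    return np.array([y_train[np.argsort(np.abs(x_train - q))[:k]].mean()
                     for q in x_query])

x_query = np.linspace(0, 2, 200)
for n in (20, 200, 2000):
    x = rng.uniform(0, 2, n)
    y = true_fn(x) + rng.normal(0, 0.3, n)

    # a "strong" model: insist that the world is a straight line
    slope, intercept = np.polyfit(x, y, 1)
    linear_err = np.mean((slope * x_query + intercept - true_fn(x_query)) ** 2)
    knn_err = np.mean((knn_predict(x, y, x_query) - true_fn(x_query)) ** 2)
    print(f"n={n:5d}  straight-line error={linear_err:.3f}  kNN error={knn_err:.3f}")
```

The straight line's error barely improves however much data you give it - its bias is baked in - while the weak model's error keeps falling as the data grows.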
When discussing ambiguity and how it impacts collaboration:
James: I have a recent paper where we explore how ambiguity works across fields ... the more ambiguous the claims ... the more likely it is for people who build on your work to build and engage with others who are also building on your work ...
Really important work often ends up being important because it has many interpretations and fuels debates for generations to come ...  It certainly appears that there is an integrating benefit of some level of ambiguity.
Image: https://flic.kr/p/cXJ31N 

Friday, July 29, 2016

Seven Sees

Here's the column I contributed this month to my company's internal newsletter, Indefinite Articles. (Yeah, you're right, we're a bit geeky and into linguistics. As it happens I wanted to call the thing My Ding-A-Ling but nobody else was having it.) 

When I was asked to write a Seven Things You Didn't Know About ...  article ("any subject would be fine" they said) I didn't know what to write about. As a tester, being in a position of not knowing something is an occupational hazard. In fact, it's pretty much a perpetual state since our work is predominantly about asking questions. And why would we ask questions if we already knew? (Please don't send me answers to this.)

Often, the teams in Linguamatics are asking questions because there's some data we need to obtain. Other times we're asking more open-ended, discovery-generating questions because, say, we're interested in understanding more about why we're doing something, exploring the ramifications of doing it, or wondering what might make sense to do next. I'm sure you can think of many others.

We ask these kinds of questions of others and of ourselves. And plenty of times we will get answers. But I've found that it helps me to remember that the answers - even when delivered in good faith - can be partial, biased, flawed, and even wrong. And, however little I might think it or like it, the same applies to my questions.

We are all subject to any number of biases, susceptible to any number of logical fallacies, influenced by any number of subtle social factors, and better or worse at expressing the concepts in our heads in ways that the people we're talking to can understand. And so even when you think you know something about something, there's likely to be something you don't know about the something you think you know about that something.

To help with that, here's a list of seven common biases, oversights, logical fallacies and reasoning errors that I've seen and see in action, and have perpetrated myself:

Thursday, July 28, 2016

It's Great When You're Negate... Yeah

I'm testing. I can see a potential problem and I have an investigative approach in mind. (Actually, I generally challenge myself to have more than one.) Before I proceed, I'd like to get some confidence that the direction I'm about to take is plausible. Like this:

I have seen the system under test fail. I look in the logs at about the time of the failure. I see an error message that looks interesting. I could - I could - regard that error message as significant and pursue a line of investigation that assumes it is implicated in the failure I observed.

Or - or - I could take a second to grep the logs to see whether the error message is, say, occurring frequently and just happens to have occurred coincident with the problem I'm chasing on this occasion.

And that's what I'll do, I think.
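In script form, that second's worth of checking might look something like this - a minimal sketch in which the log path, the line format and the error message are all invented for the illustration:

```python
from datetime import datetime, timedelta

# All hypothetical: the log file, its one-timestamp-per-line format,
# the suspicious message, and the time I observed the failure.
LOG = "system_under_test.log"
SUSPECT = "ERROR: connection reset by peer"
FAILURE_AT = datetime(2016, 7, 28, 14, 3, 0)

hits = []
with open(LOG) as f:
    for line in f:
        if SUSPECT in line:
            # assume each line starts with an ISO timestamp, e.g.
            # 2016-07-28T14:02:57 ERROR: connection reset by peer
            hits.append(datetime.fromisoformat(line.split()[0]))

near = [h for h in hits if abs(h - FAILURE_AT) <= timedelta(minutes=5)]
print(f"{len(hits)} occurrences in the whole log, "
      f"{len(near)} within five minutes of the failure")
```

If the message turns out to be all over the log, it's probably background noise and I can drop that line of investigation; if it clusters around the failure, it looks worth pursuing.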

James Lyndsay's excellent paper, A Positive View of Negative Testing, describes one of the aims of negative testing as the "prompt exposure of significant faults". That's what I'm after here. If my assumption is clearly wrong, I want to find out quickly and cheaply.

Checking myself and my ideas has saved me much time and grief over the years. Which is not to say I always remember to do it. But I feel great when I do, yeah.
Image: Black Grape (Wikipedia)

Tuesday, July 26, 2016

A Glass Half Fool

While there's much to dislike about Twitter, one of the things I do enjoy is the cheap and frequent opportunities it provides for happy happenstance.
Without seeing Noah Sussman's tweet, I wouldn't have had my own thought: a useful thought for me, a handy reminder to myself of what I'm trying to do in my interactions with others, captured in a way I had never considered before.

Thursday, July 21, 2016

Go, Ape



A couple of years ago I read The One Minute Manager by Ken Blanchard on the recommendation of a tester on my team. As The One Line Reviewer I might write that it's an encouragement to do some generally reasonable things (set clear goals, monitor progress towards them, and provide precise and timely feedback) wrapped up in a parable full of clumsy prose and sprinkled liberally with business aphorisms.

Last week I was lent a copy of The One Minute Manager Meets the Monkey, one of what is clearly a not insubstantial franchise that's grown out of the original book. Unsurprisingly perhaps, given that it is part of a successful series, this book is similar to the first: another shop floor fable, more maxims, some sensible suggestions.

On this occasion, the advice is to do with delegation and, specifically, about managers who pull work to themselves rather than sharing it out. I might summarise the premise as:
  • Managers, while thinking they are servicing their team, may be blocking them.
  • The managerial role is to maximise the ratio of team output to managerial effort.
  • Which means leveraging the team as fully as possible.
  • Which in turn means giving people responsibility for pieces of work.

And I might summarise the advice as:
  • Specify the work to be done as far as is sensible.
  • Make it clear who is doing what, and give work to the team as far as is sensible.
  • Assess risks and find strategies to mitigate them.
  • Review on a schedule commensurate with the risks identified.

And I might describe the underlying conceit as: tasks and problems are monkeys to be passed from one person's back to another. (See Management Time: Who’s Got the Monkey?)  And also as: unnecessary.

So, as before, I liked the book's core message - the advice, to me, is a decent default - but not so much the way it is delivered. And, yes, of course, I should really have had someone read it for me.
Image: Amazon