Monday, June 11, 2018

Traitment Options


"Mastering the twelve traits that trap us." That's the subtitle of The Coach's Casebook by Geoff Watts and Kim Morgan, and the traits in question include impostor syndrome, fierce independence, and perfectionism. Varied as they are, they share a few characteristics: they can become problematic for some of those who have them, they have potential upsides, and the authors believe that most people will experience some sense of at least one of them.

There are, I estimate, about ... erm ... ten trillion books on coaching out there, but a few nice touches set this one apart from others that I've seen. The first is the chapter structure: each chapter starts with a case study of a composite character (based on Watts and Morgan's experience) suffering from one of the traits, is followed by a set of exercises that might be used with others who have that trait, and ends with an interview with someone famous who is said to exhibit the trait, sometimes harnessing it to their advantage.

Looking through these three lenses allows different aspects of the problem, and possible approaches to it, to become apparent. In the case studies it was often the personal interactions that I found interesting: when the coach was silent, chose to ask (or not ask) a particular question, or chose when (as in the case of Richard, who suffered from "ostrich syndrome") to be provocative. The coaching techniques and questions are what they are, useful to have in the back pocket, although I would have liked suggestions on directions to take depending on the results. The interviews present the perspective of someone who has lived with a trait and frequently offer evidence that it can be overcome or put to good use, in moderation.

The second nice touch is the coaching of the coach (the term used is supervision) in the case studies. In these sessions, the coach gets an independent perspective on, and is sometimes challenged about, their interactions with the client. In these sections the uncertainty, biases, and fallibility of the coach are laid bare. Coaches can use coaches.

The final piece that helps this book to stand out is a summary matrix at the end which collects all of the approaches suggested during the book and cross-references them against the traits. Many techniques have value in multiple situations and the authors have done well to avoid repeating themselves across the chapters.

I have a few minor quibbles: I might question whether the twelve traits are indeed traits (even the authors admit that the chapter on coping with loss is an outlier), or whether these are the traits that trap us rather than some traits that can trap some of us at some times, and I would have liked some discussion of ways to arrive at, or confirm, the coach's "diagnosis" of a given trait. I would also have loved the case studies to show some alternative paths for the same client and, given that they are fictionalised, perhaps also to show the interactions from the perspective of the client.

But these are relatively minor for me. I think this book will be a useful reference, I've already used bits of it, and I've ordered a copy for my team.
Image: Amazon

Wednesday, May 23, 2018

Cambridge Lean Coffee


This month's Lean Coffee was hosted by Linguamatics. Here are some brief, aggregated comments and questions on topics covered by the group I was in.

As a developer, how can I make a tester's job easier?

  • Lots of good communication.
  • Tell us about the test coverage you already have.
  • Tell us what it would be useful for you to know.
  • Tell us what you would not like to see in the application.
  • Tell us what is logged, where, why, when.
  • Tell us what the log messages mean.
  • Tell us how you think it's supposed to work.
  • Show us how you think it's supposed to work.
  • Give us feedback on our testing - what's helping, what isn't.
  • Offer to demonstrate what you've done.
  • Say what you think are the risky areas, and why.
  • Say what was hard to get right, and why.
  • Recognise that we're not there to try to beat you or show you up.
  • Help us find our unknown unknowns by sharing with us.

How can we help you, as a developer?

  • Give good repro steps in your reports.
  • Help me to understand the ambiguous requirements.
  • Ask your questions, they really do help.
  • Don't accuse.
  • Have specific details of the issues you observed.
  • Understand that developers can feel defensive of their work.
  • Tell me when you see something good, or something that works.

How do you avoid or mitigate biases?

  • Look back and review what you did, critically.
  • Check your assumptions or assertions.
  • Ask a developer.
  • Externalise your assumptions.
  • Peer review.
  • Rubber ducking.
  • Write (but don't send) an email to someone who might know. (Like rubber ducking.)
  • Do something else for a bit and come back.
  • Be aware of the kinds of biases there are and then you can check for them.
  • Rule of three helps to generate perspectives.
  • Write down what you did, as this prompts thoughts about it.
  • Compare what you did to something else you could have done.
  • Remember to say "my assumption is" or "but perhaps that's my bias" out loud.

Should all testers know a programming language and, if so, which one?

  • It can help, e.g. to review code changes.
  • It can help with other aspects of testing, e.g. data generation (there's a small sketch of what I mean after this list).
  • Which language? Shell, because it's almost always just there. 
  • I like Python.
  • Should is a strong verb. Understanding the need would help to answer.
  • Testers shouldn't feel forced to learn a programming language ...
  • ... but they should understand the risks of not doing so (e.g. in recruitment, or by lacking a powerful tool).
  • It helps with software development jobs generally.
  • So does other technical knowledge, e.g. of HTTP.
  • Reading a language helps in testing.
  • There's an analogy to speaking English in a foreign country - it helps to have a bit of the local language.
  • Enables more empathy with the developer - probably less of the "we'll just fix that" mentality.
  • When recruiting, I don't care about the language.
  • It's best when there's support and a community to help learn programming.
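
To make the data generation point above concrete, here's a minimal Python sketch of the kind of helper a little programming knowledge lets you knock up; the field names, awkward values, and output file are all invented for the example, not anything from the meetup.

```python
# A small, hypothetical sketch: generate a CSV of user records that mixes
# ordinary values with awkward ones (empty fields, long strings, non-ASCII).
# Field names, awkward values, and the output file name are invented.
import csv
import random
import string

AWKWARD_NAMES = ["", "O'Brien", "很長的名字", "a" * 255, " leading space"]

def random_name(length=8):
    return "".join(random.choice(string.ascii_letters) for _ in range(length))

with open("test_users.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "name", "email"])
    for i in range(100):
        # Every tenth row gets an awkward value to probe edge cases.
        name = random.choice(AWKWARD_NAMES) if i % 10 == 0 else random_name()
        writer.writerow([i, name, f"user{i}@example.com"])
```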

What testing tool would you like to be able to wave a magic wand and just invent?

  • A universal standard for test case management and reporting software.
  • A tool to create a visual map of product architecture which can be overlaid with code changes, all aspects of testing, recent issues.
  • ... and also predict new issues!
  • Something that can reliably map specification items to test cases, and show what needs to be changed when the spec changes.
  • A reliable, stable GUI testing tool.
  • Something that puts a team straight into the sweet spot of great communication, talking directly, working with each other.
  • A Babel fish that means that the listener understands exactly what was meant by the speaker.

Do you bring questions to Lean Coffee or make them up when you're here?

  • Both.
  • I think about them in the shower.
  • I try to come with one then make some up.
  • I take notes during the month, then try to remember them on the way.
  • I think on the way in.
  • Would some kind of a board, perhaps Trello, be useful to store them between meet ups?
  • Perhaps it'd lead to discussion in Trello?
  • Perhaps people would come prepared with arguments for the issues they'd seen on Trello.
  • Topics might be stale by the time the meetup comes around.
  • Spontaneous is good!
  • Person-to-person is good!
  • Paper and pencil mean you think differently.

Wednesday, May 16, 2018

UX or Ex-Users


It seems like yonks ago that I stumbled across Steve Krug's book Rocket Surgery Made Easy and loved it, so I eagerly took the chance to see him talk at the Cambridge Usability Group in a double-header with Andy Morris.

In the first half, Andy talked about how Onshape try to engage and empower and delight and retain users while at the same time gathering data that the company can use in their design and support efforts. In the second half, Steve reprised some of the key points on usability testing from Rocket Surgery.

Tuesday, May 15, 2018

You Are Not Alone


Abstracta's recent review of Hiccupps for their 75 Best Software Testing Blogs list says "James [shares] learnings from events and fun sketchnotes he makes." The learnings here are from this week's Cambridge Tester meetup at Linguamatics and, while the notes might be fun, the subject matter is less so.

First up, Chris Kelly previewed his Testbash Dublin talk, The Anxious Tester, a story of how his anxiety has affected his work as a tester and some suggestions for fellow sufferers, those around them, and those they work for.


That was followed by a video of The Fraud Squad where Claire Reckless presented background material on impostor syndrome, talked about her personal experience of it, and gave advice for supporting oneself or others when the unfounded fear of being found out hits.

These are timely topics during Mental Health Awareness Week, and it's worth noting that the speakers shared a recommendation for anyone experiencing difficulties: remember that you are not alone.

Sunday, May 13, 2018

Tomorrow Never Nose

At CEWT #3, back in October 2016, I presented a definition of testing I had been toying with, a definition which later became this:
Testing is the pursuit of relevant incongruity
After hearing me out, one of the participants asked a strong, strong question: did I think my definition of testing could also define something else? I love this. It's a way to test the explanatory power of the proposal. On the day, I think I said that I thought it could also be a description of science.

Yesterday, I read a review of The Happy Brain by Dean Burnett in which Katy Guest said:
He rattles through studies, building a picture of what exactly tickles the human brain and why ... Laughter, it turns out, may originate among the temporal, occipital and parietal lobes, whose role is to "detect and resolve incongruity".
Bzzzzzttttt!!!!! Arrooogggaaa!!!! Honk! Honk! And with a jolt of recognition, I realised, only 18 months after the fact, that I would also say my definition could describe joking.

I get a rush from finding unexpected connections in unexpected places and this is a particularly intense high because at EuroSTAR 2015 I aligned testing with joking (and science) by appealing to the similarity of the aha! and haha! moments and an incongruity theory of humour.

But so what? Well, for me, this episode is a nice self-reminder that conclusions are relative. What you think you know is contingent on variables including you, the context, the data, and the time. Like me, on this occasion, you might not see even what's right under your nose until tomorrow, or perhaps never.
Image: Recordmecca

Saturday, May 5, 2018

Testing and Checklists


Our team's book club at Linguamatics is looking at The Checklist Manifesto. I found it exceptionally readable, commendably short, and very direct about its key message, which I'd summarise as something like this: checklists can be extremely valuable, take care when writing them, and use them to free people up rather than tie them down.

For fun, I thought I'd try to extract a small set of checklists for checklists.

  • what problem are you trying to solve? (e.g. whose perception, desire, situation)
  • what kind of problem is it? (e.g. simple, complicated, complex)
  • what kind of list do you need? (e.g. doing, reviewing)
  • what kind of items do you want? (e.g. actions, communications)
  • what kind of triggers do you have? (e.g. start of a task, decision point, review of result)

  • can you identify any critical items? (keep them)
  • can you assume list users will just do any of the items? (remove them)
  • can you leave room for judgement? (probably a good idea)
  • can you simplify the language? (you probably can)
  • can you clarify the layout? (you probably can)

  • trial the list. (in real-world situations)
  • take feedback. (from everyone)
  • refine the list. (for the end users)
  • maintain the list. (because it must keep up with its context)
  • treat the list as a tool. (like any other)
And Karo has just started a discussion on the book over at The Club.
Image: Amazon

Wednesday, May 2, 2018

Heuristics for Working: Doing


For a while now I've been collecting fieldstones on the topic of heuristics for working. Some of these are things that I've said to others, some of them are things that I've thought about when considering some aspect of myself or how I work, and others have come from books I've read, talks I've attended, and workshops I've participated in.

I've made a handful of rough categorisations and I'll put each set in a post under the tag Heuristics for Working.

But what do I even mean by heuristics for working? Good question. I mean rules of thumb for situations that arise in the workplace. They are bits of advice that can be useful to consider but don't offer any guarantees and will not always apply.

The collection is surely idiosyncratic, context-sensitive, and perhaps too specific and too general in turn. Welcome to my head. I haven't sat down and tried to elaborate or enumerate more, or to fill the gaps. Everything here has arisen and been noted in the moment, although a good chunk of it is stuff that I've thought about in the past too.

Of course, having heuristics doesn't mean that I remember to use them, or pick a reasonable one when I do remember, or make a good choice when I have remembered and picked a reasonable one. That's part of the rich tapestry, isn't it? At least externalising them and listing them gives me an opportunity to try to understand and maybe change the way I work, the way I am biased, or the way I want to be.

Along the way, I got to wondering if there's one overriding heuristic, one heuristic to rule them all, a meta heuristic. If there were, I think it might run along these lines:
Question your heuristics.
I hope there's something interesting and perhaps even helpful here for you.

--00--

Remember that sometimes good work leads to bad outcomes.

Remember that sometimes bad work leads to good outcomes.

Wonder how you are doing this piece of work right now.

Don't expect perfection.

Ask what factors are relevant.

Ask how you can best make them obvious to those who need to know them.

When writing: start small, aggregate, edit.

Ask "what is the thing"? (The absolute key point, the heart of the matter, the nub of the problem.)

Can you draw a picture or a table rather than keep talking or writing?

What's the simplest explanation? (What is the thing of this thing?)

What's the simplest model that fits well enough for now?

Where does this thing fit into a universe of things? What dimensions are appropriate to map the universe? Can I draw it?

Give yourself a chance to review something before it goes out.

Deliberately take two bites at the cherry: get first thoughts down right now. Wait. Review and think again.

Organise your work as early as makes sense:

Use meaningful names for your data, files, folders.

"new", "latest", "new_2", "other_way" and so on are not meaningful names.

"windows10_java_18u34_english_firefox_UTF8_JS-disabled", "windows10_java_18u34_english_firefox_UTF8_JS-enabled" are meaningful names in a scheme likely on a path to a nightmare.

When scenarios get complicated use identifiers for your data, files, folders and map a table of variables to them.

You will not remember tomorrow that "new36" is the input that generated "out_latest".
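
As a sketch of what mapping a table of variables to identifiers might look like in practice, here's a small Python example; the scenario identifiers, variables, and file names are made up for illustration.

```python
# A hypothetical sketch: keep a table of scenario identifiers against the
# variables that define them, instead of encoding everything in file names.
# The identifiers, variables, and paths here are invented for illustration.
import csv

scenarios = {
    "S001": {"os": "windows10", "locale": "english", "js": "enabled"},
    "S002": {"os": "windows10", "locale": "english", "js": "disabled"},
}

# Write the mapping alongside the data so "S001" still means something tomorrow.
with open("scenarios.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["id", "os", "locale", "js"])
    writer.writeheader()
    for scenario_id, variables in scenarios.items():
        writer.writerow({"id": scenario_id, **variables})

# Inputs and outputs can then borrow the identifier: S001_input.xml, S001_output.log
```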

Store related work in related places. Ideally the same place.

Don't store unrelated work in that place.

Be consistent about how you name things.

Be consistent about how you record things.

If you're not inventing process each time, you are freer to be doing the work.

Break work down into small chunks and check each chunk.

Ask how you could get evidence that you've done what you wanted to.

Ask what would be true if you had done the right thing.

If you don't know how to check whether what you're about to attempt worked, ask yourself whether doing it now is the right move.

Apply your testing skills to yourself and your work.

When you have an idea, don't rely on yourself to remember it. Externalise it.

Collect notes in the places they'll be useful.

Think big, act small.

Seek the smallest piece of work that provides a benefit and doesn't actively contradict the overall mission.

Know the mission.

Note when the mission changed.

Change the mission consciously.

Seek the same result in a cheaper way (for some relevant measure of cost).

Seek better results at the same cost (for some relevant measure of goodness).

Find a way to iterate quickly.

Plan. Do. Check. Act.

Ask whether you can script it if it's regular and repeated.

Ask whether semi-automation is enough of a win at sufficiently small cost.
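
For instance, a minimal sketch of semi-automation in Python might script the mechanical part of a repeated check but leave the judgement to a person; the command and report file here are assumptions for the example, not a real tool.

```python
# A hypothetical sketch of semi-automation: script the repeated, mechanical
# steps, keep a human in the loop for the judgement call. The command and
# report path are invented for this example.
import subprocess
from pathlib import Path

def run_and_capture(command, report_path):
    """Run a repeated check and save its output for a person to review."""
    result = subprocess.run(command, capture_output=True, text=True)
    Path(report_path).write_text(result.stdout + result.stderr)
    return result.returncode

if __name__ == "__main__":
    code = run_and_capture(["echo", "pretend this is the nightly check"],
                           "check_report.txt")
    # The script stops short of deciding pass or fail: that's the reviewer's call.
    print(f"Exit code {code}; review check_report.txt before signing off.")
```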

Find an analogy to your problem and look for a solution in that space.

Search for a tool or someone else's solution.

Ask where you're trying to get to.

Ask whether there is a direct route from here to there, and whether there are any other routes.

Can you work backwards from there and forwards from here?

Challenge yourself.

Gain credibility by doing credible things.

Gain credibility by doing things credibly.

Gain credibility by talking from experience.

From time to time put yourself outside your comfort zone. Watch how you react.

Wonder how you could react differently next time.

Work to make sure there is a next time.

Image: 45cat