Exploring It!

This week the test team at Linguamatics held our first internal conference. There was no topic, but three broad categories could be seen in the talks and workshops that were given: experience reports, tooling, and alternative perspectives on our work. (The latter included the life cycle of a bug, and psychology in testing.) My contribution was an experience report looking at how I explore both inside and outside of testing. I've tidied up some of my notes from the prep for it below.

There are testing skills that I use elsewhere in my life. Or perhaps there are skills from my life that I bring to testing. Maybe I'm so far down life's road that it's hard to tell quite what started where? Maybe I'm naturally this way and becoming a tester with an interest in improvement amped things up? Maybe I've so tangled up my work, life, and hobby that deciding where one starts and another ends is problematic?

The answer to those questions is, I think, almost certainly "yes".

Before I start I need to caveat what I'm about to say. I'm going to describe some stuff that I do now. It's not all the stuff that I do, and it's certainly not all of the stuff that I've done, and I'm definitely not saying that you should do it too. It'd be great if you can take something from it, but this is just my report.

Exploring in the Background

When I say background research, I mean those times when I'm not actively engaged in looking up a particular topic. I have a couple of requirements for background research: I'm interested in keeping up with what's current in testing, my product's domain, and related areas, including what new areas there are to keep up with; and I'm interested in what some specific people have to say about the same things.

One of the tools that I use for this is Twitter. I scan my timeline a few times a day, often while I'm in the kitchen waiting for a cuppa to brew. I'll scroll through a few screenfuls, looking for anything that catches my eye. This is where happenstance, coincidence, and synchronicity come into play. Sometimes — often enough that I care to continue — I'll find something that looks like it might be of interest: a potential new spin on a topic I know, someone I trust talking about a topic I've never heard of, or something that sounds related to a problem I have right now. When I see that, I message the tweet to myself for later consumption.

I also maintain lists. One of them has my Linguamatics colleagues on it and I'm interested in what they have to say for reasons of business and pleasure. Because there aren't many people on that list and because I'm not worried about losing data off the bottom of it (as in the timeline), I'll check this less frequently. When you see me retweet work by testers on my team, I've usually come across it when scanning that list.

I do something similar with Feedly for blogs, although there I have more buckets: Monitor (a very small number of blogs I'll read every day if they have posts), Friends and workmates (similar to my Twitter list) which I'll try to look at a couple of times a week, Testing (a list of individual blogs) that gets attention once a week or so, and Testing Feeds (a list of aggregators such as the Ministry of Testing feed) which I'll skim less frequently still. Blogs move in and out of these lists as I discover them or the cost of looking outweighs the value I get back.

I can map this back to testing in a few ways. On one recent project I was trying to get to grips with an unfamiliar distributed system. There were four components of interest, and I wanted to understand how communication between them was co-ordinated. I found that they each had logs, and I identified the aspects of the logs that I cared about and found ways to extract them easily. I then turned the log level up as high as it would go in every component and ran the system.

This gave me the same kind of split as I have on Twitter: carefully curated material that I know I want to see all of, and a firehose of other material that I'll never see all of but that could have something interesting to show me. In the case of the application I was testing, I could search the logs for interesting terms like error, warning, fatal, exception and so on. I could also scan them a page at a time to see if anything potentially interesting appeared, and I could go directly to a timestamp to compare what one component thought was happening with another.
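The search-plus-skim approach can be sketched in a few lines. The log format, component names, and lines below are invented for illustration; they don't come from any particular system:

```python
import re

# Hypothetical log lines, as if gathered from several components.
log_lines = [
    "2024-05-01T09:15:02 component-a INFO  heartbeat sent",
    "2024-05-01T09:15:03 component-b ERROR connection refused",
    "2024-05-01T09:15:04 component-c WARNING retrying request",
    "2024-05-01T09:15:05 component-a INFO  heartbeat acknowledged",
]

# Terms worth flagging when skimming a firehose of log output.
INTERESTING = re.compile(r"error|warning|fatal|exception", re.IGNORECASE)

# The curated view: only lines matching the terms I know I care about.
hits = [line for line in log_lines if INTERESTING.search(line)]
for line in hits:
    print(line)
```

The full, unfiltered `log_lines` remains available for page-at-a-time scanning; the filtered `hits` is the curated subset, analogous to the Twitter list.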

  • I decide what I want, how much time and effort I'm prepared to put in, and which tools I'll use.
  • I curate the stuff I must have and leave a chance of finding other stuff.
  • I change my sources frequently, trying new ones and retiring old ones.

Exploring Ideas

When I finally read Weinberg on Writing: The Fieldstone Method I was struck with how similar it was to the working method I'd evolved for myself. Essentially, Weinberg captures thoughts, other people's quotes, references, and so on using index cards which he carries with him. He then files them with related material later. When he comes to write on a topic, he's got a bunch of material already in place, rather than the blank emptiness of a blank empty page staring blankly back at him, empty.

I work this way too. The talk that this essay is extracted from started as a collection of notes in a text file. Having decided on the topic, I'd drop a quick sentence, or a URL, or even just a couple of words, into the file whenever anything that I thought could be relevant occurred to me. After a while there was enough to sort into sections and then I started dropping new thoughts into the appropriate sections. When it came time to make slides, I could see what categories I had material in, review which I was motivated to speak about, and choose those that I thought would make a good talk.

It's a bonus that, for me, having some thoughts down already helps to inspire further thoughts.

I craft the material into something more like its final form (slides, or paragraphs as here) and can then challenge it. I've described this before but it's so powerful for me that I'll mention it again: I write, I dissociate myself from the words, I challenge them as if they're someone else's, and then I review and repeat. This is exactly the way that When Support Calls, my series of articles for the Ministry of Testing, was written recently.

It's also exactly the way I wrote the new process at work for the internal audits we've just started conducting to help us to qualify for some of the healthcare certifications we need. In the first round of audit, while I was learning how to audit on the job, I noted down things that I thought would be useful to others, or to me next time. Once I had a critical mass of material I sorted it into chunks, added new thoughts to those chunks, and iterated it into first-draft guidance documentation and checklists.

  • I collect thoughts as soon as I have them, in the place where I'll work on whatever it is.
  • When I go to work in that area, I'll usually have some material ready to go, and that spurs new thoughts.
  • For me, writing is a kind of dialogue that helps me to find clarity and improvement.

Exploring My Own Behaviour

There's any number of triggers that might change the way I do something. Here are a few:

  • it hurts in some way, takes too long, or is boring.
  • it upsets someone that I care not to upset.
  • it was suggested by someone whose suggestions I take seriously.
  • it is something I fancy trying.

Once I've decided to change I explore that change with the Plan, Do, Check, Act cycle. In Planning I'll observe the current situation and work out what change I'll try; in Doing I'll make that change, usually an incremental one, and gather data as I go; when Checking I'll be comparing the data I gathered to what I hoped to achieve; and finally I'll Act to either accept the new behaviour as my default, or go round the cycle again with another incremental change.

I do this regularly. Some recent changes that I've achieved include learning to type with relatively traditional fingering to help me to overcome RSI that I was getting by stretching my hands in weird ways on the keyboard.

For some while I've been avoiding asking a very direct "why?" and instead asking a less potentially challenging "what motivated that?" or "what did you hope to achieve?" That's going pretty well (I feel).

I've also spent a lot of time ELFing conversations, where ELF stands for Express, Listen, and Field, a technique that came out of the assertiveness training we did last year.

When I commit to a change, I'll often try to apply it consciously everywhere that I sensibly can. I don't wait for the perfect opportunity to arrive, I just dive in. This has several benefits: (a) practice, (b) seeing the change at work in the places it should work, and (c) seeing how it does in other contexts. These latter two are very similar in concept to the curation-synchronicity pairs that I talked about earlier.

I was interested to explore how developers might feel when being criticised by testers and thought that a writer being edited might be similar. So I went out of my way to get commissioned to write articles. I felt like I generally did OK when someone didn't like my work (though I've had an awful lot of experience of being told about my failings by now) but there are still particular personal anti-patterns, things that trigger a reaction in me.

Hearing opinion stated as fact is one of them. I saw this from my editors and had to find ways to deal with my immediate desire to snap something straight back. (Thank you ELF!)

In turn, when criticising software, I strive to use safety language. If I'm complaining about the appearance of a feature, say, I want to avoid saying "this looks like crap" and instead say "this doesn't match the design standards we have elsewhere and I cite X, Y, Z as examples".

But there have also been occasions where I have failed to change, or failed to like the change I made (so far). I have been on a mission to learn keyboard shortcuts for some time, and with some success. In general, I don't want the mouse to get in the way of my mind interacting with the product when I'm working or when I'm testing. However, I have completely failed to get browser bookmark bar navigation into my fingers.

I've been trying to avoid diving straight in with answers when asked (hey, I like to think I'm a helpful chap!) and instead leave room for my questioner to find an answer for themselves (when that's appropriate). Yet I still find myself offering suggestions earlier than I intend to.

I've also been sketchnoting as a way to force myself to listen to talks differently. It's certainly had that effect, and I've also learned that talks of 10 minutes or less are hard for me to sketch, which means that my notes from the CEWT that's just gone are not wonderful. But the reason I don't class it as a success yet is that I feel self-conscious doing it.

  • I think about what I'm doing, how I'm doing it, and why.
  • I commit to what I want to achieve by trying what I've planned at every opportunity.
  • I review what happened, honestly, with data (which can be quantitative or qualitative).


I think these three kinds of exploration share some characteristics, and they apply equally to my testing:

  • I like to know my mission and if I don't know it then finding it often becomes my mission. 
  • I like to give myself a chance to find what I’m after but also leave myself open to find other things.
  • I like to keep an eye out for potential changes, and that means monitoring what and how I'm doing as well as the results of it.

A side-effect of the kind of approach I'm describing here is that it promotes self-monitoring generally. Even without changes in mind, watching what I do can have benefits, such as spotting patterns in the way that I work that contribute to good results, or bad ones.

To finish, then, a quote that popped up in my timeline while I was making some tea and thinking about this talk. (And it ended up in my notes file, natch.) It's by George Polya, from a book called How to Solve It:
The first rule of discovery is to have brains and good luck. The second rule of discovery is to sit tight and wait till you get a bright idea.
I think that sitting tight is OK, but also that our actions can prompt luck and ideas. And, through exploration, I choose to do that.


  1. Glad you found my tweet inspiring!

    I think you should try what Polya suggests, and wait. Of course not passively: sitting tight involves being attentive to the luck and possibility of something new popping up, but I interpret Polya's advice to say that having the courage to let the idea shape itself is important.

    But now that you like exploration, try googling Default Mode Network. I think you'll like what you see. Add creativity to the search too.

  2. I'm so pleased you're finding my ELF technique of practical use, James. It's a great example of one of the ways it can be beneficial in handling a challenging communication - which of course applies both at work and beyond. I do hope your team continue to build on the Assertiveness Session learning as proactively as you do!

