This week the test team at Linguamatics held our first internal conference. There was no topic, but three broad categories could be seen in the talks and workshops that were given: experience reports, tooling, and alternative perspectives on our work. (The latter included the life cycle of a bug, and psychology in testing.) My contribution was an experience report looking at how I explore both inside and outside of testing. I've tidied up some of my notes from the prep for it below.
There are testing skills that I use elsewhere in my life. Or perhaps there are skills from my life that I bring to testing. Maybe I'm so far down life's road that it's hard to tell quite what started where? Maybe I'm naturally this way and becoming a tester with an interest in improvement amped things up? Maybe I've so tangled up my work, life, and hobby that deciding where one starts and another ends is problematic?
The answer to those questions is, I think, almost certainly "yes".
Before I start I need to caveat what I'm about to say. I'm going to describe some stuff that I do now. It's not all the stuff that I do, and it's certainly not all of the stuff that I've done, and I'm definitely not saying that you should do it too. It'd be great if you can take something from it, but this is just my report.
Exploring in the Background

When I say background research I mean those times when I'm not actively engaged in looking up a particular topic. I have a couple of requirements for background research: I'm interested in keeping up with what's current in testing, my product's domain, and related areas, including what new areas there are to keep up with; and I'm interested in what some specific people have to say about the same things.
One of the tools that I use for this is Twitter. I scan my timeline a few times a day, often while I'm in the kitchen waiting for a cuppa to brew. I'll scroll through a few screenfuls, looking for anything that catches my eye. This is where happenstance, coincidence, and synchronicity come into play. Sometimes — often enough that I care to continue — I'll find something that looks like it might be of interest: a potential new spin on a topic I know, someone I trust talking about a topic I've never heard of, or something that sounds related to a problem I have right now. When I see that, I message the tweet to myself for later consumption.
I also maintain lists. One of them has my Linguamatics colleagues on it and I'm interested in what they have to say for reasons of business and pleasure. Because there aren't many people on that list and because I'm not worried about losing data off the bottom of it (as in the timeline), I'll check this less frequently. When you see me retweet work by testers on my team, I've usually come across it when scanning that list.
I do something similar with Feedly for blogs, although there I have more buckets: Monitor (a very small number of blogs I'll read every day if they have posts), Friends and workmates (similar to my Twitter list) which I'll try to look at a couple of times a week, Testing (a list of individual blogs) that gets attention once a week or so, and Testing Feeds (a list of aggregators such as the Ministry of Testing feed) which I'll skim less frequently still. Blogs move in and out of these lists as I discover them or the cost of looking outweighs the value I get back.
I can map this back to testing in a few ways. On one recent project I was trying to get to grips with an unfamiliar distributed system. There were four components of interest, and I wanted to understand how communication between them was co-ordinated. I found that they each had logs, I identified the aspects of those logs that I cared about, and I found ways to extract them easily. I then turned the log level up in all places as high as it could go and ran the system.
This gives me the same kind of split as I have on Twitter: carefully curated material that I know I want to see all of, and a firehose of other material that I'll never see all of but that could have something interesting to show me. In the case of the application I was testing, I could search the logs for interesting terms like error, warning, fatal, exception and so on. I could also scan them a page at a time to see if anything potentially interesting appeared, and I could go direct to a time stamp to compare what one component thought was happening with another.
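The log-trawling approach above can be sketched in a few lines. This is a minimal illustration, not the actual system from the project: the file names and the `timestamp LEVEL message` line format are assumptions.

```python
import re
from pathlib import Path

# Hypothetical per-component log files; names and the
# "2024-01-01 12:00:00 LEVEL message" line format are assumptions.
LOG_FILES = ["broker.log", "worker.log", "indexer.log", "gateway.log"]

# The "interesting terms" search from the text: error, warning, fatal, exception.
INTERESTING = re.compile(r"\b(error|warning|fatal|exception)\b", re.IGNORECASE)
LINE = re.compile(r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\s+(.*)$")

def interesting_lines(path):
    """Yield (timestamp, component, message) for lines matching the keywords."""
    for raw in Path(path).read_text().splitlines():
        m = LINE.match(raw)
        if m and INTERESTING.search(m.group(2)):
            yield m.group(1), Path(path).stem, m.group(2)

def merged_view(paths):
    """Interleave all components' interesting lines, ordered by timestamp,
    so that what one component thought was happening can be compared with
    another at the same moment."""
    events = [e for p in paths for e in interesting_lines(p)]
    return sorted(events)  # tuples sort by timestamp first
```

The merged, timestamp-ordered view is what makes the cross-component comparison possible: each event carries its component name, so adjacent lines in the output show different components' views of the same moment.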
- I decide what I want, how much time and effort I'm prepared to put in, and which tools I'll use.
- I curate the stuff I must have and leave a chance of finding other stuff.
- I refresh my sources frequently, trying new ones and retiring old ones.
Exploring Ideas

When I finally read Weinberg on Writing: The Fieldstone Method I was struck with how similar it was to the working method I'd evolved for myself. Essentially, Weinberg captures thoughts, other people's quotes, references, and so on using index cards which he carries with him. He then files them with related material later. When he comes to write on a topic, he's got a bunch of material already in place, rather than the blank emptiness of a blank empty page staring blankly back at him, empty.
I work this way too. The talk that this essay is extracted from started as a collection of notes in a text file. Having decided on the topic, I'd drop a quick sentence, or a URL, or even just a couple of words, into the file whenever anything that I thought could be relevant occurred to me. After a while there was enough to sort into sections and then I started dropping new thoughts into the appropriate sections. When it came time to make slides, I could see what categories I had material in, review which I was motivated to speak about, and choose those that I thought would make a good talk.
It's a bonus that, for me, having some thoughts down already helps to inspire further thoughts.
I craft the material into something more like its final form (slides, or paragraphs as here) and can then challenge it. I've described this before but it's so powerful for me that I'll mention it again: I write, I dissociate myself from the words, I challenge them as if they're someone else's, and then I review and repeat. This is exactly the way that When Support Calls, my series of articles for the Ministry of Testing, was written recently.
It's also exactly the way I wrote new process at work for the internal audits we've just started conducting to help us to qualify for some of the healthcare certifications we need. In the first round of audit, while I was learning how to audit on the job, I noted down things that I thought would be useful to others, or to me next time. Once I had a critical mass of material I sorted it into chunks and then added new thoughts to those chunks, and iterated it into first draft guidance documentation and checklists.
- I collect thoughts as soon as I have them, in the place where I'll work on whatever it is.
- When I go to work in that area, I'll usually have some material ready to go, and that spurs new thoughts.
- For me, writing is a kind of dialogue that helps me to find clarity and improvement.
Exploring My Own Behaviour

Any number of triggers might change the way I do something. Here are a few:
- it hurts in some way, takes too long, is boring.
- it upsets someone that I care not to upset.
- it was suggested by someone whose suggestions I take seriously.
- it is something I fancy trying.
Once I've decided to change I explore that change with the Plan, Do, Check, Act cycle. In Planning I'll observe the current situation and work out what change I'll try; in Doing I'll make that change, usually an incremental one, and gather data as I go; when Checking I'll be comparing the data I gathered to what I hoped to achieve; and finally I'll Act to either accept the new behaviour as my default, or go round the cycle again with another incremental change.
I do this regularly. Some recent changes that I've achieved include learning to type with relatively traditional fingering to help me to overcome RSI that I was getting by stretching my hands in weird ways on the keyboard.
For some while I've been avoiding asking very direct "why?" and instead asking a less potentially challenging "what motivated that?" or "what did you hope to achieve?" That's going pretty well (I feel).
I've also spent a lot of time ELFing conversations, where ELF stands for Express, Listen, and Field, a technique that came out of the assertiveness training we did last year.
When I commit to a change, I'll often try to apply it consciously everywhere that I sensibly can. I don't wait for the perfect opportunity to arrive, I just dive in. This has several benefits: (a) practice, (b) seeing the change at work in the places it should work, and (c) seeing how it does in other contexts. These latter two are very similar in concept to the curation-synchronicity pairs that I talked about earlier.
I was interested to explore how developers might feel when being criticised by testers and thought that a writer being edited might be similar. So I went out of my way to get commissioned to write articles. I felt like I generally did OK when someone didn't like my work (though I've had an awful lot of experience of being told about my failings by now) but there are still particular personal anti-patterns, things that trigger a reaction in me.
Hearing opinion stated as fact is one of them. I saw this from my editors and had to find ways to deal with my immediate desire to snap something straight back. (Thank you ELF!)
In turn, when criticising software, I strive to use safety language. If I'm complaining about the appearance of a feature, say, I want to avoid saying "this looks like crap" and instead say "this doesn't match the design standards we have elsewhere and I cite X, Y, Z as examples".
But there have also been occasions where I have failed to change, or failed to like the change I made (so far). I have been on a mission to learn keyboard shortcuts for some time, and with some success. In general, I don't want the mouse to get in the way of my mind interacting with the product when I'm working or when I'm testing. However, I have completely failed to get browser bookmark bar navigation into my fingers.
I've been trying to avoid diving straight in with answers when asked (hey, I like to think I'm a helpful chap!) and instead leave room for my questioner to find an answer for themselves (when that's appropriate). Yet still I find myself offering suggestions earlier than I intend to.
I've also been sketchnoting as a way to force myself to listen to talks differently. It's certainly had that effect, and I've also learned that talks of 10 minutes or less are hard for me to sketch, which means that my notes from the CEWT that's just gone are not wonderful. But the reason I don't class it as a success yet is that I feel self-conscious doing it.
- I think about what I'm doing, how I'm doing it, and why.
- I commit to what I want to achieve by trying what I've planned at every opportunity.
- I review what happened, honestly, with data (which can be quantitative or qualitative).
Themes

I think these three kinds of exploration share some characteristics, and they apply equally to my testing:
- I like to know my mission and if I don't know it then finding it often becomes my mission.
- I like to give myself a chance to find what I’m after but also leave myself open to find other things.
- I like to keep an eye out for potential changes, and that means monitoring what and how I'm doing as well as the results of it.
A side-effect of the kind of approach I'm describing here is that it promotes self-monitoring generally. Even without changes in mind, watching what I do can have benefits, such as spotting patterns in the way that I work that contribute to good results, or bad ones.
> The first rule of discovery is to have brains and good luck. The second rule of discovery is to sit tight and wait till you get a bright idea.

I think that sitting tight is OK, but also that our actions can prompt luck and ideas. And, through exploration, I choose to do that.