Thursday, December 13, 2018

Vote Karo!

UKSTAR are running their Meetup Hero competition again this year and I nominated Karo Stoltzenburg with this somewhat embarrassing gush:

Cambridge has had a really active tester community in the last few years: an evening meetup, a Software Testing Clinic, the Cambridge Exploratory Workshop on Testing and a morning Lean Coffee. Karo runs the first two, has been ever-present at CEWT and a regular at Lean Coffee. And, in case that wasn't enough, in our team at work she's initiated a book club, a series of "What I don't know about X" sharing sessions, and brought in guests to speak at Team Eating, our brown bag lunches. Pretty much, if there's something happening with testers in Cambridge you can expect to find Karo there.

At the evening meetup she's given local testers the chance to be inspired by great speakers such as Anne-Marie Charrett, Adam Knight, and Neil Studd; to practise speaking in front of a friendly audience; to share testing stories and tips at show and tell nights; to learn Riskstorming with the creator of TestSphere, Beren Van Daele; to get stuck into (and stuck by) puzzles and games; to learn about mental health; or just to have a quiet chat with some fellow explorers of the testing space down the pub. When someone new turns up, Karo will be the first to welcome them and bring them into the group.

In the Clinic her expertise and personality enhance the syllabus and the room. She is encouraging and good-humoured, always willing to offer her experience, but also extremely welcoming of contributions from others, and unstintingly enthusiastic, even on the night when she turned up and found that she was unexpectedly running things without her co-host! I have seen first-hand how thoroughly she prepares and the care she takes with the organisation to make sure that everyone's needs are considered and catered for and I'm staggered by the effort she puts in. 

When she isn't running things, she's a great participant and her contributions at CEWT have been reliably thoughtful and considered. At CEWT #6 recently, she gave an inspirational talk about diversity in test teams and the risks of testers letting ourselves sit in a box defined by others. Given that, it's perhaps unfair of me to put her in a box, but I think there's one that fits her: Meetup Hero. 

Vote Karo!

P.S. All of the nominees are great and I'll still be your mate if you vote for one of them, but really I'd prefer it if you'd just nip over and vote for Karo. Cheers.

P.P.S. Full disclosure: Karo is on my team and previously nominated me for this award (which was also embarrassing) but neither of those is why I've nominated her.

Sunday, November 25, 2018

Talking Shop


It can be tempting to confuse training with learning, with skill acquisition, or with the ability to recognise situations in which training material could be used. Attending a workshop is, in general, unlikely to make you an expert in a thing, equip you to apply the thing in your real world context, or even necessarily make you aware that the thing could be applied. Attendees would do well to remember it (particularly when sending me a CV!) and their managers would do even better.

I'm an attendee and a manager: do I do better?

I hope so. In the test team at Linguamatics we spend our training budget on the same kinds of things that your teams probably do: books, conferences, courses, subscriptions, workshops and occasionally something different like an internal conference or escape room. Crucially, as both manager and attendee, I try hard not to mistake having and doing for knowing and being confident in practice.

It's important to me, as a manager, to participate in training and to demonstrate ways that I think it can be a productive experience: training shouldn't be something that's simply done to others. From the attendee side, training isn't about just turning up, listening up, and getting skilled up. Training, I've found, rewards a positive mindset and active participation rather than passive attention and the sense that it has to be got over with.

Training is an opportunity to step outside the bunker and the usual mindset, to get exposed to new perspectives or tools or ways of working. It's a place to inspire, challenge, and compare notes. It's often a place to get a broad view of some field, rather than a deep one, and to identify things that might be useful to follow up on later.

Providing training sessions is one way that, as a company, we can show that we care about our employees, and making an effort with our training is one way that I can show that I care about my team mates. We organise in-house workshops for the whole team to do together, at work and inside regular working hours. These are the topics we've covered in the last five years:

  • Experimentation and Diagnosis: a workshop on design and interpretation of experiments (James Lyndsay)
  • Think on Your Feet: strategies for reporting, particularly when put on the spot (Illumine)
  • A Rapid Introduction to Rapid Software Testing: highlights from RST in one day (Michael Bolton)
  • Workplace Assertiveness: remaining calm and getting your point across whatever the situation (Soft Skills)
  • Web Testing 101: introduction to HTTP, REST, proxies, and related testing tools (Alan Richardson)

Quite apart from exposure to those topics, bringing training to work has other advantages. I don't underestimate the value of team-based exercises in building esprit de corps, encouraging collaboration, and promoting empathy through shared experience. I also want to be sensitive to my team members' personal situations where, for example, family commitments can make travel to outside events difficult.

From a practical perspective, whole-team training can be financially worthwhile; it tends to be lower cost per person than the same content at an external location, there's usually more opportunity to customise it, and questions about your specific context are easier to ask and have answered. It's also a convenient way for me to satisfy my personal goal of providing a training opportunity to everyone on the team every year.

But still there's the question of internalising the material, practising it, finding ways that it can work for an individual, team, and ultimately company. (Or finding that it doesn't.) Again, we probably do the same kinds of things that you do: those attending conferences might reinforce their learning and understanding by sharing aspects of their experience back to the team; those with subscriptions to resources like the Ministry of Testing Dojo often summarise articles or organise lunchtime video watching; as a team, after a workshop, we might each verbalise something that we felt was a valuable takeaway to the rest of the group.

Afterwards, taking the training into our own context can be challenging. When work needs to be done, it's not always easy to find time and opportunity to practise, particularly in a way in which it feels safe to fail or just take longer while unfamiliarity is worked through. There's an often-quoted (and also widely-disputed) idea that 10,000 hours of practice are required to become an expert in something. The truth of the claim doesn't matter much to me — I rarely need to be ninja level at anything — but my own experience dictates that without any practice there's little likelihood of any improvement.

I try to pick an aspect of the training that I think could be valuable to me and apply it pretty much everywhere there is a chance to. This way I learn about the tool or approach, my aptitude for it, my reaction to it, the applicability of it in different contexts, and its inapplicability in different contexts. I wrote about eagerly using the Express-Listen-Field loop in conversations after our assertiveness training last year. This year, after Alan Richardson's training, I focused on making bookmarklets and now have a handful, largely as efficiency tools, which I've shared back to the team. They are not pretty, but they are functional, they have cemented the idea in my head, and they are delivering benefit to me now.
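
My own bookmarklets are specific to our tools at work, but to give a flavour, here's a minimal, hypothetical sketch (not one of mine) of the kind of efficiency tool I mean: a bookmarklet that offers the current page's title and URL as a ready-to-copy Markdown link, handy when writing up notes. Expanded here for readability:

    javascript:(function () {
      /* build a Markdown-style link for the current page */
      var link = '[' + document.title + '](' + location.href + ')';
      /* prompt() shows the text ready-selected for copying */
      window.prompt('Copy link:', link);
    })();

Squashed onto a single line and saved as the URL of a bookmark, it can be clicked on any page to pop up the link for copying. Not pretty, as I said, but functional.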

Pretty much every training session I've ever attended has some kind of key points summary at the end, so it seems appropriate to finish with something similar here.

Managers:
  • take care to find quality training to offer your teams, and attend it yourself
  • don't confuse attendance with expertise and experience
  • demonstrate ways in which value can be taken 

Participants:
  • take a positive mindset into it
  • be alert for things that you can take out of it
  • seek to experiment with those things quickly and regularly afterwards

Naturally, if any of that sounded interesting, simply reading about it is insufficient to extract its value for you.
Image: https://flic.kr/p/dq4qcX

Tuesday, October 30, 2018

Hard to Test


I attended SoftTest 2018 last week and really enjoyed it. The vibe was good, the people were friendly, and the low-price, one-day format is right up my street. I've already posted sketchnotes and the slides from my own talk so here are a few words about the presentations that I saw.

Conor Fitzgerald talked about the benefits of learning about other industries, disciplines and domains. Part career history — from "confirmatory checker" to exploratory tester — and part a list of resources he's found useful over the years, he covered how studying business and economics grew his testing perspectives; how heuristics, oracles, and a ready supply of questions help testers cover both breadth and depth; how burnishing your social skills helps to make and maintain interpersonal relationships (examples: don't blame, assume goodwill, be kind); and how explicit modelling and data gathering and analysis can guide and drive discovery and understanding.

To create a high-performing team, first create a culture in which the team can operate in comfort and safety: that's Marco Foley's message. Based on Daniel Pink's concept of Motivation 3.0, he defined Management 3.0 and contrasted it with Management 2.0 — a carrot-and-stick culture where managers dictate the work and then reward or punish the workers. Management 3.0 is about intrinsic enjoyment. As in Maslow's Hierarchy of Needs, the basic idea is that once baseline requirements (such as salary) are met, people are motivated by the intrinsic enjoyment of a task; they seek an environment in which autonomy, mastery, and purpose (again due to Daniel Pink) are present, an environment in which they are free to do the right tasks at the right time. A manager can help to facilitate this by providing opportunities for failure to take place safely and to be viewed as learning, so as to encourage more trying and hence more success. (Note that although the name is the same, Marco's content hasn't come from Jurgen Appelo's Management 3.0.)

This was my quote of the day:
If it's hard to test, it won't be tested
Rob Meaney said it, in a talk on testability where he described his CODS mnemonic: Controllability, Observability, Decomposability, Simplicity. Systems which have these properties designed in are likely to be easier to test, and Rob walked us through the application of CODS to a system he'd been working on in which the team built a new component alongside an existing monolith (decomposability, simplicity) with extra logging (observability) and an interface which let it be tested independently of the rest of the system (controllability). An earlier version of Rob's talk is available at the Ministry of Testing Dojo.
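
As a concrete illustration, here's a tiny sketch of my own (hypothetical, not Rob's system) of what controllability and observability can look like in code: the component takes its data source and its logger as parameters, so a test can substitute fakes and inspect what happened.

    /* Hypothetical component, decomposed out of a monolith;
       its collaborators are injected rather than hard-wired. */
    function makePriceQuoter(rates, log) {
      return {
        quote: function (amount, currency) {
          var rate = rates.lookup(currency);  /* controllable: tests supply the rate source */
          log.info('quote', { amount: amount, currency: currency, rate: rate });  /* observable */
          return amount * rate;
        }
      };
    }

    /* In a test: a canned rate source and a log we can inspect. */
    var entries = [];
    var quoter = makePriceQuoter(
      { lookup: function () { return 2; } },
      { info: function (tag, data) { entries.push([tag, data]); } }
    );
    console.assert(quoter.quote(10, 'EUR') === 20, 'quote uses the injected rate');
    console.assert(entries.length === 1, 'the quote was logged');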

How can we testers show our value when there's no tangible "product" for others to observe as an outcome? Mags Dineen asked the question and then answered it: align ourselves with business needs and priorities. Of course, it's on the business side to be clear about them, and then back to us to make the effort to understand them and the intent behind them. Once present and grasped, they can become oracles for choosing a direction of travel and bringing others along with us. While we're about it, let's try to get the idea that quality is owned by the team, not just the testers, more widely understood. We can do that by being collaborative and open, by being canny about how we influence things (for example, consider the benefits of influencing an influencer), and by collecting and exploiting data about our products, processes, and ourselves.

Claire Barry-Murphy and Darryn Downey described a team re-organisation where the existing process was known to be broken: slow, lots of work in progress (WIP), impenetrable specs, and lengthy standups. They embraced BDD, introduced WIP limits, added lane policies to their Kanban board, used three amigos reviews, and wrote stories in a spec-by-example fashion. One of the points that stuck in my mind was that developers "drive" the tickets across the board, pairing along the way. This is very much not handovers, but rather ownership and purpose.

The closing keynote was Gwen Diagram's observations on how traditional management is broken and her personal prescription for fixing it. It was nice to see themes from talks earlier in the day reprised, and fun for it to be laced with anecdotes and bonus swearing. There was a serious message too, though; Gwen's management medicine tastes something like this: treat people like people, and don't talk down to them; work together for solutions; lead, don't manage; provide motivation; remove fear; give feedback; remember that everyone can be excellent; aim for happy employees.
Image: Discogs

Saturday, October 27, 2018

Call Me


I spoke about the overlap between testing and technical support at SoftTest 2018 last week. The presentation was based on When Support Calls, the book I wrote with Neil Younger and Chris George for the Ministry of Testing.

Here's the blurb:
Testers are said to be advocates for the customer, but when do most testers come face to face with a real-life customer? I don’t mean internal stakeholders, but the people at the sharp end of things, the ones actually using the software. Rarely, I find. Which is why it can be a SHOCK! to be asked to participate in a customer support call. It’s an unusual situation, there’s pressure, the customer is watching, something needs fixing, and there’s a deadline ... of yesterday. 
Gulp. But don’t worry! You’re on the call because a colleague values your input. Perhaps you’re great at analysis, or lateral thinking, or problem solving. Maybe you have deep knowledge of your product, or the whole ecosystem, or the historical angle. You could be there for questions, or answers, or honesty when you don’t have either. 
These kinds of tools from your testing toolbox are valuable on support calls and in this talk I’ll say how and why. I’ll also give an intro to customer support, talk about how to prepare for calls, what to do during and after them, and — importantly — what you can take away personally, for your product, and for your team. 
Key messages:
  • understanding of customer support and its similarities to testing 
  • actionable advice for when support calls you 
  • benefits for you and others of being involved in support
Here are my slides:
Image: Atlas Records

Friday, October 26, 2018

Dublish People


I was at SoftTest 2018 in Dublin this week. I'll write proper notes later, but for now here are my sketchnotes (below) and the letter my youngest daughter gave me before I set off, detailing her research about the city (above).

 

For those who made it all the way down here: I did take a big coat and it was the right choice.

Sunday, October 14, 2018

And There's More


When new staff join Linguamatics they'll get a brief intro to each of the groups in the company as part of their induction. I give the one for the Test team and, as many of our joiners haven't worked in software before, I've evolved my spiel to be a sky-high view of testing, how we service other parts of the company, and how anyone might take advantage of that service, even if they're not developing software.

This takes a whiteboard and about 10 minutes and I'll then answer questions for as long as anyone cares to ask. Afterwards we all go our separate ways, happy (I hope) that sufficient information was shared for now and that I'm always available for further discussion on that material or anything else.

I mentioned this helicopter perspective to Karo Stoltzenburg when she was thinking about when and how to draw a testing/checking distinction in her Exploratory Testing 101 workshop for TestBash Manchester. I was delighted that she was able to find something from it to use in her session, and also that she produced a picture that looks significantly nicer than any version I've ever scrawled as I talked.


Karo's picture is above, her notes from the whole session are on the Ministry of Testing Club, and below is the kind of thing I typically say to new staff ...

What many people think software testing is, if they've ever even thought for a second about software testing, goes something like this:
  • an important person creates a big, thick document (the specification) full of the things a piece of software is supposed to do
  • this spec gets given to someone else (the developer) to build the software
  • the spec and the software are in turn given to another person (the tester) who checks that all the things listed in the spec are in the software.

In this view of the world, the tester takes specification items such as "when X is input, Y is output" and checks the software to see whether Y is indeed produced when X is put in. The result of testing in this kind of worldview probably looks like a big chart of specification items with ticks or crosses against them, and if there are enough crosses in important enough places the software goes through another round of development.

While checking against specification can be an important part of testing, there's much more that can be done. For example, I want my testers to be thinking about input values other than X, I want them to be wondering what other values can give Y, I want them to be exploring situations where there is no X, or when X is clearly invalid, or when X and some other input are incompatible, or what kinds of things the software shouldn't do and under what conditions ...
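
To make that distinction concrete, here's a toy sketch; transform() is hypothetical, standing in for the software under test. The first assertion is the spec-item check with its tick or cross; the lines after it are some of the questions a tester might ask beyond the spec.

    /* Hypothetical function under test: the spec says "when X is input, Y is output" */
    function transform(input) {
      if (input === 'X') { return 'Y'; }
      return undefined;
    }

    /* The spec-item check: tick or cross */
    console.assert(transform('X') === 'Y', 'X should give Y');

    /* Beyond the spec: what about no X, nearly-X, or clearly invalid input? */
    console.log(transform(''));    /* no X at all */
    console.log(transform('x'));   /* almost X: should case matter? */
    console.log(transform(null));  /* clearly invalid: does it fail safely? */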

That's all good stuff, but there's scope for more. I also want my testers to be wondering what the motivation for building this software is, who the users are, and whether the specification seems appropriate for a software product that meets that need for those users. I'd also expect them to think about whether the project, and the team — including themselves — are likely to be able to create the product, given that requirement and specification, in the current context. For example, is there time to do this work, is there resource to do this work, do the team have sufficient tooling, expertise, and other dependencies in place, ...?

Even assuming the spec is appropriate, the context is conducive to implementation, and the team are not blocked, you won't be surprised to find that more possibilities exist. I'd like the tester to be alert to factors that might be important but which might not be in the specification much at all. These might include usability, performance, or integration considerations, ...

For me, one of the joys of testing is the intellectual challenge of identifying the places where there might be risk and wondering which of those it makes sense to explore with what priority. Checking the specification can certainly be part of testing, but it's not all that testing is: there's always more.
Images: Jimmy Cricket, Karo Stoltzenburg

Sunday, October 7, 2018

My Goodness


The six presentations at CEWT #6 took different angles on the topic of good testing or good testers. Here's my brief summary of each of them:

Whatever good testing is, Karo Stoltzenburg said, it's not likely to be improved by putting a box around its practitioners. In fact, pretty much any box you care to construct will fit some other profession just about as well as it fits testing. Testers, like people in general, are individuals and it's in the combination of diverse individuals that we spread our risk of missing something and so increase our chances of doing the good testing we all want.

What makes a senior tester? That's the question Neil Younger has been asking himself, and his colleagues at DisplayLink. He shared some of his preliminary thoughts with us. Like Karo he wants to avoid boxes, or at least to reduce their rigidity, but against that there's a desire from some for objective criteria, some kind of definition of done that moves them from one box to another. A senior tester for Neil must naturally also be a good tester, and the draft criteria he shared spoke to the kinds of extras he'd expect such a tester to be able to provide. Things like mentoring, coaching, unblocking, improvement, awareness, and knowledge.

"Any kind of metric for testing that involves bug counts focusses on how we're sorting out the failures we find, not on team successes." Chris Kelly has seen metrics such as net promoter score applied to technical support and was musing out loud about whether metrics could be applied to testing. If they could, what should they look like? The discussion covered the difference between teams and individuals. Can a team member do good testing while the team fails? One way of judging success was suggested: ask the stakeholders whether they're getting what they want from your work.

In the limit, good testing would lead to perfect testing, Šime Simic proposes, perhaps a little teasingly. We aren't going to get there, naturally, but we can do things that help us along the road. In particular, he talked about knowing the mission, aligning the mission with stakeholder needs, testing in line with the mission, doing what we can inside the constraints that the project has, and then reflecting on how it went and how we might change next time in similar circumstances. Testing, he went on, should not exist in a vacuum, and it's unrealistic if not unfair to try to judge it apart from the whole development process. (References)

Because they have different needs it's perfectly possible, and indeed reasonable, for two people to have different views of the same piece of testing. When they are the test manager and the tester, however, it may lead to some problems. For the tester, on the project Helen Stokes and Claire Banks talked about, exercising the product was the primary requirement. For the manager, visibility of the work, and the results of the work, were imperative. "There's more to good testing than doing the tests" they conclude.

My own presentation was about how, assuming we could know what good testing is, it can be a challenge to know whether someone is capable of doing it. I talked about how this particular problem manifests in recruitment all the time, and about how testing skills can be used to assess candidates for testing roles. (Slides)
Image: https://flic.kr/p/LywqN