
Joking With Jerry Part 2


This is the second part of the transcript of a conversation about the relationship between joking and testing. The jokers were Jerry Weinberg (JW), Michael Bolton (MB), James Lyndsay (JL), Damian Synadinos (DS), and me (JT).

See also Intro, Part 1 and Part 3.

--00--

JW: What relevance do dirty jokes have to testing?

DS: Dirty to whom?

JW: Exactly. What I was thinking was you learn that you don't tell certain jokes to certain people. For a certain audience, certain jokes are just not OK. Well, the same thing happens in testing. We censor what we say based on the audience we perceive. Sometimes for good reasons and sometimes for bad reasons. Sometimes you might need to find two versions: dirty and clean.

Every time you find an error there's information: about the specific error, about the person who made the error, about the consequences of the error. And that information needs to get distributed. So if you're censoring yourself mindlessly you're not doing a good job.

MB: I have a lot of experience watching comics [who were used to a club environment where basically anything goes] transfer themselves to television. They had to figure out how to edit themselves appropriately and yet still carry on.

There was a joke about "ring around the collar" ... there was a TV campaign years ago. A woman would say "my husband goes to work and he's so embarrassed because he had ring around the collar, what do I do?" and the comic's observation is "well, if he'd just wash his fuckin' neck every once in a while ..."


Of course when that comes to TV, he had to say "well, if he'd just wash his filthy neck every once in a while ..." The remarkable skill, I think, is taking stuff that you're willing to say in one context and figuring out how to carry the message across with the same level of intensity, but using different words that are socially OK in that radically different context. It's a rare comic who could cross the line successfully and be as effective in one genre as another. Clean comics often didn't work well in the night clubs either.

JT: Would technical language be an analogue of swearing here: you want to keep out the technical detail for some audiences when describing issues?

MB: I think it has more to do with emotional aspects.

JW: I think it's both things. And there's a third thing: nowadays people are outsourcing stuff overseas and I see a lot of problems with cultural translation when you report something; for example, the severity of what you're saying and the importance that you're giving it are not understood in a different culture.

DS: I'll offer this up as a pithy restatement of the ideas we're talking about: be careful about what you say, but be careful about being careful about what you say.

Certainly, censorship has its time and place, and in each context you need to study and make sure that the things you're saying are edited and appropriate for the audience. But if you lose sight of the fact that you're doing it, that practice is dangerous as well.

MB: That's reminded me of another joke: spontaneity has its time and its place.
I hadn't heard this one before. BrainyQuote attributes it to Arthur Frank Burns.
JW: As my mother used to say: always be sincere, whether you mean it or not.
Or, as I heard it first, "always be sincere, even if you don't mean it." - Harry S Truman. I think Jerry's mum's version is nicer, though.
JL: Sometimes when you approach the taboo in a joke, that's where you get the titter. As you go beyond the taboo into something that's unreasonable, that's when you get the sharp intake of breath. You can find that the taboo - as you change context - is revealing stuff about your audience and you can find the same thing with testing.

I'll give you a for-instance. I was doing something at a company that was very engineering-focussed rather than testing-focussed and the taboo that they were finding it difficult to face - and I hesitate to use the word - was breaking the software. I gave them a field to play with that could take an age [value] and typically they used numbers between 1 and 100. A few of them used 1000.

Very few of them used an exceptionally large number of characters or pasted in half a book, because that was seen as unfair or taboo. With a group of testers, if I show them what happens when you do that, sometimes I get a laugh out of it. The guys from the engineering company felt that it was the wrong thing to do.

Sometimes people laugh not because it's amusing but because they're delighted by a revelation. I've got a book that talks about how that works in labs, with the lab director saying that he likes the sound of laughter because that's where the discovery is being made.

You can take the same set of testing exercises and show it to one group of people and they like it: it delights them, it confirms what they know, and there's something extra. But you take it to some engineers and they say "but that's not what you should do, James, it's not fair".

The taboo is what exposes the audience. You might use it sometimes to see how far you can go, in the same way that you might substitute "filthy" for "fucking". I think that most of the audience, of whatever age, is going to know that what you're really saying is "fucking", but you've avoided saying "fucking" and that's the funny bit.

JW: What Michael said about the emotional reaction is important too. You want to get an emotional reaction when you make a report, you don't just want people to yawn and say "so what?". On the other hand, you don't want their emotional reaction to be totally negative and opposing what you're saying.

Sometimes, breaking a taboo can change, or get permission for, an emotional climate that otherwise you would not have. So one of the things that I teach is this: when something happens, like you're writing a program and you're testing it and suddenly something comes up and you realise you made a big mistake, give yourself five seconds and say "Oh shit."

Now, I tell this to some pretty conventional audiences ... it's a bad thing, you shouldn't talk like that, etc. etc., strong words. But you can do it for five seconds and then stop, and then get out and solve the problem. You're acknowledging that you have the emotional reaction, you're giving yourself permission to have it and you're giving yourself permission to feel strongly about it. But you don't have to stick with it for ever.

So five seconds is short - some people need a little longer - but it works for a lot of people. In the same way, you hear a joke and find it offensive, then you could just turn off and never hear anything else that the comedian has to say, perhaps you walk out. Learning to be an audience in a comedy club is something that takes some doing.

DS: This is where I think the idea of empathy comes in. Not only in testing but in improv and theatre and in standup as well. Empathising with the different audience members or customers that you will be serving. As a tester I think it's pretty obvious that you're testing and gathering information and being objective, but I think it also helps to be subjective: put on the hat of various customers who might be using this program and ask how it might make them feel.

At different talks that I do I like to show a kind of silly error where I induce a text document to become Chinese characters, and the whole audience laughs at it. I tell them that I discovered this bug while I was researching the talk: it actually happened to me! And I thought "wow, isn't this hilarious?" until I realised that I didn't have a backup of my document and had completely lost a lot of work, and I became furious.

The error hadn't changed, but my perspective had. Instead of being a researcher for a presentation, I became a user and became angry. It helps as a tester if you can step into the shoes of your customers.

The same thing goes for comics. If you can think about the audience in the Deep South or up in New York City, you can anticipate how they might react to certain jokes.

MB: I had to turn empathy on myself today. The printer once again let me down. Mercifully the house was empty, so I took five seconds to swear at the top of my lungs at the printer, because I'm upset about all these other things that are going on. I know that I'm upset, it's OK for me to be upset, and so empathy towards oneself is often a good place to start.

JW: It's part of what you do with your customers. Presenting them with the logic - this was the requirement, this is what we did, this was the result, blah blah blah. You want to start with making a connection to the people. I think it's a mistake, for example, for testing organisations to just issue paper reports, but a lot of people do exactly because they're afraid of the reaction of their customers. If they report something serious they don't want to be yelled at.

But you want to be there. You have to sell what your product is. I have test reports, I have bug reports, and I have to sell them. I learned this years ago from my sister-in-law, who was dating a policeman in LA. We were having dinner with them and he was late; finally he showed up and apologised and said "I had to sell a ticket to some guy for speeding". I said "sell a ticket?" and I had all these ideas about what that meant, taking bribes ...

The officer was going to UCLA during the day and working the night shift. So if he gave someone a ticket and didn't sell it - sell the person on the idea that they really had done something wrong, that they deserved this ticket and should accept it - then it would go to court, and that would be during the day, which meant he couldn't go to classes...

So he developed this technique of selling the ticket. When you give the ticket to somebody it's not just handing them the paper with their name and the offence and so on like a bug report - it is a bug report, in fact - you sell it to them. And I've used that model because a bug report is very much like - and this is not a joke now - a traffic violation. You are delivering news that people don't want to hear. At some level they don't want to hear it. At another level they do want to hear it.

I got a ticket this week, for the first time in years. I went through a stop light and they caught me on camera.


My first reaction was "Oh shit!" and then I said "there's information there for me."  As I'm getting older I don't want to lose my license and I don't want to make mistakes and have accidents so I have to learn from this. So I appreciated getting caught. I was only half-aware of what I was doing.

DS: Jerry, you're giving people information that they don't want to hear. Sometimes they'll call it bad news, these bugs ... but a lot of the time testers deliver good news. They deliver news, and sometimes it can be interpreted in different ways. You can deliver the same news to two different people and one thinks it's a problem and the other says "thank goodness!"

And so I have a thought experiment I sometimes run with people. What if you tested and tested, and they gave you more time, and you tested deeply and you tested widely, and you didn't find any problems: have you provided value? What do you tell your customers?

And it usually gets a very visceral reaction. They feel awful. Their stomach churns and they sweat and they say "I feel like I did a bad job" and I say "well, why is that? Maybe there was nothing to find. Perhaps there was no bad news. That doesn't mean you have to give no news. Perhaps the news you're giving might be interpreted as good news. It's just news that has to be interpreted by the receiver."

JW: It can also be interpreted as "you didn't do a good job of testing".

DS: But it's not necessarily so.

JW: No, of course. You always have to make an analysis of what it means, and it could mean different things to different people. So, you ask the tester: if you run all kinds of stuff and never turn up anything, what do you know? It's one of the reasons we introduced the subject of - and I coined the word many years ago - bebugging, where we sprinkle some known errors into the program. Then you always find something, and you approach it in a different way, knowing that there is something there.
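Bebugging also doubles as an estimation technique: if you know what proportion of the seeded bugs testing recovered, you can infer roughly how many genuine bugs it probably missed. Here's a minimal sketch of that capture-recapture style arithmetic; the function names and the numbers are my illustration, not anything Jerry described.

```python
import random

def seed_bugs(functions, n_seeded):
    """Pick n_seeded functions at random to receive a known, deliberate error."""
    return set(random.sample(sorted(functions), n_seeded))

def estimate_unfound(seeded, found_seeded, found_real):
    """Defect-seeding estimate: if testing recovered found_seeded of the
    seeded bugs and found_real genuine bugs, assume it catches genuine
    bugs at the same rate, so total genuine bugs ~= found_real * seeded / found_seeded."""
    if found_seeded == 0:
        raise ValueError("no seeded bugs found; cannot estimate")
    estimated_total = found_real * seeded / found_seeded
    return estimated_total - found_real  # genuine bugs probably still unfound

# Example: seed 10 bugs; testing finds 8 of them plus 20 genuine bugs.
# Estimated genuine total is 20 * 10 / 8 = 25, so about 5 remain unfound.
print(estimate_unfound(seeded=10, found_seeded=8, found_real=20))  # → 5.0
```

The estimate rests on a big assumption - that seeded bugs are as hard to find as real ones - but even a rough number is better than the "we found nothing, so what do we know?" shrug.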
In the paper I wrote to go with my EuroSTAR talk on joking, I wondered about this too: "... when you're expecting a punch line, you will probably be more (even unconsciously) creative in making connections or in accommodating them when they are delivered. So, when testing, I wonder whether it would be advantageous to put yourself in that position and expect there is an issue for you to find?" Afterwards, I got some interesting suggestions.
One correspondent suggested that, unlike a defendant being considered innocent until proven guilty in a court of law, our starting point in the court of software development should be that an application is guilty.   
Another pointed me at the BBST Bug Advocacy course notes which, via Glen Myers, say “testers who want to find bugs are more likely to notice program misbehavior than people who want to verify that the program works correctly”.
There are different problems: looking for a needle in a haystack is one problem. Looking for a needle in a haystack, if there is one, is a very different problem. Looking for all the needles in a haystack is a different problem again.

MB: There is also the problem of looking for needles in haystacks that are composed entirely of needles. That's a tough one. It takes you a long time to find a specific needle. It takes more time to account for the needles when there's a whole bunch in the haystack.

JW: And when the needles are sticking together magnetically it's even harder. It's also very hard to find them if the technique you use is to drop your pants and sit in the haystack and wait for one of them to stick you in the ass.

That's a technique you could use and some of the testing techniques that I've seen look like that to me... Well, it would work and it might be more effective than looking.

But do you want to do it?

See also Intro, Part 1 and Part 3.

Image: Ebay, Failblog
