
Joking With Jerry Part 1


This is the first part of the transcript of a conversation about the relationship between joking and testing. The jokers were Jerry Weinberg (JW), Michael Bolton (MB), James Lyndsay (JL), Damian Synadinos (DS), and me (JT).

See also Intro, Part 2 and Part 3.

--00--

JL: I wrote one joke for this. It turned up in my head and it may not be original, but it's mine. And the process of arrival is an interesting thing for me because you don't necessarily design a joke, you ready yourself for the arrival of the joke. And that certainly parallels some of the things I do when I'm testing ... where you are prepared and you've got your data and you're waiting for something to give you a hint, give you a smell, and then you might tune it a bit.

So the joke, post-tuning, is this: My name is James and I'm a workaholic ... I haven't had a job in 22 years.

JW: Do you get a special medal for 22 years? A little coin?

JL: I know that James Thomas is interested in how a particular idea - a little phrase - is tuned in a Milton Jones or a Steven Wright kind of way for a joke. Like you might do with a test. It has to be harmonised. Somebody comes along with what the next part of the joke might be. Similarly in software testing: you'll say "here's the problem I have" and someone else will say "I know how to use that problem you found; here's my exploit."

DS: I'd like to build on that idea. I found it fascinating working with James Thomas. It was very rewarding. Some of my favourite parts were watching him dissect his thought process as he went through joke analysis. Which, by the way, there's a nice joke about ...

<pause>

Erm, James, how does that go? It's a fruitless process and the frog dies ... I think I botched the ending...
This is the quote Damian was looking for: “Explaining a joke is like dissecting a frog. You understand it better but the frog dies in the process.” - E.B. White
JW: That's like testing also. You run a series of tests, you set up everything and then you botch the ending and don't get anything useful out of it.

DS: Julius Sumner Miller used to have a TV show in the 60s or 70s. He used to do experiments and loved it when his experiments would fail. He would make a big point out of how it was an unexpected result and you could learn from that. Perhaps there's something similar in jokes. You think that you might have a punch line that works, that produces humour and mirth. But it does not, and figuring out why that is so...

But back to the point about building on jokes: I find it interesting to see how the same joke structure can be reused across history. Perhaps it's a joke with Michael Jackson as the punch line, but if you look up the etymology of the joke it started years ago about King Henry. The structure stays the same but the data is replaced.

JW: Very nice. I'm going to steal the joke that James Lyndsay made. I'm also a workaholic but I haven't had a job in 50 years! But that's like testing too. The best testers steal their techniques from others all the time. That's probably how most learning takes place in the testing community, and anybody who thinks it's bad to steal jokes is probably not a good tester. And when you steal you do what Damian says: you change the content a little and suddenly the joke is funny again.

Related to that is the technique of repeated jokes. I talked about that before with James Thomas.
A snippet from that discussion: 
JW: One technique of comedians is to tell a joke that gets a good response, then keep referring to it later in variant jokes that produce increased laughter (a sort of in-joke for the audience). I'm not sure how that works when a tester keeps reporting the same fault pattern in a piece of work.

JT: This is a really interesting thought. Comedians call those things callbacks and I spent some time wondering what the analogies to software development might be. In humour, the power increases as the connection is remade. In testing, it can be the case that enough of a body of evidence (or body of dismay) can be built up to provoke a fix. But I've also been on the receiving end of disbelief and rejection from developers who see no value in reiterating a known issue.

JW: When a comedian tells a joke and it goes over well, they will repeat the punch line a number of times, hopefully in different contexts, and it gets funnier each time.

But there's another phenomenon that occurs. You repeat the same joke over and over again, and it stops being so funny for some people. My wife, for example ... we've been married 55 years and the reason we've stayed together is that she doesn't remember my jokes.

If you keep finding the same error that some developer or team is making, there's no problem with them accepting what you're saying, but they get really frustrated. You get frustrated too: you keep finding the same thing, keep reporting it, and they don't seem to do anything about it. That's a source of big conflict and it's a way to make jokes not very funny.

JT: I was thinking about the same thing and I wonder whether there's a subtlety. I wonder if it's not necessarily bad to come across the same underlying fault in multiple contexts that are still interesting; it's essentially the same issue but a different way into it. Someone who spends all their time searching in a place where they already know there's a problem and reporting slight variants of it - much like someone who beats a joke to death - that is a problem. Repetition doesn't have to mean bad.

DS: Here's an idea that might be related. There are a lot of old chestnuts, old jokes that've been told countless times, and many people will say "I've heard that."

I'm a big fan of what they call anti-humour which is taking a very typical joke structure and set-up and turning it on its ear. Somebody starts to tell a joke and the listener is primed and ready for an unexpected ending [and] I find it hilarious for the ending to be entirely expected. Which in itself becomes funny.

I'll give you an example: What's red and smells like blue paint?
Red paint.

That idea of taking a script - a joke script - that has been run and executed many times and beaten to death: it might be interesting to take that as the premise and change something about it so that it's unexpected again in an expected way.

JW: I'm going to tell a joke... A comedian takes a friend of his to a comedians' convention. The comedians know all the jokes, so when they get up they just say a number and everyone starts laughing.

The friend says "what are they doing?" and the comedian says "they're all comedians and they know all the jokes so we catalogue them all and save a lot of time so we don't have to repeat the whole joke".

Now I know two endings for this joke... One of them is, this guy gets up and just says number 739 and the room goes wild over that and the comedian says "we hadn't heard that one before." The other ending is that another guy gets up and says his number but nobody laughs and the friend says "you see, it's not just the joke, it's the way he tells it".

So what does that have to do with testing?

The most important and least well-done part of testing is telling the joke. We find a joke, we find an unexpected ending, and many, many testers are not good at reporting what they find. They tell it in a way that's not funny. It's blaming; they call people stupid for making these mistakes; they're not clear what the joke was. If you're going to be a good tester, you've got to practise your presentation.

JL: People grasp narratives better than they grasp lists of bullets, particularly if you've got that circularity Jerry was talking about, of jokes coming back.

I've come across situations where you say "... of course that problem we had with the null pointer exception, it's here too" and that becomes the punchline, the thing that spins it time and time again.

Weirdly enough, in Minecraft every three or four sets of release notes they talk about removing Herobrine. But the character has never existed. Running jokes become a kind of narrative and also become in-jokes, which then makes it an inclusion/exclusion thing. That can help to get it across (if you're in). But as soon as you come across someone who doesn't know who Herobrine is, or even what Minecraft is, then suddenly the joke falls flat.

JW: I've been writing fiction recently. People respond better to narrative than to lists. One of the techniques in writing a mystery is not to have the solution pulled out of nowhere. You have to be able to tell people how you got to what you found - like testing. You have to have the clues in the mystery somewhere but you don't want the readers to know "oh, this is the key clue" when it comes up. So what you do is bury the clue in a list. If you have a list of more than three items people maybe remember the first one and the last one but all the middle ones kind of disappear.

What does this have to do with testing? When you're reporting issues, people don't hear the middle ones if you give them a long list. One of the things testers - like comedians - learn to do is to pace out the presentation of their jokes. You can't just give a bullet list of jokes or people don't appreciate all the ones in the middle.

You have to do the most important ones first and hold the others in reserve, find out if people are actually paying attention. If not then you've got to back up and take care of the top few.

JT: I read an article by a comedian recently and he said "if you don't pause, there's no applause" and if you're telling your testing story you might want to make sure you give your audience a chance to take it in.

Jerry, you also said "start with the most important first" but I think typically comedians are going to want to put their best gag at the end. I've heard a lot of comedians say something like "I'll start with my second-best joke" and finish with the best.

JW: Let me clarify that a little bit: when I say start with the most important one, I mean start with the one that's most important to start with ...

... because you want to start with one that you're sure they'll respond to. They'll hear it, they won't deny it, they'll realise that there's something they need to do. If they don't respond to the one they should respond to then you know it's hopeless to go on with the rest.

So you're constantly testing your audience. This is the same technique in presenting your test results. You've got to test the presentation. To me, the way that you do that is to give them something that you're absolutely sure they'll respond to.

We use the same trick in development. We have a tool that we're sure will work in a certain case; if we try it and it fails, then we can forget about the rest. It teaches us something. That's the great thing about testing to me: you're always looking for the unexpected, the possibility, the other punch line. That's the mindset that I think distinguishes real testers from ordinary ones.

You're walking along the street on a Spring day and you're testing. You're testing the trees, you're testing the birds, if they're singing, if they're nesting in the same places, if something has changed on your route. It's like a mental disease, except that I find it very proper.

JT: It's about observing and reacting, both testing and joking.

JW: Yeah.

DS: Testers react to the things they observe and the things in the environment. I have a background in improvisation, short-form improv comedy. It's essentially creating something from nothing. You start on a blank stage and create theatre. Sometimes it's funny, sometimes it's serious, but it's very exploratory, reacting to others and to the environment.

I'm also a raconteur of sorts. I have some stories that people at parties and gatherings ask for over and over again: "Damian, tell me that air-conditioning story". I say "well, you've heard it a dozen times" and they say "I don't care, I want to hear it a 13th time". They love the story. So I go through and I tell the same story that I've told many times.

Now I realised that when I tell the story I don't explore and I don't react. I tell it in a very structured way. Over many tellings to many people I've settled on pausing at certain points, hitting certain words, punching certain lines. It's all very planned out but it seems very improvisational.

Then I saw a movie called Comedian. It's a documentary that follows Jerry Seinfeld. This was after his TV show and he was incredibly famous and the cameras followed him as he went back on the road on the comedy circuit and began to develop brand new material.

He would show up at a comedy club unannounced and say "hey can I go on and do ten minutes?" and of course the owner of the club said "You're Jerry Seinfeld, of course you can come on" and the cameras would follow him in and he'd do ten minutes and bomb. It was horrible and almost surreal to watch Jerry Seinfeld, the funniest guy, do horribly.

But then the cameras would show him back in his office and he would change the joke. He would put a pause in here, he would change a word.  One of the jokes was about a disease and he would try scoliosis and he'd try chicken pox and he tried lupus and he realised that lupus was the funniest disease to have in this particular spot in the joke. And as the documentary progressed you saw this same joke evolve over time and the audience would start to chuckle and then start to giggle.

By the end he's doing a two-hour HBO special where he's telling the same joke that started as something completely unfunny in a nightclub and now it's a beautiful thing where he pauses and says "ah ah ah". That "ah ah ah" was planned. It's not Jerry forgetting the next word. It's absolutely tailored to getting the maximum response.

And I realised that's what I'm doing with my stories. What people thought was off the top of my head was actually a very rehearsed story. So this is the idea that I'm proposing. I want to see if anyone has any ideas how that concept might be related to testing.

MB: My experience with that, aside from working in a comedy club for three years, was one night when I saw George Carlin at the Comedy Store in LA. He was preparing his HBO special. He'd essentially written for something in the order of eight or nine months, then he took it to the Comedy Store for about six weeks, if I recall correctly, and worked it out. He took the material that he'd written and read it off a clipboard, which is, for standup comedy, almost unheard of. Nobody in standup ever did this, in my observation, other than Carlin.

And he explained it to us. He said this is my process for doing this HBO special. I do a year's worth of material and I throw it out at the end of each year. I can't do the same material in a special year after year. I spend these weeks rehearsing this, repeating it, memorizing it. And then the whole time I'm working on the timing, I'm seeing what works, what doesn't, I'm editing.

Seeing him speak in public, he's very witty and off-the-cuff and articulate in all kinds of ways, but the act was very, very polished and very well-rehearsed. So well-rehearsed that you couldn't tell how well-rehearsed it was. Very similar to what Damian said, but even more extreme.

That's interesting because it's the opposite of the way most comics do things. They try little things and see if they work. He was working in a scripted way rather than an exploratory one. But he was exploring the script and how to perform the script.

DS: I recently went to the Comedy Store in LA to see Kevin Nealon and Dana Carvey from Saturday Night Live. They hosted a few younger comedians in what they call New Material Night. Comedians come on for the sole purpose of trying brand new jokes to see if they float or sink.

Carvey brought with him a piece of paper and gave it to an audience member and said "pass this around and everybody just pick a topic off there and yell it up". Someone would yell "airplane" and he'd tell a joke; if it got a big laugh he'd note it down and if not he'd say "scratch that one off the list". It was amazing to see a comedian do that style of comedy, trying the material and testing it in that way.

JW: How is this related to testing and testers? Number one, testing something is a learned skill. Some people have more talent going in than others but if you don't work at it the way these comedians do ... as Michael says it looks so spontaneous because it's so well-rehearsed.

I had the experience, many years ago, after I published The Psychology of Computer Programming, of being invited to Carnegie Mellon where they were running an experiment timing people as they debugged a program by reading it.

As a psychologist, the first thing I wanted to do was try the experiment out as a subject. So they gave me the thing and I opened it and pointed at the bug. The guy with the stopwatch said "you're supposed to tell me what you're doing" and I said "I'll tell you what I'm doing - there's the bug." It was an off-by-one error.
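An aside for readers who haven't met the term: an off-by-one error is a loop or index bound that is wrong by exactly one, so one element is skipped or one too many is visited. Here is a minimal sketch in Python - purely illustrative and hypothetical, not the program from the experiment, which isn't reproduced here:

    # Hypothetical illustration of an off-by-one error (not the
    # experiment's actual program): the loop starts at index 1,
    # so values[0] is silently never counted.
    def sum_first_n(values, n):
        total = 0
        for i in range(1, n):  # bug: should be range(n)
            total += values[i]
        return total

    # Fixed version: range(n) covers indices 0 to n-1 inclusive.
    def sum_first_n_fixed(values, n):
        total = 0
        for i in range(n):
            total += values[i]
        return total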

It took me some time to get him to stop the stopwatch. And he said "you can't do that. We've had hundreds of people and nobody's done it in less than six minutes and you did it in six seconds, spontaneously." And I asked whether I got it right and he said "Yeah, but ..." and he couldn't finish the sentence.

It looked to him like I was just being spontaneous. I asked who he had tested before and he said "Oh, they were experienced programmers. Some of them had written 10, 20 or 30 programs." I did a little estimate and said "in my career so far I've looked at over 10,000 programs..."

And when I look at a program it's like a comedian telling a joke. This is not the first joke I ever told. It's not even the first time I ever told that joke. There are many studies showing that something different happens after you've played chess or performed music 10,000 times. It's the same with testing. Repetition enhances skill, and that's the first point.

The second point may be even more important and I don't think I've ever heard anyone discuss it. One of the biggest things you do as a tester is that you are modelling behaviour for your audience. If you make an error report and you bungle it and don't convince them, or you make a wrong call and get challenged, how do you respond?

If you respond to being tested in a way that you don't want them to respond,  then you're teaching them. What you want to teach them is: listen to what I'm saying, think about it before you respond, and then do something about it.

If you don't act in that way then your audience, the people you're testing for, won't act in that way.

JT: There are different audiences though, aren't there? In a small group, or one-to-one, you get a different reaction from the person you're joking with than when you're a comedian on stage with 1000 people watching. Is it the same? Are you being tested in both cases? Differently?

JW: One of the mistakes you can make when testing your presentation is to look at the person who is responding the best. That is, most favourably. If you really want to do this right you should look at the person who is not responding and find out why they're not responding, why they don't understand your presentation. It's a very common mistake - one person is smiling and the others are frowning. You'll look at the smiling person.

MB: There's an equally bad mistake you can make, which is to dedicate all of your attention to the sole frowning person.

JW: Absolutely!

See also Intro, Part 2 and Part 3.

