
Cambridge Lean Coffee


This month's Lean Coffee was hosted by Linguamatics. Here are some brief, aggregated comments and questions on the topics covered by the group I was in.

As a developer, how can I make a tester's job easier?

  • Lots of good communication.
  • Tell us about the test coverage you already have.
  • Tell us what it would be useful for you to know.
  • Tell us what you would not like to see in the application.
  • Tell us what is logged, where, why, when.
  • Tell us what the log messages mean.
  • Tell us how you think it's supposed to work.
  • Show us how you think it's supposed to work.
  • Give us feedback on our testing - what's helping, what isn't.
  • Offer to demonstrate what you've done.
  • Say what you think are the risky areas, and why.
  • Say what was hard to get right, and why.
  • Recognise that we're not there to try and beat or show you up.
  • Help us find our unknown unknowns by sharing with us.

How can we help you, as a developer?

  • Give good repro steps in your reports.
  • Help me to understand the ambiguous requirements.
  • Ask your questions, they really do help.
  • Don't accuse.
  • Have specific details of the issues you observed.
  • Understand that developers can feel defensive of their work.
  • Tell me when you see something good, or something that works.

How do you avoid or mitigate biases?

  • Look back and review what you did, critically.
  • Check your assumptions or assertions.
  • Ask a developer.
  • Externalise your assumptions.
  • Peer review.
  • Rubber ducking.
  • Write (but don't send) an email to someone who might know (like rubber ducking).
  • Do something else for a bit and come back.
  • Be aware of the kinds of biases there are and then you can check for them.
  • Rule of three helps to generate perspectives.
  • Write down what you did, as this prompts thoughts about it.
  • Compare what you did to something else you could have done.
  • Remember to say "my assumption is" or "but perhaps that's my bias" out loud.

Should all testers know a programming language and, if so, which one?

  • It can help, e.g. to review code changes.
  • It can help with other aspects of testing, e.g. data generation.
  • Which language? Shell, because it's almost always just there. 
  • I like python.
  • Should is a strong verb. Understanding the need would help to answer.
  • Testers shouldn't feel forced to learn a programming language ...
  • ... but they should understand the risks of not doing so (e.g. in recruitment, or by lacking a powerful tool).
  • It helps with software development jobs generally.
  • So does other technical knowledge, e.g. of HTTP.
  • Reading a language helps in testing.
  • There's an analogy to speaking English in a foreign country - it helps to have a bit of the local language.
  • Enables more empathy with the developer - probably less of the "we'll just fix that" mentality.
  • When recruiting, I don't care about the language.
  • It's best when there's support and a community to help learn programming.
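As a tiny illustration of the data-generation point above, here's a sketch using only standard shell tools, since shell is "almost always just there". The filename, field layout, and domain are invented for the example:

```shell
# Generate five rows of throwaway CSV test data (username, email, age)
# using nothing beyond seq and printf. The schema here is hypothetical.
for i in $(seq 1 5); do
  printf 'user%03d,user%d@example.com,%d\n' "$i" "$i" $((20 + i))
done > users.csv

# Quick sanity check on what was generated.
wc -l users.csv
head -n 1 users.csv
```

The same loop can be stretched in the directions a tester usually cares about: longer values, more rows, awkward characters, duplicate keys.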

What testing tool would you like to be able to wave a magic wand and just invent?

  • A universal standard for test case management and reporting software.
  • A tool to create a visual map of product architecture which can be overlaid with code changes, all aspects of testing, recent issues.
  • ... and also predict new issues!
  • Something that can reliably map specification items to test cases, and show what needs to be changed when the spec changes.
  • A reliable, stable GUI testing tool.
  • Something that puts a team straight into the sweet spot of great communication, talking directly, working with each other.
  • A Babel fish that means that the listener understands exactly what was meant by the speaker.

Do you bring questions to Lean Coffee or make them up when you're here?

  • Both.
  • I think about them in the shower.
  • I try to come with one then make some up.
  • I take notes during the month, then try to remember them on the way.
  • I think on the way in.
  • Would some kind of a board, perhaps Trello, be useful to store them between meet ups?
  • Perhaps it'd lead to discussion in Trello?
  • Perhaps people would come prepared with arguments for the issues they'd seen on Trello.
  • Topics might be stale by the time the meetup comes around.
  • Spontaneous is good!
  • Person-to-person is good!
  • Paper and pencil mean you think differently.
