
AST Lean Coffee

My second Lean Coffee in a week, this one online with the Association for Software Testing. Here are a few aggregated notes from the conversation.

Why do people want to speak at conferences, and can we support them to get what they need in other ways?

  • Lots of noise recently on Twitter about people being rejected, and discussion of tactics for getting in.
  • So why do people want to speak at conferences?
  • Increase their brand.
  • Be more employable.
  • Be better known.
  • Go to the conference for free.
  • Company will only pay if they are speaking.
  • Share what I've learned.
  • Share my story.
  • Challenge yourself.
  • Because they see others doing it.
  • Personal access to "big names."
  • Conferences always have the same speakers.
  • Do people need better access to conferences?
  • Can be a vicious cycle: accepted because we know you; we know you because you speak.
  • Perhaps the return to in-person conferences has increased the demand for speaking slots.
  • People don't know how to sell their talk to conferences.
  • Lots of people stick the same proposal into multiple conferences, not tailored.
  • I get inspiration from conferences: testing is a lot bigger than I remember day-to-day.
  • If you want to get known, other platforms might not be so good.
  • Conference talks are often amplified widely on social media.
  • What else can we do to boost signal?
  • Magazines, Podcasts, Blogs, Twitter, LinkedIn, Peer Conference, YouTube, ...

What one thing would you prefer never to have to do again (as a tester)?

  • Repeatedly explaining something to someone who is lazy or doesn't want to open their mind.
  • Justifying why it's useful having a tester on a team or doing testing.
  • ... One of the best things is explaining to people who want to learn (teaching vs justifying).
  • ... Many different people in project teams think they can tell testers how to work.
  • ... I'm happy to question my own practices for improvement.
  • ... Providing good feedback and constructive criticism is positive.
  • Explaining why you can't automate all the testing.
  • ... Software development has an undercurrent that is trying to destroy good work by testers.
  • Being in a conference where some smartass explains why everyone can be as successful as them.
  • Sitting with a developer while they fix a bug I reported (because they feel they need that reassurance).
  • ... or working with developers that think testing is someone else's job.

Working conditions for testers

  • Inspired by a Twitter thread.
  • The reputation of the gaming industry is dire.
  • Why are big-name companies apparently abusing game testers?
  • What is the culture like?
  • How can we change it?
  • What other industries are bad?
  • Outsourcing companies have been known to abuse juniors particularly.
  • I have worked with people who tell all sorts of tales about game testing.
  • ... a culture of short-term contracts.
  • ... a funding model that means they need to be able to drop people after product release.
  • Tech has a weird relationship to unions.
  • Reasonable pay in tech makes people think we don't need unions.
  • Geography is important, e.g. China and India have a reputation for poor worker conditions.
  • ... e.g. a 40-hour-week contract but employers expect 996 work (9am to 9pm, six days a week).
  • Is it worse for testers than other software professionals?
  • But some people have no choice.

Tactics for learning while testing

  • You have work to do, and it has a deadline.
  • But you also want to learn new tools, new approaches, new things about your domain.
  • Be prepared to gamble some specific time on trying things.
  • ... but abandon it if it's not working out in the time.
  • ... and come back to it on another occasion.
  • Give yourself permission to learn.
  • Developers give themselves time and spike tickets.
  • People with seniority need to get better at giving time to their staff.
  • Senior testers tend to be better at making time for learning.
  • Discipline is needed.
  • I only do it when the situation forces it because of time pressure.
  • Look for an opportunity to do something new.
  • You need to practice a thing to get deep with it.
  • Perhaps analyse tasks before jumping into them, for learning opportunities.
  • Ask "can I try this experiment in a different way?"
  • We have a Community Day at work.
  • If you don't have that, then use stealth approaches!
  • Unionise!

Image: https://flic.kr/p/63wFf7
