
Agile in the Ether


What I particularly like about Lean Coffee are the timeboxes. At Agile in the Ether yesterday it was an hour for the whole event and just eight minutes per topic. At that level, the investment is low and the potential returns are high: some ideas for right-now problems and background for those that will surely come later. On top of that, there's the possibility that I can share something that will be a win for someone else.

Here are my notes, aggregated from the conversations.

Ideas for coaching agile teams when you're only in there on a very ad hoc basis. How can you avoid disturbing them but still bring value? Is it possible?

  • The teams are typically overconstrained, which makes change difficult.
  • It's common for the coach to suggest an approach but not return for a couple of iterations.
  • By then the coach doesn't know whether the suggestion landed.
  • Can you build a relationship with someone in the team and have close communication with them to get the feedback?
  • Ideally this would be with someone who cares about improvements, and the contact would be very regular.
  • Find other people in the team who can evangelise and teach them to coach.
  • Remember that you can't do everything. 
  • Find a way to prioritise, e.g. just coach retros across all teams.
  • Try and get a coaching agreement with leadership to decide on a focus.
  • Find some leading indicators that teams need your help and focus on those teams.
  • Can you focus on a particular team for a longer period?
  • Let a successful team tell the stories of their success to others.
  • Remember that practices should be contextual and you can give different advice to teams at different stages. If these teams need it, perhaps be very directive.
  • See also the Dreyfus model of skill acquisition.

How to combat "rabbit holing" being used to strong-arm or shut down conversations.

  • You might accuse someone of rabbit-holing if you feel they are going way deeper into a topic than the context merits.
  • The phrase is being used to shut down conversations, not in meetings but outside of them.
  • "I've been told I'm rabbit-holing"
  • Can conversations around the topics be (more) facilitated to help get around this?
  • It sounds like "this is just semantics" or bike-shedding.
  • Different people have different perspectives and exposing that might help.
  • Can there be shared understanding of positions or a common goal?
  • Perhaps those who want high-level conversation can give context on why it should stay general.
  • Perhaps those who want low-level can explain why it's relevant and important.
  • Teach people to communicate well e.g. the pyramid principle.
  • A coaching trick to decide what level to speak at is to ask "Do you want a straight answer?"
  • Find ways to let people talk about the risks they perceive and then acknowledge them.
  • Use some kind of tool to surface concerns as a group: discovery template, Six Thinking Hats, Riskstorming.
  • Talk about the feelings around rabbit-holing separate from the meetings.
  • Find consensus-building techniques and increase psychological safety.
  • Ask "is anyone thinking about something they're not saying?"

The one best training, conference or qualification you've done, and why

Any tips on getting a team that has been so focused on delivering what's in front of them to start thinking about longer-term vision & goals?

  • The team don't seem to be able to break out of the low-level builder mindset.
  • Right now we need to be thinking at a higher, longer-term level.
  • Have you tried a different location? A change in environment might help to change the thinking patterns.
  • Throw the old backlog away. Reset the context!
  • Some teams love to be reactive!
  • Take a look at Good Strategy Bad Strategy.
  • Pick a small thing and focus on it.
  • Try OKRs. (That don't suck.) 
  • Take a look at Continuous Discovery Habits.
  • Pre-mortems (or futurespectives) put them in a mindset of having already done it.
  • The switch from detail to wide view can be hard. Ask what is blocking them from doing it.
  • Connect their work to the company's vision.

Image: https://flic.kr/p/boCumi

