Just the Fracts, Ma'am


Adam Knight spoke at the Cambridge Tester Meetup last night on Fractal Exploratory Testing, a topic he's blogged about a couple of times:

Fractals can be roughly defined as shapes that have similar properties at whatever level of magnification you apply to them. The Mandelbrot Set is a famously fractal shape and zooming into it exposes characteristics that make each image recognisably from the same family. Going 10x or 100x into some other image, say a photograph of my head, would not have the same effect.
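To make the self-similarity concrete, here's a minimal Python sketch - my illustration, nothing from the talk - that renders a coarse ASCII view of the Mandelbrot set. The centre and scale values are arbitrary; shrinking the scale around a point on the boundary shows similarly intricate structure reappearing at higher magnification.

# A minimal sketch (not from the talk): a coarse ASCII view of the
# Mandelbrot set. Shrink "scale" around a boundary point, e.g.
# centre=(-0.745, 0.113), to magnify and see similar structure reappear.

def mandelbrot_ascii(centre=(-0.5, 0.0), scale=1.5,
                     width=64, height=24, max_iter=60):
    cx, cy = centre
    for row in range(height):
        line = ""
        for col in range(width):
            # Map the character grid onto a window of the complex plane.
            c = complex(cx + (col / width - 0.5) * 2 * scale,
                        cy + (row / height - 0.5) * 2 * scale)
            z, escaped = 0j, False
            for _ in range(max_iter):
                z = z * z + c
                if abs(z) > 2:  # escaped: c is outside the set
                    escaped = True
                    break
            line += " " if escaped else "#"
        print(line)

mandelbrot_ascii()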

There's an analogy to be made with Exploratory Testing - in fact, with exploration of any kind - and this is reinforced by Adam's choice to cast exploration in terms of charters written in a concise but formal style, inspired by Elisabeth Hendrickson, along the lines of "Explore <area> with <resources> to achieve <aim>".

Each exploration uses appropriate testing approaches to attempt to achieve its aim, and sometimes succeeds. However, along the way it might expose another area of interest; or fail because it finds something else instead; or be blocked for some reason; or an assumption about the mission might prove false, invalidating the charter; or ...

Each of these outcomes can itself pose new questions, which can in turn inspire new charters: new explorations which will look just the same as the mission which spawned them in all relevant details. They will have a charter in the same format and the same kinds of testing techniques can be used to execute them.
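To make that recursion concrete, here's a hedged Python sketch - again my illustration, not anything Adam presented - in which a charter in the "Explore ... with ... to achieve ..." format spawns child charters of exactly the same shape:

from dataclasses import dataclass

@dataclass
class Charter:
    area: str       # what to explore
    resources: str  # with what
    aim: str        # to achieve what

def explore(charter, questions_raised, depth=0):
    # "Executing" a charter here just prints it; each question it raises
    # becomes a new charter of exactly the same shape, explored in turn.
    print("  " * depth + f"Explore {charter.area} with {charter.resources} "
                         f"to achieve {charter.aim}")
    for q in questions_raised(charter):
        explore(Charter(area=q, resources=charter.resources,
                        aim=f"an answer to: {q}"), questions_raised, depth + 1)

# Toy stand-in for whatever testing actually turns up along the way.
followups = {"login": ["password reset"], "password reset": ["expired tokens"]}
explore(Charter("login", "a test account", "confidence in the basic flows"),
        lambda c: followups.get(c.area, []))

The point of the sketch is only that the children are structurally indistinguishable from their parent; the toy followups dictionary stands in for whatever an exploration actually finds.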

In a fractal, you can magnify any part to any degree. It's a mathematical paradox that the measured length of a coastline tends to infinity: the greater the level of magnification, the greater the possible resolution of the ruler, and the more small deviations can be observed and measured. And which of us hasn't from time to time got so engrossed in a testing task that we've burned through hours of investigation focusing on increasingly detailed analysis of some aspect of a product and still thought that there was more we could do?
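The Koch curve, a standard fractal, gives a worked version of the coastline effect: each refinement replaces every segment with four segments a third as long, so the measured length grows by a factor of 4/3 at every step and is unbounded. A few lines of Python (my illustration, not from the talk) show it:

# Each refinement of the Koch curve shrinks the "ruler" to a third of
# its previous length and multiplies the measured length by 4/3.
length = 1.0
for level in range(6):
    print(f"ruler = (1/3)^{level}: measured length = {length:.3f}")
    length *= 4 / 3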

Adam's insight in this talk wasn't particularly to do with exploratory testing, nor even with how thinking about fractals can help a tester in their testing missions, but much more with how describing testing in this recursive way can help to explain why, for example:
  • on a project with 10 requirements, there aren't merely 10 test cases to be executed before the product is shipped
  • estimation of testing time "required" is not necessarily a simple calculation (a toy calculation follows this list)
  • focus on different areas of the system under test might differ radically, depending on what exploration in those areas found
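For instance - with invented numbers, purely to illustrate the shape of the problem - if each charter spawns follow-up charters at some average rate, the total work grows geometrically with the depth you choose to explore to:

# A back-of-envelope sketch; branching and depth are assumed, not measured.
requirements = 10
branching = 2      # average follow-up charters per charter (assumed)
depth = 3          # how many levels deep exploration goes (assumed)

total = sum(requirements * branching ** level for level in range(depth + 1))
print(f"{requirements} requirements -> {total} charters at depth {depth}")
# 10 requirements -> 150 charters at depth 3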

He talked about how he has used fractals to explain testing to non-testers and particularly to non-testers on the business side of the company. They might not "get" testing but they can understand a picture of it which shows successive rounds of investigation defining the differences between the specification and the product that was delivered. Increasing the level of investigation in a particular area increases the resolution with which the size and shape of that area is understood.

Decisions from the business, based on what's known at any point, can then be seen to guide further testing: into a new area, or to a higher magnification of some existing area, whichever is most important for getting the information that will motivate the next round of decisions. The alert members of the business side might then see that they have become engaged in a fractal process too.
Image: https://flic.kr/p/azddT6
