Shall We Ask the Magic 8-Ball?


Identifying a technology need is usually pretty easy - your team will complain at every opportunity, however tangential, about how some application is too complicated or is not powerful enough or has a major missing feature or doesn't integrate with other applications or you can't search it or it's too slow or it uses different conventions to the other tools or there was something better at their last job or they just plain don't like it.

You'll usually agree. And you'll usually want to wait for a better time (non-existent, and you know it) to think about it, because introducing a new technology can be time-consuming, hard, and risky work.

Eventually, events will overtake you. When that happens, I start by drawing up a list of application-specific requirements, prioritised of course, and then add this basic set of parameters that I want to compare across any candidate tools:
  • user community: is it active? how is the tool viewed?
  • support: forums, bug database, blogs, etc.
  • developer community: are people building and building on the tool?
  • maturity: will the tool be changing under your feet?
  • regular releases/fixes: is the tool being maintained?
  • dependencies/requirements: what else needs to be installed?
  • deployment: does it use standard packages? is it easy to update?
  • integration: does it offer any APIs or ability to customise?
  • price: include maintenance, per-user licence fees, your own costs, etc.
Once you've got your comparison factors, you can start to look for candidate applications. Almost certainly someone will have trodden this path before, so try searching for lists of tools or reviews and comparisons of different products to give you a starting point.

In a short initial phase, identify as many tools as you can - be inclusive at this stage, so bring in anything that looks remotely possible - and quickly grade them against your requirements. Don't spend long on this and don't be afraid to put "don't know" entries in the table to start with. Sometimes you'll find that a tool does something you hadn't thought of that you might like. Don't be afraid to add it to your comparison table as you go. What you're trying to do here is discover (a) classes of tool, (b) obvious non-starters and (c) obvious candidates for a deeper review.

Once you've done that you can rank and cluster the tools based on your criteria and choose a selection (e.g. one from each class you've identified) to take to the next round. The next round has to be more specific to your intended usage. It might be another review, based on deeper reading about the tools, or it might be trial installations, or you might already have identified one outstanding candidate, in which case you're done.
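If it helps to see the ranking mechanics written down, here's a minimal sketch assuming a simple grade-times-priority-weight scheme. The tool names, grades and weights are made up for illustration; they're not taken from any real comparison:

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class ToolComparison {

    // Higher-priority requirements carry more weight (weights are illustrative).
    private static final Map<String, Integer> WEIGHTS = Map.of("P1", 3, "P2", 2, "P3", 1);

    public static void main(String[] args) {
        // requirement -> priority
        Map<String, String> requirements = new LinkedHashMap<>();
        requirements.put("programmatic access to GUI components", "P1");
        requirements.put("supports testing Swing", "P1");
        requirements.put("easy for Dev to run alongside unit tests", "P2");
        requirements.put("ability to drive other products", "P3");

        // tool -> (requirement -> grade 0..2); a missing entry means "don't know"
        Map<String, Map<String, Integer>> grades = new LinkedHashMap<>();
        grades.put("Tool A", new HashMap<>(Map.of(
                "programmatic access to GUI components", 2,
                "supports testing Swing", 2)));
        grades.put("Tool B", new HashMap<>(Map.of(
                "programmatic access to GUI components", 0,
                "ability to drive other products", 2)));

        // Crude weighted score: sum of grade * weight over the answers we have.
        grades.forEach((tool, toolGrades) -> {
            int score = 0;
            for (Map.Entry<String, String> req : requirements.entrySet()) {
                Integer grade = toolGrades.get(req.getKey()); // null == "don't know"
                if (grade != null) {
                    score += grade * WEIGHTS.get(req.getValue());
                }
            }
            System.out.println(tool + ": " + score);
        });
    }
}
```

Treat the numbers as a conversation starter rather than a decision: the point of the table is to expose where the "don't know" gaps are, not to hand the choice to a formula.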

As an example, when we were looking for GUI automation tools recently we had 20 or so requirements including these, with their priorities:
    • P1 programmatic access to GUI components
    • P1 supports testing Swing
    • P1 allows versioned source control
    • P2 easy for Dev to run alongside unit tests
    • P3 ability to drive other products
    • P3 works with applications and applets
Our initial list of around 30 tools included pyWinAuto, Win32::GuiTest, Abbot, AutoHotkey, SIKULI, FEST, SilkTest and Squish, and we identified three classes of tool:
    • purely record/playback
    • purely programmatic
    • hybrid
We trialled at least one of each class, attempting to create a small set of tests we identified as interesting for our product, and ultimately chose FEST, not least because we can share skills and test cases with the Dev team. They'll use the library for unit tests and we'll drive it using JUnit for running application-level tests too.
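For flavour, here's a minimal sketch of what driving Swing through FEST from a JUnit test can look like. The little frame it builds, and all of the component names, are hypothetical stand-ins for an application under test rather than anything from our product:

```java
import java.awt.FlowLayout;

import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.JTextField;

import org.fest.swing.edt.GuiActionRunner;
import org.fest.swing.edt.GuiQuery;
import org.fest.swing.fixture.FrameFixture;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class GreetingFrameTest {

    private FrameFixture window;

    @Before
    public void setUp() {
        // FEST requires Swing components to be created on the Event Dispatch Thread.
        JFrame frame = GuiActionRunner.execute(new GuiQuery<JFrame>() {
            @Override
            protected JFrame executeInEDT() {
                // A stand-in for the application under test: a field, a button and a
                // label, wired so that the button copies a greeting into the label.
                JFrame f = new JFrame("demo");
                f.setLayout(new FlowLayout());
                JTextField name = new JTextField(10);
                name.setName("name");
                JLabel greeting = new JLabel(" ");
                greeting.setName("greeting");
                JButton greet = new JButton("Greet");
                greet.setName("greet");
                greet.addActionListener(e -> greeting.setText("Hello " + name.getText()));
                f.add(name);
                f.add(greet);
                f.add(greeting);
                f.pack();
                return f;
            }
        });
        window = new FrameFixture(frame);
        window.show();
    }

    @After
    public void tearDown() {
        window.cleanUp(); // releases the robot and closes the window
    }

    @Test
    public void greetsByName() {
        // Drive the GUI programmatically, looking components up by name.
        window.textBox("name").enterText("FEST");
        window.button("greet").click();
        window.label("greeting").requireText("Hello FEST");
    }
}
```

Part of the attraction for us was exactly this shape: the same library and the same JUnit runner whether the test sits at unit level or application level.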

We invested effort into choosing this technology to give ourselves the best chance of making the right choice first time but, as so often happens, we won't know whether it really does everything that we want until we're much further down the road. It'd be so much easier if we could just ask the 8-Ball.
