

Showing posts from January, 2017

Elis, Other People

I've written before about the links I see between joking and testing - about the setting up of assumptions, the reframing, and the violated expectations, amongst other things. I like to listen to The Comedian's Comedian podcast because it encourages comics to talk about their craft, and I sometimes find ideas that cross over, or just provoke a thought in me. Here are a few quotes that popped out of the recent Elis James episode.

On testing in the moment: Almost everyone works better with a bit of adrenaline in them. In the same way that I could never write good stuff in the house, all of my best jokes come within 20 minutes to performing or within 20 minutes of performing ... 'cos all of my best decisions are informed by adrenaline.

On the value of multiple perspectives, experiences, skills: I've even tried sitting in the house and bantering with myself like I'm in a pub because I hate the myth that standups are all these weird auteurs and we sho...

Cambridge Lean Coffee

This month's Lean Coffee was hosted by Redgate. Here are some brief, aggregated comments and questions on topics covered by the group I was in.

Why 'test' rather than 'prove'?

- The questioner had been asked by a developer why he wasn't proving that the software worked.
- What is meant by proof? (The developer wanted to know that "it worked as expected")
- What is meant by works?
- What would constitute proof for the developer in this situation?
- Do we tend to think of proof in an absolutist, mathematical sense, where axioms, assumptions, deductions and so on are deployed to make a general statement? ... remember that a different set of axioms and assumptions can lead to different conclusions.
- In this view, is proof 100% confidence?
- There is formal research in proving correctness of software.
- In the court system, we might have proof beyond reasonable doubt ... which seems to admit less than 100% confidence. Are we happier with that?...

Listens Learned in Software Testing

I'm enjoying the way the Test team book club at Linguamatics has been using our reading material as a jumping-off point for discussion about our experiences, about testing theory, and about other writing. This week's book was a classic, Lessons Learned in Software Testing, and we set ourselves the goal of each finding three lessons from the book that we found interesting for some reason, and making up one lesson of our own to share. Although Lessons Learned was the first book I bought when I became a tester, in recent times I have been more consciously influenced by Jerry Weinberg. So I was interested to see how my opinions compare to the hard-won suggestions of Kaner, Bach and Pettichord. For this exercise I chose to focus on Chapter 9, Managing the Testing Group. There are 35 lessons here and, to the extent that it's possible to say with accuracy given the multifaceted recommendations in many of them, I reckon there's probably only a handful that I don't...

Speaking Easier

Wow. I've been thinking about public speaking and me. Wind back a year or so. Early in November 2015 I presented my talk, my maiden conference talk, the first conference talk I'd had accepted, in fact the only conference talk I had ever submitted, on a massive stage, in a huge auditorium, to an audience of expectant software testers who had paid large amounts of money to be there, and chosen my talk over three others on parallel tracks. That was EuroSTAR in Maastricht. I was mainlining cough medicine and menthol sweets for the heavy cold I'd developed and I was losing my voice. The thing lasted 45 minutes and when I was finished I felt like I was flying. Wind back another year or so. At the end of July 2014 I said a few words and gave a leaving present to one of my colleagues in front of a few members of staff in the small kitchen at work. I was healthy and the only drug I could possibly be under the influence of was tea (although I do like it strong). The thin...

Without Which ...

This week's Cambridge Tester meetup was a show-and-tell with a theme: Is there a thing that you can't do without when testing? A tool, a practice, a habit, a method that just works for you and you wouldn't want to miss it? Here's a list, with a little commentary, of some of the things that were suggested:

- Testability: mostly, in this discussion, it was tools for probing and assessing a product.
- Interaction with developers: but there's usually a workaround if they're not available ...
- Workarounds
- The internet: because we use it all the time for quick answers to quick questions (but wonder about the impact this is having on us).
- Caffeine: some people can't do anything without it.
- Adaptability: although this is like making your first wish be infinite wishes.
- People: two of us suggested this. I wrote my notes up in Testing Show.
- Emacs
- Money: for paying for staff, tools, services etc.
- Visual modelling: as presented, this was mostly about ...

Testing Show

This week's Cambridge Tester meetup was a show-and-tell with a theme: Is there a thing that you can't do without when testing? A tool, a practice, a habit, a method that just works for you and you wouldn't want to miss it? Thinking about what I might present I remembered that Jerry Weinberg, in Perfect Software, says "The number one testing tool is not the computer, but the human brain — the brain in conjunction with eyes, ears, and other sense organs. No amount of computing power can compensate for brainless testing..." And he's got a point. I mean, I'd find it hard to argue that any other tool would be useful without a brain to guide its operation, to understand the results it generates, and to interpret them in context. In show-and-tell terms, the brain scores highly on "tell" and not so much on "show", at least without a trepanning drill. But, in any case, I was prepared to take it as a prerequisite for testing so I tho...

State of Play

The State of Testing Survey for 2017 is now open. This will be the fourth iteration of the survey and last year's report says that there were over 1000 respondents worldwide, the most so far. I think that the organisers should be applauded for the efforts they're putting into the survey. And, as I've said before, I think the value from it is likely to be in the trends rather than the particular data points, so they're playing a long game with dedication. To this end, the 2016 report shows direct comparisons to 2015 in places and has statements like this in others:

We are starting to see a trend where testing teams are getting smaller year after year in comparison with the results from the previous surveys.

I'd like to see this kind of analysis presented alongside the time-series data from previous years and perhaps comparisons to other relevant industries where data is available. Is this a trend in testing or a trend in software development, for instance...