
Posts

A Testing Patina

A couple of weeks ago I was wondering what testing patina might look like. What do I mean by patina in this context? I think I'm looking for artefacts and side-effects of work, visible on tools and places of work, that demonstrate something about the length of time, depth, and breadth of work, and ways of working. I'm seeking things that other practitioners could recognise and appreciate as evidence of that work. But, and this is important, the patina is not the work itself. So here's the list of things I've come up with so far: Patina might be visible in an IDE I've been using for a long time through a litter of plug-ins, some for defunct tooling, or obsolete languages, with multiple plug-ins for the same file format, and so on. Patina might be visible at work from my Confluence home page where I collect links to the internal talks and demos I've done. (Top-right in the image at the top, and deliberately obfuscated I'm afraid.) Patina might be visible in
Recent posts

The Perpetual Apprentice

I paired with my friend Vernon last week. He mentioned it in a blog post afterwards: Watching my colleagues Lisi and James work is like watching wizards cast spells. He's very kind, and I do like a pointy hat, but there was no sorcery involved, simply intentful exploratory testing. What do I mean by that? In this case, I mean that we started with a question and looked for ways that we could find information to help us to answer it. This particular question was very open because we didn't have a very specific oracle: can we find examples where the output of the system under test might be problematic? What did we do? CODS: Control, Observe, Decompose, Simplify. We could use the debugger to trace execution to a few pivotal functions and see what the application was doing (control, observe) but that was tiresome after a while. So we hacked the source code a little (conceptually simplify) so that variables were available to be d
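The "hack the source a little" move above can be sketched in miniature. This is a hypothetical illustration, not the actual code from that session: a pivotal function is instrumented to record its intermediate values, so they can be inspected after a run instead of stepping through a debugger each time.

```python
# Hypothetical sketch of instrumenting a pivotal function so its
# internals can be observed without a debugger. The function and its
# computation are stand-ins invented for illustration.

observed = []  # collected (input, intermediate, output) triples

def pivotal(x):
    intermediate = x * 2       # stand-in for the real computation
    result = intermediate + 1
    observed.append((x, intermediate, result))  # expose internals
    return result

for value in [1, 2, 3]:
    pivotal(value)

print(observed)  # → [(1, 2, 3), (2, 4, 5), (3, 6, 7)]
```

The point of the hack is the same as in the excerpt: once internal state is controllable and observable, exploration gets cheaper.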

A Testing Patina?

I was in the shed earlier on, spraying the footrest I've made for my daughter a metallic purple. As I ghosted the can back and forth, a waft of paint drifted onto the workbench adding another colourful contribution to the happenstance Pollock that's been building up since I fished the boards out of a skip in the centre of town *cough* years ago. That's not to mention the various scratches, cuts, dents, dinks, and drill holes that pepper the surface. Yes, I thought to myself, this bench has a real patina. On seeing my bench, a fellow maker and fixer would recognise it. The layers and shapes are archaeological evidence of the variety of activities at different times, with different materials, operated on by different tools. Which got me thinking: how and where am I building up patina in my work in testing? And how would anyone ever see it? And would they be able to appreciate it if they did? Edit: I followed up with a few ideas in A Testing Patina

A Qualified Answer

The Association for Software Testing is crowd-sourcing a book, Navigating the World as a Context-Driven Tester, which aims to provide responses to common questions and statements about testing from a context-driven perspective. It's being edited by Lee Hawkins who is posing questions on Twitter, LinkedIn, Slack, and the AST mailing list and then collating the replies, focusing on practice over theory. I've decided to contribute by answering briefly, and without a lot of editing or crafting, by imagining that I'm speaking to someone in software development who's acting in good faith, cares about their work and mine, but doesn't have much visibility of what testing can be. Perhaps you'd like to join me? --00-- "Whenever possible, you should hire testers with testing certifications" Interesting. Which would you value more? (a) a candidate who was sent on loads of courses approved by some organisation you don't know and ru

ChatGPT Whoppers

Over Christmas I thought I'd have a look at ChatGPT. Not to "break" it, or find more examples of its factual incorrectness, but to explore it sympathetically, for fun. And it was fun. In particular, the natural language generation and understanding capabilities of the system are really impressive. However, even without trying it's not hard to expose weaknesses in the tool. So much so that I doubt I would have bothered to blog about what I found, except that I enjoyed the accidental semantic connection between a handful of my observations. I asked for ASCII art to celebrate my 600th blog post on software testing and got this whopper! . .: :: :; ;: .;; .;;: ::;: :;;: ;;;:

600 Bad Ideas?

That's my 599 blog posts from October 2011 to December 2022. Well, it's one view of them, a view that demonstrates that I show up and I ship. For me, these are useful, satisfying, and creative acts. I like Seth Godin on creativity. In an interview with Thought Economics he says: What it means to be creative is pretty simple. It’s to do something human, something generous, something that might not work. Tick, tick, tick ... I hope. Godin is also interesting on achieving success. I don't know whether I'd say Hiccupps is successful in any objective sense. Godin's take is that you have to show up first: Commitment to the process, the practice and the method comes before the success. So, success? I wrote a music fanzine when I was younger. More than once it was described as the zine-writer's zine. Looking for a reference to that, I found this: "... zines that stood out, such as Robots and Electronic Br

The Show

Episode 20 of Oddly Influenced, Brian Marick's podcast, is concerned with Julian Orr's book, Talking About Machines. Orr makes much of war stories, the tales that colleagues tell each other about work they've done to help solve a live problem, commiserate about something that's gone wrong, or build culture. I recognise this from the teams I've been part of, communities of practice I've participated in, and meetups and conferences I've attended and run. Those stories establish our credentials, and to an extent our status, in our peer groups. Almost as an aside, the podcast mentions another kind of story, this one aimed not at the peer group but at those who are asked to assess their performance. The group of technicians followed by Orr in his book were evaluated in part by account managers at the companies they were assigned to; people usually very distant from the technical work. This led the technicians to go out of their way to make the outcomes of

Seat of the Pants

Yesterday, reading The Year Without Pants by Scott Berkun, this leapt off the page at me: Diversity of skill makes people self-sufficient. It's in a paragraph about the culture at Automattic, the people behind WordPress, when Berkun worked there around a decade ago. The paragraph continues: "They didn't need much help to start projects and were unafraid to learn skills to finish them ... They weren't afraid to get their hands dirty in tasks that in a mature engineering company would span the turf of three or four different job titles. That lack of specialization made people better collaborators since there was less turf to fight over." Last week at work I observed that my team's build pipeline was broken. I do not enjoy investigating this kind of infrastructure problem. The company Jenkins setup has a complex ecosystem with many moving parts, my mental model of how all the bits are interconnected is fuzzy, and in any case all the bits keep changing. The

Granularity Familiarity

I saw Maaret Pyhäjärvi's post on LinkedIn the other day. This line chimed strongly with me: Sense of lack of time. Someone asked how to have joy of discovery when feeling always pinched with time. We have in many cases lost control over time, and I have done work I have not necessarily appreciated on making time flexible - always seeing there is a next day and having no schedules and small slices of work. I feel like I do my work at various granularities across multiple dimensions and so, to begin to get this idea straight in my head, I tried to list some of them. It was harder than I thought it would be because so much of this is instinctive, intuitive, and in the moment. Given that, here goes draft 0.1. Hopefully I'll begin to feel more familiar with the idea and be able to revise the model later on. Scope Parcel of work. My team's practices are reasonably common, I think. Tasks are portioned into Jira tickets which progress out of a back

It's Great to Mutate

The product I'm working on at the moment is Ada, a symptom checking app. The basic idea is that users enter a few details about themselves and their current symptoms and are then guided through a series of questions which leads to them being given a list of probability-ranked conditions they might have based on their answers. As you'd expect in this field, with this kind of data, a lot of care and concern is taken to understand how the app performs and what caused any changes in that performance across releases. There are many layers of testing. In one of those layers we have medical test cases. (And these are literally cases in the medical sense.) Each one represents data about an individual who might present to a doctor with a particular set of symptoms, a given medical history, possible comorbidities and so on. Each also comes with a set of acceptable condition suggestions and other expectations about how the software should behave when asking this kind of user questions
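The shape of such a medical test case can be sketched as data. This is a minimal, invented illustration, assuming one plausible acceptance rule; the field names, values, and the rule itself are hypothetical and not Ada's real schema or criteria.

```python
# Hypothetical sketch of a "medical test case" as data. All field names
# and values are invented for illustration, not taken from Ada.

case = {
    "presenting_symptoms": ["headache", "fever"],
    "history": {"age": 34, "sex": "female"},
    "comorbidities": ["asthma"],
    "acceptable_conditions": {"influenza", "common cold"},
}

def suggestions_acceptable(suggested, case):
    """One possible acceptance rule (an assumption): the suggestion
    list passes if at least one acceptable condition appears in it."""
    return bool(set(suggested) & case["acceptable_conditions"])

print(suggestions_acceptable(["influenza", "migraine"], case))  # → True
print(suggestions_acceptable(["migraine"], case))               # → False
```

Representing cases as data like this makes it cheap to run the whole suite against each release and compare which cases pass, which is in the spirit of the performance-across-releases comparison described above.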