Four lightning talks at the Cambridge Tester meetup at Linguamatics last night, four topics apparently unrelated to one another, four slices through testing. Yet, as I write up my notes this morning I wonder whether there's a common thread ...
Samuel Lewis showed us the money. Or, at least, where the money is held before being dispensed through those holes in the wall. He included some fascinating background information about ATMs (and a scary security story) but the thrust of his talk was the risks and corresponding mitigation strategies in a massive project to migrate the ATMs for a big bank to a new application layer and OS (more scariness: many are still running Windows XP).
Much of the approach involved audit trails of various kinds, with the customer and other stakeholders sharing their road maps and getting a view of the test planning and strategy in return. I enjoyed that the customer themselves was considered a risk (because they had a reputation for changing their minds) and contingency was built in for that. Samuel described the approach as waterfall and spoke in praise of that kind of process for this kind of project (massive, regulated, traditionally-minded customer). I can accept that; I certainly don't have personal experience there to argue against it. But it was striking to me that one of the factors that contributed to the successful completion of the project was a personal relationship with a developer which led to the testers getting an unofficial early build to explore.
If you want to get your way, make your case fit the worldview of the person you need to convince. That's one of the three persuasiveness spanners (drawn from Robert Cialdini's principles of persuasion) in Sneha Bhat's toolbox. Another is to set up a context in which the other person feels some obligation to you: help them first and they'll likely help you back. The third she shared with us was to find "social proof", that is, some evidence that someone else, someone respected, endorses the perspective you're proposing.
She touched a little on how persuasion might turn into coercion and gave us a useful acronym from Katrina Clokie for framing a conversation in which you're asking for something: SPIN. Identify the Situation and the Problem, explain the Implication, and describe the Need you have to resolve it. I've heard the talk a couple of times now and, while everything I've said so far is useful, the phrase that sticks in my mind is that it's important to prepare, and then deliver the message with "passion, compassion, and purpose".
Andrew Fraser started his talk with the request to criticise it. I was already interested (a talk about testing in the abstract, with philosophical tendencies, wrestling with big-picture questions is my thing, and I don't care who knows it) but at that point I was hooked. As far as I understood it, Andrew's basic argument runs something like this: all metrics can be gamed; you can view tests as metrics; so tests can be gamed, i.e. developers will code to the tests; the conditions that the tests check for may not represent anything a customer cares about; ergo software that maximises conformity to the wrong thing is produced.
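To make the "coding to the tests" step concrete, here's a minimal sketch of my own (the total_price function and its 20% tax rule are my invention, not Andrew's): the tests pin down only two example baskets, so an implementation can pass them by recognising those exact examples without ever implementing the rule the customer cares about.

# A hypothetical spec: total_price should return the basket total
# including 20% sales tax.

def test_single_item():
    assert total_price([10.00]) == 12.00

def test_two_items():
    assert total_price([10.00, 5.00]) == 18.00

# An implementation that "games" those tests: it maximises conformity
# to the checks without implementing the pricing rule itself.
def total_price(prices):
    if prices == [10.00]:
        return 12.00
    if prices == [10.00, 5.00]:
        return 18.00
    return 0.0  # every basket the tests don't mention is silently wrong

# A faithful implementation would instead be something like:
#     return round(sum(prices) * 1.2, 2)

Both versions turn the dashboard green, but only one of them is the product the customer wanted.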
Phew. I can't pretend to agree, but I enjoyed it so much that afterwards I asked to be a reviewer of the bigger essay that this short piece was abstracted from. From my notes: so this is anti-TDD? so this is like over-fitting to a model? so all the tests need to be specified up front? but surely if you can "train" your developers (in some Skinnerian sense) to code in particular ways you can use it to the advantage of the product?
Finally I ran through an early version of The Anatomy of a Definition of Testing which I'll be delivering at UKSTAR next month. It's a personal definition, one that helps me to do the work that I need to do in my context.
Four diverse talks then, but what thread did I divine running through them? Well perhaps it reflects something about me, about what I took from the talks, or about what I want to impose on them. It seems to me that people are at the heart of these stories: a personal relationship delivered the early build, a persuasion conversation involves human emotions on both sides, it's people that intuitively game metrics, and a personal definition is really only about the person. Jerry Weinberg was quoted during the questions for my talk and I doubt he'd be surprised to find this kind of theme in talks around software, his second law of consulting being "No matter how it looks at first, it's always a people problem."
Image: https://flic.kr/p/njHqzD
Comments
I think I agree with Andrew Fraser, but I think he is addressing a more fundamental problem than just TDD.
Testing, as we perform it, is quantitative. Even exploratory testing is difficult to value. Much testing is driven by a hope that testing might help.
Scripted and automated tests are in some ways worse, as they only assert what they've been programmed to assert. They only produce data, not knowledge.
But I'm actually inclined to say that most of the testing we do produces data, signals, streams of information, and often only very little real experience of the risks of the product we're testing.
Narratives about the testing help, but only if we chew on them and try to understand the story they tell. I think we need to learn how to qualify testing's ontological results, i.e. make them explicitly valuable from business, operational, and development perspectives.
The core of the problem, I think, is that there is too little research happening in testing. Probably because IT is still a very young industry, and testing is widely seen as a hands-on activity, not an analytical one.
Jess Ingrassellino shared an interesting perspective here: jessingrassellino.com/communities-practice/
/Anders