
Posts

Showing posts from June, 2016

The Rat Trap

Another of the capsule insights I took from The Shape of Actions by Harry Collins (see also Auto Did Act) is the idea that the value some technology gives us is a function of the extent to which we are prepared to accommodate its behaviour. What does that mean? Imagine that you have a large set of data to process. You might pull it into Excel and start hacking away at its rows and columns, you might use a statistical package like R to program your analysis, or you might use command-line tools like grep, awk and sed to cut out slices of the data for narrower manual inspection. Each of these involves compromises, for instance:

- some tools have possibilities for interaction that other tools do not have (Excel has a GUI which grep does not)
- some tools are more specialised for particular applications (R has more depth in statistics than Excel)
- some tools are easier to plug into pipelines than others (Linux utilities can be chained together in a way that is apparently trickier in R)

Making the Earth Move

In our reading group at work recently we looked at Are Your Lights On? by Weinberg and Gause. Opinions of it were mixed but I forgive any flaws it may have for this one definition:

A problem is a difference between things as desired and things as perceived.

It's hard to beat for pithiness, but Michael Bolton's relative rule comes close. It runs:

For any abstract X, X is X to some person, at some time.

And combining these gives us good starting points for attacking a problem of any magnitude:

- the things
- the perception of those things
- the desires for those things
- the person(s) desiring or perceiving
- the context(s) in which the desiring or perceiving is taking place

Aspiring problem solvers: we have a lever. Let's go and make the earth move for someone!

Image: Wikimedia Commons

Auto Did Act

You are watching me and a machine interacting with the same system. Our actions are, to the extent that you can tell from your vantage point, identical, and the system is in the same state at each point in the sequence of actions for both of us. You have been reassured that the systems are identical in all respects that are relevant to this exercise; you believe that all concerned in setting it up are acting honestly with no intention to mislead, deceive, distort or otherwise make a point. The machine and I performed the same actions on the same system with the same visible outcomes. Are we doing the same task?

This is a testing blog. You are a tester. You have been around the block. More than once. You perhaps think that I haven't given you enough information to answer this question with any certainty. What task is being performed? Are the visible outcomes the only outcomes? To what extent do skill and adaptability form part of the task? To what extent does interpretation

Forward Looking

Aleksis Tulonen recently asked me for some thoughts on the future of testing to help in his preparation for a panel discussion. He sent these questions as a jumping-off point: What will software development look like in 1, 3 or 5 years? How will that impact testing approaches? I was flattered to be asked and really enjoyed thinking about my answers. You can find them at The Future of Testing Part 3 along with those of James Bach, James Coplien, Lisa Crispin, Janet Gregory, Anders Dinsen, Karen Johnson, Alan Page, Amy Phillips, Maaret Pyhäjärvi, Huib Schoots, Sami Söderblom and Jerry Weinberg.

Image: https://flic.kr/p/pLsXJh

Going Postel

Postel's Law - also known as the Robustness Principle - says that, in order to facilitate robust interoperability, (computer) systems should be tolerant of ill-formed input but take care to produce well-formed output. For example, a web service operating under HTTP standards should accept malformed (but interpretable) requests from its clients but return only conformant responses.  The principle became popular in the early days of Internet standards development and, for the historically and linguistically-minded, there's some interesting background in this post by Nick Gall . On the face of it, the idea seems sensible: two systems can still talk - still provide a service - even if one side doesn't get everything quite right. Or, more generally: systems can extract signal from a noisy communication channel, and limit the noise they contribute to it. The obvious alternative to Postel's Law is strict interpretation of whatever protocol is in effect and no talking -
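The tolerant-in, strict-out idea can be sketched in a few lines of Python. The header format and function names here are invented for illustration; this is not code from any real HTTP library:

```python
def parse_header(line):
    """Be liberal in what you accept: tolerate odd case and stray whitespace."""
    name, _, value = line.partition(":")
    return name.strip().lower(), value.strip()

def format_header(name, value):
    """Be conservative in what you send: emit one canonical form only."""
    return f"{name.strip().lower()}: {value.strip()}"

# A sloppy inbound header is still understood...
print(parse_header("  Content-TYPE :  text/html "))   # → ('content-type', 'text/html')
# ...but anything we emit is well-formed.
print(format_header("Content-TYPE", " text/html "))   # → content-type: text/html
```

The asymmetry is the whole principle: the parser accommodates the other side's noise, while the formatter refuses to add any noise of its own.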