

Showing posts from September, 2018

CEWT Lean Coffee

At CEWT #6 we used Lean Coffee as a way to reflect on some of the threads we'd discussed during the day. Here are a few brief, aggregated comments and questions that came up.

Different Perspectives

Claire and Helen's talk was about how the testers and test manager on a project had very different views on the quality of the testing. The developer perspective is also interesting, and often different. Whose needs are being met by the testing? Which lens are we looking through: experience, knowledge, context, ..? Good testing is inherently perspective-based. The relative rule. What about outside of software, e.g. in laboratory science? Stop Working and Start Thinking.

What makes good tests? Consistent, deterministic, a specific outcome. Really? What about if the software is non-deterministic? Isn't testing about information? Is gathering data (e.g. performance data) testing? There needs to be a pass/fail. Really? Is the tester the best per…

The Factor The Matter

At CEWT #6 we discussed what good testing and good testers are. We didn't set ourselves the mission of coming up with some kind of definition, we didn't explicitly take that mission on during the course of the day, and to my mind we didn't converge on anything particularly concrete that could form the basis of one either. Reviewing my notes, I thought it might be interesting to just list some of the factors that were thrown into the conversation during the day. Here they are:

Good relative to what?
Good relative to who?
Good relative to when?
Good for what?
Good for who?
Good for when?
Goodness can be quantified. The existence of bugs found by non-testers is a way to judge testers or testing.
Goodness cannot be quantified. The existence of bugs found by non-testers is not a way to judge testers or testing.
Goodness can't be separated from context.
Goodness can't be separated from perspective.
The value and quality of testing is subjective.
Good t…

Testing vs Chicken

At CEWT #6 we were asking what constitutes good testing. Here are prettied-up versions of the notes I used for my talk.

One line of argument that I've heard in this context and others is that even though it's hard to say what good testing is, we know it when we see it. But there's an obvious potential chicken and egg problem with that. I'm going to simplify things by assuming that the chicken came first. In my scenario, we'll assert that we can know what good testing would look like for a project which is very similar to projects we've done before, where the results of testing were praised by stakeholders.

The problem I'll address is that we don't have anyone who can do that project, so we have to hire. The question is: what can we do as recruiters, during recruitment, to understand the potential for a candidate to deliver that good testing? I've been recruiting technical staff for around 15 years, predominantly testers in the last t…

Look at the Time

I'll be quick. In the kitchen. With tea brewing. Talking to a developer about new code. Exploring assumptions. An enjoyable conversation. A valuable five minutes.

A third party provides an object identifier to our code. We use it as a key because it's our understanding that the identifier is unique. We have never seen identifier collision. Try again: we have never seen identifier collision at any given point in time. Do we know that the third party will not recycle identifiers as the objects they represent are discarded? What would happen if they did?

No longer in the kitchen. Tea on desk.

Image: https://flic.kr/p/aCWUN5
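Not the code from that conversation, but a minimal sketch of the assumption in question: suppose we cache objects in a map keyed by the third party's identifier (all names here are hypothetical). Unique-at-any-point-in-time is not the same as unique-forever, and a recycled identifier quietly replaces the old entry.

    # Minimal sketch (hypothetical names): a cache keyed by a third-party
    # object identifier, relying on the assumption that identifiers are unique.
    objects_by_id = {}

    def remember(third_party_id, obj):
        # If the third party ever recycles an identifier for a new object,
        # this silently replaces the old entry -- and anything still holding
        # the old identifier now gets a different object back.
        objects_by_id[third_party_id] = obj

    remember("obj-42", {"name": "original object"})

    # The third party discards the original object and later reuses its identifier...
    remember("obj-42", {"name": "completely different object"})

    # ...so a lookup by the old identifier now returns the new object.
    assert objects_by_id["obj-42"]["name"] == "completely different object"

At no single point in time is there more than one live object per identifier, so the collision never shows up in casual observation; only the recycling behaviour of the third party decides whether the key is safe.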