

Showing posts from November, 2021

Red Testing Hood

Angie Jones, The Build That Cried Broken. Like the boy who cried wolf, the build that’s repeatedly red for no good reason will not be trusted. Worse, it will likely end up in a persistent red state as people stop looking at the regular failures. Eventually something important will also fail ... and no-one will notice. At CAST 2021, Angie told us this and other stories about a team she was on, one that found itself in that bad place but used its wits to escape and lived happily ever after. Well, perhaps that’s an exaggeration: they tolerated living in a short-term bearable state where a reliable kernel of tests supported development and deployment and a set of flaky tests were worked on to the side. Separating off the flakes was a good move but it was supported by others, including assigning a team member to investigate all failures and setting limits on the percentage of tests that…

AI Needs Testers

Tariq King, Towards Better Software: How Testers Can Revolutionize AI and Machine Learning. Tariq issued a powerful call to arms in his talk at CAST 2021. Yes, AI/ML is new technology and, yes, it is solving new problems, and so, yes, testers may find themselves out of their comfort zones in some respects. But there are parallels between machine learning and testing that should give testers confidence that they have something valuable to contribute right now: mapping inputs to outcomes for black boxes, partitioning a possibility space, and data wrangling. Learning is part and parcel of testing, so why think we can’t learn what’s needed for working with AI and ML systems? And don’t forget that testers come preloaded with universally-valuable skills such as exploring, questioning, and assessing. Without some kind of challenge, Tariq says, AI is going to continue powering the…
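The parallel is concrete: a trained model can be treated as a black box whose possibility space testers partition, just as they would any other input domain. As a minimal sketch of that idea (the classify function below is a hypothetical stand-in for a real model, and the partitions are illustrative only):

def classify(text: str) -> str:
    """Toy sentiment 'model' standing in for the black box under test."""
    positive = {"love", "great", "excellent"}
    return "positive" if any(w in text.lower() for w in positive) else "negative"

# Partition the input space and check that a representative of each
# partition maps to the expected outcome, exactly as with any black box.
partitions = {
    "clearly positive": ("I love this, it is excellent", "positive"),
    "clearly negative": ("This is awful and broken", "negative"),
    "empty input":      ("", "negative"),
    "mixed signals":    ("Great idea, terrible execution", "positive"),
}

for name, (example, expected) in partitions.items():
    actual = classify(example)
    assert actual == expected, f"{name}: expected {expected}, got {actual}"
    print(f"{name}: ok ({actual})")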

Laugh Don't Cry

Laurie Sirois, Quality Isn’t Funny. Why didn’t we find this bug in testing? Without a sense of humour, hearing that kind of question repeatedly could bring people to tears. At CAST 2021 Laurie Sirois encouraged us to deploy laughter to defuse those tense situations, improve our relationships, and grow other people’s perceptions of us. For managers in particular, lightening the mood can do wonders for a team’s morale, creativity, and sense of safety. Care needs to be taken over the style and timing of the humour used. Sarcasm and inside jokes might work well with trusted peers but may not be appropriate when delivering feedback. Even self-deprecating humour can make others uncomfortable in the wrong context. Sound challenging? Don’t worry. If you’re not a natural stand-up, simply smiling more frequently is a good start, and it turns out that sharing surprising insights (the aha!) can be a…

Scale Model

Greg Sypolt, Building a Better Tomorrow with Model-Based Testing. As he was telling us at CAST 2021, Greg’s team have built a model-based testing system and integrated it into continuous integration infrastructure which has scaled to be capable of exhaustively exercising the 30 or 40 deployment scenarios that each of their products supports. The models bring advantages such as describing system behaviour in reusable chunks that are independent of implementation details, making maintenance straightforward, and broadening coverage. Sounds great, and it is, but it comes at a price. Getting buy-in for this kind of approach, from both management and the team, can be tricky, and there’s a lot of up-front effort, unfamiliar concepts and technology, and steep learning curves. The models Greg needs can be quite simple because each product is basically a linear navigation through a sequence…
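To give a flavour of the idea (this is a rough sketch, not Greg’s actual system): the model describes behaviour as a sequence of states, independent of any implementation, and a driver maps those abstract steps onto a concrete product. The FakeDriver and state names here are illustrative.

# The model: a linear navigation through a sequence of screens,
# written once and reused across every deployment scenario.
MODEL = ["landing", "login", "dashboard", "settings", "logout"]

class FakeDriver:
    """Stand-in for a real UI/API driver; one per deployment scenario."""
    def __init__(self, scenario: str):
        self.scenario = scenario
        self.screen = "landing"

    def goto(self, screen: str) -> str:
        self.screen = screen   # a real driver would click/navigate here
        return self.screen     # and report the screen it actually observed

def run_model(model, driver):
    """Walk every transition in the model, checking each observed state."""
    for expected in model:
        observed = driver.goto(expected)
        assert observed == expected, (
            f"[{driver.scenario}] expected {expected!r}, saw {observed!r}")

# Exhaustively exercise the same model across many deployment scenarios.
for scenario in (f"deployment-{n}" for n in range(1, 31)):
    run_model(MODEL, FakeDriver(scenario))
    print(f"{scenario}: model walk passed")

Because the model is separate from the driver, maintenance lands in one place: a changed flow means editing MODEL, a changed product means editing the driver.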

Testing Hats Can Be White

Rajni Hatti, Ethical Hacking for Testers. Testers should not feel excluded from exploring security concerns just because specialists are available or tooling (such as the ZAP scanner; see the sketch below) is running in continuous integration. Why? Rajni gave three reasons at CAST 2021:

1. Testers tend to have a big-picture perspective and so perhaps ideas about where there might be vulnerabilities outside of standard attack vectors.
2. Testers are more likely to be involved in the design of features and so able to ask security questions or influence the priority of security in development backlogs.
3. Security is a process not a product, and so regular review throughout the cycle is desirable versus throwing a build over the wall to some other team.

Naturally, there is opportunity cost associated with additional security testing, so the choice…
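For context, ZAP can be driven programmatically as well as from CI. A sketch using the zapv2 Python client (pip install python-owasp-zap-v2.4), assuming a ZAP daemon is already listening on localhost:8080; the API key and target URL are placeholders:

import time
from zapv2 import ZAPv2

target = "http://localhost:3000"  # placeholder application under test
zap = ZAPv2(apikey="changeme",
            proxies={"http": "http://127.0.0.1:8080",
                     "https": "http://127.0.0.1:8080"})

# Spider the target so ZAP learns its URLs, polling until complete.
scan_id = zap.spider.scan(target)
while int(zap.spider.status(scan_id)) < 100:
    time.sleep(2)

# Report any alerts raised against the target so far.
for alert in zap.core.alerts(baseurl=target):
    print(alert["risk"], alert["alert"], alert["url"])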

Don't Just Check, Please

Ben Simo, Computer-Assisted Testing. Ben kicked off CAST 2021 with a brief history lesson, tracing the use of the term checking as a tactic in software testing back to at least Daniel McCracken’s Digital Computer Programming from 1957 and through into his own recent model. Checking for him is a confirmatory activity, focusing on knowns and attempting to demonstrate that what was known yesterday has not changed today. Checking need not be performed by machine but it’s a common target for automation because it comes with a straightforward decision function: comparison against the previously-known state. In fact, for many this is all that “test automation” is or can be, and numerous regression test frameworks exist to facilitate that kind of work. Ben would, I think, reject both the term and the limited thinking about where computer tooling can be valuable for testers. In his model,…
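In that sense a check is mechanically simple: compare today’s output against yesterday’s known-good value. A minimal sketch of that decision function (price_after_tax and the baseline values are illustrative, not from the talk):

def price_after_tax(net: float, rate: float = 0.20) -> float:
    """The behaviour under test."""
    return round(net * (1 + rate), 2)

# What was known yesterday: recorded input/output pairs.
BASELINE = {
    (100.00, 0.20): 120.00,
    (49.99, 0.20): 59.99,
    (0.00, 0.20): 0.00,
}

# The check: demonstrate that yesterday's knowns still hold today.
for (net, rate), expected in BASELINE.items():
    actual = price_after_tax(net, rate)
    assert actual == expected, f"({net}, {rate}): {actual} != {expected}"
print("all checks passed: yesterday's knowns still hold")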

The Eye of the Beholder

With the wrong lens, or the wrong position relative to the lens, your vision will be compromised. And this is true no matter how hard or how long you look. You will not see all that you could see and what you do see may not be what you think it is. But you don't wear glasses? We all wear glasses; literally, metaphorically, or both. Every time you engage with a system under test you are viewing it through a lens. So what are some things you can do?

- Accept that your view is always mediated by the way you are viewing.
- Do your best to understand the lenses available to you.
- Try to choose a lens that is compatible with your intent.
- Take opportunities to test an observation through multiple lenses.
- Use those tests to gather data on the lenses as well as the system.
- Be open to trying a range of lenses.
- Attempt to use any given lens in a range of ways.

Remember always that observation, like beauty, is in the eye of the beholder.

Image: https://flic.kr/p/8s82M2