

Showing posts from January, 2021

Top Draw

In the last couple of weeks I've published ten sets of sketchnotes for events that I've attended online. That's a lot, for sure, but we're in lockdown right now and there are tons of opportunities to take part in events that in the past would've been unavailable. I had a handful of aims:

- learn something in the areas of software testing and development
- practice my sketchnoting
- practice getting my written thoughts in order quickly and concisely

I signed up for a bunch of interesting-looking meetups, webinars, ask-me-anythings, and panel discussions. During an event I sketched and afterwards challenged myself to write up a summary, opinion, or thoughts inspired by the session on the same day or, at the latest, within 24 hours. I find that sketching helps me to listen actively and record less, but more valuable, content. Being able to get essentials typed up efficiently is tremendously valuable to me, particularly at work, and especially if I'm documenting as part of…

Secret Sauce

This afternoon I attended Complete Your QA Strategy with Manual Testing, a webinar and demo by Yi Min Yang & Alistair Heys, hosted by the Test Tribe. The speakers are both from Saucelabs and, at least from where I was sitting, the content of the presentation was either relatively generic or pitching the Saucelabs product line, and in neither case much about test strategy. Unsurprising, but I am interested to see and hear how vendors talk about their offerings and, in this case, the highlight was a statistic that over 90% of Saucelabs' top 500 customers use the platform as a way to fire up environments for hands-on testing as well as for automation, despite Saucelabs being known for the latter. Much more engaging was the brief demo which showed how quick and easy it can be to configure a virtual machine with pretty much any major OS and browser, choosing from a large range of versions for each. For remote pairing purposes, it's possible to invite others to join the interactive…

What Could Possibly Go Wrong?

Tonight I attended What could possibly go wrong? Ethics and software development with Fiona Charles and James Christie, hosted by Quare Meetcast. I could listen to Fiona and James talk on this all day. Although they both protested that they are practitioners rather than philosophers, the ethics of the software business is a topic they've both lived and thought long and hard about. Here's a handful of points that stood out for me:

- Act ethically for the benefit of society. Not focussing only on the cash, but instead on the quality and value, will make customers happy and result in fewer software-related disasters.
- Don't be ethical in secret. If you find unethical activity, evaluate your level of comfort in your workplace, and the risk you'd take by blowing the whistle, and then either blow it or leave if you possibly can.
- It's possible for an organisation to be carelessly unethical. We can help them to avoid this by asking "what could possibly go wrong?"

Being Taught to Report

This evening I attended Ask Me Anything - Test Reporting with Elizabeth Zagroba, hosted by Ministry of Testing. As expected it was rammed with nuggets of wisdom. Here's a handful:

- Test reports are not necessarily the end. Give enough detail that your work can be understood and perhaps questioned, and then the conversation can start.
- Test reports are not just for others. You can use them to clarify your thinking, understand your coverage, step back, and choose where to go next.
- The reaction to your test reports is important. If no-one's listening, perhaps you need to change something. Better still, ask what your stakeholders want to hear from you as well, or instead.
- If you're not sure whether something makes sense to report, ask someone you trust before reporting it.
- One style of report does not fit all uses. Format, content, length, style, and so on can all vary depending on the context.
- You can report during testing as well as afterwards. Externalise your thoughts for your…

Learning to Script

This morning I attended How I Learn New Scripting Languages, a webinar by Rob Sabourin hosted by the Test Tribe. In it, Rob laid out general principles and specific steps that he recommends for anyone new to scripting, or to a particular language. His general points included:

- Don't get tied to a particular language or platform; it'll get in your way at some point.
- Don't sweat the syntax; you can always look it up.
- Do learn how to learn; this will serve you well everywhere, not just in picking up a new language.

I don't think I'd argue with any of that, but I'm less sure that my specific steps would be the same as his. For example, he recommends several foundational topics be reviewed before opening up a text editor. I wouldn't make that a precursor to starting. It's not that his ideas have no merit — they absolutely do, and to testing in general — just that they can be learned later, and tied to practical programming concepts. As it happens, I'm…

Down With the Kids!

Tonight I attended Not Your Parents' Automation, a webinar by Paul Grizzaffi hosted by the Association for Software Testing. Automation Assist is what Paul is calling non-traditional uses of automation for testing purposes. For him, the trad approach — the one your parents are comfortable with — includes regression tests and smoke tests, typically run on some kind of schedule to give feedback on behaviour changes in the system under test compared to some kind of oracle. There is merit in that kind of stuff — the oldsters do have some wisdom — but there's much more that can be done with programmatic tooling. Paul gave a bunch of examples, such as:

- data cleansing to migrate gold masters from one framework to another
- test account creation to speed up registration of new testers on a system under test (see the sketch below)
- file checking to help take away donkey work and leave a tester free to apply skill and judgement
- high-volume rand…
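
As a rough illustration of the test account creation idea (my own sketch, not Paul's; the registration endpoint and payload shape are hypothetical):

```typescript
// Sketch only: bulk-create throwaway test accounts so testers don't have to
// register by hand. The endpoint and payload shape are hypothetical.
type TestAccount = { username: string; password: string };

async function createTestAccounts(baseUrl: string, count: number): Promise<TestAccount[]> {
  const accounts: TestAccount[] = [];
  for (let i = 0; i < count; i++) {
    const account: TestAccount = {
      username: `tester-${Date.now()}-${i}`,             // unique, disposable
      password: Math.random().toString(36).slice(2, 12), // throwaway only
    };
    // Hypothetical registration endpoint; substitute your system under test's API.
    const response = await fetch(`${baseUrl}/api/register`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(account),
    });
    if (!response.ok) throw new Error(`Registration failed: ${response.status}`);
    accounts.push(account);
  }
  return accounts;
}
```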

Whose Quality is it Anyway?

Last night I attended Influencing Quality & tackling the problems that come with it..., a panel webinar hosted by Burns Sheehan and chaired by Nicola Martin. The panellists were:

- Marc Abraham, Head of Product Engagement at Asos
- Antonello Caboni, Engineering Coach at Treatwell
- Marie Drake, Principal Test Automation Engineer at NewsUK
- Pedro Vilela, Engineering Manager at Curve

As you might guess from that line-up, the material covered was broader than simply testing, taking in subjects such as quality, value, hiring, and the delivery process across the engineering organisation. The signal to noise ratio was high, so my sketchnotes capture only a few of the points made on each topic. The idea that quality is a whole team responsibility came up several times. Quality is a notoriously slippery and subjective concept, so in the Q&A I asked whether any of the speakers had ever tried to create a Definition of Quality for their team, something like a traditional Definition of Done…

Risks Not Fears

This afternoon I attended From Fear To Risk: Redefine What Drives Your Enterprise Testing Strategy, a webinar featuring Jenna Charlton and Alon Eizenman, hosted by Sealights. In the first session, Jenna presented on risk from a very broad perspective and, in the second, Alon talked about how Sealights' tooling focuses on a narrow slice of (code-based) potential risks in a way which they hope complements the wider approach. Jenna wants risks to be quantifiable and definable and scrutinisable. Fears, for her, are none of those things. To quantify a risk, she scores history (data about previous behaviour, including probability of error, etc), complexity (of the application, the context, the data, the build process, etc), and impact (or, more correctly, business concern about impact) on a scale of 1 (low) to 5 (high) and then combines them using this formula:

total risk = impact * maximum_of(history, complexity)

This is an interesting informal variant of a more common calculation…
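
A minimal executable version of that scoring, assuming each factor has already been scored 1-5 (the function and variable names are mine, not Jenna's):

```typescript
// Each factor is scored on a scale of 1 (low) to 5 (high).
function totalRisk(history: number, complexity: number, impact: number): number {
  // Impact multiplies whichever of history or complexity is worse,
  // so scores range from 1 (1 * 1) up to 25 (5 * 5).
  return impact * Math.max(history, complexity);
}

// Example: sparse history (2), high complexity (4), moderate impact (3).
console.log(totalRisk(2, 4, 3)); // 12
```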

Speaking Out

This afternoon I attended How to become a Conference Speaker, a webinar by the programme chair for EuroSTAR 2021, Fran O'Hara. Although Fran covered a fair bit of material specific to that conference, there was also a lot of good, general advice for aspiring conference speakers, dealing with why someone might want to speak, what makes good content, and how to write a compelling abstract. EuroSTAR 2015 was my first test conference and the first time I'd presented at an event of any size. I've done it a few times since then, and also reviewed submissions for several conferences. While I wouldn't call myself an expert on this stuff, my own experience chimes with Fran's suggestions.

Cypress Thrill

Last night I attended Cypress: beyond the "Hello World" test with Gleb Bahmutov. Here's the blurb: Gleb Bahmutov, a Distinguished Engineer at Cypress, will show how to write realistic Cypress tests beyond a simple "Hello World" test. We will look at reusable commands, starting and stopping servers, running tests on CI, creating page objects, and other advanced topics. Everyone who is just starting with Cypress or is an advanced user will benefit from attending this free meetup. I'm always interested in building up my background knowledge around test tooling, so a presentation from an expert with a bit of depth but still suitable for a relative newbie seemed ideal. I also thought I could take the opportunity to practice my sketchnoting on a technical talk, something I've failed at in the past. Last things first, then: I still found sketchnoting hard. In this area, where I know concepts but not much specific, I don't have…
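
For a flavour of one of those topics, reusable commands: Cypress lets you register a custom command once and call it from any spec. A minimal sketch, with a hypothetical login flow and selectors of my own invention:

```typescript
// cypress/support/commands.ts
// Declare the custom command for TypeScript, then register it with Cypress.
declare global {
  namespace Cypress {
    interface Chainable {
      login(username: string, password: string): Chainable<void>;
    }
  }
}

Cypress.Commands.add("login", (username: string, password: string) => {
  cy.visit("/login"); // hypothetical route; selectors below are too
  cy.get("input[name=username]").type(username);
  cy.get("input[name=password]").type(password);
  cy.get("button[type=submit]").click();
});

export {};
```

Any test can then call cy.login("tester", "s3cret") instead of repeating those steps.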

Practical AI

Last night I attended Using Artificial Intelligence, a webinar hosted by BCS SIGIST, the Special Interest Group in Software Testing of The Chartered Institute for IT. In different ways, the two presentations were both concerned with practical applications of AI technology. The first speaker, Larissa Suzuki, gave a broad but shallow overview of machine learning systems in production. She started by making the case for ML, notably pointing out that it's not suited for all types of application, before running through barriers to deployment of hardened real-world systems. Testing was covered a little, with emphasis on testing the incoming data at ingestion, and again after each stage of processing, and then again in the context of the models that are built, and then again when the system is live. To finish, Larissa described an emerging common pipeline for getting from idea to usable system which highlighted how much pure software engineering there needs to be around the box of AI…
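
To make the ingestion-testing idea concrete, here's a minimal sketch of my own (not from the talk) of the kind of check that might sit at a pipeline boundary; the record shape and thresholds are invented:

```typescript
// Sketch, not from the talk: validate records before they enter an ML
// pipeline so that bad data is quarantined rather than trained on.
type RawRecord = { userId: string; age: number; country: string };

function ingestionProblems(record: RawRecord): string[] {
  const problems: string[] = [];
  if (!record.userId) problems.push("missing userId");
  if (!Number.isFinite(record.age) || record.age < 0 || record.age > 130) {
    problems.push(`implausible age: ${record.age}`);
  }
  if (record.country.length !== 2) {
    problems.push(`unexpected country code: ${record.country}`);
  }
  return problems; // an empty array means the record passed this stage's checks
}

// Example: this record would be quarantined with three problems reported.
console.log(ingestionProblems({ userId: "", age: 999, country: "GBR" }));
```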

How To Test Anything (In Three Minutes)

I was very happy to contribute to QA Daily's Inspirational Talks 2021 series this week but, in case you're here for an unrealistic quick fix, I have to tell you that the three minutes in question is the length of the videos and not the time you'll need for testing. So how do I test anything? These are the things I've found to be helpful across contexts:

- I know what testing means to me.
- I find tools that can help me to achieve that testing.
- I'm not afraid to start where I am and iterate.

If that sounds interesting, there's greater depth in How To Test Anything and the related (40 minute) talk that I did for OnlineTestConf in 2020: