The Association for Software Testing is crowd-sourcing a book, Navigating the World as a Context-Driven Tester, which aims to provide responses to common questions and statements about testing from a context-driven perspective.
It's being edited by Lee Hawkins, who is posing questions on Twitter, LinkedIn, Mastodon, Slack, and the AST mailing list and then collating the replies, focusing on practice over theory.
I've decided to contribute by answering briefly, and without a lot of editing or crafting, by imagining that I'm speaking to someone in software development who's acting in good faith, cares about their work and mine, but doesn't have much visibility of what testing can be. Perhaps you'd like to join me?
--00--
"Are observability and monitoring part of testing?"
You'd like a simple answer first? OK, here you go: yes, they can be, but simply having, say, instrumented code, dashboards, alerting, or system logs is not itself testing.
Can I give you a more nuanced answer too? I can? Great!
To start with, I find "observability" a difficult term to like. It's often used to mean, as I hinted above, instrumented code with some layer of visualisation, but, for me, it's a property of a system rather than an artefact in its own right.
Logging and telemetry data are artefacts, but observability is an attribute: some measure of the degree to which it is possible and convenient and practical to observe the kinds of data important to you right now for your system as it runs or after it has run.
Plumbing Honeycomb in might give you a bunch of artefacts but it takes thought and effort to give your system observability, just like buying a canvas, brushes, and paints won't make you Jefferson Tester.
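To make that distinction concrete, here's a minimal sketch of what "instrumented code" might look like, assuming OpenTelemetry's Python API. The span name, attributes, and business logic are hypothetical; the point is that the instrumentation only emits artefacts, and deciding which attributes you'll want to be able to observe later is where the thought and effort go.

```python
# A minimal sketch of instrumented code, assuming the OpenTelemetry Python API.
# The span name, attributes, and process() call are hypothetical illustrations.
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

def handle_request(request):
    # Each call produces a span: an artefact, not observability in itself.
    with tracer.start_as_current_span("handle_request") as span:
        # Choosing which attributes to record is where the effort goes: these
        # are the data you are betting you'll want to observe later.
        span.set_attribute("request.size_bytes", len(request))
        result = process(request)  # hypothetical business logic
        span.set_attribute("result.status", result.status)
        return result
```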
Talking of testing, let's turn to that next. In your question do you mean the field of testing, the tools of testing, the act of testing, or something else? Don't worry, let's say that, for this answer, testing is an activity in which I am looking for information that will help to answer questions about the system under test.
Logs and telemetry data, along with other deliberate, incidental, and accidental outputs of a system, are all data. Sometimes that data will be valuable in answering the question I have, and other times less so or not at all.
Most tooling that you'll integrate for logging, monitoring, and telemetry will have a facility for interrogating the data it produces. This can help to answer questions, and so be part of testing, but it can also be used productively to explore the data, looking for questions to ask.
For example, last week my team deployed a new service to production and then explored the data produced by our telemetry to seek anomalies or patterns. When we found some ("Look at this!") we tried to understand them in order to begin to learn about how the service behaves in that context.
Armed with this new understanding we can enhance our telemetry, craft better dashboards and alerts, tweak the infrastructure, or change the product. I'd regard that as testing too.
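If it helps to picture that kind of interrogation, here's a small sketch, assuming the telemetry can be exported as JSON-lines records with hypothetical fields such as endpoint, status, and duration_ms. The same few lines can answer a question I already have (how often does each endpoint fail?) or turn up questions I didn't know to ask (why is the tail latency so different for one endpoint?).

```python
# A sketch of interrogating exported telemetry, assuming JSON-lines records
# with hypothetical fields: endpoint, status, duration_ms.
import pandas as pd

events = pd.read_json("telemetry_export.jsonl", lines=True)

# Answering a question I already have: how often does each endpoint fail?
failure_rate = (events["status"] >= 500).groupby(events["endpoint"]).mean()
print(failure_rate.sort_values(ascending=False))

# Exploring for questions to ask: which endpoints have surprising tail latency?
latency = events.groupby("endpoint")["duration_ms"].quantile([0.5, 0.99]).unstack()
print(latency.sort_values(0.99, ascending=False).head(10))
```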
Does this help to answer your question?
Image: Mutual Art