Posts

Showing posts with the label Observability

Look at This!

The Association for Software Testing is crowd-sourcing a book, Navigating the World as a Context-Driven Tester, which aims to provide responses to common questions and statements about testing from a context-driven perspective. It's being edited by Lee Hawkins, who is posing questions on Twitter, LinkedIn, Mastodon, Slack, and the AST mailing list and then collating the replies, focusing on practice over theory. I've decided to contribute by answering briefly, and without a lot of editing or crafting, by imagining that I'm speaking to someone in software development who's acting in good faith, cares about their work and mine, but doesn't have much visibility of what testing can be. Perhaps you'd like to join me? --00-- "Are observability and monitoring part of testing?" You'd like a simple answer first? OK, here you go: yes, they can be, but simply having, say, instrumente...

You Can Tidy the Data

This week Sime suggested Tidy Data by Hadley Wickham for the Test team book club: A huge amount of effort is spent cleaning data to get it ready for analysis, but there has been little research on how to make data cleaning as easy and effective as possible. This paper tackles a small, but important, component of data cleaning: data tidying. Tidy datasets are easy to manipulate, model and visualize, and have a specific structure: each variable is a column, each observation is a row, and each type of observational unit is a table. Messy data needn't be bad data, but it might not be in a format that makes it easy to process. Many tables used for data presentation will contain implicit variables, such as person or result in the table here: If you've ever generated, aggregated, or inherited data of any scale for analysis, you're almost certainly already familiar with the basic ideas. You've probably also done informally, with much cursing, copy-pasting, and...
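To make the "each variable is a column, each observation is a row" idea concrete, here's a minimal sketch in plain Python of melting a wide presentation table into Wickham's long form. The table contents and the `tidy` helper are invented for illustration (in practice a tool like pandas' `melt` does this idiomatically), and the implicit treatment variable is assumed to be encoded in the column names:

```python
# A hypothetical "messy" wide table: the treatment variable is implicit,
# hidden in the column names rather than stored as a column of its own.
wide = [
    {"person": "John Smith", "treatmenta": None, "treatmentb": 2},
    {"person": "Jane Doe", "treatmenta": 16, "treatmentb": 11},
]

def tidy(rows, id_col, value_name):
    """Melt implicit variable columns into explicit one-observation-per-row records."""
    out = []
    for row in rows:
        for col, val in row.items():
            if col == id_col:
                continue
            out.append({
                id_col: row[id_col],
                "treatment": col.removeprefix("treatment"),  # recover the hidden variable
                value_name: val,
            })
    return out

for record in tidy(wide, "person", "result"):
    print(record)
```

Each output record now names its person, treatment, and result explicitly, which is what makes the tidy form easy to filter, aggregate, and plot.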

Looking at Observability

The Test team book club is reading Guide: Achieving Observability from Honeycomb, a high-level white paper outlining what observability of a system means, why you might want it, and factors relevant to achieving and getting value from it. It's not a particularly technical piece but it's sketched out to sufficient depth that our conversations have compared the content of the guide to the approaches taken in some of our internal projects, the problems they present, and our current solutions to them. While I enjoy that practical stuff a great deal, I also enjoy chewing over the semantics of the terminology and making connections between domains. Here are a couple of first thoughts in that area. The guide distinguishes between monitoring and observability. Monitoring: "Monitoring .. will tell you when something you know about but haven't fixed yet happens again" and "... you are unable to answer any questions you didn't predict in advance. Moni...
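One way to see the distinction the guide draws is in code. The sketch below is a toy contrast, not anything from the guide itself; all the field names and thresholds are invented. A monitor encodes a question predicted in advance, while wide structured events can be queried with questions posed only after the fact:

```python
# Monitoring: a check written in advance for a failure mode we already know about.
def latency_alert(latency_ms, threshold_ms=500):
    """Fires only for the one question we predicted: is latency too high?"""
    return latency_ms > threshold_ms

# Observability: emit wide, structured events now; ask new questions later.
events = [
    {"endpoint": "/search", "latency_ms": 820, "region": "eu", "build": "1.4.2"},
    {"endpoint": "/search", "latency_ms": 95,  "region": "us", "build": "1.4.1"},
    {"endpoint": "/login",  "latency_ms": 610, "region": "eu", "build": "1.4.2"},
]

def query(events, predicate):
    """An ad-hoc question nobody predicted when the events were emitted."""
    return [e for e in events if predicate(e)]

# e.g. "are the slow requests all on one build?" -- asked after the fact
slow = query(events, lambda e: e["latency_ms"] > 500)
print({e["build"] for e in slow})
```

The monitor can only ever re-answer its built-in question; the event store lets you slice by build, region, endpoint, or any combination you didn't think of in advance.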