Over on the Association for Software Testing Slack, our book club facilitator, Zenzi Ali, is guiding us through Elisabeth Hendrickson's Explore It! Last week she posed this question:
"When joining a new team/job/project, how do you determine the core capabilities of the software you’ll be testing?"
I surprised myself a little with my answer. Not because of what I described, but because I hadn't mapped it out this way before. It was only when I stepped back to try to summarise that I noticed the pattern in my intuitive behaviour. So here's what I wrote, edited so that it's less of a stream of consciousness.
--00--
Rob Meaney has a great talk, A Tale of Testability, where he describes how his CODS criteria (controllability, observability, decomposability, simplicity) were applied to a product he worked on to make it more testable.
Now, I'm not suggesting that someone walks into a new team and shouts about re-engineering for testability before they've even taken their coat off! I am suggesting that the same four perspectives can be valuable ways to explore a product to learn what it's about.
In the last 18 months I've joined teams, joined projects, and inherited products, and each time I've found value in this approach.
Control. I like to try to get a copy of the system under test running locally, under my control. This can be tricky, and perhaps initially it'll only be possible in the environment the developers are using to build the software, IDE and all. That's OK. I want to be in charge of the configuration, inputs, and so on as I explore.
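To make that concrete, here's the sort of thing I mean, as a rough sketch in Python with invented names: start the system under test in a process I own, with configuration that points at local stand-ins I control.

```python
# Hypothetical sketch: run the application locally with configuration I control,
# pointing it at local stand-ins for its dependencies. All names here are invented.
import os
import subprocess

env = os.environ.copy()
env.update({
    "APP_DATABASE_URL": "postgresql://localhost:5432/sut_scratch",  # my throwaway database
    "APP_UPSTREAM_URL": "http://localhost:9090",                    # a stub service I control
    "APP_LOG_LEVEL": "DEBUG",                                       # chatty output helps observation too
})

# Start the system under test in a process I own, so I can restart it, reconfigure it,
# and feed it whatever inputs I like while I explore.
subprocess.run(["./run_app.sh"], env=env, check=True)
```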
Observe. A significant part of that exploration is changing things and seeing what effect they have: when, where, how, and why. That's part of my observation. Having control makes this much easier. If you're testing in some shared environment you're much more likely to get interference.
I'll also look for information that I can triangulate with what I'm seeing. I'll talk to anyone who will talk to me, within and outside my team, particularly consumers of the application I'm working on. I'll look for documentation, including bug reports, project notes, and the like; these can give an interesting historical perspective. I'll try to get demos of the software under test too, ideally from multiple people and from different angles.
Decompose. I will be looking for places that can be explored separately, or decomposed. Microservices would be one obvious example of this, but monoliths that write temporary files are another. If there's a temporary file in some processing pipeline, it's a vector for me to inject data.
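As a rough illustration, with invented names and formats: if stage one of a pipeline drops a temporary CSV that stage two picks up, I can craft that file myself and run only the second stage.

```python
# Hypothetical sketch: use a temporary file between pipeline stages as an injection
# point, so the downstream stage can be exercised in isolation.
# The CSV format and the stage_two module are invented for illustration.
import csv
import tempfile

import stage_two  # assumption: the downstream stage is importable and takes a file path

rows = [
    {"id": "1", "amount": "10.00"},
    {"id": "2", "amount": "-999999.99"},  # a deliberately awkward value
]

# Craft the intermediate file myself instead of letting stage one produce it.
with tempfile.NamedTemporaryFile("w", suffix=".csv", delete=False, newline="") as handoff:
    writer = csv.DictWriter(handoff, fieldnames=["id", "amount"])
    writer.writeheader()
    writer.writerows(rows)

# Run only the downstream stage against my crafted input.
stage_two.process(handoff.name)
```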
Unit tests can be a cheap route to decomposability. Inside them I can access (more or less) any function I want at a low level.
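For example, and again this is only a sketch with invented names, a unit test lets me poke a single function directly, well below the UI or API.

```python
# Hypothetical sketch: use the unit test harness to call an internal function directly
# and explore its behaviour at a low level. normalise_postcode is an invented example.
from myapp.addresses import normalise_postcode

def test_normalise_postcode_handles_odd_spacing():
    # Exploring edge cases that would be fiddly to reach through the UI.
    assert normalise_postcode("  cb1  2ab ") == "CB1 2AB"

def test_normalise_postcode_rejects_empty_input():
    # Assumed behaviour, purely for illustration.
    assert normalise_postcode("") is None
```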
Simplify. The simpler the system, the easier it is to build a model of it. And I will be making models of its data flow, architecture, input-output mappings, and so on.
As the question says, this is early in my time on the project, so I'll tend to simplify my learning by treating some parts as black boxes until I have time to go deeper. The source code is a good example: I'll try reading it and tracing execution but, to be honest, I find this hard in a large codebase. I don't try to boil the ocean, though. Little and often is fine, cycling round as I learn more and can cross-reference behaviour and coded intent.
Image: The Pragmatic Bookshelf