I've created a state model-based testing tool for the service I work on using AltWalker. I call the tool a walker because it walks the model, interacting with the service as it traverses edges and making assertions about the state at each node.
The service itself is stateless, so what I've actually been modelling is the kinds of interactions that it allows.
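For flavour, here's a minimal sketch of the kind of test class AltWalker drives. The endpoint, element names, and payload are invented for illustration; the real models are naturally more involved. AltWalker calls the method whose name matches whichever vertex or edge the walk has reached:

```python
import requests


class ServiceModel:
    """Hypothetical test code for a GraphWalker model of the service.

    Edge methods act on the service; vertex methods assert on the
    state we expect to have reached.
    """

    BASE_URL = "http://localhost:8080"  # invented for this sketch

    def setUpModel(self):
        # State carried between the steps of a single walk
        self.last_response = None

    # An edge: drive the service
    def e_submit_order(self):
        self.last_response = requests.post(
            f"{self.BASE_URL}/orders",
            json={"item": "widget", "quantity": 1},
        )

    # A vertex: assert about where we should now be
    def v_order_accepted(self):
        assert self.last_response.status_code == 201
```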
Initially I modelled the journeys our clients make. This helped me to better understand their needs and gripes, and also surfaced some nuances of our API that I hadn't been aware of before.
However, while building and exploring the model I've discovered some other permitted, functional, consistent, but unintended ways to interact with the service.
I decided to keep them in the model because they exist and can be exercised, and if their behaviour changes it might tell us we've done something we weren't expecting. But I've been adding configuration options that allow them to be turned off, to provide a "vanilla" model when that's desirable.
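As a sketch of what I mean, suppose the unintended edges carried a marker in the model JSON. The "unintended" property here is a convention invented for this sketch, not an AltWalker feature; the walker could filter such edges out when a vanilla run is requested:

```python
import json


def load_model(path, vanilla=False):
    """Load a GraphWalker-style model file, optionally dropping edges
    tagged as permitted-but-unintended interactions."""
    with open(path) as f:
        model = json.load(f)
    if vanilla:
        for m in model["models"]:
            m["edges"] = [
                e for e in m["edges"]
                if not e.get("properties", {}).get("unintended", False)
            ]
    return model
```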
I had got the model to the point where I thought it covered the key properties of our system when we introduced a new feature. The primary impact of this feature was to add extra data to certain responses.
Naturally, I wanted to use the walker to help me test the feature and, in order to get a feeling for the feature quickly, I wondered how I could do that without needing to change the model and teach the walker about it.
So I tried making the walker assert that the feature was not seen, which it could do by checking whether a particular field was present in the response. Then I ran the walker 1000 times against the service.
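Concretely, extending the sketch class from earlier, the probe amounts to a one-line assertion in the relevant vertex method, where "enrichment" stands in for the real field name:

```python
    def v_response_received(self):
        body = self.last_response.json()
        # Temporary probe: assert the feature's new data is absent,
        # so every response that does carry it fails the run and
        # shows up in the logs. "enrichment" is a placeholder name.
        assert "enrichment" not in body, f"feature data seen: {body}"
```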
Inspecting the walker's logs showed that the assertion had fired on 87/1000 runs. The developer and I sampled from those results and satisfied ourselves that the new data in the response was there legitimately, given the request payloads.
I also searched the other 913 logs for payloads that had some of the properties that could invoke the feature and checked that, in those cases, it was correct not to include new data in the response.
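That search was mechanical enough to script. A sketch, assuming one JSON object per log line and using an invented "discount_code" as a property that could trigger the feature:

```python
import json


def should_have_fired(log_path):
    """Yield logged exchanges where the request had a property that
    could invoke the feature but the response carried no new data.
    Log layout and field names are assumptions for this sketch."""
    with open(log_path) as f:
        for line in f:
            entry = json.loads(line)
            if ("discount_code" in entry["request"]
                    and "enrichment" not in entry["response"]):
                yield entry  # a candidate for manual review
```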
This was a quick experiment to run, gave some initial reassurance that the right kinds of things were happening, and complemented the other testing that's been done with crafted test cases and exploration using other tools.
If there was any oddity, it was the frequency with which the assertion fired. Intuitively it seemed a little high.
Digging into that, I rediscovered that the walker was hard-coding some values that clients would be expected to vary, and this was biasing the requests in a certain direction.
That was an interesting result from a testing perspective because it suggested a way that I could control the system behaviour outside of the normal range of usage.
So I added a parameter to the walker's configuration file to let that value be set, defaulting to a simulation of what our clients would do.
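A sketch of the shape, with an invented parameter name: None means "vary this the way a client would", while pinning it to a fixed value reproduces the biased behaviour on demand.

```python
import random

DEFAULTS = {
    # Previously hard-coded in the walker. The name and range are
    # invented for this sketch.
    "batch_size": None,
}


def resolve(config):
    """Merge user configuration over the defaults, filling in a
    client-like random value where nothing was pinned."""
    settings = {**DEFAULTS, **config}
    if settings["batch_size"] is None:
        settings["batch_size"] = random.randint(1, 20)
    return settings
```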
Testing is learning, they say. And they're right, but it's not just learning about the system under test. In this experiment I learned, for example:
- a way to exploit the walker for faster feedback.
- a way to control the behaviour of the walker to provoke a particular effect.
- that the approach I've taken to configuration is probably not going to be flexible enough.
Image: https://flic.kr/p/ZmQyVu