This week I was testing a new piece of functionality that will sanitise query parameters on an API endpoint. The developer had implemented a first cut, and I looked at the code and the unit tests for a few minutes before we got together for a walkthrough. After exercising the new function through the unit tests, trying other values, and checking that the tests could fail, I wanted to send some requests from an external client. I already had the service set up in IntelliJ and running under the debugger, so ... Sending requests from Postman and inspecting sanitised values at a breakpoint was OK, but there was too much jumping back and forth between applications, clicking on fields, scrolling, and so on. Each iteration took many seconds. Luckily, we'd already decided to log sanitised strings for monitoring and later review, so I could disable the breakpoints and just watch the service console. That was better, but still too heavy for me.
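The post doesn't show the sanitiser or its tests, but to make the shape of the exercise concrete, here's a minimal sketch in Java. Everything in it is a hypothetical stand-in, not the developer's actual code: the class name QueryParamSanitiser, the allow-list rule, and the test cases are all assumptions, chosen just to illustrate the kind of function and unit tests under discussion.

```java
import java.util.regex.Pattern;

public final class QueryParamSanitiser {

    // Hypothetical rule: drop anything outside a conservative allow-list.
    // The real sanitisation rules aren't shown in the post.
    private static final Pattern DISALLOWED = Pattern.compile("[^A-Za-z0-9 _.-]");

    public static String sanitise(String raw) {
        if (raw == null) {
            return "";
        }
        // The post mentions sanitised strings were logged for monitoring;
        // in a real service that logging would happen around here.
        return DISALLOWED.matcher(raw).replaceAll("").trim();
    }
}
```

A couple of JUnit 5 tests in the same spirit, including the kind of "try other values" probing described above:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class QueryParamSanitiserTest {

    @Test
    void stripsMarkupCharacters() {
        // Angle brackets, parentheses, and slashes fall outside the allow-list.
        assertEquals("scriptalert1script",
                QueryParamSanitiser.sanitise("<script>alert(1)</script>"));
    }

    @Test
    void nullBecomesEmptyString() {
        assertEquals("", QueryParamSanitiser.sanitise(null));
    }
}
```

Checking that tests like these can actually fail, for instance by temporarily breaking the regex, is a cheap way to confirm they're exercising the function at all.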
Last night I attended a Consequence Scanning workshop at the Cambridge Tester Meetup. In it, Drew Pontikis walked us through the basics of an approach for identifying opportunities and risks and selecting which ones to target for exploitation or mitigation. The originators of Consequence Scanning recommend that it's run as part of planning and design activities, with the outcomes being specific actions added to a backlog and a record of all of the suggested consequences for later review. So, acting as a product team for the Facebook Portal pre-launch, we:

- listed potential intended and unintended consequences
- sorted them into action categories (control, influence, or monitor)
- chose several consequences to work on
- explored possible approaches for the action assigned to each selected consequence

In the manual there are various resources for prompting participants to think broadly and laterally about consequences. For example, a product can have an effect on people other than its users.