Posts

Showing posts from May, 2012

The User Inexperience

I think a lot about the user experience when I'm testing. Not only for conformance with the UX guide we refer to, but also in less tangible, subjective respects such as how the software will feel to different kinds of users. The UX guide can't help here. Although it has value and the potential to save time - for example, by reducing the number of discussions about capitalisation policy for dialog title bars to merely single figures per release - at close to 900 pages it is not a shining example of usability itself and it doesn't try to answer the question: who's a user? In particular, it has only a handful of mentions of new users outside of the section on first-timers, which itself is largely about setting up the product rather than using it. Inexperienced users will at various points form a significant enough proportion of your userbase that they merit special attention, and when that happens you'll need to think about lack of experience with your product…

Modal-Driven Development

If extracting feature motivations, requirements and priorities from stakeholders is an art, presenting the analysis requires artfulness. The MoSCoW method suggests using the English words Must, Should, Could and Won't as an alternative to purely numeric priorities to make it more transparent that not everything listed will be delivered. But reversing the MoSCoW rules can help to obtain the implicit prioritisations. You'll frequently hear "we must have a solution to X" or "Y should be improved in this release" or "we'd like to get Z into the payload if we could". Key verbs like this can be mapped to prioritisations, e.g. P1 (must), P2 (should), P3 (could). We use these clues to bootstrap priority discussions and, perhaps surprisingly, I often apply it to my own bug reports to get an idea of my intuition on an issue. You ought to beware that there will not be a 1:1 mapping between key word and priority, and negation does not help…
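
Taken very literally, the verb-to-priority clue hunting described there could be sketched in a few lines. This is only an illustration of the idea, not the author's tooling; the mapping table, the P1-P3 labels and the sample statements are assumptions drawn from the excerpt:

```python
# A minimal sketch of mapping stakeholder modal verbs to candidate priorities.
# The mapping and examples are illustrative assumptions, not a real process.
import re

# Hypothetical verb-to-priority mapping, as in the excerpt: P1 (must), P2 (should), P3 (could).
MODAL_TO_PRIORITY = {
    "must": "P1",
    "should": "P2",
    "could": "P3",
}

def suggest_priority(statement: str) -> str:
    """Return a first-guess priority based on the strongest modal verb found."""
    found = [
        prio
        for verb, prio in MODAL_TO_PRIORITY.items()
        if re.search(r"\b" + re.escape(verb) + r"\b", statement.lower())
    ]
    # Lower P-number means higher priority; fall back to "unclassified" if no verb matches.
    return min(found) if found else "unclassified"

if __name__ == "__main__":
    for s in [
        "We must have a solution to X",
        "Y should be improved in this release",
        "We'd like to get Z into the payload if we could",
    ]:
        print(f"{suggest_priority(s)}: {s}")
```

As the excerpt itself warns, there is no reliable 1:1 mapping between key word and priority, and negation ("we really shouldn't ship without X") defeats a naive scan like this, so at best it bootstraps the discussion.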

Can The Modeller Control The View?

One of the reasons that software testing is challenging, both intellectually and practically, is that the information about the state of the system under test is partial. It's part of the testing role to formulate a model (or, more usually, a cloud of overlapping, incomplete and contradictory models) that represents our best view of the system at any given time, and we've developed a collection of monochrome boxes that reflect the idea that access to source code can help make sense of it. But even that doesn't equate to an understanding of the model that the software has when it operates. For example:
- The tester may not follow the source code (completely).
- External libraries may implement a substantial part of the functionality but appear minimally in the source.
- Interactions with other layers, such as the operating system for file operations, will form part of the model without being part of the codebase.
- If the source code is compiled, it may be optimised in ways…
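
The last point lends itself to a tiny demonstration. This is an illustrative aside using CPython's bytecode compiler rather than anything from the post; native toolchains optimise far more aggressively, but the principle is the same: what executes is not always what you read.

```python
# CPython constant-folds the arithmetic below at compile time, so the bytecode
# that actually runs never performs the multiplications a source-level model
# might assume are there.
import dis

def seconds_per_day():
    return 24 * 60 * 60  # reads as three numbers and two multiplications...

# ...but the disassembly shows a single pre-computed constant, 86400.
dis.dis(seconds_per_day)
```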