We don't have any business analysts at our place but I'm interested in the role and the potential insights a BA can bring to software development. Historically, I've wanted my testing to be concerned with more than whether the desired scope was implemented: I want it to wonder whether that scope is targeting the right problem and, if it is, whether some other scopes could solve it too, and what the relative merits of each are.
I think this kind of work overlaps with the things a BA might do, but perhaps with sharper tools, and I perceive that there's crossover in the skill set too. To give my thoughts some more grounding I looked for an intro book and chose Business Analysis Agility by James Robertson and Suzanne Robertson.
Why that one? Because it talks end-to-end about software development in contemporary environments, because it's got worked examples, and because I liked what Johanna Rothman's five-star review on Amazon said about it:
This book builds on Gottesdiener's "Discover to Deliver" and Patton's "User Story Mapping." Both those books talk about iterating over the planning and requirements. This book specifically talks about safe-to-fail probes. Note the wording. Not just safe-to-fail, but probes. (I did a little happy dance with that.)
And I agree that there's much to like in the book, not least nuggets like these that smack of experience:
Part of the problem we set out to address is to dispel the notion that “anything upfront” is bad. Without some upfront analysis, projects have no scope and are simply shooting in the dark. The trick is to make anything upfront as short and effective as possible. We can show you how to do that. (Kindle location 296-298)
The problem is that we often don’t know what the problem is. (471-472)
A common problem with silos is that teams often feel constrained to deliver a solution contained within their own silo. (4140-4141)
My confirmation bias does its own little happy dance when I read this kind of thing:
Your solution must solve the right business problem. There is no other way to deliver value. (335-335)
Performing an analysis does not mean that we want to delay delivering a solution. It means we want to deliver the right solution. (586-588)
Note that the value propositions are technologically agnostic. You are describing an outcome, not how something is to be done. (1175-1176)
As the title suggests, much of the content revolves around agile practices. Boiling it down, the book promotes iterative cycles of plan-do-check-act at appropriate granularities, often embedded within one another. Readers who've been around software development may find little new in large chunks of it, but I don't mind that so much: the material places the BA role in a wider context and each reader is different.
To reiterate that last point, although context diagrams are common currency in some fields, the name was new to me. The idea of putting the system under development in a black box, to help understand the inputs and outputs of business events while remaining agnostic about the implementation, is compelling and something I've only done informally in the past.
Where I felt less satisfied was in the conversation around value. The worked examples provoke discussion about value and impact, and the importance of understanding those things. Here are a few quotes:
Value comes in many forms, but it must be a real value. If it is real, it is also measurable. (1218-1218)
Value is delivered when your solution enables your customer to do something useful or pleasurable that he could not do before. (1215-1216)
You should be able to look at your value proposition and assess the impact it has on the target audience. (1251-1251)
Without an impact, the solution is unlikely to deliver much value. (1242-1242)
A value proposition describes the value the customer receives when you solve his problem. (1156-1157)
a combination of questions would tell you if you have delivered the required value. (1223-1223)
if there are enough subjective questions, those serve to make the value measurable. (1231-1232)
But I didn't find that the worked examples took me anywhere near far enough through a lifelike scenario in which an understanding of the potential value(s) was gained, let alone measured or compared.
To be fair, I've been around the block enough times to understand that a concept like value can be a movable feast. However, I've also spent much time teasing out my own understanding of terms fundamental to my work, such as testing, value, and quality, and I'd have liked to see something less vague here.
Weinberg famously collapses the quality of a thing down to a statement about value:
Quality is value to some person.
and then further collapses value to be the amount that that person would be prepared to pay to get the thing.
This, while perhaps seeming reductionist or even crude, has the benefit of transparency and makes feature X for customer segment Y directly comparable with feature A for customer segment B in an important respect.
The book does also cover quality, or rather qualities ("the usability, the security, the look & feel, the performance, and so on"), in particular with reference to acceptance criteria.
In short, provided the functionality is met, it is the qualities that determine the acceptance or rejection of your product. (3554-3554)
The quality needs at some stage become part of your acceptance criteria. This means that you must be able to measure them if acceptance testing is to have any meaning. While a look & feel quality can legitimately be “stylish and attractive,” it must have a measurement if it is to be tested. (3650-3652)
So you can measure whether the product meets its quality needs with a fit criterion: 60% of customers return to the site within 3 months. If this is achieved, the product is considered to be “stylish and attractive.” (3663-3666)
Reading this last quote, I find myself asking Really? and wondering about construct validity. There are any number of reasons why customers might return to a site that they do not consider stylish and attractive, terms which are in any case intrinsically subjective.
The interesting thing for me here is that I share the strong desire to understand whether or not some feature fulfils a customer or business need. I also want to couple success criteria closely to the way in which they are to be measured (or to be clear about the extent to which the metric is a proxy).
But perhaps that's telling me that there's not so much crossover between business analysis and testing as I thought. In fact I felt somewhat seen, and laughed out loud, when I read this:
We find that most business analysts are not great testers, but they’re great at writing acceptance criteria. (3401-3402)
Despite my quibbles I'm still happy that I bought the book, but I'd love to see a much deeper dive into ways to assess, measure, and compare the value of disparate options. Any recommendations?
P.S. The authors offer various templates for helping with the kinds of business analysis that they describe: Volere.
Image: Amazon
Edit: John Cutler suggested Cost of Delay as a possible approach over on Twitter.
Comments
Also context for the system is the top level diagram in https://c4model.com/