Friday, October 1, 2021


A while ago my team was asked for estimates on a customer project that had become both urgent and important. Unfortunately, but not unusually, there was a reasonable amount of uncertainty around the customer need and two possible approaches were being proposed. 

It fell to me to organise the team's response.

First off, should we refuse to estimate? Woody Zuill talks compellingly about #NoEstimates approaches. In Control, or the Fear of Losing It I summarised his perspective as:

We think that we estimate because it gives us control. In reality, we estimate because we fear losing control. The irony, of course, is that we aren't in control: estimates are inaccurate, decisions are still based on them, commitments are also based on them, projects overrun, commitments are broken, costs spiral, ...

Ron Jeffries has a typically nuanced take on the idea in The #NoEstimates Movement:

How to apply #NoEstimates isn’t entirely clear. Does it really mean that all estimates are bad? If not, which ones are OK? How can we tell the difference between an estimate that’s useful enough that we should do it, and one that is pernicious and never should be done?

And I find George Dinwiddie to be a pragmatic guide, noting in Software Estimation Without Guessing that there are many ways to estimate and that they do not all suit all people in all circumstances. The key is to find a useful approach at an appropriate cost, given the context.

In this case, I felt that we were being asked to help the project team to move past a decision point. My instinct was that analysis was probably more important than precise numbers, and I wanted to keep effort, and team interruption, to a minimum. 

This is what I did...

I drafted a document that listed the following for each of the two implementations (let's call them A and B):

  • what I understood were concrete requirements for each
  • assumptions the team would make in order to generate estimates
  • risks associated with each project, the process we were in, and estimating itself

I delivered this quickly and requested immediate feedback from the stakeholders. This clarified some aspects, identified things that I had missed or got wrong, and exposed differences in perspective amongst the sponsors. It also showed that I was taking the work seriously.

Next, I made a spreadsheet with a rough list of feature items we'd need to implement for each of A and B, and I passed that by the team to skim for obvious errors.

Finally, the team got together on a short call. We briefly kicked around tactics for estimating and decided between us to each give a low (or optimistic) and high (or pessimistic) estimate for each line item for each of A and B. We did this on the count of three to avoid biasing each other, and we wrapped up all of our uncertainties, worries, assumptions, and so on into the numbers. 

For each item I dropped the lowest low and highest high into the spreadsheet (like the example at the top) and totalled the values to give very crude error bars around potential implementation routes for each version of the project. 
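The aggregation can be sketched in a few lines, something like this (the feature names and numbers here are invented, purely for illustration):

```python
# Each line item has a (low, high) pair from every estimator.
# We take the lowest low and the highest high per item, then sum
# each column to get crude error bars for the whole approach.
estimates = {
    "Feature 1": [(2, 5), (3, 8), (1, 6)],
    "Feature 2": [(4, 10), (5, 9), (3, 12)],
}

total_low = total_high = 0
for item, guesses in estimates.items():
    low = min(g[0] for g in guesses)   # lowest low across estimators
    high = max(g[1] for g in guesses)  # highest high across estimators
    total_low += low
    total_high += high
    print(f"{item}: {low}-{high} days")

print(f"Total: {total_low}-{total_high} days")
```

Taking the widest spread per item, rather than averaging, deliberately preserves the uncertainty in the final totals.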

I updated the document with this finding and delivered it back to the project with a recommendation that we de-risk by choosing B given the urgency of delivery. 

The stakeholders accepted the suggestion and my work was done.

Retrospecting, then: I was very happy with the process we bootstrapped here and I would use something like it again in similar circumstances to enable a decision.

To be clear, I would not trust the absolute numbers we created, but I would have some faith that the relative comparisons were valuable. In our case, B came out at about half the size of A, and this accorded with our intuition about the relative uncertainty and complexity of the two approaches.

Also important is the context in which the numbers are set. Explicitly listing the assumptions, risks, and approach gives us a shared understanding and helps to see when something changes that might affect the estimates.

Choosing not to unpack everyone's personal feelings on every number was a real efficiency gain. Gut instinct is built on data and experience, and we can access it unconsciously and quickly. Taking a low and a high number emphasises to stakeholders that there is uncertainty in the figures.

I tried to choose a pragmatic, context-based approach to estimation, where the numbers might be somewhat brown* but, along with the contextual information, facilitated a decision. At another time, in another situation, I might have refused, or done something different. #SomeEstimates.

* I am indebted to Jason Trenouth for the concept of a brown number, so called because of the place they're pulled out of.
