Putting a price tag on time can sound scary. A common fear is that the estimate of value will be misused as a planning tool – much like other estimates of time or cost often are. When estimates later prove to lack the precision they never had, those who underestimate the complexity of product development sometimes respond with far too simplistic corrections. We wrote about this in Black Swan Farming:
The point about being “obvious only in retrospect” is an important one when it comes to the Cost of Delay calculations themselves – but perhaps more importantly, to management’s view of the numbers produced. The typical response, which we heard more than once at Maersk Line, was the desire to “hold people to account” for their value estimates. In fact, we heard various ideas about how to punish or reward the individuals or groups raising ideas, depending on whether their idea turned out to be as valuable as they had predicted.
Doing this would have had a negative impact though, driving the organization back to being risk averse and slow when evaluating benefits. Business Process Owners would be incentivized to only raise ideas where they could be sure of the result, like headcount reductions or other cost savings. Focusing only on reducing cost becomes a zero-sum game. At some point, the organization needs to also focus on creating value for its customers, and in doing so increase the size of the market.
The reality is, innovation and product development are not sure things. It is more like what venture capital firms do – a series of small bets, some of which will hopefully pay for all the others. They attempt to pick winners, but they are still hostage to the market and the probability of success is low. Innovation requires that we test new ground and break things, discovering the best solution. Probe, sense, respond: Amplify the positive signal, dampen the negative signal. In this way, product development is not so much Black Swan hunting, but more like Black Swan farming.
The fear of implied false certainty is particularly prevalent in organisations that have struggled with false precision before — usually with estimates of effort or delivery dates. It is a natural response to avoid situations where the “language of certainty” can be abused again. The problem of prioritisation doesn’t go away though, so alternative solutions are employed.
Relativity
Perhaps the most common approach to prioritisation, especially with “Agile” teams, is relative ordering. In this case the value is implied by the order, but the value assumptions are usually hidden, which makes tradeoff decisions difficult. Relative ordering can work well for independent teams, where the stream of demand can be funnelled through a single person (what we like to call the HiPPO). It tends to work where the person fulfilling the Product Owner role is totally focused on developing the product and works closely with the team to help with tradeoff decisions.
Relative ordering breaks down quite quickly if there are multiple stakeholders though. It starts to stray into what I call the Eurovision model. If there are multiple teams serving multiple demand streams the system starts to seize up. Add dependencies between teams and it creates the sort of mess that only superhero Project Managers enjoy. As a rule of thumb, I find that the efficacy of relative prioritisation breaks down at around 20 or 30 items. When the number of items becomes much higher than this it becomes very hard to keep track of the reasons behind the ordering, especially when new items arrive. Applying Fibonacci numbers to the value side of the equation doesn’t help one iota, as they aren’t comparable without common units.
Rounding
One alternative is to estimate value in real money, but constrain the estimates to a simplified set of pre-defined slots. This might be with Fibonacci brackets ($10k, $20k, $30k, $50k, $80k, $130k, $210k, etc.) or even orders of magnitude ($10k, $100k, $1m, $10m, etc.). This generalisation into coarse groups does involve some trade-offs though. The first is that it doesn’t prevent the batching of features/requirements together — either to push them up to the next bracket, or simply to improve the perceived benefits. Secondly, it is generally good practice to retain more significant figures in the intermediate stages of calculations; rounding too early introduces rounding errors. Remember, these are not Cost of Delay figures yet! We don’t actually want to compare benefit figures, as this leads to competition and larger batches. Nor do we want to transmit the benefits figure as a signal. What is missing is the “per unit of time” aspect. For instance, let’s say $2.3m is the estimate of benefits of a project in the first year. Assuming these benefits ramp up to a sustained peak, we can approximate the Cost of Delay as roughly $44k per week ($2.3m divided by 52 weeks).
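As a rough illustration of that last conversion – a sketch only, reusing the $2.3m figure and the sustained-peak assumption from the example above – turning an annual benefits estimate into a weekly figure is simple arithmetic:

```python
# Sketch: convert an annual benefits estimate into an approximate weekly Cost of Delay.
# Assumes benefits ramp up to and hold a sustained peak; figures are illustrative only.

WEEKS_PER_YEAR = 52

def weekly_cost_of_delay(annual_benefit: float) -> float:
    """Approximate Cost of Delay per week from a first-year benefits estimate."""
    return annual_benefit / WEEKS_PER_YEAR

if __name__ == "__main__":
    annual_benefit = 2_300_000  # the $2.3m first-year estimate from the example
    print(f"~${weekly_cost_of_delay(annual_benefit):,.0f} per week")  # ~$44,231 per week
```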
Cost of Delay is not the figure we compare, but it is the information we want to transmit as a signal of urgency and enable better tradeoffs. It may seem like a small thing, but when you express Cost of Delay as a per week $ amount it really does create a different sense of urgency. More than anything, you want to get people away from thinking that value arrives on an annual basis as a large single payment, and that being late by a few months is merely a question of KPIs. The truth is, for most ideas sitting around waiting or queued up somewhere in the development process, value is leaking away.
So, how could we reduce the fear of false certainty? One alternative is to provide a range. By adjusting some of the most sensitive variables we can estimate upper and lower bounds on where the value is more likely to fall. The very first template I used for this showed the benefits estimate as a range, with a probability curve and the upper, lower, and best estimates of value. For many features the lower bound may actually be negative. This is akin to “out of the money” financial options, where the net payoff is negative – i.e. you lose the cost of buying the option. With this approach you get a better feel for the key assumptions behind the value and can take action to test your hypotheses early.
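To make the idea of a range more tangible, here is a minimal sketch of a three-point estimate (the field names and figures are invented for illustration, not a prescribed template):

```python
from dataclasses import dataclass

@dataclass
class BenefitEstimate:
    """Three-point estimate of first-year benefits; the lower bound can be negative."""
    lower: float   # pessimistic case – like an "out of the money" option, the payoff can be a loss
    best: float    # best estimate
    upper: float   # optimistic case

    def weekly_cost_of_delay(self):
        """Convert each annual figure to an approximate per-week Cost of Delay."""
        return (self.lower / 52, self.best / 52, self.upper / 52)

# Illustrative figures only
estimate = BenefitEstimate(lower=-200_000, best=2_300_000, upper=5_000_000)
low, best, high = estimate.weekly_cost_of_delay()
print(f"Cost of Delay: ${low:,.0f} to ${high:,.0f} per week (best estimate ${best:,.0f})")
```

A negative lower bound simply means that, in the worst case, the idea would lose money – exactly the kind of assumption worth surfacing and testing early.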
We are still not quite at the point where we compare options though, so don’t round off just yet. For a raw CD3 (Cost of Delay Divided by Duration) score, we need to divide by duration: how long is this option likely to block the development pipeline? Comparison based on Cost of Delay alone only makes sense if all the options block the pipeline for the same amount of time.
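To make that comparison concrete, here is a small sketch of CD3 scoring (the option names, Cost of Delay figures, and durations are all made up for the example):

```python
# Sketch of CD3 (Cost of Delay Divided by Duration) ranking; all figures are illustrative.

options = [
    # (name, Cost of Delay in $ per week, weeks the option blocks the pipeline)
    ("Feature A", 44_000, 8),
    ("Feature B", 10_000, 1),
    ("Feature C", 90_000, 20),
]

# Highest CD3 score first: value leaking per week, relative to how long the option occupies the pipeline.
ranked = sorted(options, key=lambda o: o[1] / o[2], reverse=True)

for name, cod, duration in ranked:
    print(f"{name}: CD3 = {cod / duration:,.0f}")
```

Note that ranking on Cost of Delay alone would put Feature C first, even though it blocks the pipeline for twenty weeks; CD3 schedules the quick, high-urgency option ahead of it.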
The fear of false certainty is understandable. Decisions still need to be made though. The alternatives, like relative ordering and rounding, have their own tradeoffs that are worth considering. Providing a range rather than a single Cost of Delay number can reduce the implied certainty. There may be other mechanisms too – feel free to share in the comments below if you have any ideas…