Challenges with Cost of Delay and CD3: Duration

A couple of weeks ago I received an email asking for help with applying Cost of Delay and CD3 to some potentially difficult cases. I enjoy these challenges. For an idea to survive, it needs to be stressed to see how it responds. Maybe the idea is completely flawed (e.g. the Geocentric model of the universe). Maybe it was the best we had, but something else supersedes it, rendering it obsolete (e.g. the horse and carriage). In lots of cases, the idea only applies in certain contexts, and not in others (various examples). Ultimately, if an idea is so brittle that we can’t use it to answer any practical questions, it deserves to fade away, serving only as a marker of the progress we’ve made.

A quick CD3 style “triage”

The email contained 5 points – too much to try and answer in one post. So, I will split it up into at least a couple of posts. To decide where to start, perhaps I could practice what I preach and apply CD3 to the way I schedule these?

After a 5 minute review of all five points, it seemed to me that the last point (#5) probably had the highest Cost of Delay. Why? Well, I know that it is urgent because the person asking tells me they are planning to blog on the topic in the very near future. It also seemed valuable to the person asking, since they state it applies to all of the previous four. I assume it may also be valuable to the wider community because I get a lot of questions in this area – so it will probably help the greatest number of people too.

Answering #5 first probably also has the shortest Duration, because I have already dealt with similar questions before, and I feel that the words will flow fairly easily (I could be wrong on this). Combining these three factors together (value, urgency and duration), I’m pretty sure that the highest CD3 score would be #5. If I get bogged down, I can always switch after a reasonable time box. (Adding round-robin scheduling to a very basic triage using CD3 helps to “cut the tail” with things that turn out to be far more intractable than they might seem at first blush.)
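
For the curious, the triage above amounts to nothing more than this. A minimal sketch, with invented numbers standing in for my rough estimates:

```python
# Hypothetical triage of the emailed points. Cost of Delay (value + urgency,
# per week) and Duration are illustrative numbers, not my actual estimates.
points = [
    {"name": "#1", "cost_of_delay": 3, "duration": 2},
    {"name": "#2", "cost_of_delay": 2, "duration": 2},
    {"name": "#5", "cost_of_delay": 8, "duration": 1},
]

# CD3 = Cost of Delay divided by Duration; schedule the highest score first.
ranked = sorted(points, key=lambda p: p["cost_of_delay"] / p["duration"],
                reverse=True)
```

With these numbers, #5 comes out on top (a score of 8, against 1.5 and 1.0), which is all a rough triage needs to tell you.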

Enough waffle. Let’s crack on…

#5: Duration

We start out with much common ground: we agree there is value in using historical data and performing Monte Carlo simulations to model potential durations for a given number of work items. We also agree that duration is not an independent variable. Like almost everything in Product Development, duration is stochastic in nature.

There is a reference to Troy Magennis’ work suggesting that duration typically follows a Weibull distribution. Larry Maccherone’s analysis of shape parameters gets mentioned too. All good stuff. There follows some discussion about different shape parameters, and the effects of multi-tasking: it shifts the distribution to the right, making durations longer. I agree with all of this.
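For anyone who wants to play with this, a Monte Carlo forecast of the kind described is only a few lines of Python. The parameters here are assumptions for illustration only: a Weibull shape between 1 and 2 is in the range often quoted for work-item cycle times, and the scale of 5 days is entirely invented.

```python
import random

random.seed(42)

# Assumed parameters -- illustrative only, not from any real data set.
SHAPE, SCALE, N_ITEMS, N_RUNS = 1.5, 5.0, 20, 10_000

totals = []
for _ in range(N_RUNS):
    # One simulated delivery: sum a sampled cycle time for each work item.
    totals.append(sum(random.weibullvariate(SCALE, SHAPE)
                      for _ in range(N_ITEMS)))

totals.sort()
p50 = totals[len(totals) // 2]          # median forecast
p85 = totals[int(len(totals) * 0.85)]   # a more conservative forecast
```

Reading off percentiles rather than a single number is the point: the forecast is a distribution, not a date.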

As best as I can understand it, the challenge they have with Duration could be summarised in three main points:

  1. The spread of Duration is typically “huge”, from very small to very big.
  2. A large part of the actual Duration is due to waiting time, and is therefore unrelated to the work itself.
  3. Because of these two things, our ability to determine the duration is very poor.

Conclusion: don’t bother with Duration, just use Cost of Delay.

I would of course agree with the first point about the spread. For me though, this is why some attempt to roughly sort features in some way is economically useful. I would also agree with the second point – I make this point myself in this video and almost all of the talks I’ve done. I find it curious that organisations are obsessed with improving the part of the process (Development) that is likely to yield the smallest result in terms of reducing cycle time!

Where we diverge is on the third point. Because the spread is large (and despite the effects of waiting time), economically speaking, it is still useful to take account of duration estimates when prioritising or scheduling. There are of course examples where this is unnecessary, say where the team is using the Goldilocks technique for slicing work. What this means is that they are estimating at the same time as they slice. In that case, I would agree that dividing every item by duration is mostly pointless. Note though that even if they were to, it doesn’t break anything. The CD3 algorithm handles this gracefully and still gives an optimal schedule.
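
A tiny sketch of why this is so, with illustrative numbers: when every sliced item is roughly the same size, dividing by a constant Duration cannot change the ordering, so sorting by CD3 and sorting by raw Cost of Delay produce the same schedule.

```python
# Hypothetical backlog after Goldilocks slicing: every item is roughly
# the same size, so Duration is effectively a constant.
features = [("A", 10), ("B", 4), ("C", 7)]  # (name, Cost of Delay per week)
DURATION = 1

by_cod = sorted(features, key=lambda f: f[1], reverse=True)
by_cd3 = sorted(features, key=lambda f: f[1] / DURATION, reverse=True)

# Dividing by a constant changes the scores but not the order.
assert by_cod == by_cd3
```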

Of course, this assumes that someone else has already made the portfolio-level decisions about which initiatives to focus on. At that level, understanding how long the different options might block the pipeline before they start to deliver any value is incredibly valuable. I would not choose to discard this information if you already have it; if you don’t, spending a little time asking the teams about likely duration is economically sensible.

Having said all that, precision in Duration is not something I would focus on. CD3 can happily take any rough and ready input for Duration (story points, T-shirt sizes, or any other proxy for size you care to invent). As long as the correlation with actual duration is better than random, you will still get a better economic outcome than if you were to ignore that input. Because the spread is so wide, it doesn’t even have to be that accurate to still be valuable information!

[Image: Accuracy vs Precision]

What seems to be missing from these discussions is a fair appraisal of the counterfactual. What is the alternative? As Don says, when you’re running from a bear, you don’t have to outrun the bear, you only have to be a little bit faster than the other potential sources of food.
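
That counterfactual is easy to simulate. The sketch below (all numbers invented) compares two ways of ordering a random backlog: pure Cost of Delay, ignoring duration, versus CD3 fed with deliberately noisy duration estimates. It counts how often the noisy CD3 schedule incurs less total delay cost, assuming items are delivered one after another.

```python
import random

random.seed(7)

def total_delay_cost(schedule):
    """Each item accrues its Cost of Delay until the moment it ships."""
    elapsed, cost = 0.0, 0.0
    for cod, duration in schedule:
        elapsed += duration
        cost += cod * elapsed
    return cost

def trial():
    backlog = []
    for _ in range(30):
        cod = random.uniform(1, 10)           # Cost of Delay per week
        dur = random.uniform(0.5, 10)         # true duration in weeks
        est = dur * random.uniform(0.5, 1.5)  # noisy but correlated proxy
        backlog.append((cod, dur, est))
    by_cod = sorted(backlog, key=lambda i: i[0], reverse=True)         # ignore duration
    by_cd3 = sorted(backlog, key=lambda i: i[0] / i[2], reverse=True)  # CD3, noisy estimate
    cost = lambda order: total_delay_cost([(c, d) for c, d, _ in order])
    return cost(by_cd3) < cost(by_cod)

# Out of 200 random backlogs, count how often noisy CD3 beats ignoring Duration.
wins = sum(trial() for _ in range(200))
```

Even with estimates that are off by up to 50%, the schedule that uses them wins almost every time: like the bear, it only has to be better than the alternative.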

Beyond this, I can also think of two additional reasons not to ignore Duration:

1) Stakeholder Signalling

The first is that making some use of duration in your scheduling algorithm sends a clear signal to the system that we have a preference for smaller changes. Without this, there is an incentive to maximise the size of your request so that you get as much as you can before the team is redeployed onto something else. Using CD3, if you can work out the 20% of your idea that delivers 80% of the value, then your priority goes up by a factor of 4. Pointing this out to stakeholders gets them aligned with the development team, working together to figure out the smallest thing they can do to deliver value. This works better than any “MVP” oriented system I’ve seen. (I put MVP in inverted commas for a good reason, but that is for another blog post.)
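
The factor-of-4 claim is just the CD3 arithmetic, easy to check with illustrative numbers:

```python
# Illustrative numbers: a feature worth 100 units/week of Cost of Delay
# that would take 10 weeks to deliver in full.
cost_of_delay, duration = 100.0, 10.0
cd3_full = cost_of_delay / duration  # 10.0

# Slice out the 20% of the work that carries 80% of the value:
# 80% of the Cost of Delay, delivered in 20% of the time.
cd3_slice = (0.8 * cost_of_delay) / (0.2 * duration)  # 40.0

assert cd3_slice == 4 * cd3_full
```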

I have witnessed time and again the positive effect of using CD3 where you have various stakeholders competing for the same scarce development capacity. For the first time they not only have to surface their assumptions about the value and urgency in estimating the Cost of Delay – but by using CD3 to inform scheduling they are also incentivised to make the work as small as possible. Much better to deliver lots of small things that have high Cost of Delay than lots of unknown-size things where the assumptions about Cost of Delay are hidden away in the HiPPO’s head.

In short, ignoring Duration increases the chances of gaming.

2) Learning

Asking questions about the assumptions of Value and Urgency helps the team to understand the problem first (rather than any potential solution). Beyond this, though, I’ve seen no substitute for asking how big something is to get teams to consider the complexity, risk and size of the investment and the options. This encourages a switch into System 2: digging a little deeper and doing a little analysis, which can reveal that something is either much easier than thought, or (more often) much more difficult and/or risky. Asking about Duration is in some respects a head-fake. It just happens to also be economically useful as part of the scheduling algorithm.

Estimating Duration doesn’t have to be scary or a means for abusing the team (by turning it into a commitment). Don’t throw the baby out with the bathwater!