Here’s an interesting question:

“I have been busy trying to figure out Cost of Delay but I’m stuck on a certain question — hopefully you can enlighten me.

An example: Let’s say I have an ice cream stand that is currently at capacity, selling 10 ice-creams per week. I have two options to boost capacity:

Option A – Add a Super Spoon that will increase output by 10 ice-creams per week. This would take ~2 weeks.

Option B – Build a Second Machine that will increase output by 30 ice-creams per week. This would take ~4 weeks.

Applying CD3 — Cost of Delay Divided by Duration:

Option A – Super Spoon: 10 / 2 = 5

Option B – Second Machine: 30 / 4 = 7.5

*If I first build A, and then B, over 6 weeks I will get a total of 100 ice creams
If I first build B, and then A, over 6 weeks I will get a total of 120 ice creams.*

*Using CD3, delivering B first instead of A first gives me an increase in output of 20%.*

*Now, here is my question: the difference between the CD3 of A (5) and B (7.5) is 50%. Why do I only get a 20% increase in output if I build B first? I expected the CD3 to be in relation of the output gained by building B first. Why is there only an increase of 20 ice-creams and not 50 ice-creams (100% of ice-creams + 50% = 150).*

*Do you see where I’m coming from?”*

Yes, I think I understand. You were expecting the *percentage difference between CD3 scores* to be the same as the *percentage difference in outcomes* when comparing two *alternative scheduling options*.

### Visualise the problem

It’s a bit easier to understand if we visualise what’s happening. Here’s what A then B looks like:

For the 2 weeks we are building the Super Spoon, we have the original capacity of 10 ice-creams/wk. When the Super Spoon is ready, our capacity then increases by 10 ice-creams per week to a total of 20 ice-creams/week. It then takes us another 4 weeks to build the Second Machine, after which our capacity increases by another 30 ice-creams/wk up to 50 ice-creams/wk in week six.

If we instead do the highest CD3 first, we get the following:

In this scenario we spend 4 weeks building the Second Machine, during which we have the original capacity of 10 ice-creams/wk. When the Second Machine comes online, our capacity increases by 30 ice-creams per week to a total of 40 ice-creams/wk. It then takes us another 2 weeks to build the Super Spoon, bringing our capacity up by another 10 ice-creams/wk up to the same 50 ice-creams/wk in week six, exactly the same as if we do them the other way around.
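We can check these week-by-week totals with a quick sketch (the capacities and build times are taken straight from the example above):

```python
BASE = 10  # starting capacity, ice-creams/week

def total_output(schedule, weeks=6):
    """Total ice-creams over `weeks`, for improvements built one after
    another. `schedule` is a list of (build_weeks, capacity_gain) pairs."""
    total, rate, week = 0, BASE, 0
    for build_weeks, gain in schedule:
        total += rate * build_weeks  # output while we're still building
        week += build_weeks
        rate += gain                 # the improvement comes online
    total += rate * (weeks - week)   # remaining weeks at the final rate
    return total

spoon, machine = (2, 10), (4, 30)
print(total_output([spoon, machine]))   # A then B: 100
print(total_output([machine, spoon]))   # B then A: 120
```

Either way we end up at 50 ice-creams/wk; only the totals along the way differ.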

### Cost of Delay is a rate

The units of Cost of Delay are: *change in outcomes per unit time*. In this example, the outcome we are interested in is more ice-creams, so the Cost of Delay is measured in ice-creams per week. The units on the vertical axis matter. For *every week* we don’t have the Super Spoon, it costs us 10 ice-creams. For *every week* we don’t have the Second Machine, it costs us 30 ice-creams. Cost of Delay is a rate.

Notice also that the area under the curve (the integral) for each of these options is the *number* of ice-creams produced in a given period.

Why am I banging on about units? Well, it sometimes confuses people that Cost of Delay is a rate. You may come across some Cost of Delay graphs where the units aren’t clear. It makes a big difference if you are plotting the *cumulative* benefits or the *rate*, so it’s important to be clear what you’re actually plotting.

I would normally recommend plotting the rate. We are visualising the cumulative benefits anyway (it’s the area under the curve). This way it is easier to see the Cost of Delay – and how that might be changing over time.
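As a small illustration of that relationship (a sketch, using the B-first weekly rates from this example), the cumulative benefit is just the running total of the rate:

```python
from itertools import accumulate

# Weekly capacity for the B-first schedule: 4 weeks at 10/wk while the
# Second Machine is built, then 40/wk once it comes online.
rates = [10, 10, 10, 10, 40, 40]       # the rate curve (ice-creams/week)
cumulative = list(accumulate(rates))   # area under the rate curve so far
print(cumulative)   # [10, 20, 30, 40, 80, 120]
```

Plot `rates` and you can read the Cost of Delay straight off the chart; plot `cumulative` and it is hidden in the slope.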

So, the outcome of each of these two alternative scheduling approaches can be easily seen from the area under the curves. If we do the highest CD3 first, we get 120 ice-creams in the first 6 weeks, whereas if we do A and then B we only get 100 ice-creams. The same options, simply done *in a different order*, result in a 20% increase. It’s basically free ice-cream!

### What are the units of CD3?

Going a step further, we can look at the units of CD3, both to help us understand what it represents and whether CD3 should be proportional to outcomes or not.

To get to CD3 we divide Cost of Delay by Duration (which has units of time). So the units of CD3 are: change in outcomes per unit time, per unit time. The CD3 for the Super Spoon is 5 ice-creams/wk, per week. Likewise, the CD3 for the Second Machine option is 7.5 ice-creams/wk, per week.

We can visualise this as well, since CD3 is the *slope* (or gradient) of a line matching the change in Cost of Delay over the time it takes to realise that change. Here’s how the two CD3 “gradients” look if we schedule A and then B:

If instead we choose the highest CD3 first, it looks like this:

If you did high-school Physics, you may notice that Cost of Delay is equivalent to velocity and CD3 is equivalent to acceleration. In this analogy, the outcome would be “distance travelled”.

If we integrate the velocity curve (between two points in time) by adding up the area under it, we get the total distance travelled. This is equivalent to “Delay Cost Incurred”.
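To make the analogy concrete, here is a sketch of that integral for the step-function case, where the delayed option simply sits at its full Cost of Delay for the whole delay period:

```python
def delay_cost_incurred(cost_of_delay, weeks_delayed):
    """Integrate a constant Cost of Delay (ice-creams/week) over the
    delay period (weeks) to get ice-creams lost."""
    return cost_of_delay * weeks_delayed

# Doing A first delays the Second Machine by 2 weeks:
print(delay_cost_incurred(30, 2))   # 60 ice-creams
# Doing B first delays the Super Spoon by 4 weeks:
print(delay_cost_incurred(10, 4))   # 40 ice-creams
# The difference, 20 ice-creams, is exactly the gap between the
# 120 and 100 totals for the two schedules.
```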

By scheduling the highest CD3 first, we are in fact choosing the option that **accelerates our outcomes** the most.

### So, why isn’t the difference proportional?

Much like acceleration and distance, the difference between two CD3 scores and their associated Delay Cost Incurred *should* be proportional, but it has to be a fair comparison! We would need to treat the increase as linear (not a step function) and would have to compare over the *same* timeframe (say, 4 weeks).

This is how that would look for the Super Spoon:

Which can then compare to the Second Machine:

7.5 to 5 ice-creams/wk per wk is a 50% difference in acceleration. This is proportional to the difference between 60 and 40 ice-creams over a 4-week period, which is a 50% increase in outcomes. So they *are* proportional. Or, at least, they *would* be if the change were linear and over an equal timeframe.
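We can sketch that fair comparison in a few lines (hypothetical linear ramps with slope equal to each CD3, over the same 4 weeks, rather than the step changes of the actual example):

```python
def ramp_area(cd3_slope, weeks=4):
    """Extra ice-creams from a rate that grows linearly at `cd3_slope`
    (ice-creams/wk per wk) from zero: the area of a triangle."""
    return 0.5 * weeks * (cd3_slope * weeks)

print(ramp_area(5))     # Super Spoon:    40.0 extra ice-creams
print(ramp_area(7.5))   # Second Machine: 60.0 extra ice-creams
# 7.5 / 5 = 1.5 and 60 / 40 = 1.5 -> a 50% difference in both.
```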

In the example given, though, the change isn’t linear: we only get the benefits *after* we’ve completed each option. And the timeframes are different (2 weeks vs 4 weeks). Of course, if there is a way to break each of these down and deliver some of that increase in output earlier, then that is well worth doing. You should still prioritise the option with the highest CD3 though, as this will accelerate your desired outcome (ice-creams) the most.

The other difference to consider is that the two improvements are delivered sequentially (A then B, or B then A). As a result, the difference in outcomes between the two scheduling approaches will no longer be proportional to the difference in the individual CD3 scores.

So, hopefully this helps to explain why the difference in outcomes isn’t proportional to the CD3 scores. It’s still worth doing, though. Who’s gonna say no to free ice-cream?