Transforming hidden assumptions into hypotheses

I posted an answer on Quora the other day in response to a question about how to gamify sprints as a way of motivating teams:

Try sharing with the team the #CostOfDelay of the things they are working on in $/week. For example:
“The Cost of Delay for this story is $50,000/week. What this means is that delaying this costs us the equivalent of $50,000 for every week we don’t have it.” Communicating the value and urgency of the things they are working on raises the team’s motivation: they can prioritise better, make better trade-off decisions, and focus on value and speed.

Even better: express the assumptions behind the #CostOfDelay as hypotheses, and ask the team to invalidate the assumptions as quickly as possible. They will then create the feedback loops necessary to learn whether the value is there or not. No need to “incentivise” them to do this — just ask! No team wants to be wasting time building stuff no-one will use. “Every week we’re not building the wrong thing is another week we can focus on building the right thing”. No need to “gamify” it. Product Development is enough of a game without making up fake games or rewards.

Also, attempting to motivate teams with extrinsic rewards will destroy any intrinsic motivation. Don’t do that. You may see a temporary increase in focus from a reward system, but the effects won’t last. It’s also quite likely that the team will game the reward system very quickly. They’re engineers, not kids. There’s no better motivation than knowing the value that the things you’re building will hopefully generate.
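To make the arithmetic in that answer concrete, here’s a minimal sketch – all figures are illustrative, borrowed from the $50,000/week example above:

```python
# Minimal sketch of the Cost of Delay arithmetic (illustrative figures only).

cost_of_delay_per_week = 50_000  # $/week of value forgone while the story is not live
weeks_of_delay = 3               # a hypothetical slip

total_delay_cost = cost_of_delay_per_week * weeks_of_delay
print(f"Slipping by {weeks_of_delay} weeks costs roughly ${total_delay_cost:,}")
# -> Slipping by 3 weeks costs roughly $150,000
```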

There’s an important point in that answer – about transforming assumptions into hypotheses. Rather than throwing ourselves into development, blinded by false certainty or opinion, we can convert the most important uncertainties into hypotheses and ask the team to invalidate these as quickly as possible. The truth is that much of what we develop in pixel-perfect detail, with a whole host of nice-to-have bells and whistles, turns out to be largely worthless. Better to find that out after a few weeks than after a few months or even years. Speed of learning is crucial.
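One lightweight way to capture this – a sketch, not a prescription; the feature, measure and thresholds below are entirely made up for illustration – is to record each value assumption as a falsifiable hypothesis with a measure and an invalidation threshold:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    assumption: str      # the belief underpinning the Cost of Delay estimate
    measure: str         # what we will observe to test it
    invalidated_if: str  # the result that would kill the assumption
    cheapest_test: str   # the fastest, cheapest way to find out

# Entirely hypothetical example:
export_feature = Hypothesis(
    assumption="Enterprise customers will pay extra for bulk export",
    measure="Clicks on a 'fake door' Export button shown to 500 users",
    invalidated_if="Fewer than 10 clicks within two weeks",
    cheapest_test="Fake-door test behind a feature flag",
)
```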

I sometimes hear people assert that quantifying cost of delay is the stuff of wild-ass guesses or makey-uppy numbers – and therefore a pointless waste of time. I can understand this perspective, partly because I have seen an awful lot of business cases. In a previous role I reviewed and sifted through hundreds of cost-benefit analyses, many of which were being used to justify and gain approval for many millions’ worth of funding. Given that I’ve probably reviewed more business cases than anyone I know, you can rest assured that I’ve also seen an awful lot of games being played with numbers. I know all that. I’m not naive. But I’m also not stupid: we still need to make decisions. We can’t just stick our heads in the sand and hope all this complexity and uncertainty goes away. When it comes to portfolio decisions in particular, we don’t just have to decide what to invest in from a whole host of possible options; we also need to be able to figure out when to stop and move on to something else. Only an idiot would make these decisions without some effort to consider the options. And, in product development, the primary differentiator between the options is often the Cost of Delay.
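As a sketch of why that differentiator matters for sequencing, here are two made-up options scheduled by CD3 (Cost of Delay Divided by Duration) – one common way of using Cost of Delay to order work. Every number here is invented for illustration:

```python
# Sketch: sequencing by CD3 (Cost of Delay Divided by Duration).
# All numbers are invented for illustration.

options = [
    {"name": "A", "cod_per_week": 50_000, "weeks": 5},
    {"name": "B", "cod_per_week": 20_000, "weeks": 1},
]

for opt in options:
    opt["cd3"] = opt["cod_per_week"] / opt["weeks"]  # delay cost per week of work

schedule = sorted(options, key=lambda o: o["cd3"], reverse=True)
print([o["name"] for o in schedule])  # ['B', 'A']

# Why B first? Doing B first delays A by 1 week:  1 * $50,000 = $50,000.
# Doing A first delays B by 5 weeks:              5 * $20,000 = $100,000.
# Scheduling the highest CD3 first minimises the total cost of delay.
```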

Given the stochastic nature of value and the large dollops of uncertainty involved, it is tempting to characterise any ex-ante (before the event) analysis of value and urgency as useless “guesswork” and “fake numbers”. If you were to take a handful of business cases and analyse them after the fact (ex-post), you would of course discover that a whole bunch of them were “wrong”. What is less obvious is that we are just as likely to be wrong about the ex-ante analysis of the winners – typically by underestimating them. There are precious few studies in this area, but those that do exist find no skew to the high side of the benefits. The estimates are not precise, of course – but what idiot would expect precision, given the stochastic nature of the game we are playing? No, these are false accusations based on a false dichotomy. It’s not #NoNumbers vs #PerfectNumbers. I’ve seen people’s understanding of the Cost of Delay improve remarkably with even a little analysis.
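One way out of the false dichotomy is to estimate with ranges rather than points. A rough sketch, where every distribution and figure is an assumption for illustration only:

```python
import random

def sample_cost_of_delay():
    # Illustrative assumptions, expressed as ranges rather than point estimates.
    customers_per_week = random.uniform(50, 200)   # customers/week we might win once shipped
    value_per_customer = random.uniform(200, 600)  # $ lifetime value per customer
    return customers_per_week * value_per_customer  # $/week

samples = sorted(sample_cost_of_delay() for _ in range(10_000))
p10, p50, p90 = (samples[int(len(samples) * p)] for p in (0.10, 0.50, 0.90))
print(f"Cost of Delay: ${p10:,.0f} to ${p90:,.0f} per week (median ${p50:,.0f})")
```

A wide range is still far more useful for a portfolio decision than either a false point estimate or no number at all.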

What remains largely unchallenged in this argument is the alternative offered by the #NoNumbers camp. I’ve yet to hear one that doesn’t boil down to relying on the gut feel of the HiPPO (the Highest Paid Person’s Opinion), which has proven to be way off the mark.

As I attempt to emphasise when teaching people how to quantify the Cost of Delay: whilst the numbers are really useful, perhaps more important than the numbers is the process of surfacing our assumptions about value. Having surfaced the assumptions, we are much better equipped to question them and, crucially, to design effective experiments that quickly invalidate them or give us a better understanding of the value.

What we are looking for is a more scientific approach to value. We know that we aren’t good at predicting – but that doesn’t mean we should ignore the problem and just run with gut feel. Much better to surface our assumptions, convert them into hypotheses, and test them as quickly as possible. For me, this is a key part of tilting the playing field of product development.

More on improving the asymmetry of product development here.

Workshops and training on quantifying Cost of Delay here.

Other tilts you might want to try here.