How might we think about the potential value of the products and services we could develop? Is there some way of structuring our thoughts so as to surface that potential value more quickly? I want to explain a bit about the background and development of a framework that attempts to do just that. It has been used successfully in a number of organisations, where it has really helped improve shared understanding of, and visibility into, the value of the options they could develop.
In the early noughties I was working as a consultant with PricewaterhouseCoopers. The client was the Environment Agency, and we were looking at how and where they spent over £500m p.a. on projects to reduce flood risk in England and Wales. One of the problems I discovered quite early in the seven years I spent working with the Environment Agency was that the system basically encouraged them to be penny wise, but pound foolish.
The broken public finance system
As there was no tracking of the Total Cost of Ownership (TCO) of the vast asset base, basic maintenance and repairs were seen as easy areas to cut costs. The inevitable result of this was that the assets deteriorated quickly and needed to be replaced sooner.
The money for doing this came from a different source and was accounted for differently. If what the asset did was really valuable, justifying the CapEx through the usual Investment Appraisal wasn’t a problem. (It was of course massively time consuming and involved an army of Engineering Consultants, but the money would get approved eventually – another story for another time).
On the flip side, if the asset wasn’t reducing economic risk by much, justifying its replacement required some economic somersaults, which were often picked up on. Since getting the money for replacement was difficult, these assets would receive more maintenance, with regular care and attention. “We’re not going to get another one, so we’d better look after this one!” seemed to be the thinking.
The strange result of this was that some of the most critical assets received virtually no maintenance at all and were allowed to degrade, while some of the least valuable assets were maintained in pristine condition. From both a risk management and an asset management perspective it was all upside down. The perverse incentives were clear, and they led to weird, wrong results.
If only the Environment Agency were an outlier. Sadly, this sort of thing is quite common. You can see the very same effect in many asset bases that have been managed by the public sector: train and tube networks, gas and electricity, water supply, telecoms, road networks, hospitals, schools, and so on.
A simple solution
So this is the solution we developed. At the high-level (where we measured, managed and made investment and funding decisions) we considered four possible actions to manage flood risk: two focused on probability and two on consequences.
Probability is predominantly reduced by providing a measurable Standard of Service (SoS). This basically translated to a specific height above datum.
- Sustain the SoS (=)
- Improve the SoS (+)
Consequences are predominantly measured in the value of things at risk (and to some extent their ability to get out of the way of a flood event when it is forecast).
- Maintain Consequences (=)
- Reduce Consequences (–)
What this does is enable us to track and manage the Total Cost of Ownership for an agreed Standard of Service – something that the OpEx vs CapEx split made very difficult, especially when the replacement invariably involved a change in the SoS provided. Now all investment (OpEx and CapEx) was combined, with the target being to reduce the TCO by “sweating” the assets.
The idiom that applies here is “a stitch in time saves nine”. On the other hand, if you’re doing three or four stitches of maintenance every year or so, it probably makes sense to take the nine, renew the asset, and reduce the ongoing maintenance. There are models you can build that help you manage the SoS in such a way as to minimise the TCO whilst still maintaining the reduction in probability – the sketch below gives a flavour.
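Here is a minimal sketch of that maintain-versus-renew comparison. It is not the model we built at the Environment Agency; the costs, degradation rate, discount rate and horizon are all illustrative assumptions, and both strategies are assumed to deliver the same SoS.

```python
# A minimal sketch of the maintain-vs-renew trade-off. All figures
# are illustrative assumptions: maintenance costs grow as the asset
# degrades, while renewal resets them at the price of an up-front
# capital cost. Both strategies deliver the same Standard of Service.

def discounted_tco(annual_costs, rate=0.035):
    """Sum a stream of annual costs, discounted to present value."""
    return sum(cost / (1 + rate) ** year
               for year, cost in enumerate(annual_costs))

HORIZON = 20             # appraisal period in years (assumption)
MAINTAIN_BASE = 40_000   # this year's maintenance cost (assumption)
DEGRADATION = 1.10       # maintenance grows 10% a year as the asset ages
RENEWAL_CAPEX = 450_000  # cost of renewing the asset now (assumption)
RENEWED_MAINT = 10_000   # flat maintenance cost after renewal (assumption)

# Strategy 1: keep patching the ageing asset, stitch by stitch.
patch = [MAINTAIN_BASE * DEGRADATION ** year for year in range(HORIZON)]

# Strategy 2: renew now, then enjoy low, stable maintenance.
renew = [RENEWAL_CAPEX + RENEWED_MAINT] + [RENEWED_MAINT] * (HORIZON - 1)

print(f"TCO if we keep patching: £{discounted_tco(patch):,.0f}")
print(f"TCO if we renew now:     £{discounted_tco(renew):,.0f}")
```

With these particular numbers, renewing wins comfortably. The real point is that the assumptions are now explicit and open to challenge, rather than buried in separate OpEx and CapEx conversations.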
The “Pivot”
When I then joined Pearson and started looking at the investment and funding problem in their IT, I found striking parallels. In effect, there was plenty of money available to spend on chasing new customers, but existing customers (who might go elsewhere) were expected to put up with really painful experiences that would have been easy and relatively cheap to fix. Unsurprisingly, customer churn is a problem for many organisations.
Not all organisations are like this though: some make sure that every employee knows the lifetime value of a customer. This means their customers are less likely to suffer the ridiculous amounts of failure demand we see in lots of companies.
And it wasn’t just on the customer side. Pearson was spending loads of money chasing the smallest efficiency gains, yet getting funding to address a problem that might otherwise result in increased costs was much harder to justify. Decision-makers are sometimes a bit blind to risks, and “hard benefits” seem to get more attention.
Again, the solution was to start measuring the value of requested changes in a slightly different way. At the high level there would be four possible outcomes from delivering an idea: two focused on Revenue and two on Cost.
- Sustain Revenue (=)
- Increase Revenue (+)
- Avoid Cost (=)
- Reduce Cost (–)
At this point you’ll no doubt have picked up on the parallels. These four outcomes essentially became the four “buckets”. An idea or request for change to a system or application would effectively contribute to one or more of these buckets.
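By way of illustration, here is a minimal sketch of the four buckets as a simple data structure. The class, field names and figures are my own illustrative assumptions rather than anything we prescribed; the essential idea is just that each request records its estimated contribution to each bucket, denominated in the same currency so that options can be compared.

```python
# A minimal sketch of the four value buckets as a data structure.
# The names and figures are illustrative assumptions, not a
# prescribed implementation.

from dataclasses import dataclass

@dataclass
class ValueEstimate:
    sustain_revenue: float = 0.0   # revenue protected, e.g. churn avoided
    increase_revenue: float = 0.0  # new revenue generated
    avoid_cost: float = 0.0        # a future cost that never materialises
    reduce_cost: float = 0.0       # an existing cost removed

    def total(self) -> float:
        """Everything is in the same currency, so we can simply add."""
        return (self.sustain_revenue + self.increase_revenue
                + self.avoid_cost + self.reduce_cost)

# A request rarely fills just one bucket. Fixing a painful checkout
# bug, say, might mostly protect existing revenue:
fix_checkout = ValueEstimate(sustain_revenue=250_000, reduce_cost=20_000)
print(f"Estimated value: £{fix_checkout.total():,.0f} p.a.")
```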
The Implementation
So this is how we actually did it. When one of our business partners called up with a request (usually they just wanted an estimate) we would go and sit down with them and start asking questions. The first question we would ask was “What is the idea?” The problem here is that people jump to a solution too quickly. We would try to bring them back to the problem they were trying to solve, and then help them write a brief synopsis of the problem or idea.
Once we had something that we both understood well enough we would then switch to the second question: Why?
- Why would we solve this problem?
- Why does that matter?
- Why is that important?
- Why does that matter to our customers or users?
- Why does that matter to us, our organisation?
The exact form of asking “Why?” five times or more would of course differ.
What was the same, though, is that eventually we would arrive at one or more of the four buckets mentioned above. I have now seen this done for literally thousands of features, and in every case the underlying reason behind what was being asked for traced back to one or more of these four value buckets.
We would capture the “what” and the “why” and write these onto a simple A5 template. The reason for this is to communicate that this need not be a massive analysis process. It was supposed to be fast and relatively simple – and predominantly based on a conversation rather than a big Word document that never gets read.
Getting to numbers
I learned very well in my seven years working with the Environment Agency that having multiple targets with no means of comparison between them is a recipe for confusion and muddled thinking. It is this sort of confusion and muddled thinking that gave us the “balanced scorecard” that my old colleagues at PwC have made so much money from.
And so, I insisted that everything be denominated in the same units. Not “bananas” or relative points or some other lazy proxy for value. We already have a proxy for value, and it’s one we all use every day. We all make loads of decisions for ourselves and on behalf of others using this proxy for value: money.
Plenty of critics will at this juncture point out that not everything can be tied back to a dollar or pound, and I would agree. Money is not the thing we are ultimately (or even actually) aiming at, but it is a really useful way of comparing options. So whilst money is often not what really counts, it is still incredibly useful to count it. Or, as Don Reinertsen quips: “you may ignore economics, but economics won’t ignore you”.
So, just because it’s hard to agree on what numbers to use, that doesn’t mean we shouldn’t try. Part of the value of attempting to get to numbers is in the discussions themselves. We discover what assumptions we each make about what is valuable, or why this or that should be done. And yes, there are often a whole bunch of unknowns, not least of which is how customers will or won’t react to the things we build. This is called ex-ante appraisal and we do it all the time. We make some assumptions based on what we do know, and we fill in the gaps to help us arrive at a decision.
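To show what I mean by filling in the gaps, here is a minimal sketch of an ex-ante appraisal with its assumptions made explicit. Every number and parameter name is an illustrative assumption of mine; the value of the exercise is that a low/likely/high range keeps the uncertainty visible while still producing figures we can compare and argue about.

```python
# A minimal sketch of an ex-ante value estimate. Every figure below
# is an illustrative assumption, written down so it can be challenged.

reachable_customers = (5_000, 8_000, 12_000)  # assumption: addressable market
conversion_rate     = (0.02, 0.05, 0.08)      # assumption: uptake of the feature
annual_spend        = (120, 150, 200)         # assumption: £ per customer p.a.

for label, i in (("low", 0), ("likely", 1), ("high", 2)):
    value = reachable_customers[i] * conversion_rate[i] * annual_spend[i]
    print(f"{label:>6}: £{value:,.0f} p.a. into the Increase Revenue bucket")
```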
At least part of the reason for writing this down is to show that this simple model was actually developed and tested in a public sector environment, where profit is not the objective. That may be true for you as well. You may be a not-for-profit, or even a startup that (at least initially) is less interested in pure profit. For the Environment Agency, the objective was to reduce flood risk. For a startup, it is likely to be things like “Reach”, “Engagement” or some related measure of growth in the number of users and how much they are using the things you are developing.
To be really explicit: what matters is not the $ figures. What matters is that you have the conversations about what is valuable. Part of this should be working out some way of converting those measures into something that enables you to compare options and, of course, understand the cost of delay of the things you are working on. Don’t get distracted, or worked up, by the emphasis on numbers – you’re simply trying to find a quick and simple way to communicate the value of what you’re trying to achieve. Given how much effort we often put into getting our estimates right (something we know to be full of uncertainties), it seems strange that we are so reluctant to put the same effort into thinking about what drives everything we do. Surely it is value that should be our higher-order bit?
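As one illustration of what comparing options can then look like, here is a sketch with made-up figures. Once everything is in the same units, you can estimate a cost of delay per week for each option and divide by its estimated duration to get a simple scheduling order, an approach known as CD3 (Cost of Delay Divided by Duration).

```python
# A minimal sketch of comparing options once value is in common units.
# The options and figures are made up. Dividing each option's cost of
# delay by its estimated duration (CD3) gives a simple scheduling
# order when delivery capacity is limited.

options = [
    # (name, cost of delay in £/week, estimated duration in weeks)
    ("Fix checkout bug",   5_200,  2),
    ("New reporting tool", 3_000,  6),
    ("Data centre move",   9_000, 12),
]

# Highest CD3 first: the most value per week of scarce capacity.
for name, cod, weeks in sorted(options, key=lambda o: o[1] / o[2], reverse=True):
    print(f"{name:<20} CoD £{cod:,}/wk, {weeks:>2} wks, CD3 = {cod / weeks:,.0f}")
```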
To conclude, allow me to reference an article written in 1957 by William B. Cameron, titled “The Elements of Statistical Confusion Or: What Does the Mean Mean?”. In it, Cameron talks about how hard it is to do statistical analysis (something we unconsciously do all the time, whether we know it or not), and he makes the following argument:
“Counting sounds easy until we actually attempt it, and then we quickly discover that often we cannot recognize what we ought to count. Numbers are no substitute for clear definitions, and not everything that can be counted counts.”
What I take from this is that when we estimate the value of the things we are doing, we need to recognise that the resulting numbers are not an excuse to stop thinking. Developing, using and improving clear definitions of what is valuable should help us think. The numbers and algorithms are more like scaffolding to guide our thinking about what is valuable. That’s what this simple framework is.
Ultimately, you can ignore the numbers if you want. You can even fight the numbers. But in the end, the numbers that do matter will win.