Everywhere we look in the system for delivering value with software, there are opportunities to make improvements that work well for one part but have a negative effect on the whole. Systems Thinking is about understanding the purpose of the whole and making improvements with the whole in mind.
To quote Wikipedia:
“Systems thinking is… based on the belief that the component parts of a system can best be understood in the context of relationships with each other and with other systems, rather than in isolation”. [emphasis mine.]
For instance, let’s consider the funding and approval process. Approving and funding a large “batch” of requirements for which benefits, costs and delivery date are known makes it much easier for those doing the approving. They can easily ensure value for money, check alignment with business strategy and enforce alignment with technology strategy. Because of the time and cost of pulling all the key individuals together for a review, it appears to make perfect sense to do this as efficiently as possible: review and approve the requirements as a large batch, monitor their progress towards the delivery date as a large batch, and finally track the benefits as a large batch.
Or does it? Unfortunately, what works well for the project approval process has a number of rather negative effects on the whole system. In systems thinking, these are the dreaded “unintended consequences”. Here are three effects we often see as a result of large batches in the upstream funding and approval process:
- Possibly the worst effect is that it drives analysis work upstream of the approval process. How else can we “know” the costs, benefits and delivery date of the requirements in the batch? What starts out as a mechanism for controlling costs ironically results in driving up costs (and time) and pushing work “off the page”, reducing visibility and control.
- The second is that valuable and urgent new ideas must now effectively wait in line for consideration. For some, this may mean months and months of waiting. Whilst the paperwork for approval might seem quick and simple to its designers, any binary (yes/no) control system like this often ends up overly bureaucratic, complex and full of red tape. Big batch processes typically get bigger and slower over time. Time and time again we see examples where “No” doesn’t actually mean “no”. Instead, it means “go and do more work” until you have eventually made the case for approval and the work passes through the gate.
- Since they are approved as a batch, the requirements then typically travel together through the system. Because of the size of the batch, this can look like a snake eating a rodent: slow and uneven flow. For individuals downstream of the approval and funding process, it often means being starved of work for a significant period of time, then finding yourself working overtime and weekends when the batch gets thrown over the fence (often with a now unrealistic delivery date attached); the toy simulation below illustrates this effect.
There are more effects than the three above, but fundamentally, big batch funding and approval processes drive work upstream and make it difficult for work to flow quickly and smoothly through the process. The batching creates a myriad of inefficiencies in the end-to-end process that the up-front approvers have poor visibility of.
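To make the flow effect concrete, here is a small, purely illustrative simulation; the arrival rate, downstream capacity and batch sizes are assumptions chosen for the sketch, not data from any real organisation. Work is approved in batches of a given size, and we measure the average lead time from idea to delivery, along with how long the downstream team sits starved of work:

```python
# Toy model: how the size of the approval batch affects downstream flow.
# All numbers are illustrative assumptions, not measurements of a real process.
import random

random.seed(42)

NUM_ITEMS = 120          # requirements arriving over the simulated period
ARRIVAL_RATE = 1.0       # roughly one new idea per day
DEV_DAYS_PER_ITEM = 0.8  # downstream capacity: a little faster than ideas arrive


def simulate(batch_size):
    """Approve work in batches of `batch_size`; return the average lead time
    (idea to delivered, in days) and how many days the downstream team sat idle."""
    arrivals = []
    t = 0.0
    for _ in range(NUM_ITEMS):
        t += random.expovariate(ARRIVAL_RATE)
        arrivals.append(t)

    lead_times = []
    idle_days = 0.0
    dev_free_at = 0.0  # when the downstream team is next free

    # Items wait until a whole batch has accumulated, then the batch is
    # released downstream at once ("thrown over the fence").
    for start in range(0, NUM_ITEMS, batch_size):
        batch = arrivals[start:start + batch_size]
        release = batch[-1]                        # batch leaves approval with its last item
        if release > dev_free_at:
            idle_days += release - dev_free_at     # downstream starved while waiting
            dev_free_at = release
        for arrived in batch:
            dev_free_at += DEV_DAYS_PER_ITEM       # items are then worked one at a time
            lead_times.append(dev_free_at - arrived)

    return sum(lead_times) / len(lead_times), idle_days


for batch_size in (1, 10, 40):
    avg_lead, idle = simulate(batch_size)
    print(f"batch size {batch_size:>2}: average lead time {avg_lead:5.1f} days, "
          f"downstream idle for {idle:5.1f} days")
```

Even in this toy model the pattern from the list above appears: the bigger the batch, the longer ideas wait for approval and the more the downstream team lurches between starvation and a pile of work arriving all at once.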
Lean and Agile principles and practices offer solutions to this: dynamic prioritisation, pull systems, small buffers, breaking the work down, funding the capacity rather than the work, and so on. But whatever practices you call on to solve these problems, they must be applied with a systems thinking mindset. This requires:
- an understanding of how the whole system works, from the lightbulb moment to the point when value has been delivered.
- an appreciation and anticipation of the effects that any changes might have.
- an understanding of the various trade-offs, in particular between control and speed.
- an understanding of which measures of the system really matter.
- PDSA (Plan-Do-Study-Act) feedback loops that measure the effect of change on those measures.
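As a sketch of what that last point might look like in practice (the planned change, the lead-time numbers and the function below are all hypothetical), a PDSA cycle captures a system-level measure before a change, trials the change on a small slice of the work, and then studies the same measure before deciding whether to adopt, adapt or abandon it:

```python
# Minimal PDSA (Plan-Do-Study-Act) sketch: judge a change by a whole-system
# measure. The numbers are hypothetical, used only to show the shape of the loop.
from statistics import mean


def pdsa_study(plan, baseline_lead_times, trial_lead_times):
    """Compare a system-level measure (idea-to-delivery lead time, in days)
    before and after a small trial of the planned change."""
    before = mean(baseline_lead_times)
    after = mean(trial_lead_times)
    print(f"PLAN : {plan}")
    print(f"STUDY: average lead time {before:.1f} -> {after:.1f} days")
    if after < before:
        print("ACT  : system-level improvement - adopt the change and run the next cycle")
    else:
        print("ACT  : no system-level improvement - adapt or abandon the change")


# Hypothetical lead times sampled before and after trialling smaller approval
# batches with one portfolio of work.
pdsa_study(
    "approve work in batches of 5 instead of 40",
    baseline_lead_times=[62, 55, 71, 48, 66],
    trial_lead_times=[21, 18, 25, 19, 23],
)
```

The essential point is that the Study step looks at a measure of the whole system, such as idea-to-delivery lead time, rather than a local one such as how efficiently the approval meeting itself runs.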
And here, we’re just looking at the funding and approval part of the system! Consider the potential in the other “components” of the system that delivers value via I.T., such as the provisioning of hardware and environments. The message I’m trying to convey is that delivering value through the product development pipeline is a dynamic, complex system with many moving parts and interactions. A reductionist approach that focuses on individual parts of this system will likely yield improvements and efficiencies in that area, but it often leads to an overall deterioration in the performance of the end-to-end system. I’ve attempted to illustrate the issue in one aspect of the upstream process, but there are many other areas that need to be considered from a systems thinking perspective.
Ultimately, improving the delivery of value through I.T. requires a systemic approach that considers all of the moving parts: from the lightbulb moment all the way through the system as that idea is refined, realised and released to end-users and customers. If you think about it, Systems Thinking in I.T. is not just a nice-to-have; it’s essential.