Most Projects Fail
Over 80% of major projects fail badly on cost, schedule, and/or production rate (1). The average cost overrun is 33%; on a $4 billion project that is roughly $1.3 billion. Schedule overruns and production impairments cost at least that much again. Consequently, we are leaving billions of dollars on the table.
Why does this happen? One reason is that major projects in the oil patch are now more complex than we are capable of effectively managing.
Simple Definition of Complexity
Complexity is a region on the continuum between simple and chaotic (2).
Simple: A simple system has few parts and the parts are independent. Failure of one part does not affect the other parts.
Complicated: More parts. Parts are still largely independent. Failures are usually contained within a subsystem.
Complex: Many parts, and the parts are interconnected. Failure of one part will often cascade to affect other parts in ways that are difficult or impossible to predict.
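The difference between an independent system and an interconnected one can be made concrete with a small simulation. The sketch below is purely illustrative (the network, node count, and spread probability are all invented for the example): a failure in a system with no interconnections stays local, while the same failure in a sparsely connected system can cascade.

```python
import random

def cascade(n_nodes, edges, start, p_spread, rng):
    """Simulate a failure starting at `start` spreading along edges.

    Each time a failed node touches a healthy neighbor, the failure
    jumps across with probability `p_spread`. Returns the set of
    failed nodes.
    """
    failed = {start}
    frontier = [start]
    while frontier:
        node = frontier.pop()
        for a, b in edges:
            if node in (a, b):
                other = b if node == a else a
                if other not in failed and rng.random() < p_spread:
                    failed.add(other)
                    frontier.append(other)
    return failed

rng = random.Random(42)
n = 30

# Simple system: no interconnections -- the failure stays local.
print(len(cascade(n, [], 0, 0.5, rng)))  # 1: only the failed part

# Complex system: sparse random interconnections -- the failure
# can propagate far beyond the part that originally broke.
edges = [(rng.randrange(n), rng.randrange(n)) for _ in range(60)]
print(len(cascade(n, edges, 0, 0.5, rng)))
```

The point of the toy model is that nothing about the individual parts changed between the two runs; only the interconnections did.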
What Evidence Is There That Complexity Actually Causes Project Failures?
Figure 1 was published by Neeraj Nandurdikar of IPA (3). The Y axis is a measure of return on investment; the X axis is reservoir size. When we explore for oil we hope to find large reservoirs because that is where we’ll make the most money. Right?
This is true – but there comes a point at which return on investment peaks and then declines. If we accept reservoir size and related project size as proxies for complexity, then this curve illustrates that beyond a certain level of complexity, we usually fail. We fail the worst on our largest and most important projects.
To understand why this is true, let’s take a dive into complexity science. Fortunately, there are scientists studying complexity, so we understand a great amount that can help us deal with it (2,4). Complexity scientists study networks consisting of nodes and connections between the nodes, as illustrated in Figure 2.
The networks vary according to several parameters:
- The number of nodes.
- The intelligence or adaptability of the nodes. A node can be a binary (on-off) switch, a human being with subject matter expertise, or anything in between.
- The density of the interconnections. In most networks of interest the connections are sparse (most nodes are connected to only a few other nodes).
- The strength, or bandwidth, of the interconnections.
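These parameters are easy to picture as a data structure. The following is a hypothetical four-node example (the node names and weights are invented): each node keeps a sparse map of neighbors, with the edge weight standing in for connection bandwidth.

```python
# A tiny weighted network: sparse connections, varying bandwidth.
# Node names and weights are illustrative only.
network = {
    "A": {"B": 0.9, "C": 0.2},   # A: strong link to B, weak link to C
    "B": {"A": 0.9},
    "C": {"A": 0.2, "D": 0.5},
    "D": {"C": 0.5},
}

def density(net):
    """Fraction of possible (undirected) connections actually present."""
    n = len(net)
    edges = sum(len(nbrs) for nbrs in net.values()) / 2  # each edge stored twice
    return edges / (n * (n - 1) / 2)

print(density(network))  # 0.5: half of the possible links exist
```

A "sparse" network in the text is one where this density is low; real networks of interest (brains, organizations) are far sparser than this toy example.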
The simplest network worth studying features simple nodes, sparsely populated connections and connections with varying bandwidth. You might think that such a network will not have any interesting properties, but you would be wrong!
Your brain is one such network. It consists of neurons (on/off switches) and connections between them (synapses). The connections are sparse and vary in bandwidth. This very simple network has achieved consciousness!
Another good example is an ant colony. A single ant has few skills – a very simple rule set. Alone in the wild it will wander aimlessly and die, but put a few thousand together and they form a culture. They build and defend nests, find food and divide the work.
A scientist studying a single ant could not predict ant culture. Ant culture is not a property of the individual ants. It is a property of the system; the culture emerges. An emergent property is a property of the system that is not a property of the individual nodes. The sum is more than the parts.
CPS’s and CAS’s
In the ant colony, the nodes (the ants) are not very intelligent. Each ant has a very simple rule set which is hard-wired into its brain. Each ant does its part without thinking and without ever changing its behavior. This type of network is called a Complex Physical System, or CPS.
A network with intelligent nodes, such as human teams, is called a Complex Adaptive System, or CAS. The intelligence of the nodes gives CAS's tremendous power, but that power comes at the cost of coordination and motivation losses.
If you double the number of ants on a mound construction project, you will probably get twice as much work accomplished.
If you double the number of engineers on your project, you will not get double the work. Doubling the number of humans doubles the work potential (work potential is linear with team size), but the humans you add will not automatically know what to do, and even when they do, they may not want to do it.
Coordination and motivation losses increase non-linearly with team size. The larger the team, the less efficient it is likely to be. There comes a point where adding staff will actually decrease work output, as shown in Figure 3.
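One common way to sketch this effect (a hypothetical model, not the source's own formula) is to subtract a coordination loss proportional to the number of pairwise communication channels, which grows quadratically while work potential grows linearly. The cost coefficient below is arbitrary.

```python
def team_output(n, unit_work=1.0, coord_cost=0.06):
    """Hypothetical model of net team output.

    Potential grows linearly with team size, but coordination losses
    grow with the number of pairwise communication channels,
    n * (n - 1) / 2, so output eventually peaks and then declines.
    """
    potential = n * unit_work
    links = n * (n - 1) / 2
    return max(0.0, potential - coord_cost * links)

for n in (2, 5, 10, 17, 25):
    print(n, round(team_output(n), 2))
# Output rises with team size, peaks (around n = 17 with these
# arbitrary coefficients), then falls as coordination losses dominate.
```

Any model with linear gains and super-linear losses will show the same peaked curve; the specific functional form here is only for illustration.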
Sources of Complexity
A project consists of two interacting networks: one CPS (the kit) and one CAS (the human organization). We should expect some surprises (emergence) from the interaction of these two complex systems. Further, there are many sources of complexity, as summarized in Figure 4.
Why is it Important to Work on Complexity?
Much of the recent project complexity creep has happened in an environment of exceedingly high oil prices. Historically, oil prices have been below an inflation-adjusted $40/bbl. No one knows what the future will bring, but if you can't make money at $40 going forward, you might not be making any money at all. Survival is one good reason to worry about complexity.
There isn’t one solution. In general the mission should be to:
- Assess the inherent project complexity.
- Simplify to the extent practicable.
- Manage the remaining complexity.
Each cause of complexity shown in Figure 4 must be addressed separately. Part 2 of this series addresses potential solutions to each identified cause.
1. Ed Merrow, Industrial Megaprojects: Concepts, Strategies, and Practices for Success, Wiley, 2011.
2. Kaye Remington, Leading Complex Projects, Gower, 2011.
3. Neeraj Nandurdikar, blog and private email, 2015.
4. Melanie Mitchell, Complexity: A Guided Tour, Oxford University Press, 2009.