Fundamental dichotomies in project planning:
- The only place a detailed end-to-end project plan could be effective is somewhere you don’t need it.
- Anywhere a detailed end-to-end project plan would actually help, it can’t be made reliable enough to be useful.
“Economists were put on this planet to make astrologers look good.”
Leo McGarry, The West Wing
Predicting the future is difficult.
For millennia, we’ve tried as a species, and by and large we have been very, very poor at it. Our better efforts come where we have effective models, strong data feeding them, and empirical methods for updating those models. Yet time and time again, we go for finger-in-the-air, gut-feel, “experience tells me” subjective calls.
When we put together projects, we implicitly recognise that predicting the future is hard, and is going to be unreliable, because as every authoritative text on Project Management says right up front:
“A project is a temporary endeavour with a defined beginning and end (usually time-constrained, and often constrained by funding or deliverables), undertaken to meet unique goals and objectives”
and to manage the risks of this one-off work, we add a great deal of non-value-adding cost, generally known as Project Management and all its works. Its need to feel in control – often above and beyond actually being in control – also regularly distracts the value-creating team members from doing the real work that the sponsor wants and is ultimately paying for.
If work were fundamentally predictable, and the method and effort required for the value-creating transformation of a deliverable were well understood and stable, then we wouldn’t bother with any of this. We’d save the overhead of probably the most expensive person on the budget and establish an explicit process, with well-defined rules to handle the known exception types. Done right, this will optimise your team’s capacity to deliver the work that’s coming in, particularly if you keep an eye on the usual types of Waste.
In this scenario, you’d have a model and data to be able to assess and plan by all the standard constraints:
a deliverable of type X (scope and quality) costs £Y (from sources Y1, Y2… Yn), and arrives Z days after initiation, requiring inputs A, B & C
Or alternatively you could express the same throughput as:
from our team of P people subdivided into P1, P2… Pn, we can expect a weekly throughput of T deliverables, distributed across types X1, X2 and X3
both within a certain range, which over time would reduce as you stabilised and improved the process. Individual pieces may vary a bit, but the law of large numbers for relatively well known and stable work will mean that the errors will largely cancel each other out.
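How aggregate numbers stabilise while individual items bounce around can be sketched in a few lines of Python. The cycle-time figures here are entirely hypothetical, purely to illustrate the law of large numbers at work:

```python
import random

random.seed(42)

# Hypothetical stable process: a deliverable takes ~5 days, with noise.
def deliverable_days():
    return max(0.5, random.gauss(5.0, 1.5))

def spread(xs):
    """Coefficient of variation: standard deviation relative to the mean."""
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / len(xs)
    return (var ** 0.5) / m

# Individual items vary a fair bit...
singles = [deliverable_days() for _ in range(10_000)]

# ...but the average over batches of 50 varies far less:
batches = [sum(deliverable_days() for _ in range(50)) / 50
           for _ in range(200)]

print(f"single items:    {spread(singles):.3f}")
print(f"50-item batches: {spread(batches):.3f}")
```

Individual items keep their noise; the batch averages cluster tightly round the mean, which is exactly what lets you manage by the aggregate numbers rather than by the pieces.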
End-to-end project planning then becomes a simple matter of drawing a rough straight line from day 0, scope 0% to the deadline at scope 100%, and acting on significant (i.e. out-of-control-limits) variances from it – simple enough that one ‘manager’ could handle about a dozen such processes simultaneously.
This means you’re managing by the overall numbers rather than using Gantt charts that would need updating many times a day as individual pieces move around within the range of statistical noise. You’re still predictable enough – within control limits – to know when action is needed on a variance, which is surely the primary reason for managing to a plan.
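A minimal sketch of what “managing by the numbers” might look like – the deadline and control limit below are invented figures to make the mechanism concrete, not a prescription:

```python
# Straight-line expectation: scope 0% at day 0, 100% at the deadline.
DEADLINE_DAYS = 120
CONTROL_LIMIT = 0.05  # act only when progress drifts >5 points off the line

def expected_progress(day):
    """Fraction of scope the straight-line plan expects done by this day."""
    return day / DEADLINE_DAYS

def needs_action(day, actual_done):
    """True when actual progress (0.0-1.0) sits outside the control limits."""
    return abs(actual_done - expected_progress(day)) > CONTROL_LIMIT

print(needs_action(60, 0.48))  # 2 points off the line -> False: statistical noise
print(needs_action(60, 0.40))  # 10 points off the line -> True: time to act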
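```

Everything inside the limits is left alone; only a genuine out-of-limits variance triggers intervention, which is the whole point of managing to a plan at all.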
Let’s go back to the classic raison d’être of delivering by a project, rather than a process.
It’s a unique and temporary endeavour. You don’t know what’s going to be involved at the start, as you have limited pre-existing data on both the work and the team, and even less on the combination of the two.
Let’s remind ourselves why we plan:
- To provide a forecast of when the work can be completed, and how much it will cost, and therefore whether it can be completed within the constraints
- To optimise the delivery, minimising the waiting time of either people or work
Both of these rely on effective estimating, which is an assessment of “How long will it take these people to complete this work?”
By the simple fact that we’ve established a project, rather than a process, to do the delivery, we’ve already accepted that we’re on to a loser here. The only sound practice in estimating is to provide a range, but the moment you reveal a plan, simple human psychology takes over: people place irrational faith in a single number, demand a single number, and measure you against whatever number you first announced, regardless of the sizeable unknowns sitting behind it.
Oh, your delivery organisation may officially acknowledge this by making a contingency budget available – but just try spending any of it…
And how would you plan for learning and improvement? How would you forecast the kind of increase in stability and reduction in elapsed time per deliverable expressed in this chart, containing real project data?
Regarding optimising the delivery: plans are great for understanding and avoiding waiting caused by dependencies – especially complex ones where you’re relying on external parties – but to do this effectively, and to communicate the critical message “I need X by date Y”, you need a great deal more confidence in your own estimates than is usually possible. And by expressing that dependency to an external team, you are asking them to do the same.
When I was planning dependencies for a delivery team across a portfolio of 20 projects, I always put at least a week of contingency onto my external dependencies – in a 6 week project timeframe – knowing that the real dependency date that allowed us to deliver to our constraints would then only be missed 30% of the time, as the external teams’ estimating and therefore ability to meet my dependencies was also entirely unreliable.
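The effect of that buffer is easy to simulate. This sketch uses an invented slip distribution – an external team delivering on average 3 days late, with a standard deviation of 4 days – not my real portfolio data, purely to show how a week of contingency changes the miss rate:

```python
import random

random.seed(1)

# Hypothetical model: how late the external team delivers against
# the date they committed to (negative = early).
def slip_days():
    return random.gauss(3, 4)

def miss_rate(buffer_days, trials=100_000):
    """Fraction of runs where the slip eats through our contingency."""
    misses = sum(1 for _ in range(trials) if slip_days() > buffer_days)
    return misses / trials

print(f"no contingency:    {miss_rate(0):.0%}")
print(f"one week (5 days): {miss_rate(5):.0%}")
```

Under these assumptions the unbuffered date is missed most of the time, while the buffered one is missed roughly 30% of the time. The buffer doesn’t make the external estimate any better; it just prices in how unreliable it is.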
Optimising a plan to maximise people usage is also not a reality-based exercise. Leaving aside the basic falsehood that “maximised busy = maximised value creation”, in nearly 20 years of working in projects, across hundreds of them, many as, or very close to, the PM, I have never seen that Nirvana of a perfectly resource-levelled plan where everyone is 100% loaded at all times and no work is waiting either. I’ve spent quite literally weeks of my life fighting with MS Project and other planning tools trying to get there, and it has never, ever happened.
And that’s the simple, stable case, based as it is on single-number estimating – which, as we’ve seen, is a basic falsehood. It’s a walk in the park compared to what happens when the work actually starts and you hit the real, unacknowledged variation. It’s exactly why we have an Issues Process: so we can react when the utterly predictable happens time and time again – our plan is a gross oversimplification, to the point of uselessness.
So in a project, plans – as oversimplified models of a complex, unknowable in advance reality – are near useless when faced with reality. To match the reality at a level of detail good enough to be useful, you’d have to spend all day updating them, and no-one wants to do that, except perhaps those with a professional and financial interest in expanding the perceived need for Project Management.
If your reality is simple and stable enough to plan robustly, then you’ll be far more efficient delivering in a standing process with explicit rules, not a project at all. In which case, you don’t need an item-level plan; manage by the numbers.
“Project Planners were put on this planet to make economists look good.”
Martin Burns