Estimating increases your knowledge and reduces your uncertainty, creating value by turning commitments into options.
Now I’ll say at the outset, I agree that estimating is a hazardous exercise, filled with sources of error, some of which are themselves poorly understood. And that this is particularly true if you are estimating something that truly has little or nothing in common with anything you’ve previously delivered (otherwise you’re simply estimating by analogy, which is perfectly fine).
And yes, sometimes organisations estimate because estimating is what they do. It can simply be a ritual.
And yes, the outputs of estimating are frequently misused far beyond their prescribed capabilities.
When you see a "good" estimate, ask yourself: "how much did that estimate cost?" and "what could we have built with that money?" #NoEstimates
— Vasco Duarte (@duarte_vasco) August 8, 2014
Estimates should not be confused with quotations or deadlines (although they may be a factor in defining both of those). They are not promises, nor should they be only concerned with cost and schedule. They are an expert assessment of all kinds of factors, including quality (another misused and misunderstood word!), people capacity and technical capability.
However, let’s think about a couple of the needs that are often behind a request for an estimate, rather than simply going on and doing the work. Often the thinking is as a method of risk reduction – reduce the uncertainty around the risk that the work will cost more than it’s worth, or (worse) that it will cost so much that we can’t afford to finish it, so don’t get any value. And you could apply the same questions to schedule, technical risk or any other constraint.
Now you might think that this is a foolish quest for certainty. But usually the people looking for estimates aren’t that stupid. They’re generally not expecting a perfect estimate that is correct to the penny.
But what they are looking for is a material reduction in uncertainty. That gives them a better idea of what to expect before deciding to do it, and gives them a more solid option to not do it. Effectively, what you’re doing is converting a commitment into an option, which is (nearly) always a worthwhile exercise.
Options are valuable. Options expire. Don’t ever commit early without knowing why. — Chris Matts
If you’re creating value, it follows that the work that you undertake to create it cannot be waste.
The bigger question therefore is not whether estimating is a valuable activity, but whether it is valuable enough. Is the value of the option worth more than you pay for it? Can you create the same (or near enough) option in an easier way? This is why lightweight rapid relative estimating (Blink Estimating, Storypoints or even Sweet Wild Ass Guesses) can be powerful. All these methods have their own weaknesses, but they’re all more useful than not estimating at all – even if all the estimating work does is give you the added discipline to understand the work better before committing to it.
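To make the "lightweight relative estimating" idea concrete, here is a minimal sketch of turning quick relative sizes into a forecast range. Everything in it – the item names, point values and velocity figures – is an illustrative assumption, not data from this post; the point is that the output is a range that reduces uncertainty, not a promise.

```python
# Relative sizes assigned by quick comparison (e.g. Blink Estimating
# or story points), not by detailed analysis. All values are made up.
backlog = {"login": 3, "reporting": 8, "audit trail": 5, "export": 2}

total_points = sum(backlog.values())  # 18 points in this example

# Historical throughput, expressed as a range to keep uncertainty visible.
velocity_low, velocity_high = 4, 8  # points per week (assumed)

# A range, not a commitment: enough to turn a blind commitment
# into an informed option.
weeks_best = total_points / velocity_high
weeks_worst = total_points / velocity_low
print(f"Roughly {weeks_best:.1f} to {weeks_worst:.1f} weeks")
```

The range is deliberately coarse: minutes of work, but it narrows "zero to infinity" down to something you can make a go/no-go decision on.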
So yes, we need to be very careful not to boil the ocean in creating estimates that outweigh the value they create.
If you’re thinking about producing ‘good’ estimates, stop, and think instead about producing ‘good enough’ ones.
Cautionary Epilogue
My programmer friends I’m sure will be thinking at this point: what about a technical spike? Absolutely, that would be another method of assessing technical opportunities and risks which might inform other estimated factors.
However, there’s often a bias inherent there: assuming that the work you understand better will give the better outcome. It’s the kind of thinking that often leads to Dilbert-thinking – that management must be incompetent idiots because they can’t do what we do, and we don’t understand (or even see) what they do – and that leads people to create axiomatic, circular definitions of Estimating as A Bad Thing.
Which is often the kind of thinking that got us here.
It’s a variant on the power play of “we should be in charge around here”.
It’s ugly. Don’t do that. You’re usually up against better power players when you try it anyway.
Confirmatory Epilogue
That was 638 words, but this summarises it beautifully in 16:
Before you're asked to estimate, ask: "Can this assumption be proven faster, cheaper in smaller batch?" (see prev tweets) #NoEstimates
— Henrik Ebbeskog (@henebb) August 7, 2014
Estimates Are Not Waste by Martin Burns is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Good post!
I especially like the point you make about whether estimation is “valuable enough”.
That indeed is a very critical question that often remains unexamined.
However, you seem to take as granted the following:
– estimation always reduces uncertainty (from: “reduce the uncertainty around the risk that the work will cost more than it’s worth”). This is not always the case, and in fact we see that many projects – that were estimated – end up costing a lot more than estimated (which is not always a bad thing, mind you).
– estimation is always useful (from: “but they’re all more useful than not doing them.”). In this post you do not consider at all the negative consequences of estimation – namely things like goal displacement and political pressures that come with estimates created very early in the project that end up defining the actions we take later on – even if reality is at that point different.
I also agree that “quick” estimation practices are valuable. However, it is not my experience that Story Point estimation is quick (although it is *relatively* quicker than some other options). In fact, some years ago Ken Power presented a technique for Story Point estimation called “Silent Grouping”, with which they could estimate several hundred stories in a few hours. We would not need techniques like these if story point estimation were inherently quick.
I’d love to read/hear more from you regarding these quick estimation practices. I believe that those are better alternatives to the current accepted practices in project estimation. Namely Blink estimates or the slicing heuristic that @neilkillick wrote about in his blog.
Thanks for sharing your ideas. Would be great to continue this conversation in a more interactive form in a conference near you! 🙂
Hi Vasco
Thanks for your feedback, here and on Twitter
To respond to your challenges (and I’m happy that I might be wrong in this!):
By going through the process, you know more than you did before, even if all you’ve gained is a better understanding of how much uncertainty is involved.
Now that doesn’t mean that the estimate is right, but before the estimate, your range of uncertainty is zero to infinity. Afterward, it might be within an order of magnitude (or two). It might be as simple a categorisation as Pawel Brodzinski’s scale.
But that’s less uncertainty than we started with.
Also, note that we’re not just talking about cost, but all delivery constraints.
I completely agree that there is a risk of negative outcomes. But reducing uncertainty is always useful in and of itself. Now, how do we reduce the risk of negative outcomes?
As you say, it’s relatively quick, but it depends on where you start. This should always be a direction towards ‘better’, rather than a static entry criterion. ‘Good enough’ should always be ahead of us.
Something that occurs to me about the slicing to “small enough” (separating off the TFB things for further slicing) and then counting method is that it’s exactly what you’d do on a Lean Manufacturing system: separate off your runners from your specials and then operate fast flow on the runners.
That way, the specials don’t clog up the runner flow (high variation makes the utilisation/delay curve much worse, per Reinertsen).
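The slice-and-count approach above can be sketched in a few lines. The story names, sizes and the "small enough" threshold below are all illustrative assumptions: the point is simply that anything already small enough (a "runner") is counted and flows fast, while anything too big (a "special") is routed back for further slicing so its variability doesn’t disrupt the flow.

```python
# Hypothetical backlog: sizes are in arbitrary relative units.
stories = [
    {"name": "password reset", "size": 1},
    {"name": "new billing engine", "size": 9},  # TFB: needs more slicing
    {"name": "CSV export", "size": 1},
    {"name": "search filters", "size": 2},
]

SMALL_ENOUGH = 2  # assumed threshold for "small enough to just count"

# Runners flow fast and are simply counted; specials go back for
# further slicing, so high-variation items don't clog the runner flow.
runners = [s for s in stories if s["size"] <= SMALL_ENOUGH]
specials = [s for s in stories if s["size"] > SMALL_ENOUGH]

print(f"{len(runners)} runners to count; {len(specials)} special(s) to re-slice")
```

This mirrors the Lean Manufacturing pattern described above: segregate the specials, then run fast flow on the runners.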
When I proposed this in about 2008, people told me I was mad, because software cannot and should not behave that way…
Good point on separating the “runners” from the specials: in fact I think you are on to something. In separating the runners from the specials we are in fact separating the planning of the work (project management) from the execution of the work, which in practice segregates the risk (the specials) away from the normal day-to-day execution of the work.
In high variability work like software this segregation is immensely valuable!
Thanks for sparking this thought! 🙂
In your view, which out of Project Management and execution is the runner and which the special?
Continuing on our twitter discussion.
I stated I like the idea of thinking: “If we quit/stop after next iteration, what will it look like then?”
But I sense different “levels” of estimates here. One is the “all-in” thinking. And another is “what’s next?”. My tweet you’re mentioning in the original post above is aimed towards: “What’s next assumption to test?” i.e the learning we seek.
But it seems to me we still need some judgment about “how long does it take to build something so we can validate our assumption/hypothesis?”
But I’d rather estimate a two-week (or two-month or other “smallish”) experiment than an “all-in” 8-month project. And maybe focus on making the experiment as small as possible (by slicing, or better, by cutting out anything that doesn’t contribute to the learning we seek)?
And I’m thinking that if it’s possible to cut building an experiment to test an assumption down to just a day or two, estimates aren’t really needed?