Midland branch delegates listened to David Simms give a talk entitled "Why can't people estimate: Estimating bias and mitigation".
David outlined several salient points about estimation errors, including:
- Estimates are biased by human optimism.
- Outside views provide a sanity check on estimates.
- Estimates are made at a point in time, and should be refined and re-estimated.
David conceded that estimates were a vital part of the bidding process, as executives needed to make decisions about cost and time. However, he pointed out that previous projects were rarely reviewed for lessons that could improve future estimates. In particular, a common problem was that only closed-out projects were reviewed for history; projects that were de-scoped, failed to complete or over-ran, and the factors behind those outcomes, are ignored at our peril. In conjunction with this, he emphasised that good measurement goes hand in hand with good estimation.
A key theme throughout the presentation was the need to keep the estimate live: it should be reviewed continually, alongside risk management and progress against the plan, as the assumptions within the estimate become clearer. A review should certainly take place whenever a key assumption changes or is broken, as that fundamentally alters the estimate. In reality, estimates are often based on assumptions that are never written down; David suggested that assumptions should be listed, ranked in terms of their impact and risk, and then communicated.
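The listing and ranking of assumptions can be sketched very simply. The structure below is illustrative only, not from the talk: the assumptions, the 1-5 scales and the impact-times-likelihood ranking are assumptions of this sketch.

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    text: str
    impact: int      # 1 (low) to 5 (high): effect on the estimate if wrong
    likelihood: int  # 1 (low) to 5 (high): chance the assumption is broken

    @property
    def exposure(self) -> int:
        # Simple ranking score: impact weighted by likelihood.
        return self.impact * self.likelihood

# Hypothetical assumption log for a project estimate.
log = [
    Assumption("Key supplier delivers API by March", impact=5, likelihood=3),
    Assumption("Team of 6 available full time", impact=4, likelihood=4),
    Assumption("No scope change to reporting module", impact=3, likelihood=2),
]

# Communicate the riskiest assumptions first.
for a in sorted(log, key=lambda a: a.exposure, reverse=True):
    print(f"[{a.exposure:2}] {a.text}")
```

Whenever one of the logged assumptions changes, the estimate it underpins is reviewed.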
Executive demands for single numbers mean that many people rely on averages (the "flaw of averages"): an average conceals the uncertainty and variation behind it, which is too often ignored.
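The flaw of averages is easy to demonstrate: a plan built from average inputs is not the average outcome. A minimal sketch, with made-up numbers (two parallel tasks, each taking a uniformly random 5-15 days):

```python
import random

random.seed(42)

# The project finishes when the slower of two parallel tasks finishes.
N = 100_000
avg_of_outcomes = sum(
    max(random.uniform(5, 15), random.uniform(5, 15)) for _ in range(N)
) / N

# Plugging the average duration (10 days each) into the plan predicts 10 days,
# but the expected finish is later, because the slower task dominates.
outcome_of_averages = max(10, 10)

print(f"plan from averages: {outcome_of_averages} days")
print(f"expected duration:  {avg_of_outcomes:.1f} days")
```

The simulated expected duration comes out around 11.7 days, so a single-number plan built on averages is systematically optimistic.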
The risk to a company from project failure caused by poor estimates can be far greater than the cost of the individual project; better estimates protect both the project and the company.
David concluded that whilst bottom-up estimating is a good process, it does not allow for the analysis of alternatives. Parametric models based on the people, process, technology, platform and project size can, however, be built from historical data to predict project success rates and to evaluate the effect of changes.
A copy of the presentation can be viewed below on the APM Resources web page.
Andrew Bell & Emma Carroll-Walsh