In the mid-1970s, a group of scholars and teachers in Israel sat down to plan a textbook and accompanying curriculum on judgment and decision making for high-school students. The project was led by Daniel Kahneman, a psychologist who had already spent years studying the systematic errors of human cognition, including overconfidence in prediction. The group estimated the project would take roughly two years to complete. Kahneman asked each member to write down an independent estimate, and the answers converged: around two years, with none exceeding two and a half.

Eight years later, the textbook was finished.

When Kahneman later described this episode in his 2011 book Thinking, Fast and Slow, he noted something remarkable: at the moment the group was making its prediction, the most experienced member of the team, a distinguished curriculum expert, acknowledged when asked directly that he had never seen a project of this kind completed in fewer than seven years, and that a substantial fraction of such projects had never been completed at all. That information was available in the room. No one used it. Everyone based their estimate on the specific plan in front of them, the resources they had assembled, and the optimism that naturally accompanies a fresh start.

Kahneman, who had already co-published foundational research on cognitive bias and had spent years documenting the failures of human judgment, predicted two years for a project that would take eight. The irony is not incidental. It is the point.

"Plans are best-case scenarios. People consistently underestimate the time, costs, and risks of future actions while overestimating the benefits." — Daniel Kahneman & Amos Tversky, 1979


What the Planning Fallacy Is

The planning fallacy is the systematic tendency to underestimate the time, cost, and risks of future actions while overestimating the likelihood that a plan will proceed as intended — a bias that affects individuals and organizations alike, and that persists even when the planner is aware of it and even when historical base rates are available.


Planning Fallacy vs. Reference Class Forecasting

The planning fallacy is most reliably countered by a technique called reference class forecasting — a method developed precisely to circumvent the cognitive errors that generate the fallacy in the first place. The distinction between how planners typically forecast and how reference class forecasting works illuminates exactly what goes wrong in naive planning.

  • Unit of analysis. Inside view: the specific project being planned. Outside view: a reference class of comparable past projects.
  • Data source. Inside view: the current plan, intentions, and assumed conditions. Outside view: historical base rates from similar completed projects.
  • Treatment of risk. Inside view: risks acknowledged in the abstract, not quantified. Outside view: risks embedded in the statistical distribution of outcomes.
  • Typical outcome. Inside view: cost and time estimates at or below best-case scenarios. Outside view: estimates that include systematic upward adjustments.
  • Origin. Inside view: identified by Kahneman and Tversky (1979) as the default mode that produces the planning fallacy. Outside view: grounded in Kahneman and Tversky's work and operationalized by Flyvbjerg as a formal planning method.
  • Adoption pattern. Inside view: near-universal default in planning. Outside view: rarely used spontaneously; requires deliberate procedural commitment.
  • Error correction. Inside view: correction requires active intervention and external perspective. Outside view: error correction is structural, built into the method itself.

The core difference is epistemic. Inside-view forecasting asks: "Given what I know about this project and this plan, how long will it take?" Outside-view forecasting asks: "Given what has happened to similar projects in the past, what is the distribution of outcomes?" These two questions produce systematically different answers, and the outside view is consistently more accurate.
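To make the two questions concrete, here is a minimal Python sketch of the contrast. The plan estimate and the reference-class figures are invented for illustration; they are not data from any of the studies cited here.

```python
# A minimal illustration of inside-view vs outside-view forecasting.
# All numbers below are invented for illustration.
import statistics

# Inside view: "given our plan, this looks like a 12-month project."
inside_estimate_months = 12.0

# Outside view: how long did comparable past projects take relative to
# what was planned for them?
past_planned = [10, 14, 12, 9, 18, 11, 15, 8, 20, 13]
past_actual  = [16, 21, 15, 14, 30, 13, 26, 11, 32, 19]

ratios = [a / p for p, a in zip(past_planned, past_actual)]
typical_slip = statistics.median(ratios)   # roughly 1.5x in this invented sample

outside_estimate = inside_estimate_months * typical_slip
print(f"Inside-view estimate:  {inside_estimate_months:.0f} months")
print(f"Outside-view estimate: {outside_estimate:.1f} months "
      f"(median slip factor {typical_slip:.2f} in the reference class)")
```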


Cognitive Science: How and Why It Happens

Kahneman and Tversky: The 1979 Formulation

The planning fallacy was named and first formally described by Daniel Kahneman and Amos Tversky in a 1979 paper, "Intuitive Prediction: Biases and Corrective Procedures," published in TIMS Studies in Management Science. The paper emerged from their broader program of research on judgment under uncertainty — a program that had already produced prospect theory and would eventually earn Kahneman the Nobel Memorial Prize in Economic Sciences in 2002.

Kahneman and Tversky's key observation was that forecasters constructing plans focus on the most plausible scenario of how events will unfold — essentially the best-case trajectory — while giving insufficient weight to the many ways a project can deviate from that trajectory. They described two fundamentally different approaches to prediction: the inside view and the outside view.

The inside view is the default: you look at your specific project, assess your specific plan, and build a forecast based on the logic of that plan. The outside view requires a deliberate cognitive switch: you identify the reference class to which this project belongs and anchor your estimate to the historical distribution of how similar projects have actually performed.

The troubling finding was that even experts, even when prompted to consider base rates, consistently underweighted them in favor of the particulars of the case in front of them. The specific overwhelmed the statistical.

Buehler, Griffin and Ross: The 1994 Student Study

The most-cited experimental demonstration of the planning fallacy came from Roger Buehler, Dale Griffin, and Michael Ross in a 1994 paper in the Journal of Personality and Social Psychology titled "Exploring the 'Planning Fallacy': Why People Underestimate Their Task Completion Times."

Buehler and colleagues conducted a series of studies with undergraduate students, asking them to predict when they would complete specific assignments. In their primary study, students working on their honors theses predicted an average completion time of 33.9 days from the time of the prediction. The actual average completion time was 55.5 days — about 64% longer than predicted. Critically, only about 30% of students finished by their predicted date. Even when students were asked for a completion date by which they were "99% certain" the project would be done, fewer than half met that highly conservative deadline.
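The two statistics reported in studies of this kind, mean overrun relative to prediction and the share of people who meet their own deadline, are simple to compute. The sketch below uses invented prediction/outcome pairs rather than the study's actual data.

```python
# Illustrative calibration check on prediction/outcome pairs.
# The data below are invented, not taken from Buehler et al. (1994).
def planning_calibration(predicted_days, actual_days):
    overruns = [(a - p) / p for p, a in zip(predicted_days, actual_days)]
    mean_overrun = sum(overruns) / len(overruns)      # average slip vs. prediction
    on_time = sum(a <= p for p, a in zip(predicted_days, actual_days)) / len(predicted_days)
    return mean_overrun, on_time

predicted = [30, 25, 40, 35, 28, 45, 33, 38]
actual    = [48, 24, 66, 52, 41, 70, 39, 60]

mean_overrun, on_time = planning_calibration(predicted, actual)
print(f"Mean overrun relative to prediction: {mean_overrun:.0%}")
print(f"Share who finished by their predicted date: {on_time:.0%}")
```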

The researchers also found that participants who were asked to recall how long similar tasks had taken them in the past — and who acknowledged that their earlier predictions had been too optimistic — still showed no improvement in their new estimates. Explicit recall of past failure did not correct the bias. This is one of the most disquieting findings in the planning fallacy literature: knowing you have been systematically wrong before does not reliably make you more accurate going forward.

In follow-up studies, Buehler and colleagues found that the problem stemmed partly from the way people mentally represent their plans. When imagining task completion, people construct a smooth narrative from the present moment to the completion point — a mental simulation in which things proceed roughly as planned. They do not systematically imagine the categories of things that could go wrong. The simulation is not a representative sample of possible futures; it is a single, optimistic trajectory.

Buehler, Griffin and MacDonald: The 1997 Scenario Studies

In a 1997 paper published in Personality and Social Psychology Bulletin, Buehler, Griffin, and MacDonald directly tested the mental simulation hypothesis. They found that when participants were prompted to think concretely about potential obstacles and competing demands — rather than focusing on the plan itself — their completion time estimates became more accurate. This effect was specific: general reminders that things might go wrong had little impact, but structured, concrete consideration of specific obstacles that had disrupted similar projects in the past produced more realistic estimates.

This finding has direct implications for how organizations might structure planning processes. The issue is not lack of information about how plans can fail. It is a failure of retrieval and mental simulation: the obstacles exist in memory and experience, but they are not spontaneously activated when constructing a plan because planning naturally invites forward-looking, optimistic mental simulation.

The Optimism Bias and Motivated Cognition

Psychologists draw a distinction between cognitive and motivational sources of planning error. The cognitive sources — mental simulation, inside-view thinking, underweighting base rates — operate independently of whether the planner wants the project to succeed. The motivational sources introduce a further layer: people are sometimes invested in low estimates because low estimates secure approval, funding, or commitment.

Neil Weinstein at Rutgers University documented the "unrealistic optimism" effect in a 1980 paper in the Journal of Personality and Social Psychology. He found that people systematically rated their own future as more positive than others' futures and rated negative events as less likely to happen to them than to their peers. This optimism is not globally irrational — positive expectations can improve motivation and persistence — but it systematically biases predictions about how plans will unfold.

The relationship between the planning fallacy and optimism bias was explored in depth by Lovallo and Kahneman in their 2003 Harvard Business Review paper, "Delusions of Success: How Optimism Undermines Executives' Decisions." They argued that senior decision-makers in organizations are particularly vulnerable to a form of planning distortion they call delusional optimism: executives selectively attend to information that supports their project's viability, construct scenarios of success rather than failure, and suppress or rationalize unfavorable base rate data. The paper documented how this dynamic produces systematic capital misallocation in corporate strategy — boards approve projects based on business cases that are structurally biased toward underestimation.


The Intellectual Lineage

The cognitive science of the planning fallacy sits within a broader tradition of research on human overconfidence and miscalibrated prediction. Several intellectual lineages converge in it.

Prospect Theory and the Study of Heuristics: Kahneman and Tversky's 1974 paper in Science, "Judgment Under Uncertainty: Heuristics and Biases," established the framework within which the planning fallacy was later identified. Their work on the availability heuristic, representativeness, and anchoring showed that human judgment systematically departs from normative probability theory in predictable ways. The planning fallacy was, for Kahneman, a specific instance of these broader cognitive patterns applied to temporal prediction.

Decision Analysis and Reference Class Forecasting: The economist John Maynard Keynes had noted in the early twentieth century that investors systematically extrapolate from current conditions rather than from base rates of historical market behavior — a tendency he connected to the "animal spirits" of investment. The conceptual basis of reference class forecasting lies in Kahneman and Tversky's distinction between the inside and outside views; it was developed into a full planning methodology, with empirical reference classes and formal adjustment procedures, by Bent Flyvbjerg, first at Aalborg University in Denmark and later at Oxford.

The Psychology of Future Thinking: Research by Yaacov Trope and Nira Liberman on construal level theory, developed across a series of papers from 1998 to 2010, provided a cognitive mechanism for why temporal distance affects planning accuracy. Trope and Liberman demonstrated that when people think about events far in the future, they represent them at a high, abstract level — focusing on the "why" rather than the "how." As events draw closer in time, thinking shifts to concrete, low-level representations that include obstacles, sequencing problems, and resource constraints. This means that at the moment a plan is made, the planner is naturally thinking at the level of abstraction where difficulties are least visible.

Organizational Theory: Chris Argyris at Harvard Business School documented what he called "defensive routines" in organizational planning — the tendency of organizations to systematically suppress pessimistic forecasts because they threaten the social dynamics of project approval. His work, developed across several books including Organizational Learning (1978, with Donald Schön), established that planning errors in large organizations are not purely cognitive — they are sustained by social and political structures that reward optimistic projections and punish cautionary ones.


Empirical Research: The Scale of the Problem

Flyvbjerg, Holm and Buhl: 258 Infrastructure Projects

The most comprehensive empirical documentation of the planning fallacy at the macro scale came from Bent Flyvbjerg, Mette Holm, and Søren Buhl in a 2002 paper in the Journal of the American Planning Association, "Underestimating Costs in Public Works Projects: Error or Lie?"

Flyvbjerg and colleagues analyzed 258 transportation infrastructure projects — roads, bridges, tunnels, railways — across 20 nations, spanning 70 years of data. The sample was large enough and geographically diverse enough to constitute something close to a representative global dataset. Their findings:

  • 90% of projects went over budget. Only 10% came in at or below the original cost estimate.
  • Costs were underestimated by an average of 20% for roads, 45% for rail projects, and 34% for bridges and tunnels, for an overall average overrun of roughly 28%. Rail projects, which involve the highest degree of technological novelty and underground construction uncertainty, showed the largest errors.
  • The overrun pattern was consistent across decades and continents. There was no evidence that cost estimation had improved over the 70-year period studied. Projects completed in the 1990s overran their budgets by roughly the same proportion as projects completed in the 1930s.

The consistency of the pattern across seven decades is one of the most striking features of the data. If cost overruns were primarily due to incompetence or poor tools, we would expect improvement as estimation methods improved. The persistence of the error strongly suggests something more systematic — a structural bias in how project teams approach estimation.
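In practice, base-rate data of this kind is converted into budget "uplifts": a planner chooses an acceptable residual chance of overrun and reads the required uplift off the empirical distribution of past overruns. The sketch below illustrates the logic with invented overrun figures; it is not Flyvbjerg's actual procedure or any official uplift table.

```python
# Sketch of the "uplift" logic behind reference class forecasting:
# how much must a base estimate be raised so that it would have covered,
# say, 80% of past outcomes in the reference class?
# The overrun figures below are invented for illustration.
def required_uplift(past_overruns, acceptable_chance_of_overrun=0.2):
    """past_overruns: fractional cost overruns of past projects (0.30 = 30% over).
    Returns the uplift such that only the accepted share of historical projects
    would still have exceeded the uplifted budget."""
    ordered = sorted(past_overruns)
    cutoff_rank = int((1 - acceptable_chance_of_overrun) * len(ordered))
    cutoff_rank = min(cutoff_rank, len(ordered) - 1)
    return ordered[cutoff_rank]

overruns = [0.05, 0.10, 0.15, 0.22, 0.28, 0.33, 0.40, 0.45, 0.60, 0.90]
uplift = required_uplift(overruns, acceptable_chance_of_overrun=0.2)
print(f"Uplift the base estimate by {uplift:.0%} "
      "to accept a 20% residual chance of overrun.")
```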

Flyvbjerg's interpretation was that the bias had two components: a cognitive component (the inside-view planning fallacy described by Kahneman) and a strategic component (deliberate underestimation to secure project approval, with the expectation that cost overruns would be absorbed once the project was too far advanced to cancel). He named this second component strategic misrepresentation — a polite term for systematic deception in the service of getting projects approved.

The Scottish Parliament Building

The Scottish Parliament building is among the most documented cases of planning fallacy in modern public infrastructure. When the project was approved in 1997, the Scottish Office estimated the cost at £40 million. By the time the building was completed and opened by Queen Elizabeth II in 2004, the final cost had reached £414 million — more than ten times the original estimate.

An independent inquiry chaired by Lord Fraser of Carmyllie produced a 391-page report in 2004 examining how the estimate had been so comprehensively wrong. The inquiry found a series of compounding failures: the original £40 million figure had been based on a preliminary design that was then substantially expanded in scope; cost estimates had not been systematically updated as the design changed; no formal risk assessment had been conducted; and project managers had worked from optimistic projections rather than distributional estimates based on comparable buildings.

The Fraser Inquiry also documented what Flyvbjerg would recognize as strategic misrepresentation: there is evidence that officials within the project understood the estimate was unrealistically low, but that correcting it publicly would have endangered political support for the project. The original figure had already been presented to the public and the Scottish Parliament.

The project ran through multiple lead architects (Enric Miralles died partway through), multiple construction managers, and a long series of escalating cost estimates before the final figure became clear — at which point, as Flyvbjerg's theory predicts, the project was too far advanced to cancel.

Software Projects: The CHAOS Report Evidence

The software industry provides an exceptionally well-documented domain for studying planning fallacy at scale. The Standish Group has conducted annual surveys of IT project outcomes since 1994, published under the name the CHAOS Report. The data, collected from thousands of projects across multiple decades, consistently shows:

  • Roughly 30% of software projects are cancelled before completion.
  • Only 16–20% of software projects are completed on time and within budget (a figure that has improved modestly since the adoption of Agile methodologies, reaching approximately 29% for Agile projects versus 11% for traditional waterfall projects in more recent surveys).
  • The average software project overruns its schedule by 63% and its budget by 45%.

Roger Pressman and Bruce Maxim, in Software Engineering: A Practitioner's Approach (8th edition, 2014), synthesize decades of research showing that software estimation errors are systematic and directional: projects are almost never completed faster or cheaper than estimated, and are very frequently far slower and more expensive.

The software industry response to this pattern generated its own field — software estimation research — anchored by Steve McConnell's Software Estimation: Demystifying the Black Art (2006) and the body of work associated with function point analysis and COCOMO (Constructive Cost Model). These tools are specifically designed to introduce base-rate thinking into software estimation — essentially operationalizing the outside view for a domain where inside-view estimation has historically been catastrophic.
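As an illustration of how such models inject base rates into an estimate, here is a minimal sketch of Basic COCOMO in Python, using the published "organic mode" coefficients from Boehm's model. The project size is an invented example, and real COCOMO use involves calibration and cost drivers omitted here.

```python
# A sketch of Basic COCOMO (organic mode), one of the base-rate-driven
# estimation models mentioned above. Coefficients are the published
# organic-mode values; the 32 KLOC project size is an invented example.
def basic_cocomo_organic(kloc):
    """Return (effort in person-months, schedule in calendar months)
    for a small, in-house ("organic") project of `kloc` thousand lines of code."""
    effort = 2.4 * kloc ** 1.05        # person-months
    schedule = 2.5 * effort ** 0.38    # calendar months
    return effort, schedule

effort, schedule = basic_cocomo_organic(32)  # hypothetical 32 KLOC system
print(f"Estimated effort:   {effort:.0f} person-months")
print(f"Estimated schedule: {schedule:.1f} months")
```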


Four Named Case Studies

1. The Sydney Opera House (Infrastructure)

The Sydney Opera House is the single most famous example of planning fallacy in public architecture. When the project was approved in 1957, the New South Wales government estimated a cost of £3.5 million and a completion date of 1963. The building was eventually completed in 1973 — ten years late — at a cost of $102 million Australian dollars, approximately 1,400% over the original estimate.

The overrun had multiple causes that are now classically associated with planning fallacy: scope expansion during construction, technological challenges that had not been anticipated (the roof shells required the invention of new engineering approaches that did not exist when the estimate was made), and a political commitment to the project that prevented realistic reassessment. The architect, Jørn Utzon, resigned in 1966 due to disputes over budget constraints.

The Opera House is instructive precisely because the planning failure was not primarily about incompetence. The engineers involved were skilled. The original design was award-winning and ultimately resulted in a globally recognized masterpiece. The planning failure was cognitive and structural: the estimate was made from the inside view, based on an incomplete design, without systematic reference to the historical distribution of comparable major public buildings.

2. The Denver International Airport Baggage System (Technology)

When Denver International Airport opened in 1995, it opened 16 months late and approximately $2 billion over budget — largely because of a revolutionary automated baggage-handling system that was supposed to be the most sophisticated in the world. The system, contracted to BAE Automated Systems, used a network of 4,000 automated carts, 21 miles of track, and 100 computers to route luggage automatically between check-in, planes, and baggage claim.

The system was a technical failure of the first order. During testing, carts derailed, bags were shredded, and the computers could not synchronize the routing logic reliably at scale. United Airlines, the primary tenant, eventually built a conventional baggage system as a backup and operated it instead. The automated system was finally shut down entirely in 2005, having never operated as designed.

The planning fallacy dynamic here is clear: the BAE system had never been deployed at anything approaching this scale. There was no reference class of successful comparable systems from which to draw distributional estimates of development time and cost. The planners worked entirely from the inside view — the logic of the design — without the check that outside-view estimation would have provided. Because there were no past projects like this one, the reference class problem was compounded: the appropriate response was to widen the confidence interval dramatically, not to compress it.

3. The Healthcare.gov Launch (Software)

When the United States federal government launched Healthcare.gov in October 2013, the website — intended to serve as the online marketplace for health insurance under the Affordable Care Act — immediately crashed under traffic load, returned errors for most users, and was effectively non-functional for its first six weeks of operation. The project, contracted primarily to CGI Federal, had been three years in development. The Government Accountability Office estimated eventual remediation and development costs at approximately $1.7 billion, against an original estimated cost of around $93 million.

The project exhibited virtually every planning fallacy pattern identified in the research literature. The timeline was set by a political deadline — the October 1, 2013 ACA enrollment launch — rather than by realistic assessment of what the software required. Multiple contractors worked on different components with inadequate systems integration. Security testing was not completed before launch. And the Centers for Medicare and Medicaid Services, which managed the project, had essentially no experience managing complex software development at this scale — meaning there was no internal institutional knowledge to counter the optimistic projections being offered by contractors.

Congressional testimony after the fact revealed that internal warnings about readiness problems had been reported to senior officials and not acted on — a pattern consistent with Argyris's documentation of defensive routines in organizational planning, where pessimistic assessments are suppressed because they threaten project approval or political timelines.

4. Home Renovation Projects (Personal Scale)

The planning fallacy is not confined to governments and corporations. A substantial body of research on personal project planning, including the work of Buehler and colleagues described above, consistently finds that individuals show the same pattern in everyday life.

Home renovation is among the most studied personal domains. A survey by Houzz, a home renovation platform, found that renovation projects consistently run 20–30% over budget and significantly over schedule, with the primary causes being unexpected structural issues, permit delays, and contractor availability — none of which are unusual in the reference class of home renovations, but all of which tend to be underweighted by homeowners making inside-view estimates.

The home renovation case is psychologically instructive because it removes the organizational dynamics that might explain away the institutional examples. There is no political pressure to underestimate. There is no contractor incentive to make a low bid. The homeowner is estimating for themselves, based on their own plan, for their own benefit — and they still systematically underestimate. The error is cognitive, not strategic.

Roger Buehler and colleagues replicated this finding experimentally. In studies of personal project completion — Buehler, Griffin, and Ross (1994) included personal projects alongside academic ones — the pattern of underestimation held reliably even for tasks with no organizational or political pressures. The inside view, applied to any novel personal project, reliably generates optimistic estimates.


Limits and Nuances

Deliberate Strategic Misrepresentation

Flyvbjerg's work complicates a purely cognitive account of the planning fallacy. In his 2002 JAPA paper and his 2003 book Megaprojects and Risk (co-authored with Bruzelius and Rothengatter), he argues that a significant portion of cost overrun in large public projects is not attributable to cognitive bias at all — it is attributable to deliberate misrepresentation by project promoters who understand that accurate estimates would not survive political or commercial approval processes.

This distinction matters for intervention design. If the planning fallacy were purely cognitive, then education about base rates, structured estimation methods, and reference class forecasting would substantially reduce overruns. If it is partly strategic, then no amount of cognitive debiasing will address the incentive structures that reward underestimation and punish accurate but unflattering projections.

Flyvbjerg's prescription is accordingly structural as well as cognitive: independent cost-benefit analysis, mandatory reference class forecasting for large public projects (which he implemented in the UK from 2004 with HM Treasury), and institutional separation between project promoters and project estimators to reduce the contamination of estimates by advocacy.

Domains Where Planning Accuracy Improves

The planning fallacy is not uniform. Several conditions have been identified under which estimation accuracy improves substantially:

Repeated, similar tasks with rapid feedback: People who repeatedly perform the same type of task under similar conditions develop calibrated estimates over time. Experienced house painters estimating how long it will take to paint a room of given dimensions are far more accurate than homeowners estimating the same task — not because painters are cognitively different, but because they have a functional, embodied reference class built from experience. The outside view is available to them intuitively, through direct memory, rather than requiring active construction.

Decomposition: Research by Mark Byram and colleagues, building on work by Thomas Sheridan at MIT, has found that decomposing a task into its component parts and estimating each part separately — before summing — produces systematically more accurate total estimates than holistic estimation. This effect is sometimes called the "segmentation heuristic" in estimation research. Breaking a project into phases forces the estimator to confront the specific activities involved, which counteracts the smooth-trajectory mental simulation that generates optimistic holistic estimates.
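A minimal sketch of the decomposition idea follows, with invented task names and ranges; it illustrates the general technique of estimating components with low/likely/high ranges and simulating the total, rather than reproducing any specific method from the cited research.

```python
# Decomposition sketch: estimate each component with a low / likely / high
# range, then simulate the distribution of the total, instead of guessing
# one number for the whole project. All figures are invented.
import random

tasks = {               # (optimistic, most likely, pessimistic) days
    "design":         (3, 5, 10),
    "implementation": (8, 12, 25),
    "testing":        (4, 6, 15),
    "deployment":     (1, 2, 6),
}

def simulate_total(tasks, trials=10_000, seed=1):
    rng = random.Random(seed)
    totals = []
    for _ in range(trials):
        totals.append(sum(rng.triangular(lo, hi, mode)
                          for lo, mode, hi in tasks.values()))
    totals.sort()
    return totals

totals = simulate_total(tasks)
print(f"Median total:          {totals[len(totals) // 2]:.1f} days")
print(f"80th percentile total: {totals[int(0.8 * len(totals))]:.1f} days")
```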

Pre-mortem analysis: Psychologist Gary Klein at Klein Associates developed the "pre-mortem" as a structured estimation aid: before finalizing a plan, the team imagines that the project has already failed and reasons backward to explain how the failure happened. Research by Klein, and related work on "prospective hindsight" by Deborah Mitchell, J. Edward Russo, and Nancy Pennington, has found that imagining an outcome as though it has already occurred substantially improves risk identification and produces more conservative, more accurate estimates. Klein published this method in a 1998 book, Sources of Power, and it was subsequently popularized by Kahneman in Thinking, Fast and Slow (2011).

Short time horizons: The planning fallacy is most severe for projects extending more than a few weeks into the future. For tasks with a one-to-two-day horizon, the inside view and the outside view tend to converge — there is simply less room for the optimistic narrative to diverge from reality. Construal level theory (Trope and Liberman) predicts exactly this pattern: temporal distance produces abstract construal, which produces the smooth optimistic simulation. Short-horizon planning involves more concrete representation, which means more obstacles naturally enter the mental simulation.

When Projects Are Genuinely Novel

A nuance rarely addressed in planning fallacy research is the case of genuinely unprecedented projects — situations where there is no meaningful reference class. The Manhattan Project, the first Moon landing, the Human Genome Project: these were tasks with no historical comparables from which to draw distributional estimates. Reference class forecasting cannot be mechanically applied when the reference class is empty.

Kahneman's answer to this problem, developed in his writing on the outside view, is that even novel projects share structural features with less novel ones. The Moon landing was unprecedented in its specific goal, but NASA's trajectory through the project — phase completion rates, budget escalation patterns, testing failure rates — followed patterns similar to other large-scale military and aerospace development programs. The reference class does not need to be identical in goal or outcome; it needs to be similar in the structural features that drive cost and time performance.

The Optimism Benefit

A final nuance concerns whether the planning fallacy is, on net, harmful. Some organizational researchers — notably James March at Stanford, in his work on organizational learning and adaptive behavior — have argued that optimistic bias in planning serves a functional role. Organizations that correctly assessed the probability of success of novel ventures might never attempt them. The planning fallacy, viewed from this angle, is the cognitive fuel that powers risk-taking, entrepreneurship, and technological development.

The empirical literature on this point is mixed. Lovallo and Kahneman (2003) argue that the net effect is negative — that the resource misallocation caused by optimistic bias outweighs the motivational benefits, because optimistic projects that fail consume resources that could have been allocated to better-forecasted alternatives. But Mathew Hayward at the University of Melbourne and colleagues have documented cases in which moderate entrepreneurial overconfidence is associated with better startup outcomes than either extreme pessimism or extreme optimism — suggesting a possible inverted-U relationship between planning optimism and performance.

What is clear is that the planning fallacy is not simply a bug to be eliminated. It is a systematic feature of human cognition with both costs and benefits, and interventions aimed at correcting it should be calibrated to the domain and the stakes involved, rather than applied universally.


References

  1. Kahneman, D., & Tversky, A. (1979). Intuitive prediction: Biases and corrective procedures. TIMS Studies in Management Science, 12, 313–327.

  2. Buehler, R., Griffin, D., & Ross, M. (1994). Exploring the "planning fallacy": Why people underestimate their task completion times. Journal of Personality and Social Psychology, 67(3), 366–381. https://doi.org/10.1037/0022-3514.67.3.366

  3. Buehler, R., Griffin, D., & MacDonald, H. (1997). The role of motivated reasoning in optimistic time predictions. Personality and Social Psychology Bulletin, 23(3), 238–247. https://doi.org/10.1177/0146167297233003

  4. Flyvbjerg, B., Holm, M. S., & Buhl, S. (2002). Underestimating costs in public works projects: Error or lie? Journal of the American Planning Association, 68(3), 279–295. https://doi.org/10.1080/01944360208976273

  5. Lovallo, D., & Kahneman, D. (2003). Delusions of success: How optimism undermines executives' decisions. Harvard Business Review, 81(7), 56–63. https://hbr.org/2003/07/delusions-of-success-how-optimism-undermines-executives-decisions

  6. Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux. https://us.macmillan.com/books/9780374533557/thinkingfastandslow

  7. Flyvbjerg, B., Bruzelius, N., & Rothengatter, W. (2003). Megaprojects and Risk: An Anatomy of Ambition. Cambridge University Press. https://doi.org/10.1017/CBO9781107050891

  8. Trope, Y., & Liberman, N. (2010). Construal-level theory of psychological distance. Psychological Review, 117(2), 440–463. https://doi.org/10.1037/a0018963

  9. Klein, G. (1998). Sources of Power: How People Make Decisions. MIT Press. https://mitpress.mit.edu/9780262611466/sources-of-power/

  10. Weinstein, N. D. (1980). Unrealistic optimism about future life events. Journal of Personality and Social Psychology, 39(5), 806–820. https://doi.org/10.1037/0022-3514.39.5.806

  11. Standish Group. (2020). CHAOS Report 2020: Beyond Infinity. The Standish Group International. https://www.standishgroup.com/sample_research_files/CHAOSReport2020-The%20Standish%20Group.pdf

  12. Flyvbjerg, B. (2008). Curbing optimism bias and strategic misrepresentation in planning: Reference class forecasting in practice. European Planning Studies, 16(1), 3–21. https://doi.org/10.1080/09654310701747936

Frequently Asked Questions

What is the planning fallacy?

The planning fallacy is the systematic tendency to underestimate the time, cost, and risks of future actions while overestimating the benefits — a pattern that persists even when the planner has direct experience of similar past failures. Kahneman and Tversky named the phenomenon in a 1979 paper, distinguishing between the 'inside view' (focusing on the details of the specific plan) and the 'outside view' (using base-rate data from similar past projects). The planning fallacy results from exclusive reliance on the inside view: planners focus on their particular project's best-case scenario rather than on the statistical distribution of outcomes from comparable endeavors.

What did Buehler, Griffin, and Ross's 1994 study find?

Buehler, Griffin, and Ross's 1994 Journal of Personality and Social Psychology study asked students to predict when they would complete their honors theses. The average predicted completion time was 33.9 days. The average actual completion time was 55.5 days — about 64% longer than predicted. Only about 30% of students completed the thesis by their predicted date, and even when students were asked for a completion date by which they were 99% certain they would be done, fewer than half met it. In follow-up experiments, reminding subjects of their past prediction failures did not reduce subsequent overconfidence. The bias proved remarkably resistant to correction even when subjects were explicitly aware of their history of underestimation.

How widespread are cost overruns in infrastructure projects?

Bent Flyvbjerg, Mette Holm, and Søren Buhl's 2002 analysis of 258 infrastructure projects in 20 nations on 5 continents found that 90% experienced cost overruns. Average overruns were 45% for rail projects, 34% for bridges and tunnels, and 20% for road projects — measured in constant prices. The researchers found no evidence that forecasting accuracy had improved over the 70 years covered by their data. The Sydney Opera House is the canonical example: estimated at £3.5 million in 1957, it opened in 1973 at a cost of AUD $102 million, roughly 1,400% over budget and 10 years late.

What is reference class forecasting?

Reference class forecasting, developed by Flyvbjerg as the primary debiasing strategy for the planning fallacy, requires planners to base predictions on the statistical distribution of outcomes from a reference class of similar past projects — the "outside view" — rather than on the specific details of the project being planned. For a new rail project, reference class forecasting would begin with the finding that roughly 90% of rail projects exceed their cost estimates, by an average of about 45%, and use that distribution as the starting point for prediction before adjusting for specific features of the current project. The method is explicitly endorsed by the UK Treasury's Green Book for major public expenditure appraisals.

Why does the planning fallacy persist despite known failure rates?

Lovallo and Kahneman's 2003 Harvard Business Review analysis identified several reinforcing mechanisms. Cognitive: the inside view feels more relevant and controllable than abstract base rates. Motivational: optimistic forecasts generate enthusiasm, secure funding, and win competitive bids — there are organizational rewards for optimism and penalties for realistic pessimism that seems defeatist. Political: project champions know that realistic projections would often lead to rejection, and so have incentives to present best-case scenarios. Flyvbjerg's research suggests that a substantial portion of infrastructure cost overruns may reflect strategic misrepresentation rather than pure cognitive bias — planners who know better but present optimistic forecasts because that is what secures approval.