In early 1972, before Richard Nixon boarded Air Force One for Beijing, researchers at the Hebrew University of Jerusalem asked a group of subjects to predict what would happen. Would Nixon meet Mao Zedong? Would the United States and China establish permanent diplomatic relations? Would both sides describe the summit as a success? The subjects wrote down their probability estimates for fifteen specific outcomes, then waited. Nixon went, met Mao, and returned. And then Baruch Fischhoff and Ruth Beyth did something no one had quite done before: they went back to their subjects and asked them to recall what they had originally predicted.

The results, published in 1975 in the journal Organizational Behavior and Human Performance, were quietly devastating. Subjects whose predictions had come true remembered their original estimates as higher — more confident — than they had actually recorded. Those whose predictions had failed remembered their estimates as lower than written. In every direction, memory had bent toward the outcome: people misremembered their prior beliefs as more consistent with what actually happened. The past, viewed from the present, had reorganized itself around the known ending. Fischhoff and Beyth titled their paper "I knew it would happen." The literature that followed would call the phenomenon hindsight bias, and it would prove to be one of the most robust, consequential, and difficult-to-defeat systematic errors in the entire field of cognitive psychology.


What Hindsight Bias Actually Is

Hindsight bias is not a single, simple distortion. Neal Roese and Kathleen Vohs, in their 2012 review in Perspectives on Psychological Science, identified three distinct but related components that together constitute the full phenomenon. The first is memory distortion: after learning an outcome, people misremember their prior predictions as closer to the outcome than they actually were, as Fischhoff and Beyth's Nixon study demonstrated directly. The second is inevitability: events that have occurred are perceived, in retrospect, as having been likely or even certain to happen — what Fischhoff called "creeping determinism," the tendency to view the past as having had only one plausible trajectory. The third is foreseeability: the sense that one personally "knew it all along," that the warning signs were clear, the outcome obvious — a judgment that people would not have made before the fact.

These three components often operate simultaneously, but they can be dissociated. A person might correctly remember their prior prediction while still believing the outcome was inevitable. Or they might concede that they personally did not foresee the event while still insisting it was foreseeable by anyone paying attention. Understanding the three-component structure explains why hindsight bias is so difficult to correct: each component has its own cognitive mechanism, its own causes, and its own consequences.

The bias needs to be distinguished carefully from related phenomena with which it is frequently confused.

| Concept | Definition | Key Difference from Hindsight Bias |
|---|---|---|
| Hindsight Bias | After learning an outcome, people believe they knew or could have known it beforehand; past predictions are misremembered as more accurate | Specifically involves outcome knowledge distorting retrospective judgment of prior belief |
| Confirmation Bias | Selectively seeking, interpreting, and recalling information that confirms existing beliefs | Forward-looking; operates before the outcome is known, shaping how evidence is gathered |
| Overconfidence Effect | Tendency to overestimate one's own accuracy or ability across a domain | General excess confidence, not anchored to a specific known outcome |
| Outcome Bias | Judging the quality of a decision by its outcome rather than the process that produced it | Concerns evaluation of decision quality, not distortion of prior belief |
| Narrative Fallacy | Imposing coherent causal stories on events that were actually random or underdetermined | Concerns story construction broadly; hindsight bias is a specific judgment about prior knowledge |
| Availability Heuristic | Judging probability by ease of recall, not statistical frequency | Based on memory retrieval fluency; hindsight bias involves specific retrospective distortion of probability judgments |

The distinction between hindsight bias and outcome bias is particularly important in applied contexts. A surgeon who chose a risky procedure that killed a patient may have made a perfectly defensible decision given the information available at the time — but observers who know the outcome are inclined both to evaluate the decision as poor (outcome bias) and to believe they would have predicted the failure in advance (hindsight bias). The two distortions frequently travel together, and together they constitute a potent source of injustice in professional evaluation.


The Cognitive Machinery Behind the Bias

Why does outcome knowledge so reliably corrupt memory and judgment? The central mechanism, articulated in Fischhoff's foundational 1975 paper "Hindsight ≠ Foresight: The Effect of Outcome Knowledge on Judgment Under Uncertainty," published in the Journal of Experimental Psychology: Human Perception and Performance, is what he termed the difficulty of "unlearning." Once an outcome is known, it becomes almost impossible to reason as though you did not know it. The outcome activates a network of associated knowledge — causes, precursors, relevant facts — that were available before the event but that were not, at the time, obviously salient. In retrospect, those facts reorganize around the known ending and seem to have pointed toward it all along. Fischhoff asked subjects to read accounts of historical battles, told some of them who won, and then asked all of them to judge the probability that each side would prevail. Subjects who knew the outcome assigned it systematically higher probability than subjects who did not — and when told to ignore what they knew and reason as if ignorant, they could not do it. Outcome knowledge was not a layer that could be peeled away; it had integrated itself into how the event was understood.

The mechanism involves what psychologists call sense-making: when people learn that something happened, they construct a mental model explaining why it happened. This explanatory narrative is not neutral — it selectively retrieves and weights evidence consistent with the outcome. This is the engine of creeping determinism: each causal link in the retrospective narrative makes the next link feel more inevitable, so that by the time one reaches the outcome, the whole sequence seems to have been scripted. The narrative is generated automatically and rapidly; it does not require deliberate confabulation.

Roese and Vohs, in their 2012 Perspectives on Psychological Science review, synthesized three decades of evidence to propose that hindsight bias emerges from two cognitive operations working in sequence: first, an automatic sense-making process that generates a coherent causal account of the outcome; second, a metacognitive anchoring process in which the current, outcome-informed understanding serves as an anchor for reconstructing prior belief. Because the anchor is so well-integrated into one's mental model, it is very difficult to adjust away from it when attempting to imagine not knowing the outcome. The reconstruction of prior belief is not retrieved from stable memory storage — it is re-constructed in real time from current understanding, which is already contaminated by outcome knowledge. This is why hindsight bias in the memory domain is not merely about forgetting: even subjects with written records of their prior predictions still show hindsight bias in their evaluations of foreseeability, because memory distortion is only one of three distinct mechanisms at work.
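The anchor-and-adjust account lends itself to a simple numerical illustration. The sketch below is a toy model, not an implementation of SARA, RAFT, or any other published process model; the adjustment parameter and the probabilities are hypothetical, chosen only to show how insufficient adjustment away from an outcome-informed anchor reproduces the signature distortion.

```python
# Toy sketch of anchor-and-adjust reconstruction of a prior belief.
# Illustrative only: not an implementation of SARA, RAFT, or any
# published model. All numbers are hypothetical.

def recall_prior(original_prior: float, current_belief: float,
                 adjustment: float = 0.4) -> float:
    """Reconstruct a remembered prediction by starting from the
    current, outcome-informed belief (the anchor) and adjusting
    only part of the way back toward the true original prior."""
    return current_belief + adjustment * (original_prior - current_belief)

# A subject assigned p = 0.30 before the event; after the event
# occurred, sense-making pushed the current belief to p = 0.85.
original = 0.30
updated = 0.85

recalled = recall_prior(original, updated)
print(f"recorded prior: {original:.2f}")
print(f"recalled prior: {recalled:.2f}")   # 0.63: dragged toward the outcome
print(f"distortion:     {recalled - original:+.2f}")
```

With only partial adjustment back toward the written record, the recalled prior lands at 0.63 rather than 0.30, which is exactly the direction of distortion Fischhoff and Beyth observed.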


Intellectual Lineage

The formal scientific study of hindsight bias begins in 1975, though it has philosophical antecedents in skeptical writing about historical certainty. The immediate intellectual context was the heuristics-and-biases research program being launched simultaneously by Amos Tversky and Daniel Kahneman, whose 1974 paper in Science catalogued systematic errors in probabilistic reasoning. Fischhoff's work was independent but complementary: where Tversky and Kahneman focused on how people estimate probabilities before decisions, Fischhoff asked what happened to those estimates after outcomes were known.

Fischhoff's 1975 paper is unambiguously the founding document of the field. In a series of elegantly simple experiments, Fischhoff gave subjects ambiguous historical scenarios — battles whose outcomes were genuinely uncertain — and manipulated whether they were told the outcome. His central finding, replicated across multiple scenarios, was that knowing the outcome elevated the perceived probability of that outcome by approximately 15 to 25 percentage points. He further showed that subjects who were told an incorrect outcome assigned elevated probabilities to that false outcome — demonstrating that the distortion was not produced by accurate memory of historical base rates, but by the psychological impact of being told something happened. His subjects could not "unlearn" even a false outcome when instructed to do so.

The Fischhoff and Beyth 1975 study introduced the memory component, demonstrating that the bias was not confined to probability judgment but extended to autobiographical reconstruction of prior belief. Together, the two 1975 papers established the phenomenon's scope.

Subsequent decades saw systematic expansion. Hal Arkes and colleagues explored hindsight bias in medical diagnosis in the early 1980s, and Arkes, David Faust, and collaborators conducted influential studies in the late 1980s examining the bias in clinical contexts. Scott Plous's 1993 book The Psychology of Judgment and Decision Making synthesized early work and brought the concept to broader academic audiences. Slovic and Fischhoff (1977) had already documented a version of the bias within science itself: once results were in, people tended to regard experimental findings as unsurprising, as something they would have predicted. Ulrich Hoffrage, Ralph Hertwig, and Gerd Gigerenzer later reframed the bias as a by-product of adaptive knowledge updating, extending the phenomenon's reach into theories of memory itself.

By 2000, the phenomenon was sufficiently well-established that Roese and colleagues began to address the evolutionary and adaptive functions of retrospective sense-making — asking not merely whether hindsight bias was a mistake, but whether it served any useful cognitive purpose. Roese and Vohs's 2012 review in Perspectives on Psychological Science stands as the most comprehensive synthesis of this literature, drawing on hundreds of studies and establishing the three-component framework that now anchors theoretical discussion.


What the Research Shows

The empirical literature on hindsight bias is among the most replicated in all of psychology, spanning more than five decades and dozens of countries. Roese and Vohs (2012) reviewed research indicating that hindsight bias has been documented in virtually every population studied, including children as young as four, professional experts across medicine, law, finance, and intelligence analysis, and elderly adults. The effect is not a laboratory artifact produced by artificial tasks with naive undergraduates — it is a feature of human cognition operating wherever outcome knowledge is present.

Fischhoff's original studies found that knowing an outcome raised perceived probability of that outcome by roughly 15 to 25 percentage points relative to ignorant controls — a large and reliable effect by psychological standards. Replications using diverse scenarios consistently find effects in this range, with larger effects associated with outcomes that are emotionally meaningful, causally coherent, or previously unfamiliar to the subject.

The memory-distortion component has been quantified with particular precision. Fischhoff and Beyth's 1975 Nixon study found that subjects misremembered their predictions in the direction of the actual outcome for roughly two-thirds of the items tested. Crucially, this was not merely a regression-to-the-mean artifact: subjects shifted their remembered predictions specifically toward the events that had occurred, not toward some neutral midpoint. Subsequent work by Rüdiger Pohl and colleagues found that asking subjects to generate explanations for the outcome increased memory distortion, because the explanatory process deepened the integration of outcome information into mental models of the situation.
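That artifact check can be made concrete. The sketch below uses entirely hypothetical numbers to separate the two competing accounts: a hindsight shift moves recalled probabilities toward the outcome (toward 1.0 for events that occurred, toward 0.0 for events that did not), whereas a regression artifact would pull recall toward the 0.5 midpoint regardless of outcome.

```python
# Sketch of the memory-distortion analysis, with hypothetical data.
# Each item: probability recorded before the event, probability
# recalled after the outcome was known, and whether the event occurred.

items = [
    # (recorded, recalled, occurred)
    (0.40, 0.60, True),
    (0.70, 0.65, True),
    (0.55, 0.35, False),
    (0.20, 0.30, True),
    (0.65, 0.45, False),
    (0.30, 0.20, False),
]

toward_outcome = 0
toward_midpoint = 0
for recorded, recalled, occurred in items:
    shift = recalled - recorded
    # Hindsight prediction: recall drifts toward 1.0 for events that
    # happened and toward 0.0 for events that did not.
    if (occurred and shift > 0) or (not occurred and shift < 0):
        toward_outcome += 1
    # A regression artifact would instead pull recall toward 0.5.
    if abs(recalled - 0.5) < abs(recorded - 0.5):
        toward_midpoint += 1

n = len(items)
print(f"shifted toward outcome:  {toward_outcome}/{n}")
print(f"shifted toward midpoint: {toward_midpoint}/{n}")
```

In these toy numbers the outcome-directed shift (5 of 6 items) dominates the midpoint shift (3 of 6), the qualitative pattern Fischhoff and Beyth reported in the real data.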

In the legal domain, research by Kim Kamin and Jonathan Rachlinski, published in Law and Human Behavior in 1995, found that mock jurors who were told that a flood had occurred judged a city's failure to take precautions as negligent at substantially higher rates than jurors who were not told of the flood — even when the information available to decision-makers at the time of the decision was held constant. The hindsight-informed jurors also estimated that the probability of a flood had been much higher than did jurors who made prospective judgments. This study became a landmark in the legal psychology literature, because it demonstrated that hindsight bias could systematically produce unfair verdicts in negligence cases: defendants were being held to a standard of foresight that was itself a construction of hindsight.

Research on medical case review has shown similar patterns. Robert Caplan, Karen Posner, and Frederick Cheney, in studies of physician peer review, found that reviewers who knew a case had ended badly rated the appropriateness of the medical care as lower than reviewers who evaluated the same cases without that outcome knowledge. A study by Arkes, Faust, Guilmette, and Hart (1988), published in the Journal of Applied Psychology, gave neuropsychologists the same case information and asked them to rate diagnostic certainty: those who were told the patient's ultimate diagnosis rated the early symptoms as much more consistent with that diagnosis than those who evaluated the symptoms blind. The same symptoms, read by the same type of expert, were weighted entirely differently depending on whether the answer was already known.

In financial markets, the "I knew it was a bubble" phenomenon has been studied through surveys of investors and analysts following market crashes. Research by Munier and Zaharia (2002) and others found that after major market corrections, investors systematically overestimated the probability they had assigned to a crash beforehand, and rated the warning signs as more obvious than pre-crash assessments had suggested. The practical consequence is that market participants who believe they "saw it coming" are reinforced in overconfident strategies that make them more vulnerable to the next event they will subsequently claim to have predicted.


Four Case Studies

Case Study 1: The Nixon China Visit, 1972

The Fischhoff and Beyth study of Nixon's 1972 China visit is not only the founding document of hindsight bias research; it remains among the most methodologically elegant demonstrations in all of cognitive psychology. Because the researchers collected written records of subjects' predictions before the event, they could compare actual prior predictions with recalled predictions directly. Most subsequent studies have instead approximated this design by randomly assigning subjects to outcome-knowledge conditions, relying on experimental controls rather than prospective measurement.

Subjects predicted fifteen specific outcomes of the trip: whether Nixon would meet with Mao, whether diplomatic relations would be established, whether the talks would be described as successful by both parties, and so on. After Nixon returned, subjects were asked to recall what probabilities they had assigned. The results were systematic and striking. For outcomes that had occurred, recalled probabilities were higher than recorded predictions. For outcomes that had not occurred, recalled probabilities were lower. The magnitude of distortion was correlated with the clarity of the outcome: outcomes that were unambiguously either achieved or not achieved produced larger memory distortions than ambiguous outcomes. Subjects showed no awareness of the distortion — they were confident in their recollections. The written records, which they did not have access to during recall, told a different story.

Case Study 2: The 9/11 Intelligence Failures

When the 9/11 Commission convened in late 2002 to examine the intelligence failures preceding the September 11 attacks, it confronted a problem that psychologists would have predicted: nearly everyone who testified claimed, in retrospect, that the warning signs had been obvious. The Phoenix Memo, in which an FBI agent in Arizona reported suspicions about Middle Eastern men taking flight lessons; the August 2001 President's Daily Brief titled "Bin Ladin Determined To Strike in US"; signals intelligence suggesting an imminent, large-scale operation — all of these, in the aftermath of the attacks, appeared to constitute clear foreshadowing. Critics, pundits, and legislators asked repeatedly why no one had acted on such obvious warnings.

The cognitive error embedded in these critiques was identified by the Commission's executive director Philip Zelikow and by several cognitive scientists who commented on the proceedings: the warnings only appeared obvious because the outcome was already known. Before September 11, analysts faced hundreds of threat reports monthly, involving dozens of potential actors and methods. The Phoenix Memo was one document among thousands; the PDB headline sat inside a report containing many other concerns. There was no reliable method, in August 2001, to identify which of hundreds of data points were signal and which were noise. The outcome — nearly 3,000 deaths by coordinated hijacking — made a particular subset of pre-existing data retrospectively salient and causally connected. Intelligence analysts and policymakers who failed to act on those signals in advance were condemned, in hindsight, for a failure of perception that the condemners were themselves incapable of experiencing prospectively.

This is not a defense of any specific policy or analytical failure. It is an observation that hindsight bias systematically corrupts after-action analysis, making genuine errors harder to identify because all non-errors are retrospectively recategorized as errors too. Effective institutional learning requires separating what was knowable in advance from what only became visible after the fact.

Case Study 3: Black Monday, October 19, 1987

On October 19, 1987, the Dow Jones Industrial Average fell 22.6 percent in a single trading session — still the largest single-day percentage decline in the index's history. In the days and weeks following, financial analysts produced a cascade of explanations: portfolio insurance strategies that automated selling had created a feedback loop; markets were overvalued relative to fundamentals; the Plaza Accord had created currency instability; program trading had amplified normal market volatility beyond any sane threshold. Each explanation was compelling. Each was delivered with confidence. And virtually none of the analysts who offered these post-hoc accounts had predicted a crash of this magnitude in advance.

The Brady Commission, which produced the official government study of the crash, documented the proliferation of "obvious in retrospect" explanations and explicitly noted the problem of hindsight. Research by economists including Robert Shiller, who surveyed investors during and immediately after the crash, found that most had no clear causal theory at the time — they sold because others were selling, because prices were falling, because fear was contagious. The sophisticated causal narratives emerged after the fact. The "obvious bubble" was not perceived as a bubble by most market participants before it burst — surveys of investor expectations conducted in the months before October 1987 showed that most expected moderate positive returns over the coming year.

The financial domain is particularly vulnerable to hindsight bias because it combines three conditions that amplify the effect: high emotional salience (financial loss concentrates attention), causal complexity (many plausible post-hoc explanations are available), and high social pressure to demonstrate expert foresight (analysts are professionally rewarded for appearing prescient).

Case Study 4: Medical Malpractice and the Retrospective Physician

Research on medical malpractice litigation has produced a consistent and troubling body of evidence showing that outcome knowledge profoundly distorts how medical care is evaluated. Across studies reviewed in the psychiatric and medical-law literatures, physician reviewers who were told that a patient had suffered a negative outcome — a missed diagnosis, a surgical complication, a death — rated the earlier medical decisions in the case as significantly less appropriate than identical cases reviewed without outcome knowledge.

One paradigmatic study design presents reviewers with a case file in which a physician faces an ambiguous clinical situation and makes a judgment call: whether to order additional tests, whether to hospitalize, whether to choose one of two treatment protocols. In the prospective condition, reviewers judge the decision as reasonable given the available evidence. In the retrospective condition, reviewers who know the patient subsequently deteriorated judge the same decision as a clear error — as something a reasonable physician would not have done. The clinical facts are identical; only the known outcome differs.

This finding has direct implications for malpractice litigation, where medical decisions are evaluated by juries who already know that a patient was harmed. No instruction from a judge can reliably insulate jurors from the distorting effect of outcome knowledge, because the cognitive mechanism operates automatically, prior to deliberative judgment. Kamin and Rachlinski's 1995 study tested various debiasing instructions with mock jurors — telling them explicitly that hindsight is a bias, asking them to consider the decision as if they did not know the outcome — and found that while instructions reduced the effect modestly, they did not eliminate it. The only intervention that showed substantial debiasing was asking jurors to generate specific reasons why the outcome might have been unforeseeable — a procedure that mimics the counter-argumentative cognitive process that partially breaks the sense-making narrative.


When Hindsight Is Adaptive

It would be a mistake to conclude from this evidence that retrospective sense-making is simply an error, a defect in an otherwise accurate mind. Roese and Vohs (2012) devote significant attention to the adaptive functions that hindsight bias may serve, and the picture that emerges is nuanced.

The sense-making capacity that drives hindsight bias is, in itself, a crucial cognitive resource. The ability to construct causal narratives after the fact supports learning: understanding why events happened, what preceded them, what factors contributed to an outcome, is essential to avoiding similar outcomes in the future. If people were genuinely unable to extract causal lessons from experience — if they viewed all events as equally unpredictable and equally unrelated to prior conditions — they would be incapable of learning from history in any meaningful sense. The problem with hindsight bias is not that people learn from outcomes; it is that they learn too confidently, and attribute too much inevitability to events that were genuinely uncertain.

The memory-distortion component may serve a psychological coherence function. People who remember their predictions as more accurate than they were experience their judgments as more consistent and reliable, which supports self-efficacy and willingness to make future judgments under uncertainty. There is evidence that extremely accurate memory for prior incorrect predictions — the absence of the hindsight memory distortion — is associated with depressive realism and with lower willingness to engage in future judgment tasks. This suggests that some degree of retrospective enhancement of one's predictive record may be adaptive at the individual level, even if it is epistemically inaccurate.

Hindsight bias also may serve a social coordination function. Retrospective certainty about what was foreseeable establishes shared narratives about what counts as negligence, incompetence, or recklessness — narratives that are necessary for the functioning of legal accountability systems and professional standards. The question is not whether such standards should exist, but whether hindsight-contaminated judgment produces calibrated, fair standards or systematically overcorrected ones.

The limits of hindsight as an adaptive mechanism become most visible in professional contexts that require accurate prospective forecasting. Intelligence analysts who believe they "knew all along" about past attacks become overconfident about the predictability of future ones, and may neglect genuine uncertainty in their assessments. Financial professionals who remember their pre-crash intuitions as prescient may take on risk levels that are not justified by actual prospective signal quality. Medical educators who decide, after students fail an exam, that the tested concepts had been obviously clear all along may see no need to redesign the curriculum. In each case, the adaptive story told in retrospect creates a false foundation for future decisions that are made prospectively, under genuine uncertainty.


Debiasing and Institutional Implications

The research on reducing hindsight bias is, as Fischhoff himself noted in his original 1975 paper, largely sobering. Telling people that hindsight bias exists and asking them to reason as if they did not know the outcome is partially but not fully effective. The most robust debiasing strategy, supported by multiple studies, involves actively generating reasons why the outcome was not inevitable — constructing a counter-narrative that temporarily competes with the automatic sense-making account. Slovic and Fischhoff (1977), in the Journal of Experimental Psychology: Human Perception and Performance, found that this "consider the opposite" procedure reduced the bias meaningfully, though not to zero.

Institutional structures can be designed to mitigate the bias's effects even when individual cognition cannot fully overcome it. Prospective recording — requiring decision-makers to commit predictions to writing before outcomes are known — provides documentary evidence that is immune to memory distortion. Intelligence agencies, hospitals, and financial risk departments that adopt pre-mortem exercises (imagining in advance how a decision might fail, and why) are in part defending against the retrospective certainty that would otherwise conceal the genuine difficulty of the prospective judgment.
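A minimal version of such prospective recording is sketched below, assuming an append-only JSON-lines log and a Brier score as the calibration summary; the file name, field names, and example questions are all illustrative choices, not a standard from this literature.

```python
# Minimal sketch of prospective recording: commit probability
# forecasts to an append-only log before outcomes are known, then
# score them afterward. Log format and field names are hypothetical.
import json
import time

LOG = "predictions.jsonl"

def record(question: str, probability: float) -> None:
    """Append a timestamped forecast before the outcome is known."""
    entry = {"t": time.time(), "q": question, "p": probability}
    with open(LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")

def brier(outcomes: dict) -> float:
    """Mean squared error between forecasts and realized outcomes
    (0.0 is perfect; 0.25 is chance-level for 50/50 questions)."""
    with open(LOG) as f:
        entries = [json.loads(line) for line in f]
    scored = [(e["p"] - outcomes[e["q"]]) ** 2
              for e in entries if e["q"] in outcomes]
    return sum(scored) / len(scored)

# Hypothetical usage: record before, score after.
record("Vendor ships feature by Q3", 0.6)
record("Migration completes without rollback", 0.8)
print(brier({"Vendor ships feature by Q3": 1,
             "Migration completes without rollback": 0}))  # 0.40
```

Because the log is written before outcomes are known, later disputes about what was predicted can be settled by the record rather than by reconstructed memory.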

Legal systems have begun to grapple with the implications. Several legal scholars, following the work of Rachlinski, have proposed procedural reforms in malpractice litigation that would limit jurors' knowledge of outcomes during the initial evaluation of decision quality, or that would require explicit instruction about hindsight bias as part of jury preparation. The effectiveness of these interventions remains an active area of research.


References

  1. Fischhoff, B. (1975). Hindsight ≠ foresight: The effect of outcome knowledge on judgment under uncertainty. Journal of Experimental Psychology: Human Perception and Performance, 1(3), 288-299.

  2. Fischhoff, B., & Beyth, R. (1975). "I knew it would happen": Remembered probabilities of once-future things. Organizational Behavior and Human Performance, 13(1), 1-16.

  3. Roese, N. J., & Vohs, K. D. (2012). Hindsight bias. Perspectives on Psychological Science, 7(5), 411-426.

  4. Kamin, K. A., & Rachlinski, J. J. (1995). Ex post ≠ ex ante: Determining liability in hindsight. Law and Human Behavior, 19(1), 89-104.

  5. Arkes, H. R., Faust, D., Guilmette, T. J., & Hart, K. (1988). Eliminating the hindsight bias. Journal of Applied Psychology, 73(2), 305-307.

  6. Slovic, P., & Fischhoff, B. (1977). On the psychology of experimental surprises. Journal of Experimental Psychology: Human Perception and Performance, 3(4), 544-551.

  7. Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124-1131.

  8. Pohl, R. F., Eisenhauer, M., & Hardt, O. (2003). SARA: A cognitive process model to simulate the anchoring effect and hindsight bias. Memory, 11(4-5), 337-356.

  9. Rachlinski, J. J. (1998). A positive psychological theory of judging in hindsight. University of Chicago Law Review, 65(2), 571-625.

  10. Hoffrage, U., Hertwig, R., & Gigerenzer, G. (2000). Hindsight bias: A by-product of knowledge updating? Journal of Experimental Psychology: Learning, Memory, and Cognition, 26(3), 566-581.

  11. Christensen-Szalanski, J. J., & Willham, C. F. (1991). The hindsight bias: A meta-analysis. Organizational Behavior and Human Decision Processes, 48(1), 147-168.

  12. Shiller, R. J. (1989). Market Volatility. MIT Press.

Frequently Asked Questions

What is hindsight bias?

Hindsight bias is the tendency, after learning an outcome, to believe that one had predicted or expected it all along — and to misremember one's prior predictions as more accurate than they were. Baruch Fischhoff identified and named it in his 1975 paper 'Hindsight ≠ Foresight: The Effect of Outcome Knowledge on Judgment Under Uncertainty.' Roese and Vohs's 2012 review identified three components: memory distortion (misremembering past predictions), inevitability (events seem like they had to happen), and foreseeability (believing you knew it would happen).

What did the Nixon China study find?

Fischhoff and Beyth (1975) asked participants to estimate the probabilities of specific outcomes before Nixon's 1972 visit to China — whether he would meet Mao, whether the US and China would establish diplomatic relations, etc. After the trip, they asked the same participants to recall their original predictions. Participants systematically misremembered their predictions as closer to what actually occurred. Events that happened were remembered as having been assigned higher prior probability; events that did not happen were remembered as having been judged less likely than the records showed.

How does hindsight bias affect legal judgments?

Hindsight bias in legal contexts causes judges and juries to evaluate decisions by their outcomes rather than by what was knowable at the time of the decision. Kamin and Rachlinski (1995) found that mock jurors who were told that a flood had occurred after a defendant failed to maintain flood barriers judged the defendant significantly more negligent than jurors who were not told about the flood — even when instructed to evaluate only whether the risk management decision was reasonable given knowledge at the time. The outcome contaminates evaluation of the decision.

Why is hindsight bias dangerous in organizations?

Hindsight bias produces false accountability: after a failure, the warning signs look obvious, the decision-makers look incompetent or negligent, and the organization concludes it would have been easy to avoid the disaster. This produces blame without understanding — punishing individuals for failures that were systemically produced, while leaving unchanged the structural conditions that made the outcome likely. It also discourages appropriate risk-taking by making ex-post evaluation of reasonable-but-unlucky decisions appear reckless.

Can hindsight bias be reduced?

Partially. The most effective debiasing strategy identified in the research is consider-the-opposite: explicitly asking people to generate reasons why a different outcome might have occurred. Arkes et al. (1988) found this reduced but did not eliminate hindsight bias. Writing down predictions before events occur, as in pre-mortem and forecasting exercises, creates an objective record that memory distortion cannot revise. Fischhoff's research suggests that hindsight bias is more resistant to correction than many other cognitive biases because the distortion affects memory itself, not just conscious judgment.