In 1960, British psychologist Peter Wason ran an experiment that has since become one of the most replicated and important in the history of cognitive psychology. He showed participants three numbers -- 2, 4, 6 -- and told them the sequence conformed to a rule. Their task was to discover the rule by proposing other sequences of three numbers. After each proposal, Wason would say whether it followed the rule or not. When participants felt confident they had found the rule, they would state it.

The results were striking. Almost all participants proposed sequences that fit their initial hypothesis -- "ascending even numbers" or "increasing by two" -- and rarely proposed sequences that could disprove it. They would test 8-10-12, then 20-22-24, then 100-102-104, and confidently announce: "The rule is consecutive even numbers." The actual rule was simply "any ascending sequence." Any sequence would have worked: 1-7-42, 3-9-10, 1-2-3. But participants were so focused on confirming their hypothesis that they never tried to break it.
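The logic of the failure is easy to make concrete. The sketch below is a minimal illustration (the rule and hypothesis functions are stand-ins, not materials from Wason's experiment): confirming triples get a "yes" under both the participant's narrow hypothesis and the experimenter's broader rule, so they carry no information about which is correct, while a single triple the hypothesis forbids settles the question.

```python
# Minimal sketch of the 2-4-6 logic (illustrative stand-ins, not Wason's materials).

def true_rule(triple):
    """The experimenter's actual rule: any strictly ascending sequence."""
    a, b, c = triple
    return a < b < c

def my_hypothesis(triple):
    """A typical participant hypothesis: numbers increasing by two."""
    a, b, c = triple
    return b - a == 2 and c - b == 2

confirming_tests = [(8, 10, 12), (20, 22, 24), (100, 102, 104)]
falsifying_test = (1, 7, 42)  # fits the true rule but violates the hypothesis

for t in confirming_tests + [falsifying_test]:
    print(t, "hypothesis says:", my_hypothesis(t), "| experimenter says:", true_rule(t))

# The three confirming triples come back "True / True" and reveal nothing.
# Only the triple the hypothesis forbids exposes the difference between the rules.
```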

This is confirmation bias: the systematic tendency to search for, favor, interpret, and remember information that confirms what we already believe.

Three Types of Confirmation Bias

Confirmation bias is not a single mechanism but a family of related tendencies that reinforce each other.

Biased Search for Information

When we have a belief or hypothesis, we naturally look for information that would confirm it rather than information that could falsify it. This was the pattern Wason documented: participants tested confirming triples, not disconfirming ones.

In everyday life, this means that when we suspect a colleague is lazy, we notice their late arrivals and missed deadlines more than their completed projects. When we believe a political policy works, we read the supporting studies more carefully than the critical ones. When we are considering buying a car, we research reviews of the model we are already leaning toward more thoroughly than alternatives.

Nickerson (1998), in the most comprehensive review of the confirmation bias literature, catalogued dozens of studies demonstrating this selective search pattern across domains ranging from medical diagnosis to stock picking. His review, published in the Review of General Psychology, concluded that if one had to single out one problematic aspect of human reasoning that deserves attention above all others, confirmation bias would be among the leading candidates. The breadth of its influence -- from casual daily judgments to decisions in high-stakes professional settings -- makes it qualitatively different from most other cognitive biases, which tend to be more domain-specific.

Biased Interpretation

When we encounter ambiguous evidence, we tend to interpret it in ways that support our existing beliefs. Two people with opposing views watching the same event will often see evidence confirming their respective positions.

A famous study by Lord, Ross, and Lepper in 1979 gave participants mixed evidence about capital punishment's effect on murder rates. Participants who supported capital punishment rated the pro-deterrence studies as more rigorous and the anti-deterrence studies as flawed. Participants who opposed capital punishment showed the opposite pattern. The same evidence, examined by both groups, made both groups more confident in their opposing beliefs -- a phenomenon the researchers called belief polarization.

The Lord, Ross, and Lepper (1979) study, published in the Journal of Personality and Social Psychology, is particularly important because it showed that simply being exposed to balanced information does not correct bias. When both sides receive the same mixed evidence, the effect is paradoxically to strengthen each group's existing position. This finding has since been replicated in studies on issues from nuclear power to climate science (Druckman, 2012). The implication is deeply uncomfortable: rational engagement with contrary evidence may not move people toward consensus but may instead drive them further apart.

Biased Memory

We remember evidence that fits our beliefs better than evidence that contradicts them. This has been documented in research on eyewitness testimony, political beliefs, and interpersonal judgments.

A 1978 study by Snyder and Uranowitz asked participants to read a biography of a woman named Betty and then, after receiving new information about her, answer questions about the biography. Participants who learned she was a lesbian remembered more details from the biography consistent with that identity. Participants who learned she was heterosexual remembered more details consistent with a heterosexual life. The same text, selectively remembered, supported each group's framework for understanding her.

More recent neuroimaging research has added biological depth to this selectivity. Sharot and colleagues (2011), writing in Nature Neuroscience, found that people update their beliefs more readily in response to desirable information than undesirable information, and that this asymmetry tracks how frontal regions encode prediction errors -- suggesting that belief-consistent and belief-inconsistent information are literally processed differently at the neural level. The brain does not treat all incoming information equally; it flags and reinforces what fits.

The Wason Selection Task: A Clean Demonstration

Wason's later work produced an even cleaner demonstration: the Wason selection task, now among the most studied experiments in cognitive psychology.

The task: you see four cards. One side of each card shows a letter; the other shows a number. The cards show: E, K, 4, 7. The rule to test is: "If a card has a vowel on one side, it has an even number on the other." Which cards do you need to turn over to test whether the rule is true or false?

Most people correctly identify E (you need to check whether it has an even number) but also incorrectly choose 4 (you do not need to check whether it has a vowel -- the rule only goes one way) and fail to choose 7 (you must check whether it has a vowel, which would falsify the rule).

The error reveals the bias directly: people look for confirming evidence (E should have an even number; 4 might have a vowel) and miss falsifying evidence (7 must not have a vowel). The correct answer is E and 7. Fewer than 25% of participants in original studies got it right.
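The underlying logic can be spelled out mechanically. The sketch below is an illustration of the task's structure, not material from the original studies: a card is worth turning over only if some hidden face could falsify the rule "vowel implies even number."

```python
# Which cards could falsify "if a card has a vowel on one side,
# it has an even number on the other"? (Illustrative sketch only.)

VOWELS = set("AEIOU")

def could_falsify(visible):
    """True if some hidden face would make this card violate the rule."""
    if visible.isalpha():
        # A visible vowel violates the rule if the hidden number is odd.
        return visible in VOWELS
    # A visible even number can never violate the rule, whatever letter is behind it;
    # a visible odd number violates it if the hidden letter is a vowel.
    return int(visible) % 2 == 1

for card in ["E", "K", "4", "7"]:
    print(card, "worth turning over:", could_falsify(card))
# Only E and 7 can possibly falsify the rule, so only they need to be checked.
```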

The selection task has an interesting context effect: if the rule is stated in terms of social permission rather than abstract letters and numbers -- "If a person is drinking alcohol, they must be over 18" -- most people solve it correctly. We are much better at detecting cheating in social contexts than at testing abstract logical rules. Cosmides and Tooby (1992) interpreted this asymmetry as evidence for evolved "social contract algorithms" -- specialized cognitive modules for detecting violations of conditional rules in social contexts -- distinct from the general-purpose logical reasoning required by the abstract version. On this account, confirmation bias in abstract domains may represent the default operation of a system that was never evolutionarily designed to handle formal logic.

How Pervasive Is Confirmation Bias? What the Numbers Show

The scope of confirmation bias is difficult to overstate. Across hundreds of experimental studies and meta-analyses, it emerges as one of the most consistent and replicable phenomena in cognitive psychology.

A 2009 meta-analysis by Hart and colleagues in Psychological Bulletin examined 91 studies of selective exposure to information and found a consistent pattern: people preferred attitude-consistent information roughly twice as often as attitude-inconsistent information, with an effect size (d = 0.36) that persisted across issue types, experimental methods, and participant populations. The preference was somewhat stronger for information people had sought out themselves than for information passively received.

In the domain of political belief, the effects are particularly measurable. A field experiment by Bail and colleagues, published in the Proceedings of the National Academy of Sciences (2018), paid more than 1,200 Twitter users to follow a bot that retweeted messages from the opposing political party for one month. Rather than moderating beliefs, exposure to opposing viewpoints caused Republicans to shift measurably further to the right; Democrats became slightly more liberal, though the change was not statistically significant. This finding -- that social media exposure to the other side can backfire -- has profound implications for how media environments interact with confirmation bias.

In financial markets, research by Charness and Dave (2017) at UC Santa Barbara found that investors significantly discounted information that contradicted their existing portfolio positions, even when that information was objectively relevant to their investment decisions. The tendency was stronger for larger positions -- the more invested they were, financially and psychologically, the more they discounted contradicting signals.

Why Confirmation Bias Exists: The Evolutionary Argument

Confirmation bias is not a random error. Its ubiquity and consistency across cultures suggest it serves adaptive functions.

Cognitive Efficiency

By some estimates, the human brain takes in roughly 11 million bits of information per second but consciously handles only about 50. Confirmation bias is a filtering mechanism: rather than processing all information equally, the brain prioritizes information that fits existing frameworks, which can be processed cheaply because it requires no model updating.

Maintaining consistent beliefs is cognitively efficient. Changing beliefs requires work: evaluating new evidence, updating mental models, reconsidering related beliefs. A brain that changed its beliefs at every contradicting data point would be unstable and slow. Confirmation bias provides a stable baseline.

The Social Argument: Why We Argue

Hugo Mercier and Dan Sperber's 2011 paper "Why Do Humans Reason?" offered a provocative theory: the primary evolutionary function of reasoning is not to reach truth through individual reflection but to construct arguments to persuade others and to evaluate others' arguments. On this view, confirmation bias is not a bug in a truth-seeking system; it is a feature of an argument-construction system.

The theory predicts that people are better at detecting flaws in arguments they disagree with than flaws in their own arguments -- and this is what research shows. The implication is that individual reasoning is systematically biased, but group deliberation, where different people's biases check each other, can produce good collective outcomes. This is the argument for adversarial collaboration, debate, and structured disagreement in institutions.

"Reasoning was not designed to pursue the truth. Reasoning was designed by evolution to help us win arguments. That's why confirmation bias is so powerful and so automatic." -- Jonathan Haidt, The Righteous Mind (2012)

The Accuracy of Past Beliefs

In stable environments, past beliefs are good predictors of what will be true in the future. If you believed that a particular path through the forest was safe yesterday, you are probably right that it is safe today. Persistent beliefs that change only in response to strong contrary evidence are more reliable than beliefs that swing with every new data point.

The problem is that modern environments are less stable than ancestral ones, and the information environments we now live in are specifically designed to exploit this bias.

Confirmation Bias in Practice

In Politics

Political belief is perhaps the domain where confirmation bias is most visible and most studied. The rise of algorithmically curated social media has created environments that are extremely effective at feeding people information consistent with their existing views.

Research by Eli Pariser in his 2011 book The Filter Bubble and subsequent academic work documented how personalization algorithms -- originally designed to show users more relevant content -- create information environments that amplify pre-existing beliefs. When you engage more with content you agree with (which is the natural behavioral pattern), the algorithm shows you more of it. The result is a feedback loop that can pull beliefs toward more extreme versions of themselves.
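The feedback loop is simple enough to simulate. The toy model below uses a made-up engagement rule, not any real platform's ranking algorithm; it only illustrates how a mild initial leaning plus an engagement-maximizing feed can compound into a one-sided information diet.

```python
# Toy filter-bubble dynamic (assumed engagement rule, not a real recommender).

belief = 0.60   # user starts with a mild leaning toward "side A" content
share_A = 0.50  # the feed starts balanced between sides A and B

for step in range(50):
    # Expected engagement is higher for content that matches the current leaning,
    # so the feed shifts its mix toward whichever side engages more.
    engagement_A = share_A * belief
    engagement_B = (1 - share_A) * (1 - belief)
    share_A += 0.05 if engagement_A > engagement_B else -0.05
    share_A = min(max(share_A, 0.0), 1.0)
    # More one-sided exposure, in turn, nudges the leaning a little further.
    belief = min(max(belief + 0.01 * (share_A - 0.5), 0.0), 1.0)

print(f"feed share for side A: {share_A:.2f}, user leaning toward A: {belief:.2f}")
# A 60/40 leaning and a balanced feed end as an all-A feed and a stronger leaning --
# the filter-bubble dynamic in miniature.
```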

A Pew Research Center study found that between 1994 and 2014, the proportion of Americans with consistently liberal or consistently conservative views doubled, and the portion who held deeply negative views of the opposing party more than doubled. While many factors contribute to political polarization, experimental research by Settle (2018) demonstrated that the design features of social media platforms -- specifically the role of emotional contagion in sharing -- interact with existing confirmation biases to accelerate this divergence.

A Pew Research study in 2016 found that 62% of Trump supporters and 53% of Clinton supporters said "nearly all" or "most" of their close friends held the same political views as they did. In this environment, disconfirming political information rarely appears organically.

In Investing

Investing is an unusually clear domain for studying confirmation bias because performance can be measured objectively and compared against benchmarks.

Analysts' bias: A 1992 study by Abarbanell and Bernard found that securities analysts systematically underreacted to earnings surprises in a direction consistent with their prior forecasts. When companies reported earnings that contradicted analysts' predictions, analysts revised their forecasts less than the new information justified, effectively discounting disconfirming evidence.

Amateur investors: A classic study by Barber and Odean (2000) found that individual investors who traded most actively earned the worst returns. The mechanism: people traded most when they felt most confident, which was most often after a sequence of confirming evidence. But confidence derived from a run of successes is not the same as predictive accuracy.

Hold vs. sell decisions: Research consistently finds that investors hold losing positions longer than winning positions (the "disposition effect"), partly because selling a loss means acknowledging the original buy decision was wrong. Confirmation bias makes holding feel like maintaining an informed position; the disconfirming evidence from the falling price is discounted.

The financial costs are concrete. Barber and Odean's (2000) analysis of 66,465 households with discount brokerage accounts found that the most active traders underperformed the market by 6.5 percentage points annually, after transaction costs. Their least active counterparts underperformed by only 0.25 percentage points. The gap is largely attributable to the overconfidence that comes from selective processing of confirming market signals.

In Science

Science is built on methods designed to counteract confirmation bias -- hypothesis testing, peer review, replication requirements -- and yet confirmation bias persists even in scientific practice.

Publication bias: Journals are more likely to publish positive results (a hypothesis confirmed) than null results (a hypothesis not confirmed). This creates a scientific literature that over-represents confirming results, even if individual researchers are behaving correctly. A 2014 analysis by Franco, Malhotra, and Simonovits, published in Science, examined 221 pre-registered survey experiments from a social science research institute. Of those that produced null results, only 48% were written up and submitted for publication, compared to 96% of studies that produced strong positive results. The literature does not reflect what was done; it reflects what researchers and editors found satisfying to report.

Researcher degrees of freedom: Small, legitimate choices in data collection, analysis, and reporting can be made in ways that produce the result the researcher hopes for, without any conscious intent to mislead. Simmons, Nelson, and Simonsohn's 2011 paper "False Positive Psychology" demonstrated that researchers could easily produce statistically significant but false results through ordinary (if undisciplined) research practices.

Reproducibility: The "replication crisis" in psychology and other social sciences -- in which many high-profile published findings failed to replicate when independent researchers attempted them -- is partly attributable to the confirmation-biased research environment that produces and promotes positive results. The Open Science Collaboration's (2015) landmark replication project, published in Science, attempted to reproduce 100 prominent psychological studies and found that only 36% produced results consistent with the originals. Confirmation bias in the original research environment was identified as a significant contributing factor.

In Medicine

A 2020 study in JAMA Network Open found that physicians who had initially diagnosed a patient were less likely to update that diagnosis when new contradicting evidence emerged compared to physicians who saw the case fresh. Initial diagnosis anchored subsequent reasoning.

This matters for patient outcomes. Anchoring -- a close relative of confirmation bias in which initial information disproportionately influences subsequent judgment -- is a documented source of diagnostic error. "A diagnosis fits like a glove -- once you have it, it's hard to take off" is a well-known aphorism in medical education, and it describes exactly this dynamic.

A systematic review by Saposnik and colleagues (2016), published in BMC Medical Informatics and Decision Making, identified 40 cognitive biases affecting clinical decision-making. Confirmation bias and its close relatives -- anchoring, premature closure, framing effects -- were among the most frequently documented. The review estimated that cognitive bias contributed to diagnostic error in 14% of cases examined, and that roughly half of those errors led to patient harm. In a healthcare system handling hundreds of millions of encounters annually, even a small bias-driven error rate translates to enormous absolute numbers of preventable adverse outcomes.

The Digital Amplification Problem

Modern information technology has not created confirmation bias -- it is as old as human cognition -- but it has dramatically amplified its effects. Social media platforms are optimized for engagement, and engagement is maximized by content that provokes strong emotional reactions. Content that confirms existing beliefs tends to provoke stronger reactions than content that challenges them.

The result is an information ecosystem that systematically favors confirming content in ways that have no historical precedent. Before the internet, a person with strong political views could, in principle, encounter a newspaper column arguing the opposite. The physical co-presence of different viewpoints in a single publication created some diversity of exposure. Algorithmically curated feeds eliminate this accidental exposure. The filter bubble that Pariser described in 2011 has become, in the years since, considerably more tightly drawn.

Researchers at Facebook's Core Data Science team (Bakshy, Messing, and Adamic, 2015) published a study in Science examining the feeds of 10 million users and found that while the algorithm did reduce exposure to cross-cutting political content, individual users' own choices to click selectively reduced it further still. Both the algorithm and the users' own behavior contributed to the narrowing. Confirmation bias, it turns out, does not need algorithmic assistance to create echo chambers -- but algorithmic assistance makes them dramatically more hermetic.

Debiasing Strategies That Actually Work

Research on debiasing has produced an honest verdict: telling people they are subject to confirmation bias has almost no effect. Awareness of bias does not reliably reduce it. The effective interventions are structural.

Consider the Opposite

A research-backed strategy is to explicitly generate reasons why the opposite of your belief might be true. Galinsky and Moskowitz (2000) found that asking participants to "consider the opposite" before making judgments significantly reduced anchoring effects and other biases.

The practical application: before finalizing any significant decision, write down the reasons why you might be wrong. Not perfunctorily -- genuinely try to construct the strongest possible case against your conclusion.

Pre-mortems

Gary Klein's "pre-mortem" technique asks decision makers to imagine that their decision has already been made and has failed catastrophically, then generate reasons why. This reframing bypasses the motivated reasoning that comes from defending a plan you have already committed to.

Pre-mortems work partly because they give people permission to voice doubts they would otherwise suppress (since the failure is imaginary and not yet a critique of anyone's real proposal) and partly because prospective hindsight -- imagining you already know the outcome -- activates different reasoning patterns than ordinary prospective analysis.

Mitchell, Russo, and Pennington (1989) showed that asking people to explain a hypothetical future event as if it had already occurred -- what they called "prospective hindsight" -- increased their ability to correctly identify reasons for the outcome by roughly 30%. The pre-mortem harnesses this same mechanism: by mentally simulating failure before it happens, teams can identify vulnerabilities that forward-looking planning systematically misses.

Red Teams and Structured Disagreement

Assigning someone the explicit role of critic -- a "red team" whose job is to defeat the proposal -- ensures that the strongest disconfirming arguments are made, even if no one spontaneously wants to make them.

This structure overcomes the social pressure to align with the group's apparent consensus (a related phenomenon called groupthink) and guarantees that the decision-making process includes genuine opposition.

Falsificationist Framing

Shifting from the question "is there evidence that this is true?" to "what would have to be true for this to be false?" reframes the information search task. The first question is confirmation-seeking; the second is falsification-seeking. Explicitly framing the search as looking for disconfirming evidence partially overcomes the natural tendency to seek confirmation.

This is essentially Karl Popper's (1959) philosophy of science applied to daily reasoning. Popper argued that the demarcation criterion between genuine science and pseudoscience was falsifiability: a claim is scientific only if it could in principle be proven wrong. The same criterion applied to everyday beliefs asks: what would have to happen for me to change my mind? If no answer comes readily, that is itself a diagnostic signal.

Prediction Tracking

Keeping a record of your predictions and reviewing their accuracy is one of the most reliable ways to recalibrate beliefs over time. People who track their predictions cannot as easily misremember being right more often than they were. Philip Tetlock's research on "Superforecasters" found that the most accurate long-range forecasters were distinguished partly by their practice of tracking predictions and actively seeking to understand their errors.

Tetlock and Gardner's (2015) work, documented in Superforecasting: The Art and Science of Prediction, found that "superforecasters" -- individuals who outperformed intelligence analysts with access to classified information -- shared a consistent epistemic profile. They updated beliefs frequently in response to new evidence, maintained calibrated rather than extreme confidence levels, and actively sought disconfirming information. The key variable was not raw intelligence but the habit of treating beliefs as hypotheses to be tested rather than positions to be defended.
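A prediction log needs no special tooling. The sketch below is a minimal illustration (the claims, probabilities, and outcomes are invented for the example); it records forecasts with explicit probabilities and scores them with the Brier score, the calibration measure used in forecasting research.

```python
# Minimal prediction-tracking sketch (entries invented for illustration).
from statistics import mean

# Each entry: (claim, forecast probability that it happens, what actually happened)
log = [
    ("Project ships by Q3",           0.90, False),
    ("Competitor launches this year", 0.30, True),
    ("Key hire accepts the offer",    0.70, True),
    ("Feature increases retention",   0.80, False),
]

def brier(prob, outcome):
    """Squared error of the forecast: 0 is perfect; always saying 50% scores 0.25."""
    return (prob - (1.0 if outcome else 0.0)) ** 2

scores = [brier(p, o) for _, p, o in log]
hit_rate = mean(1.0 if (p >= 0.5) == o else 0.0 for _, p, o in log)

print(f"mean Brier score: {mean(scores):.3f} (lower is better)")
print(f"calls on the right side of 50%: {hit_rate:.0%}")
# Reviewing these numbers against the original log makes selective memory
# of "being right all along" much harder to sustain.
```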

Debiasing Strategies: What Works

| Strategy | How It Counters Bias | Evidence Base |
| --- | --- | --- |
| Consider the opposite | Generates reasons the belief might be false | Galinsky and Moskowitz (2000): reduces anchoring and related biases |
| Pre-mortem | Imagines failure before commitment; activates prospective hindsight | Mitchell, Russo, and Pennington (1989): ~30% improvement in identifying reasons for outcomes |
| Red team / structured disagreement | Assigns the critic role explicitly; ensures opposition is represented | Used in military, intelligence, and high-stakes corporate decisions |
| Falsificationist framing | Reframes the search from confirming to disconfirming | Popper (1959); reduces confirmatory search in experimental settings |
| Prediction tracking | Makes actual accuracy visible over time; undermines selective memory | Tetlock (2015): most reliable calibration tool among superforecasters |
| Adversarial collaboration | Partners with critics to design shared tests | Produces research designs that both sides accept as fair |
| Structured deliberation protocols | Requires explicit engagement with minority views before deciding | Group decision research: reduces groupthink, improves coverage of evidence |

The Limits of Debiasing

An important caveat: not all bias should be eliminated. Some degree of confidence in one's beliefs is necessary to act. A person paralyzed by the possibility that every belief might be wrong is not a better decision-maker; they are simply less functional.

The goal of debiasing is not to approach all beliefs with equal skepticism but to calibrate the degree of confidence to the quality of the evidence -- and to create conditions in which genuinely disconfirming information can reach and update our beliefs when it should. In high-stakes, one-way-door decisions, the cost of systematic bias is high. In routine decisions, the cognitive overhead of counteracting it may not be worth the benefit.

There is also an important asymmetry in the application of debiasing. The people most likely to seek out information about confirmation bias are probably already inclined toward epistemic humility; the people most likely to benefit from systematic debiasing are least likely to engage with it voluntarily. This is sometimes called the "sophistication effect" in the debiasing literature: training in critical thinking tends to make intelligent people better at rationalizing their preferred conclusions, not just better at correcting for bias.

The most durable insight from decades of research on confirmation bias is therefore not a technique but an orientation: hold your beliefs as provisional hypotheses rather than established facts, and set up systems -- prediction records, red teams, deliberate search for contrary evidence -- that make updating possible when the evidence warrants it. The alternative is not certainty. It is the comfortable illusion of certainty, maintained at the cost of being systematically wrong about the things that matter most.

Understanding confirmation bias is not about achieving an impossible epistemic purity. It is about being less wrong about the things that matter most.

References

  1. Wason, P. C. (1960). On the failure to eliminate hypotheses in a conceptual task. Quarterly Journal of Experimental Psychology, 12(3), 129-140.
  2. Lord, C. G., Ross, L., & Lepper, M. R. (1979). Biased assimilation and attitude polarization. Journal of Personality and Social Psychology, 37(11), 2098-2109.
  3. Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2(2), 175-220.
  4. Snyder, M., & Uranowitz, S. W. (1978). Reconstructing the past: Some cognitive consequences of person perception. Journal of Personality and Social Psychology, 36(9), 941-950.
  5. Sharot, T., Korn, C. W., & Dolan, R. J. (2011). How unrealistic optimism is maintained in the face of reality. Nature Neuroscience, 14(11), 1475-1479.
  6. Hart, W., Albarracin, D., Eagly, A. H., Brechan, I., Lindberg, M. J., & Merrill, L. (2009). Feeling validated versus being correct. Psychological Bulletin, 135(4), 555-588.
  7. Mercier, H., & Sperber, D. (2011). Why do humans reason? Arguments for an argumentative theory. Behavioral and Brain Sciences, 34(2), 57-74.
  8. Tetlock, P. E., & Gardner, D. (2015). Superforecasting: The Art and Science of Prediction. Crown Publishers.
  9. Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251).
  10. Bakshy, E., Messing, S., & Adamic, L. A. (2015). Exposure to ideologically diverse news and opinion on Facebook. Science, 348(6239), 1130-1132.
  11. Galinsky, A. D., & Moskowitz, G. B. (2000). Perspective-taking: Decreasing stereotype expression, stereotype accessibility, and in-group favoritism. Journal of Personality and Social Psychology, 78(4), 708-724.
  12. Mitchell, D. J., Russo, J. E., & Pennington, N. (1989). Back to the future: Temporal perspective in the explanation of events. Journal of Behavioral Decision Making, 2(1), 25-38.

Frequently Asked Questions

What is confirmation bias?

Confirmation bias is the tendency to search for, favor, interpret, and remember information in ways that confirm what you already believe. It operates at three stages: information gathering (we seek out sources that agree with us), interpretation (we interpret ambiguous evidence as supporting our views), and memory (we better recall evidence that fits our beliefs). The result is that our beliefs become self-reinforcing regardless of whether they are correct, because we naturally filter the information environment to make them seem better supported than they are.

What is the Wason selection task and what does it reveal about confirmation bias?

The Wason selection task is a classic logical puzzle that reveals how poorly humans naturally test their own hypotheses. In the original version, participants see four cards showing E, K, 4, and 7, and are told each card has a letter on one side and a number on the other. The rule to test is 'If a card has a vowel on one side, it has an even number on the other.' Most people correctly choose to turn over E but fail to choose 7, which would falsify the rule if it had a vowel on the back. People naturally look for confirming evidence (E should have an even number) and miss the falsifying evidence (7 must not have a vowel).

Why does confirmation bias exist? Is it a flaw or an adaptation?

Confirmation bias likely evolved because rapid, consistent belief systems were more useful than perfectly calibrated but slow ones in ancestral environments. A hunter who maintained a consistent model of animal behavior and acted on it quickly was more effective than one who constantly revised their beliefs with each new data point. In stable environments with similar people, sharing and confirming beliefs also builds social cohesion. The problem is that these shortcuts misfire in modern environments where we encounter diverse information, face complex decisions, and interact with people whose worldviews differ from our own.

How does confirmation bias affect investors?

Investors who own a stock tend to read positive news about it more carefully than negative news, recall their correct predictions about it better than their incorrect ones, and seek out analysis that supports holding it. This can lead to holding losing positions too long and averaging down rather than cutting losses. Research by Barber and Odean on individual investor behavior found that stocks investors bought subsequently underperformed stocks they sold, suggesting that their confidence in their portfolio decisions was systematically miscalibrated in the direction their existing holdings biased them.

What are the most effective strategies for reducing confirmation bias?

The most evidence-supported debiasing strategies are: actively seeking disconfirming evidence before making a decision (ask 'what would have to be true for me to be wrong?'); using structured devil's advocate processes where someone is assigned to argue the opposing view; considering the opposite by explicitly imagining the opposite conclusion and generating reasons why it might be true; and using pre-mortems that require imagining a decision has failed and generating reasons why. These structured approaches work better than simply telling people to 'be less biased,' which research consistently shows has little effect.