In 2018, three researchers at MIT — Soroush Vosoughi, Deb Roy, and Sinan Aral — published a paper in Science that would become one of the most cited and most discussed findings in the study of how information spreads online. The paper analyzed the spread of 126,000 news stories shared by approximately three million people on Twitter between 2006 and 2017. It was the largest study of its kind ever conducted, covering essentially the full history of the platform through its rapid growth years. What Vosoughi, Roy, and Aral found was unambiguous and disturbing: false news stories spread faster, farther, deeper into sharing networks, and more broadly across the platform than true stories, and this was true in every content category they examined — politics, business, entertainment, science, urban legends, and natural disasters.

The margin was not small. A false news story was 70 percent more likely to be retweeted than a true story. True stories almost never reached more than 1,000 people; the top 1 percent of false news stories routinely cascaded to between 1,000 and 100,000 people. Sharing chains — the number of steps a story traveled from its original source — ran deeper for false news than for true news. And perhaps most importantly for those who blamed algorithmic amplification or bot activity: when the researchers removed all automated accounts from the analysis and looked only at human sharing behavior, the results were unchanged. Humans, not machines, were driving the asymmetric spread of false information.

The finding raised a question with uncomfortable implications: what does this reveal about the human mind and its relationship to information? If true news consistently loses the competition for attention and sharing, what does that tell us about the cognitive and social mechanisms that determine what people believe and spread?

"Falsehood flies, and truth comes limping after it." — Jonathan Swift, The Examiner (1710)


Key Definitions

Misinformation — False or inaccurate information, regardless of the intent of the person spreading it. Misinformation includes genuine errors, outdated information, and false claims shared in good faith by people who believe them. Most misinformation that circulates online is spread by people who do not know it is false.

Disinformation — False information that is deliberately created and spread with the intent to deceive. State-sponsored influence operations, coordinated inauthentic behavior networks, and strategically fabricated content designed to manipulate political opinion or behavior qualify as disinformation.

Malinformation — Accurate or substantially true information deployed with the intent to cause harm — for example, selectively releasing private communications to damage a political figure, or sharing accurate personal information to enable harassment. Malinformation is not false but is weaponized.

Illusory truth effect — The phenomenon in which repeated exposure to a claim increases its perceived truthfulness, even when the claim is known to be false. First described by Hasher, Goldstein, and Toppino (1977) and extensively replicated.

Motivated reasoning — Reasoning that is driven by a desired conclusion rather than by objective evaluation of evidence. People with strong motivations to reach particular conclusions process confirming evidence more readily and scrutinize disconfirming evidence more harshly. Relevant to understanding when and why people resist corrections to false beliefs.

Inoculation theory — A psychological approach to building resistance to misinformation that prebunks false claims by exposing people to weakened examples of manipulative techniques, together with explicit refutation, before they encounter the full-strength misinformation.

Accuracy nudge — A brief intervention that draws attention to the concept of accuracy before a person makes a sharing decision, shown in experiments to improve the quality of content people choose to share.

Backfire effect — The claim that corrections sometimes strengthen false beliefs by triggering identity-protective reasoning. The original finding has not replicated consistently in large-scale studies; the current evidence suggests corrections generally work, though effects are small.

Cognitive fluency — The ease with which information is mentally processed. High fluency is experienced as a positive signal and is often unconsciously attributed to familiarity and truthfulness. The illusory truth effect appears to operate through fluency mechanisms.


The Vosoughi, Roy, and Aral Study: What It Found and Why It Matters

The 2018 Science paper by Vosoughi, Roy, and Aral deserves careful attention because it resolved a methodological problem that had plagued earlier misinformation research: scale. Previous studies of misinformation spread had been limited to specific stories, specific platforms, or short time windows. Vosoughi and colleagues built their analysis on data from the platform's entire accessible history through 2017, covering all stories categorized by six independent fact-checking organizations as true or false.

The study measured spread along four dimensions: breadth (unique users reached), depth (length of sharing chains), speed (how quickly a story was shared), and structural virality (how deeply the story penetrated social networks rather than being broadcast from a single high-follower account). False news outperformed true news on all four dimensions.
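These cascade measures are straightforward to compute once a story's retweet tree has been reconstructed. The sketch below is a minimal illustration, assuming the networkx library and an invented toy cascade; it computes breadth, depth, and structural virality in the average-pairwise-distance sense introduced by Goel and colleagues, while the study's actual measurement pipeline was considerably more involved.

```python
# Minimal sketch: cascade metrics from a toy retweet tree (edges are invented).
import networkx as nx

# Each edge is (parent, child): who was retweeted by whom.
edges = [("origin", "a"), ("origin", "b"), ("a", "c"), ("a", "d"), ("c", "e")]
G = nx.DiGraph(edges)

breadth = G.number_of_nodes()  # unique users reached (cascade size)
depth = max(nx.shortest_path_length(G, "origin").values())  # longest chain from the seed

# Structural virality: average distance between all pairs of nodes in the
# undirected diffusion tree: high for peer-to-peer spread, low when a single
# high-follower account broadcasts directly to everyone.
structural_virality = nx.average_shortest_path_length(G.to_undirected())

print(f"breadth={breadth}, depth={depth}, structural_virality={structural_virality:.2f}")
```

Speed, the fourth dimension, would additionally require tweet timestamps, for example measuring the time a cascade takes to reach a given number of users.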

The novelty hypothesis — that false news spread further because it was more novel — was supported by the data: false news stories showed significantly higher novelty scores based on the semantic distance between the story and the sharing users' recent Twitter activity. Novel information attracts attention because the brain is wired to monitor for unexpected deviations from the predicted environment.
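The novelty measure is, at heart, a distance between what a story says and what a user has recently seen. The sketch below illustrates that idea using TF-IDF cosine similarity as a crude stand-in for the topic-model-based distances used in the paper; the tweets and story text are invented, and a real analysis would use far richer representations.

```python
# Minimal sketch: novelty as semantic distance from a user's recent activity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

recent_tweets = [
    "traffic on the bridge again this morning",
    "great game last night, what a finish",
    "coffee shop on fifth has the best espresso",
]
story = "shocking miracle cure that doctors refuse to talk about"

vectorizer = TfidfVectorizer()
vectors = vectorizer.fit_transform(recent_tweets + [story])

story_vec = vectors[len(recent_tweets)]          # last row is the story
similarity = cosine_similarity(story_vec, vectors[:len(recent_tweets)])[0]

novelty = 1 - similarity.mean()  # higher = more unlike what the user has recently seen
print(f"novelty score: {novelty:.2f}")
```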

The emotional arousal hypothesis was also supported: false political news generated higher rates of fear, disgust, and surprise in replies, while true news generated more trust, joy, and anticipation. Emotions associated with threat and negativity — fear and disgust in particular — have well-documented effects on information processing and social transmission, activating systems associated with both avoidance and warning communication.
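In the study, these emotional signatures were estimated by comparing the words in reply tweets against an emotion lexicon. The toy sketch below illustrates that style of lexicon-based counting; the mini-lexicon and replies are invented, whereas real analyses use full lexicons such as the NRC emotion lexicon with thousands of entries.

```python
# Minimal sketch: lexicon-based emotion rates in replies (all data invented).
import re
from collections import Counter

mini_lexicon = {
    "terrifying": "fear", "scary": "fear",
    "disgusting": "disgust", "gross": "disgust",
    "unbelievable": "surprise", "wow": "surprise",
    "reliable": "trust", "confirmed": "trust", "wonderful": "joy",
}

replies = [
    "wow this is terrifying if true",
    "disgusting, can't believe they kept this quiet",
    "confirmed by the agency, seems like a reliable source",
]

counts, total = Counter(), 0
for reply in replies:
    for word in re.findall(r"[a-z']+", reply.lower()):
        if word in mini_lexicon:
            counts[mini_lexicon[word]] += 1
            total += 1

rates = {emotion: n / total for emotion, n in counts.items()}
print(rates)  # false stories tended to draw higher fear/disgust/surprise rates
```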

The study's most consequential finding for policy purposes was the human agency result: removing all suspected bots from the analysis did not change the outcomes. This directly challenged a widespread narrative that blamed the spread of misinformation on automated amplification. While bots play a role in the information ecosystem, the asymmetric spread of false information is primarily a human phenomenon, driven by ordinary cognitive and social processes.


Why Misinformation Spreads: Competing Theories

The Motivated Reasoning Account

The dominant early theory of misinformation belief and spread was motivated reasoning: people believe and share false information because it is congenial to their pre-existing identities and beliefs. Research by Dan Kahan at Yale Law School on "cultural cognition" showed that people's positions on politically contested empirical questions — climate change, gun control, nuclear waste storage risk — were strongly predicted by their cultural identity and group membership. People with strong tribal identities processed evidence in ways that protected those identities, crediting evidence that supported their group's position and scrutinizing contrary evidence harshly.

On this account, partisan misinformation spreads because it tells conservatives what conservatives want to believe and liberals what liberals want to believe. Correcting it is difficult because corrections threaten identity, triggering defensive responses.

The Inattention Account

A competing theory, developed primarily by Gordon Pennycook and David Rand, argues that most misinformation spread is driven not by motivated reasoning but by inattention: people share misinformation not because they want to deceive or because the content flatters their identity, but because they are not thinking about accuracy at the moment they decide to share.

Social media sharing is a fast, low-friction activity. The dominant motivations are social — people share content to entertain, to signal membership, to express emotion, to connect with others. Accuracy is not the primary criterion people apply when deciding whether to share. When people are prompted to think about accuracy, their sharing behavior improves. This "accuracy nudge" finding — robust across many experiments by Pennycook, Rand, and colleagues — supports the inattention account: if people were motivated to share misinformation regardless of its accuracy, prompting them to think about accuracy would not change their behavior.
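A convenient way to see what an accuracy nudge is claimed to do is "sharing discernment": the gap between willingness to share true and false headlines, compared across a control condition and a prompted condition. The sketch below uses invented numbers purely to illustrate the measure, not results from any particular experiment.

```python
# Minimal sketch: "sharing discernment" under a hypothetical accuracy prompt.
def discernment(share_rate_true: float, share_rate_false: float) -> float:
    """Gap between willingness to share true and false headlines."""
    return share_rate_true - share_rate_false

control  = {"true": 0.42, "false": 0.30}   # hypothetical sharing rates, no prompt
prompted = {"true": 0.41, "false": 0.22}   # hypothetical rates after an accuracy prompt

print("control discernment: ", round(discernment(control["true"], control["false"]), 2))
print("prompted discernment:", round(discernment(prompted["true"], prompted["false"]), 2))
# The inattention account predicts the prompt mainly suppresses sharing of false
# content, widening this gap, rather than changing sharing of true content much.
```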

Reconciling the Two Accounts

Both accounts capture important aspects of the phenomenon. Kahan's motivated reasoning framework better explains why people resist corrections to highly identity-loaded beliefs — why evangelical Christians reject evolution despite compelling evidence, or why committed partisans maintain beliefs about their party's performance despite contrary data. Pennycook and Rand's inattention framework better explains the everyday sharing of low-stakes misinformation — the health claims, the celebrity gossip, the political memes that do not directly threaten core identities.

The distinction matters practically: if misinformation spread is primarily motivated, structural interventions (accuracy prompts, friction, labels) will have limited effects on the most identity-relevant content. If it is primarily about inattention, structural interventions should be effective. The evidence suggests both: accuracy nudges work for much content but show weaker effects on the most ideologically charged claims.


The Illusory Truth Effect: Repetition as a Truth Signal

One of the most important and counterintuitive findings in misinformation research is that repeating a false claim to debunk it may, under some conditions, make the claim more believable.

The illusory truth effect, established in a seminal 1977 experiment by Lynn Hasher, David Goldstein, and Thomas Toppino, documents that statements rated as false at first exposure are subsequently rated as more true when encountered again. The mechanism appears to involve cognitive fluency: when a statement is re-encountered, the neural pathways associated with processing it are already primed, making processing easier. The brain experiences this fluency as a positive signal — easy-to-process information feels more familiar, and familiar information feels more true.

The effect has been replicated extensively and shows several important characteristics. It operates across time delays of days to weeks. It occurs for clearly implausible statements as well as plausible ones. And — in a finding with profound implications for debunking — it operates even when people were explicitly told at first exposure that the statement was false. A 2018 study by Pennycook, Cannon, and Rand found that statements labeled "FALSE" at first exposure were subsequently rated as somewhat truer than statements seen for the first time, though they were still rated less true than statements labeled "TRUE." Prior exposure mattered even when the prior exposure included an explicit falsity warning.

This finding does not mean debunking is counterproductive — the same study found that truth labels also increased perceived truth, and that the gap between items explicitly labeled true and items explicitly labeled false was maintained. But it does mean that strategies centered on widely repeating a false claim for the purpose of rebutting it carry some risk of increasing the claim's perceived credibility among those who do not closely attend to the rebuttal. Some researchers recommend "truth-forward" correction strategies that emphasize the accurate alternative without repeating the false claim extensively.


The Backfire Effect: A Correction Corrected

For several years after Nyhan and Reifler's 2010 paper in Political Behavior, "When Corrections Fail," the backfire effect was treated as a robust finding: when people are shown evidence that contradicts a politically motivated belief, some of them double down, strengthening rather than moderating the belief. The paper showed that conservative participants given accurate information about the absence of WMD in Iraq, or the consequences of tax cuts on the deficit, sometimes became more confident in their original incorrect beliefs.

The finding was widely cited, influenced communication strategy, and generated a pessimistic consensus that fact-checking was not only ineffective but potentially harmful for the most strongly held partisan beliefs.

Subsequent replication efforts substantially revised this consensus. A 2019 study by Thomas Wood and Ethan Porter, published in Political Behavior, tested corrections for 52 specific political misperceptions across multiple experiments with large samples and found that corrections reliably moved beliefs in the accurate direction. The backfire effect — belief strengthening in response to correction — appeared rarely and inconsistently across their data. When they directly replicated some of the Nyhan-Reifler conditions, they did not find backfire effects.

The current state of the evidence is roughly: corrections work in that they tend to reduce false belief among those who see them, but effects are modest, they work better for factual than for value-laden claims, and they work better for people with weaker prior commitments to the false belief. The most important limitation of correction is not backfire but reach: the fact-check rarely reaches the same population as the false claim.


Vaccine Misinformation: A Case Study

No domain of misinformation has more clearly documentable public health consequences than vaccine safety claims. The modern anti-vaccine movement traces its origins to a 1998 paper by Andrew Wakefield and colleagues published in The Lancet, which claimed a link between the MMR vaccine and autism. The paper was subsequently retracted by The Lancet after investigation revealed that Wakefield had financial conflicts of interest, had manipulated data, and had conducted invasive procedures on children without ethical approval. Wakefield lost his medical license. The putative MMR-autism link has been examined in numerous large epidemiological studies and is not supported by evidence.

Despite complete scientific refutation, the Wakefield claim spread and persisted. Studies of its diffusion on social media document the characteristic patterns of misinformation spread: initial acceleration driven by emotionally resonant content (parental fear for children's health), network clustering in communities that share skepticism of institutional medicine, and resilience to debunking because the claim resonates with identity and values (parental agency, distrust of pharmaceutical industry) rather than being merely a factual dispute.

Research by Margolin, Hannak, and Levy on vaccine misinformation spread found that the messenger significantly affected whether corrections were effective: corrections from trusted in-group sources were more effective than corrections from out-group sources or from sources associated with institutional medicine. This finding aligns with broader research on persuasion and identity-protective cognition.

COVID-19 produced an accelerated version of the same dynamics. Multiple false claims — about the origin of the virus, the safety of vaccines, and the efficacy of unproven treatments — spread globally at unprecedented speed. The "infodemic," as the World Health Organization termed it, was shaped by the same mechanisms that Vosoughi, Roy, and Aral had documented: novelty, emotional arousal, and the structural advantages that false information enjoys in viral sharing environments.


Prebunking and Inoculation: Building Resistance

If debunking is limited by reach and by the illusory truth effect, can people be made more resistant to misinformation before they encounter it? Inoculation theory — developed by William McGuire in the 1960s and extended to the misinformation domain by Sander van der Linden at Cambridge — suggests they can.

The inoculation approach involves exposing people to weakened examples of manipulative techniques — techniques used across many specific misinformation claims — together with explicit explanation of why those techniques are misleading. The goal is not to address specific false claims but to build generalized resistance to the categories of manipulation that false claims employ: emotional manipulation, false dilemmas, ad hominem attacks, cherry-picking evidence, false attribution to experts, and appeals to conspiracy.

Van der Linden and colleagues developed the "Bad News" online game (badnews.nl), in which players take on the role of a misinformation producer and learn to deploy six manipulation techniques. Participants who played the game showed reduced susceptibility to misinformation in subsequent tests, and the effect held across political groups. A browser-based "Go Viral!" game targeting COVID-19 misinformation techniques showed similar effects.

A 2022 study published in Science Advances, led by Jon Roozenbeek, tested whether short prebunking videos distributed through YouTube's advertising system could inoculate large populations against misinformation techniques. The study reached approximately 1.7 million users in the European Union. Viewers who saw the prebunking ads were better at identifying manipulative techniques in subsequent tests than control users. Effect sizes were modest — consistent with the broader literature on behavioral interventions — but the scale makes even small per-person effects meaningful in aggregate.
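The aggregate logic here is simple arithmetic: a small average improvement applied across a very large audience still translates into a large absolute number of people. The figures below are illustrative only; the per-person effect is an assumed number, not one reported by the study.

```python
# Back-of-envelope sketch: small per-person effects at large scale.
viewers = 1_700_000   # audience reached, per the campaign described above
effect = 0.05         # assumed: 5-point improvement in correctly flagging manipulation

additional_correct = viewers * effect
print(f"~{additional_correct:,.0f} more people correctly identifying a manipulative post")
```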

The prebunking approach has several advantages over traditional fact-checking: it scales horizontally (it teaches a skill rather than correcting individual claims), it is prospective rather than reactive, and its effects appear to be politically non-partisan. Its limitations include the durability of inoculation effects over time and the challenge of reaching people who are not already somewhat skeptical.


Platform Design and Structural Interventions

The question of what platforms can do to reduce misinformation without compromising free expression has generated both substantial research and substantial controversy.

Friction and accuracy prompts: Twitter's 2020 intervention prompting users to open an article before retweeting it reduced sharing of articles from misinformation-linked sources by about 20 percent in the week following implementation. Multiple controlled experiments by Pennycook, Rand, and colleagues have shown that presenting a single accuracy-related question before a sharing decision increases the proportion of accurate content people choose to share. The inattention framework predicts this: if sharing is driven partly by failure to think about accuracy, prompting accuracy consideration improves outcomes.

Context labels and community notes: Fact-check labels and context labels attached to specific pieces of content reduce engagement with labeled content, though the effect size varies by implementation. Twitter's Community Notes (formerly Birdwatch) allows users to collaboratively annotate misleading tweets with context; early research on the system suggests that tweets carrying publicly rated notes see reduced engagement and resharing. The implied truth effect — whereby unlabeled content appears implicitly endorsed — remains a concern.

Algorithmic amplification reduction: Perhaps the most debated structural question is whether reducing algorithmic amplification of high-engagement content would reduce misinformation spread. Because misinformation tends to be emotionally arousing and therefore high-engagement, algorithms that amplify engagement also amplify misinformation. Reducing algorithmic amplification is not the same as removing content — it simply reduces the speed and scale at which content spreads. Research and policy debates on this question are ongoing, with platforms reluctant to change amplification systems that drive engagement and revenue.
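One way to make the trade-off concrete is a toy ranking function that blends predicted engagement with an external credibility signal; turning down the engagement weight damps amplification of low-credibility, high-arousal content without removing anything. This is an illustrative sketch only, not a description of any platform's actual ranking system, and both the scoring formula and the example items are invented.

```python
# Illustrative sketch: damping engagement-based amplification (invented data).
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    predicted_engagement: float  # e.g., model-predicted probability of interaction
    source_credibility: float    # e.g., 0..1 score from an external assessment

def score(item: Item, engagement_weight: float) -> float:
    # engagement_weight=1.0 is pure engagement ranking; lower values blend in
    # credibility and damp amplification of low-credibility items.
    return (engagement_weight * item.predicted_engagement
            + (1 - engagement_weight) * item.source_credibility)

feed = [
    Item("outrage-bait", predicted_engagement=0.9, source_credibility=0.2),
    Item("wire-report", predicted_engagement=0.4, source_credibility=0.9),
]

for w in (1.0, 0.5):  # pure engagement ranking vs. a damped blend
    ranked = sorted(feed, key=lambda item: score(item, w), reverse=True)
    print(w, [item.name for item in ranked])
```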

Structural vs. individual approaches: A recurring debate in misinformation research is whether the focus should be on changing individual cognition (through education, inoculation, accuracy prompts) or on changing the structural conditions that make misinformation profitable and widely distributed. The individualist approach assumes people can learn to identify and resist misinformation; the structuralist approach argues that the information environment is designed to exploit human cognitive vulnerabilities and must be redesigned. Both levels of intervention have evidence supporting them; both have limits.

For more on how false information is deliberately manufactured and deployed, see why disinformation spreads and why conspiracy theories spread. For analysis of how epistemic disagreements persist even among experts, see why experts disagree.


Frequently Asked Questions

What is the difference between misinformation and disinformation?

The distinction between misinformation and disinformation turns on intent, and it matters both analytically and practically.

Misinformation refers to false or inaccurate information, regardless of the intent of whoever is spreading it. Someone who forwards a false health claim because they genuinely believe it is spreading misinformation, even if they have no intention to deceive. Much of what circulates online falls into this category: people share things that are wrong not because they want to mislead but because they found the content surprising, emotionally resonant, or consistent with their existing beliefs and did not verify it.

Disinformation refers specifically to false information that is deliberately created and spread with the intent to deceive. State-sponsored influence operations, coordinated inauthentic behavior networks, and deliberately fabricated 'news' designed to manipulate political opinion all qualify as disinformation. The intent to deceive distinguishes disinformation from honest error.

A third category, malinformation, refers to information that is true or largely accurate but is deployed with the intent to cause harm — for example, releasing private communications selectively to damage a political opponent, or sharing accurate information about a person to enable harassment. Malinformation is true but weaponized.

In practice, the distinction between misinformation and disinformation is difficult to observe, because intent is rarely transparent. A piece of content can begin as deliberate disinformation created by a state actor or political operative and then be picked up and shared by ordinary people who believe it is true — at which point those individuals are spreading misinformation rather than disinformation, even though the original creation was intentional deception.

From a public health and policy perspective, the distinction matters because the appropriate response differs. Countering disinformation requires identifying and disrupting its sources; countering misinformation requires understanding the cognitive and social mechanisms that lead people to share inaccurate content in good faith.

Why does false information spread faster than true information?

The 2018 study by Vosoughi, Roy, and Aral in Science — the largest systematic analysis ever conducted of information spread on a social platform — documented that false news stories spread faster, reached more people, penetrated more deeply into sharing networks, and were retweeted more broadly than true news stories across all categories of information, including politics, business, science, and entertainment. The false-news advantage was not driven by bots: even after removing automated accounts from the analysis, human-driven sharing showed the same asymmetry. False news was 70 percent more likely to be retweeted than true news.

The researchers proposed two mechanisms to explain this pattern. First, novelty: false stories were more novel than true stories — they contained more surprising or unexpected information. Research on human attention consistently shows that novelty captures and holds attention. Content that tells us something new, unexpected, or counter-intuitive is more likely to be noticed, processed deeply, and remembered. Because true information tends to represent the consensus reality we already know, it is often less surprising than fabricated claims, which are free to be as dramatic as their creators wish.

Second, emotional arousal: false news stories generated stronger emotional reactions, particularly those associated with fear, disgust, and surprise. Emotional arousal, particularly negative arousal, is one of the strongest known predictors of content sharing. Content that makes people angry or afraid activates the nervous system in ways that motivate action — including the action of sharing with others.

The combination is powerful: a false claim can be crafted to be simultaneously novel and emotionally activating in ways that true information, constrained by reality, cannot be. Reality is what it is; fabricated content can be optimized for virality. The asymmetry in spread is therefore not surprising from a psychological standpoint, even if its scale was more severe than researchers had anticipated.

Do fact-checks actually work?

The evidence on fact-checking effectiveness is more nuanced than either its proponents or critics suggest. Fact-checks do work — in the sense that they reduce belief in false claims among people who are exposed to them — but their effects are modest, and their reach is severely limited by the information environment.

The early literature on 'backfire effects' — the claim that corrections of false beliefs sometimes strengthened those beliefs among people who held them — generated enormous popular and academic attention following the Nyhan and Reifler 2010 paper in Political Behavior. This finding, replicated in several early studies, suggested that fact-checking might be not merely ineffective but actively harmful. However, subsequent large-scale replication studies have largely failed to find consistent backfire effects. A 2019 study by Wood and Porter, published in Political Behavior, tested corrections of 52 political misperceptions across multiple experiments and found that corrections reliably moved beliefs in the accurate direction — but effects were small, particularly for ideologically charged claims.

The current consensus among researchers is roughly: corrections work, but moderately; they work better for claims without strong partisan valence; people who seek out fact-checks tend to already be skeptical of the claims they check; and the asymmetry in reach between viral misinformation and the fact-checks that follow it is severe. A false claim shared millions of times on social media may reach far more people than the fact-check ever will, even if the fact-check is effective among those who see it.

Platform-native interventions that add friction to sharing — prompting users to consider accuracy before sharing, adding context labels, reducing algorithmic amplification of contested content — have shown some evidence of effectiveness in experimental settings, though the scale of real-world effects remains debated. The evidence most consistently supports 'accuracy nudges': brief prompts that direct attention to accuracy before sharing decisions are made.

What is the illusory truth effect?

The illusory truth effect is the well-documented phenomenon in which repeated exposure to a claim increases people's perceived truth of that claim, even when the claim is false and even when people already know it is false.

The effect was first identified by Hasher, Goldstein, and Toppino in a 1977 paper in the Journal of Verbal Learning and Verbal Behavior. Participants rated the truth of various statements; two weeks later, they rated them again, with the list including some new statements and some they had seen before. Repeated statements were rated as more true than new statements, regardless of their actual accuracy.

The mechanism appears to involve cognitive fluency — the ease with which information is processed. When we encounter a statement we have seen before, processing is easier: the neural pathways involved are already somewhat primed. This fluency is experienced as a positive signal and is unconsciously attributed to truth — familiar things feel more true, because in ordinary experience, things we have encountered multiple times usually are reliable parts of shared reality. Familiarity and truth are correlated in most of our experience, so the brain uses familiarity as a proxy for truth. Misinformation exploits this heuristic.

A 2018 study by Pennycook, Cannon, and Rand found that the illusory truth effect occurred even when participants had been explicitly told that certain statements were false — the effect of prior exposure persisted despite conscious knowledge of falsity. This finding is particularly troubling for debunking-based approaches to misinformation: repeating a false claim to debunk it may inadvertently make the claim feel more true to some portion of the audience.

The illusory truth effect has significant implications for media strategy around misinformation. Repeated amplification of a false claim — even in the context of correction — may increase its perceived credibility among those who do not closely attend to the correction. Some researchers have recommended that debunking strategies emphasize the true alternative rather than repeating the false claim.

What is inoculation theory?

Inoculation theory, developed by psychologist William McGuire in the 1960s and extended to misinformation by Sander van der Linden and colleagues at the University of Cambridge, draws an explicit analogy to vaccination. Just as a weakened form of a pathogen can prime the immune system to resist future infection, exposure to a weakened form of misinformation — together with explicit refutation of its techniques — can prime cognitive resistance to future manipulation attempts.

The key insight is prebunking rather than debunking: inoculating people against misinformation before they encounter it, rather than correcting them after. The inoculation approach involves two components. First, a warning that someone may try to mislead them. Second, a weakened example of the misleading technique, together with an explanation of why it is misleading. The combination activates what McGuire called 'resistance to persuasion' — a heightened critical processing mode.

Van der Linden and colleagues have identified several general techniques used to spread misinformation that can be inoculated against: emotional manipulation, false dilemmas, red herrings, ad hominem attacks, conspiracy theories, and misrepresentation of scientific consensus. Their 'Bad News' game and its successors have been shown in experiments to reduce susceptibility to misinformation across ideological lines — inoculation appears to be politically non-partisan in its effects, which is significant given the partisan valence of most misinformation interventions.

A 2022 study published in Science Advances found that short prebunking videos — deployed through YouTube advertising at scale — reduced susceptibility to misinformation techniques among millions of viewers, though effect sizes were modest and the durability of effects over time remains a subject of ongoing research.

Inoculation theory represents a shift in perspective from reactive to proactive: rather than chasing individual false claims with individual corrections, it attempts to build generalizable resistance to manipulation techniques. This scalability advantage is significant given the production scale of modern misinformation.

What platform changes most effectively reduce misinformation?

Research on platform-level interventions to reduce misinformation suggests that friction-based approaches have the strongest evidence base, though effect sizes remain modest and the ecology of real-world platforms is more complex than experimental settings can fully model.

Friction refers to interventions that slow down the sharing process without preventing sharing. Twitter's 2020 prompt that asked users whether they wanted to read an article before sharing it reduced sharing of articles from misinformation-flagged sources by approximately 20 percent in the week after implementation. An experimental study by Pennycook, Epstein, Mosleh, Arechar, Eckles, and Rand, published in Nature, found that a simple accuracy prompt shown before a sharing decision increased the accuracy of content people said they would share, consistent with the 'inattention hypothesis' — people share misinformation not primarily because they are motivated to deceive but because they are not thinking about accuracy at the moment of sharing.

Context labels — labels attached to content indicating that it has been fact-checked, that it is disputed, or that additional context is available — show mixed evidence. Twitter's 'Community Notes' (formerly Birdwatch) allows users to collaboratively add context to tweets; research suggests rated notes do reduce engagement with the labeled content. However, labels can produce implied truth effects for unlabeled content: if only some false claims are labeled, the absence of a label may function as an implicit endorsement.

Reducing algorithmic amplification of high-engagement content is theoretically important but empirically understudied at scale. High-engagement content is algorithmically amplified by design, and misinformation tends to generate high engagement (as Vosoughi et al. documented). Reducing amplification of viral content broadly would address this problem but would also reduce distribution of all high-engagement content, creating tensions with platform business models that depend on maximizing engagement.

Media literacy programs — educational interventions that teach people to critically evaluate sources and claims — show weak average effects in experimental studies, though some targeted programs for specific populations show more promise. The evidence suggests that general media literacy training does not reliably transfer to improved performance in naturalistic information environments.