In December 2016, a gunman entered Comet Ping Pong, a pizza restaurant in Washington D.C., and fired three shots. He had driven from North Carolina to personally investigate a conspiracy theory he had read on social media claiming the restaurant was a front for a child trafficking operation run by senior Democratic Party officials. The conspiracy, known as Pizzagate, was entirely fabricated. There was no trafficking ring, no basement, no evidence of any kind. But the story had spread across Facebook, Reddit, Twitter, and YouTube at a speed and reach that no correction could match.
The incident crystallized something researchers had suspected for years: false information travels differently from true information. It does not merely slip through the cracks of an imperfect media system. It is actively preferred -- spread faster, farther, and with greater enthusiasm by the very people who read it.
In 2018, three MIT researchers produced the most comprehensive scientific answer to why this happens. Soroush Vosoughi, Deb Roy, and Sinan Aral analyzed every verified true and false news story that had spread on Twitter from 2006 to 2017. The dataset contained approximately 126,000 distinct information cascades, involving roughly 3 million people making 4.5 million shares. Their findings, published in Science, were unambiguous: false news was not simply as viral as true news. It was dramatically more viral. And the mechanism was human behavior, not automated amplification.
Key Definitions
Disinformation: False information spread with intent to deceive. The creator knows the content is false.
Misinformation: False information spread without intent to deceive. The sharer believes it is accurate.
Malinformation: True information spread with intent to harm. Leaked private communications are an example.
Information cascade: A sequence of sharing events in which one person shares content, another reposts it, others repost from them, and so on -- forming a measurable tree of propagation.
Illusory truth effect: The cognitive mechanism by which repeated exposure to a claim increases its perceived truth, independent of whether the claim was labeled as false when first encountered.
Motivated reasoning: The psychological process by which people evaluate evidence not to reach accurate conclusions but to reach conclusions they were already motivated to believe. Formalized by Ziva Kunda in 1990.
Inoculation theory: The communication framework in which pre-exposure to weakened forms of manipulation techniques reduces susceptibility to subsequent full-strength disinformation.
Pre-bunking: Applying inoculation theory proactively, before people encounter false content, by teaching the persuasive techniques rather than specific false claims.
Liar's dividend: The advantage bad actors gain from deepfake technology and widespread awareness of disinformation: real incriminating content can be dismissed as fabricated.
The MIT Study: What 126,000 Cascades Revealed
The Vosoughi, Roy, and Aral 2018 study is the foundation of modern disinformation science. Its methodology addressed a core weakness in earlier research: the difficulty of distinguishing true from false information at scale.
The researchers used a dataset of fact-checked stories verified by six independent organizations -- Snopes, PolitiFact, FactCheck.org, Truth or Fiction, Hoax Slayer, and Urban Legends. Each story in the dataset had been classified as true or false. They then traced every instance of that content appearing on Twitter, mapping the cascade structure (who shared from whom) for each item.
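The cascade structure the researchers mapped lends itself to a concrete illustration. The sketch below, in Python with made-up share data, builds a share tree from (parent, child) resharing edges and computes the size, depth, and breadth measures that appear in the table that follows; it is an illustration of the metrics, not the study's actual pipeline.

```python
from collections import defaultdict, deque

def cascade_metrics(edges, root):
    """Compute size, depth, and max breadth of a share cascade.

    edges: list of (parent, child) pairs, where child reshared from parent.
    root: the account that posted the original story.
    """
    children = defaultdict(list)
    for parent, child in edges:
        children[parent].append(child)

    size = 1                      # the original post counts as the first share
    depth = 0                     # hops from the original post
    breadth_per_level = defaultdict(int)
    breadth_per_level[0] = 1

    queue = deque([(root, 0)])
    while queue:
        node, level = queue.popleft()
        for child in children[node]:
            size += 1
            depth = max(depth, level + 1)
            breadth_per_level[level + 1] += 1
            queue.append((child, level + 1))

    return {"size": size, "depth": depth, "max_breadth": max(breadth_per_level.values())}

# Hypothetical cascade: A posts, B and C reshare from A, D reshares from B.
print(cascade_metrics([("A", "B"), ("A", "C"), ("B", "D")], root="A"))
# -> {'size': 4, 'depth': 2, 'max_breadth': 2}
```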
The results across every category of information:
| Metric | False News | True News |
|---|---|---|
| Speed of spread | 6x faster | Baseline |
| Reach at peak spread | 10x more people | Baseline |
| Depth of cascade | Deeper branching | Shallower |
| Novelty (relative to a user's prior exposure) | Significantly more novel | Baseline |
| Likelihood of retweet | 70% more likely | Baseline |
| Bot-driven spread | Not significantly different | Not significantly different |
That final row is among the most important findings. When the researchers controlled for bot activity, the differential spread did not diminish. Automated accounts shared true and false news at comparable rates. The acceleration of false news was attributable to human behavior.
To understand why, the researchers analyzed the emotional content of the replies that true and false stories attracted, scoring reply text against a word-emotion lexicon. False news consistently drew replies expressing surprise and disgust. True news was more likely to draw anticipation, sadness, joy, and trust. Surprise and disgust are high-arousal states that motivate sharing. They signal: this information departs from my expectations, and it violates my sense of how things should be.
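That scoring approach can be illustrated with a toy version of the lexicon analysis. The word list below is invented for the example and merely stands in for a full resource such as the NRC emotion lexicon; the function counts emotion-category hits in reply text and normalizes them.

```python
from collections import Counter
import re

# Toy emotion lexicon standing in for a full resource such as the NRC lexicon.
# These word-to-emotion mappings are illustrative only.
TOY_LEXICON = {
    "shocking": "surprise", "unbelievable": "surprise", "wow": "surprise",
    "disgusting": "disgust", "gross": "disgust", "vile": "disgust",
    "hopeful": "anticipation", "soon": "anticipation",
    "sad": "sadness", "happy": "joy", "trustworthy": "trust",
}

def emotion_profile(replies):
    """Count emotion-category hits across a set of reply texts, as proportions."""
    counts = Counter()
    for text in replies:
        for word in re.findall(r"[a-z']+", text.lower()):
            if word in TOY_LEXICON:
                counts[TOY_LEXICON[word]] += 1
    total = sum(counts.values()) or 1
    return {emotion: n / total for emotion, n in counts.items()}

replies_to_false_story = ["Wow, this is shocking", "Absolutely disgusting if true"]
print(emotion_profile(replies_to_false_story))
# surprise and disgust dominate the profile for these hypothetical replies
```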
"Falsehood flies, and truth comes limping after it." -- Jonathan Swift, The Examiner, 1710
Swift wrote this three centuries before Twitter existed. The mechanism he identified -- the asymmetric speed advantage of false information -- has now been measured precisely.
Novelty as the Primary Amplification Mechanism
Why is false news more novel? Because fabrication faces none of the constraints that reality imposes.
True information must correspond to what actually happened. It is constrained by events, documents, and verifiable facts. False information is constrained only by what sounds plausible enough to be believed and emotionally provocative enough to spread.
Research in information theory and cognitive psychology converges on a core insight: novel information activates the brain's orienting response, a neurological mechanism that prioritizes incoming stimuli that deviate from prior expectations. The orienting response was adaptive in an ancestral environment where unexpected stimuli often signaled threats or opportunities. In an information environment, it means that content which surprises us commands attention and motivates action, including the action of forwarding it to others.
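The MIT team quantified novelty in information-theoretic terms, comparing the topics of an incoming tweet with the topics a user had recently been exposed to. The sketch below illustrates one such comparison using KL divergence over hypothetical topic distributions; the topic labels and numbers are invented, and the paper's actual pipeline (topic models fit over each user's prior tweets) is more involved.

```python
import math

def kl_divergence(p, q, eps=1e-9):
    """KL divergence D(p || q) between two discrete topic distributions."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Hypothetical topic mix over [politics, health, sports, celebrity]
user_history     = [0.40, 0.30, 0.20, 0.10]   # what the user usually sees
routine_tweet    = [0.45, 0.25, 0.20, 0.10]   # close to the usual diet
surprising_tweet = [0.05, 0.05, 0.05, 0.85]   # departs sharply from it

print(kl_divergence(routine_tweet, user_history))     # small divergence
print(kl_divergence(surprising_tweet, user_history))  # much larger divergence
```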
False news writers understand this intuitively. Stories claiming that a politician committed a crime are more surprising than stories confirming that a politician acted predictably. Stories claiming that a common food is secretly toxic are more surprising than reminders that well-established dietary advice remains unchanged. The fabricator's advantage is the ability to write the most surprising version of events unconstrained by what actually happened.
This is why corrections consistently fail to spread as far as the original false content. Corrections are, almost by definition, less novel. "That story was false" is less surprising than the false story itself. A correction rarely generates the orienting response that drove the original cascade.
Motivated Reasoning: Why People Believe What They Want to Believe
The novelty mechanism explains why false news gets shared faster. It does not fully explain why people believe it in the first place, especially when it confirms their pre-existing political or social beliefs.
Ziva Kunda's 1990 paper in Psychological Bulletin established the formal framework for motivated reasoning. Kunda's core finding was that people do not simply evaluate evidence to reach accurate conclusions. They reason toward conclusions they are already motivated to reach, and they use apparently rational processes to get there. The motivation does not produce random beliefs. It produces directional beliefs -- conclusions that serve the person's interests, identity, or prior commitments.
In the context of disinformation, motivated reasoning produces systematic differential skepticism. A false story that confirms a politically convenient narrative receives less scrutiny than an equally false story that challenges it. The person who shares the politically convenient false story has not evaluated it less carefully out of laziness. They have evaluated it with equal care, but the goal of the evaluation was not "is this accurate?" but "can I accept this as true?"
Kunda's framework is supported by decades of subsequent research. A 2020 study by Briony Swire-Thompson, Ullrich Ecker, Stephan Lewandowsky, and Adam Berinsky, published in Political Psychology, found that false information endorsed by politically preferred sources was rated as significantly more credible than the same information from non-preferred sources, and that corrections from non-preferred sources were substantially less effective at updating belief.
The Illusory Truth Effect: How Repetition Creates Belief
A separate but related mechanism operates even without motivated reasoning. Lynn Hasher, David Goldstein, and Thomas Toppino published a study in 1977 in the Journal of Verbal Learning and Verbal Behavior that established a finding now known as the illusory truth effect: statements that people have heard before are rated as more true than statements they are encountering for the first time, regardless of whether the statements are actually true.
The mechanism is processing fluency. Familiar statements are processed more easily -- they activate stored representations faster, require less cognitive effort to parse. The brain interprets this ease of processing as a signal of truth. Information that feels familiar feels true. This is a reasonable heuristic in a world where truly false information is rare. It becomes a vulnerability in an information environment saturated with repeated false claims.
The practical implications for disinformation are significant. Once a false claim has been repeated widely enough that it feels familiar, even people who were informed of its falsity when they first encountered it may later rate it as credible. A 2018 study by Gordon Pennycook, Tyrone Cannon, and David Rand found that the illusory truth effect operates even for headlines explicitly flagged as disputed by fact-checkers at first exposure. The warning that a claim is false does not fully inoculate against the familiarity-based credibility that accumulates through repetition.
"A lie can travel halfway around the world while the truth is still putting on its shoes." -- attributed to various sources
This proverb has been applied metaphorically for centuries. The 2018 MIT study converted it into a measured quantity: false news traveled to 1,500 people six times faster than true news. The metaphor was conservative.
Cognitive Load, Analytical Thinking, and Susceptibility
Not everyone is equally susceptible. Gordon Pennycook and David Rand's research program, beginning with a 2019 paper in Cognition, has identified the most robust individual-level predictor of resistance to disinformation: analytical thinking.
Pennycook and Rand measured subjects' scores on the Cognitive Reflection Test (CRT), a three-item instrument that measures the disposition to override intuitive but incorrect responses in favor of deliberate correct responses. High CRT scorers were substantially less likely to rate false news headlines as accurate and substantially less likely to say they would share false content.
The mechanism, Pennycook and Rand argued, is not political sophistication or domain knowledge. It is the general tendency to think carefully rather than accept the first available answer. This finding is important because it shifts the question of disinformation susceptibility from "who is misinformed" to "under what conditions does careful thinking fail?"
The answer to that question is: when cognitive resources are depleted. When people are distracted, tired, or processing multiple information streams simultaneously, the System 2 analytical processes that would ordinarily flag implausible content fail to engage. They rely instead on intuition, familiarity, and emotional response -- all of which favor false news for the reasons already described.
This has direct implications for platform design. Social media platforms are specifically engineered to maximize the quantity of content processed per session, precisely the condition under which analytical thinking is least effective.
Platform Design and the Amplification of Outrage
Internal research at Facebook, surfaced publicly by whistleblower Frances Haugen in 2021, documented something the platform's own data science team had measured: content that triggered outrage, anger, and divisive emotional responses received significantly higher engagement than content that did not. At one point the ranking system weighted emoji reactions, including the "angry" reaction, several times as heavily as a like, giving outrage-provoking posts additional downstream reach.
The significance is that Facebook's News Feed algorithm was not designed to amplify outrage. It was designed to maximize engagement. But because outrage-generating content happens to generate high engagement, the engagement-maximizing algorithm functionally becomes an outrage-amplifying algorithm. The platform's commercial incentive and the psychological mechanism that makes false news viral are perfectly aligned.
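The alignment can be reduced to a small ranking sketch. The weights and posts below are hypothetical, chosen only to show that ranking by a reaction-weighted engagement score, in which reactions such as "angry" count for more than a like, surfaces outrage-provoking content first.

```python
# Illustrative engagement-weighted ranking, not Facebook's actual formula.
# The weights are hypothetical; they stand in for "reactions count for more
# than likes", which is what the reported internal documents described.
WEIGHTS = {"like": 1.0, "comment": 4.0, "share": 5.0, "angry": 5.0}

def engagement_score(post):
    """Predicted-engagement proxy: weighted sum of interaction counts."""
    return sum(WEIGHTS[action] * count for action, count in post["signals"].items())

posts = [
    {"id": "calm-explainer", "signals": {"like": 200, "comment": 10, "share": 5,  "angry": 1}},
    {"id": "outrage-bait",   "signals": {"like": 80,  "comment": 90, "share": 60, "angry": 120}},
]

for post in sorted(posts, key=engagement_score, reverse=True):
    print(post["id"], engagement_score(post))
# The outrage-bait post ranks first even though it has far fewer likes.
```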
The Haugen documents included internal presentations in which Facebook data scientists warned leadership that the algorithm's amplification of angry, divisive content was causing measurable harm to civic discourse, particularly in countries with weaker media institutions than the United States. The documents showed that leadership was aware of these findings and either declined to act or implemented changes that internal researchers assessed as inadequate.
This is not a problem unique to Facebook. YouTube's recommendation algorithm, as documented by researcher Guillaume Chaslot -- a former YouTube engineer -- systematically recommended increasingly extreme content because extreme content generates higher watch time than moderate content. The same engagement-maximizing logic produces the same radicalization pathway across platforms.
Why Corrections Fail (and What We Got Wrong About Backfire)
For most of the 2010s, the consensus in misinformation research was that corrections not only failed to work but sometimes backfired -- actually strengthening false beliefs when challenged. This was based on Brendan Nyhan and Jason Reifler's 2010 paper reporting a "backfire effect" in politically motivated reasoning.
Nyhan and Reifler have since revisited this research. A 2019 paper in which they and colleagues attempted systematic replication of the backfire effect found that it is not a robust or general phenomenon. Corrections typically do reduce false belief, at least in the short term. The original backfire findings appear to have been driven by specific experimental conditions that do not generalize.
The current scientific consensus, as of 2024, is:
- Corrections help but incompletely. Belief in corrected false claims typically decreases after correction but does not return to the level it would have been if the false claim had never been encountered.
- The false claim continues spreading independently after correction because most people who encounter the correction never saw the original false claim, and most people who saw the original false claim never see the correction.
- Repeated corrections can reduce the illusory truth effect but require sustained effort that is rarely maintained in practice.
- Source credibility matters. Corrections from trusted sources are more effective than corrections from distrusted sources.
The practical failure of corrections is therefore not primarily a psychological backfire problem. It is a distribution problem. False claims outrun their corrections structurally, because the dynamics that made the false claim viral in the first place do not apply to the correction.
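A toy branching-process calculation makes the structural point concrete. If each person reached recruits, on average, r new sharers, expected cumulative reach grows geometrically in r, so even a modest per-generation sharing advantage for the false claim compounds into a large gap over the correction. The reproduction numbers below are invented for illustration, not estimates from the MIT data.

```python
def expected_reach(r, generations):
    """Expected cumulative audience of a cascade with reproduction number r.

    r: expected number of new sharers each sharer recruits per generation.
    Computes 1 + r + r^2 + ... + r^generations.
    """
    return sum(r ** g for g in range(generations + 1))

# Hypothetical reproduction numbers: the correction spreads a bit less readily.
false_claim_reach = expected_reach(r=2.0, generations=10)
correction_reach  = expected_reach(r=1.5, generations=10)

print(f"false claim ~{false_claim_reach:,.0f} exposures")
print(f"correction  ~{correction_reach:,.0f} exposures")
# A modest per-generation advantage compounds into an order-of-magnitude gap.
```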
Pre-bunking: The Inoculation Approach
If corrections consistently fail to catch up with false claims, the alternative is to intervene before people encounter the false claim. This is the logic of inoculation theory, applied to disinformation by Sander van der Linden and colleagues.
Inoculation theory, developed by William McGuire in the 1960s in the context of persuasion resistance, holds that exposure to a weakened, refuted form of a persuasive argument confers resistance to the full-strength argument. The analogy to vaccination is explicit: a small dose of the pathogen, attenuated enough not to cause disease, trains the immune system to recognize and respond to the real pathogen.
Van der Linden's application to disinformation focuses not on specific false claims -- which change too rapidly for claim-by-claim pre-bunking to scale -- but on the manipulation techniques that false information uses. These techniques are a small, stable set: emotional manipulation, false dichotomies, misleading use of statistics, appeals to fake experts, and conspiracy framing.
The "Bad News" online game, developed at Cambridge by Jon Roozenbeek and van der Linden, tested whether teaching people to recognize these techniques before they encountered disinformation reduced subsequent susceptibility. Van der Linden's 2017 climate-misinformation inoculation study in Global Challenges and the Bad News results published in Palgrave Communications in 2019, along with subsequent studies, showed significant reductions in susceptibility to misinformation among people who received the inoculation versus control groups.
A 2022 collaboration between van der Linden's group and YouTube resulted in pre-bunk videos shown to YouTube users before they encountered potentially misleading content. A randomized trial involving 30,000 participants found that the pre-bunk videos reduced susceptibility to the specific manipulation techniques they addressed by approximately 20%.
"Inoculation is not about making people immune to all misinformation. It is about giving them the cognitive antibodies to recognize manipulation when they see it." -- Sander van der Linden, Foolproof, 2023
The Liar's Dividend
The most counterintuitive consequence of widespread disinformation is the liar's dividend: the effect by which the mere existence of deepfakes and fabricated media creates plausible deniability for authentic incriminating content.
Robert Chesney and Danielle Citron introduced the term in a 2019 paper in the California Law Review. Their argument was that deepfake technology does not merely create new false content. It degrades the epistemic value of all content, including genuine content, by making the question "could this have been fabricated?" a plausible defense in any context.
A genuine video of a politician behaving corruptly can now be dismissed with: "That looks like a deepfake to me." The claim does not need to be convincing to be effective. It only needs to create enough uncertainty that some portion of the audience withholds judgment. In a polarized information environment, the partisan who wants to disbelieve the genuine video has been given a respectable-sounding justification.
This creates an asymmetric vulnerability. The liar's dividend benefits bad actors who want to deny real evidence. It disadvantages honest actors who want to document real events. The technology that makes it easier to fabricate evidence also makes it harder to authenticate genuine evidence.
What Actually Reduces Disinformation Spread
The research literature supports several interventions with varying degrees of evidence:
| Intervention | Evidence Strength | Notes |
|---|---|---|
| Pre-bunking (inoculation) | Strong | Scales better than debunking; focuses on techniques not claims |
| Friction at sharing (adding delay/prompts) | Moderate | Prompting accuracy consideration before sharing reduces false shares |
| Accuracy nudges | Moderate | Brief accuracy prompts shift attention from engagement to truth |
| Corrections | Moderate | Help in the short term; do not erase original impression |
| Media literacy education | Moderate | Long-term population-level benefit; slow to deploy |
| Platform algorithm changes | High potential; underimplemented | Reducing engagement-maximizing amplification would structurally reduce false news spread |
| Labeling false content | Weak to moderate | Labels help on labeled content but may create "implied truth effect" for unlabeled content |
The most structurally important intervention -- changing platform algorithms to not amplify high-engagement-but-false content -- has the strongest theoretical justification and the weakest implementation record. The commercial incentives that make disinformation spread are built into the business models of the platforms that distribute it.
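Among the interventions in the table, the friction and accuracy-nudge rows describe the simplest mechanism to sketch: interpose an accuracy prompt and a short delay between the share button and the share itself. The flow below is a hypothetical illustration of that design, not any platform's implementation; the confirm callback stands in for a real client's dialog.

```python
import time

def share_with_accuracy_prompt(post, confirm, delay_seconds=3):
    """Interpose an accuracy prompt and a short delay before a share goes out.

    confirm: callable that asks the user whether they still want to share and
             returns True only if they do. In a real client this would be a
             UI dialog; here it is injected so the flow can be exercised in code.
    """
    print(f"Before you share: have you read '{post['headline']}' and checked its source?")
    time.sleep(delay_seconds)   # friction: a brief pause before the action completes
    if confirm(post):
        print("Shared.")
        return True
    print("Share cancelled.")
    return False

# Hypothetical usage: the "user" reconsiders and declines to share.
post = {"headline": "Common food secretly toxic, scientists say"}
share_with_accuracy_prompt(post, confirm=lambda p: False, delay_seconds=0)
```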
Conclusion
The spread of disinformation is not a failure of individual irrationality. It is the predictable output of cognitive mechanisms operating in an environment they were not designed for.
Novelty drives sharing because surprising information was historically valuable. Motivated reasoning produces directional belief because reasoning toward pre-existing conclusions is cognitively efficient. The illusory truth effect makes repetition create credibility because familiar information is usually accurate. Emotional arousal drives social sharing because urgent information historically required urgent communication.
These mechanisms are not bugs in human cognition. They are features adapted to different informational conditions. In an environment where fabricators can engineer maximally novel, maximally outrage-inducing content at scale and distribute it through platforms commercially incentivized to amplify engagement, the same cognitive architecture that served our ancestors becomes a systematic vulnerability.
The 2018 MIT study established the baseline: false news spreads six times faster, reaches ten times more people, and is disseminated by humans choosing to share it. Every proposed solution -- pre-bunking, corrections, friction, media literacy -- operates against the structural current of an information environment that rewards the properties that false news is designed to maximize.
Understanding the mechanisms does not neutralize them. But it identifies where intervention is actually possible: in platform design choices, in educational programs that build manipulation-recognition skills before exposure, and in individual habits of attention that pause before sharing the most surprising, most outrage-inducing content that appears in the feed.
The thing that most wants to be shared is often the thing most worth pausing on.
References
Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146-1151. https://doi.org/10.1126/science.aap9559
Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108(3), 480-498. https://doi.org/10.1037/0033-2909.108.3.480
Hasher, L., Goldstein, D., & Toppino, T. (1977). Frequency and the conference of referential validity. Journal of Verbal Learning and Verbal Behavior, 16(1), 107-112. https://doi.org/10.1016/S0022-5371(77)80012-1
Pennycook, G., & Rand, D. G. (2019). Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning. Cognition, 188, 39-50. https://doi.org/10.1016/j.cognition.2018.06.011
Pennycook, G., Cannon, T. D., & Rand, D. G. (2018). Prior exposure increases perceived accuracy of fake news. Journal of Experimental Psychology: General, 147(12), 1865-1880. https://doi.org/10.1037/xge0000465
van der Linden, S., Leiserowitz, A., Rosenthal, S., & Maibach, E. (2017). Inoculating the public against misinformation about climate change. Global Challenges, 1(2), 1600008. https://doi.org/10.1002/gch2.201600008
Nyhan, B., Porter, E., Reifler, J., & Wood, T. (2019). Taking fact-checks literally but not seriously? The effects of journalistic fact-checking on factual beliefs and candidate favorability. Political Behavior, 42, 939-960. https://doi.org/10.1007/s11109-019-09528-x
Swire-Thompson, B., Ecker, U. K. H., Lewandowsky, S., & Berinsky, A. J. (2020). They might be a liar but they're my liar: Source evaluation and the prevalence of misinformation. Political Psychology, 41(1), 21-34. https://doi.org/10.1111/pops.12586
Chesney, R., & Citron, D. K. (2019). Deep fakes: A looming challenge for privacy, democracy, and national security. California Law Review, 107(6), 1753-1820. https://doi.org/10.15779/Z38RV0D15J
Roozenbeek, J., van der Linden, S., & Nygren, T. (2020). Prebunking interventions based on "inoculation" theory can reduce susceptibility to misinformation across cultures. Harvard Kennedy School Misinformation Review, 1(2).
Frequently Asked Questions
Why does false news spread faster than true news?
The 2018 MIT study by Vosoughi, Roy, and Aral found false news is more novel than true news, triggering surprise and disgust that drive sharing. False news reached 1,500 people six times faster and was retweeted by humans, not bots.
What is the illusory truth effect?
The illusory truth effect, documented by Hasher, Goldstein, and Toppino in 1977, is the finding that repeated exposure to a claim increases its perceived truth. Later work showed the effect persists even when the person was told the claim was false at first encounter.
Does analytical thinking protect against misinformation?
Research by Pennycook and Rand (2019) found that people who score higher on the Cognitive Reflection Test are less likely to believe and share misinformation. Distraction and cognitive load increase susceptibility significantly.
Why do corrections rarely work?
Corrections help but incompletely. The original backfire effect (Nyhan and Reifler) has been substantially revised; corrections typically do reduce false belief, but the corrected belief often persists in attenuated form, and the original false version continues spreading independently.
What is pre-bunking and does it work?
Pre-bunking, based on inoculation theory, exposes people to weakened forms of manipulation techniques before they encounter the real misinformation. Research by Sander van der Linden, Jon Roozenbeek, and colleagues, including the Bad News game studies, shows pre-bunking is more effective than post-hoc debunking.
What is the liar's dividend?
The liar's dividend is the effect whereby widespread awareness of deepfakes and disinformation allows bad actors to dismiss genuine evidence as fake. The existence of fabrication technologies creates plausible deniability for authentic incriminating content.
Do social media algorithms amplify disinformation?
Internal Facebook research, surfaced by Frances Haugen in 2021, confirmed that outrage-triggering content receives greater engagement, and that algorithmic amplification of high-engagement content systematically promotes inflammatory and false information over accurate but less emotionally provocative content.