In December 2016, Edgar Welch drove from North Carolina to Washington, D.C., with an AR-15 rifle and a handgun. He had been reading online that a pizza restaurant called Comet Ping Pong was operating a secret child trafficking ring run by senior Democratic Party officials. The claims had spread from a few conspiracy forums to thousands of social media shares in a matter of weeks, picking up specificity and emotional intensity with each iteration. Welch fired his rifle inside the restaurant, found no captive children, and surrendered to police. No evidence for the conspiracy ever existed. The restaurant's owner and employees received death threats for months afterward.

Pizzagate, as it came to be called, is an extreme case, but it illustrates a dynamic that operates at much lower intensities every day: false or fabricated information, dressed in the visual and rhetorical clothes of legitimate news, spreads through social networks with an efficiency that accurate information rarely matches. The physics of online sharing favor novelty, emotional intensity, and moral outrage over accuracy. And most people, even educated and intelligent ones, are poorly equipped to distinguish reliable from unreliable information using the tools they were taught to use.

This is not a matter of intelligence. In a series of studies, researchers at the Stanford History Education Group found that university students at highly selective institutions, including Stanford itself, were outperformed in evaluating online sources by professional fact-checkers who had no special subject-matter expertise. The difference was method: the fact-checkers used a technique that the students had never been taught, and that decades of conventional media literacy education had never emphasized.

"False news was 70 percent more likely to be retweeted than true news. It took true news about six times as long as false news to reach 1,500 people." — Vosoughi, Roy, and Aral, Science, 2018


Key Definitions

Misinformation: False or misleading content shared without the intent to deceive. The person sharing it typically believes it is true.

Disinformation: False or misleading content deliberately created and shared with the intent to cause harm or deceive. The person creating or sharing it knows it is false or is deliberately indifferent to its accuracy.

Malinformation: Truthful content shared with the intent to harm, such as leaking private communications, doxing, or selectively publishing accurate facts out of context to damage someone.

Information disorder: Claire Wardle and Hossein Derakhshan's term, developed for the Council of Europe in 2017, for the broader ecosystem of problems created by false, misleading, and weaponized information. The framework is designed to be more precise than terms like "fake news," which was often used as a generic political insult rather than a specific descriptor.

The liar's dividend: Legal scholars Bobby Chesney and Danielle Citron's term for the strategic benefit deepfake technology creates for bad actors, who can deny authentic footage by claiming it was fabricated.

Prebunking: Providing people in advance with a weakened version of a misleading argument, along with explanation of the technique being used, to build cognitive resistance before they encounter the full persuasive version. Distinguished from debunking, which addresses misinformation after it has been encountered.


Why False Information Spreads Faster Than True Information

The most rigorous study of misinformation spread to date was published in Science in 2018 by Soroush Vosoughi, Deb Roy, and Sinan Aral at MIT. They analyzed all verified true and false news stories shared on Twitter between 2006 and 2017, covering 126,000 distinct stories shared by roughly 3 million people.

The findings were stark. False stories spread to more people, spread faster, and penetrated deeper into social networks than true stories. The top 1 percent of false news cascades routinely reached between 1,000 and 100,000 people, while true stories rarely reached more than 1,000. False news was 70 percent more likely to be retweeted than true news, and it took true news approximately six times longer than false news to reach an audience of 1,500 people.

Critically, the effect was driven by human sharing behavior, not by automated bots. When the researchers controlled for bots, the results held: people, not algorithms, were primarily responsible for the spread differential. The researchers examined why. False news was more novel and more surprising, and replies to it expressed more surprise, fear, and disgust than replies to true news. These are exactly the emotional states that motivate sharing.

The implication is structural and sobering: the social incentives for sharing favor misinformation. People share content to signal identity, demonstrate awareness, and provoke emotional reactions in their networks. Accurate but unsurprising information does not serve these functions as well as false but emotionally compelling information. Fixing this is not a matter of individual motivation; it requires understanding the incentive structure and developing specific cognitive tools to counteract it.

Claire Wardle and the Information Disorder Framework

Journalist and researcher Claire Wardle, working with Hossein Derakhshan for the Council of Europe, published the Information Disorder framework in 2017 as a more precise alternative to the then-ubiquitous but terminologically imprecise concept of "fake news."

The framework distinguishes three types of problematic information on two axes: whether the content is false and whether the intent to harm is present.

Misinformation is false content without harmful intent: a person sharing a rumor they believe, or an outdated health claim they encountered without knowing it was outdated. The appropriate response is providing accurate information and reducing the environmental conditions that allow false beliefs to circulate.

Disinformation is false content with harmful intent: coordinated campaigns, deliberately fabricated stories, or synthetic media created specifically to deceive. The appropriate response includes platform-level interventions, policy measures, and specific counter-messaging.

Malinformation is true content with harmful intent: leaking a private conversation to embarrass someone, sharing accurate but private medical information, or publishing selective truths designed to mislead through omission.
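
Because the framework reduces to two binary axes, it can be expressed very compactly. A minimal sketch in Python, assuming nothing beyond the framework itself (the function and field names are illustrative, not Wardle and Derakhshan's):

    from enum import Enum


    class Category(Enum):
        MISINFORMATION = "misinformation"  # false content, no intent to harm
        DISINFORMATION = "disinformation"  # false content, intent to harm
        MALINFORMATION = "malinformation"  # true content, intent to harm
        BENIGN = "benign"                  # true content, no intent to harm


    def classify(content_is_false: bool, intent_to_harm: bool) -> Category:
        """Map Wardle and Derakhshan's two axes onto the three categories."""
        if content_is_false:
            return Category.DISINFORMATION if intent_to_harm else Category.MISINFORMATION
        return Category.MALINFORMATION if intent_to_harm else Category.BENIGN


    # A deliberately fabricated story pushed by a coordinated campaign:
    print(classify(content_is_false=True, intent_to_harm=True))  # Category.DISINFORMATION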

The framework was influential because it clarified that the interventions for each category differ substantially. Educational media literacy may address misinformation effectively; disinformation, which is deliberate and often well-funded, requires different tools.

Kate Starbird at the University of Washington has extended this framework in her research on crisis misinformation: the particular dynamics by which false claims spread during breaking news events, natural disasters, and political crises, when the demand for information is high, authoritative sources are slow to respond, and emotional arousal is elevated. Her research on the 2013 Boston Marathon bombing, the 2016 US election, and COVID-19 found consistent patterns: early, unverified information fills the vacuum left by slow official response, and emotional salience, rather than accuracy, determines what spreads.

The Failure of Traditional Source Evaluation

For decades, media literacy education taught students to evaluate sources through what became known as the CRAAP test: checking Currency (how recent), Relevance, Authority (who wrote it), Accuracy (is it supported by evidence), and Purpose (why was it written). The implicit theory was that a careful enough internal reading of a source would reveal its credibility.

Sam Wineburg and Sarah McGrew at the Stanford History Education Group tested this theory empirically, and the results were unflattering. In studies published from 2016 to 2019, they gave the same set of online sources and social media content to professional fact-checkers, professional historians, and university students at selective institutions, asking each group to assess source credibility and identify potentially misleading content.

The historians and students were significantly worse at the task than the fact-checkers. The historians, whose professional training had emphasized deep textual analysis, typically read sources thoroughly and looked for internal credibility signals. They were often impressed by sources that appeared authoritative. The fact-checkers, whose professional practice required rapid and accurate evaluation, did something entirely different: they left the source immediately, opened multiple new browser tabs, and searched for what others said about the source. They read laterally.

The finding was replicated across multiple study variations. Internal reading of a source, no matter how careful, is an unreliable method for evaluating it. Context about a source, available from outside, is far more informative than content within it.

The SIFT Method and Lateral Reading

Mike Caulfield at Washington State University Vancouver synthesized the Stanford findings into a practical framework he called SIFT:

Stop before you read, share, or act on a piece of information. The impulse to share immediately is the primary vector by which misinformation spreads. A pause of even thirty seconds to ask "Should I check this?" interrupts the automatic sharing behavior.

Investigate the source before reading the content. Who publishes this? What is their funding, their history, their perspective? This investigation should happen outside the source, not inside it. Open a new tab. Search the source name. Check Wikipedia. The question is not "does this source seem credible?" but "what do reliable external sources say about this source?"

Find better coverage when a claim seems significant. Is this story covered by multiple credible outlets? If a major claim appears only on one source, the absence of corroboration is itself informative. If it is covered elsewhere, what do those sources say?

Trace claims to their original context. Many misleading stories are true facts stripped of context: a real statistic from a different time period, a real quote with the sentence before and after removed, a real image from a different event. Going upstream to the original source frequently reveals context that changes the interpretation entirely.
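
One way to make the method habitual is to treat it as an explicit pre-sharing checklist. A minimal sketch, with prompts that paraphrase the four moves (the function name and wording are illustrative, not Caulfield's own):

    # SIFT as a pre-sharing checklist; the prompts paraphrase Caulfield's four moves.
    SIFT_MOVES = [
        ("Stop", "Do I know this source? Have I verified this claim? "
                 "If not, pause before reading, sharing, or acting."),
        ("Investigate the source", "In a new tab, search the source's name. "
                 "What do Wikipedia and established outlets say about its "
                 "funding, history, and perspective?"),
        ("Find better coverage", "Do multiple credible outlets report the "
                 "same claim? Missing corroboration is itself informative."),
        ("Trace to the original", "Follow the statistic, quote, or image "
                 "upstream. Does the original context match this framing?"),
    ]


    def sift_checklist(claim: str) -> None:
        """Print the four moves as prompts to work through before sharing."""
        print(f"Before sharing: {claim!r}")
        for move, prompt in SIFT_MOVES:
            print(f"- {move}: {prompt}")


    sift_checklist("Viral headline about a suppressed study")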

Caulfield's empirical tests of SIFT instruction, and independent replications, found that students who learned the method significantly improved their accuracy at evaluating online information. The method is teachable and produces rapid improvement.

Prebunking: Inoculation Against Manipulation

The standard response to misinformation is debunking: correcting the record after false claims have spread. Research on the effectiveness of corrections finds they are modestly helpful but consistently insufficient. Once a false claim has been absorbed, correcting it is harder than preventing it from being accepted in the first place. The human mind tends to retain the gist of what it has heard even after being told the specific claim was wrong.

Prebunking takes the opposite approach. Developed by John Cook at George Mason University and Sander van der Linden at Cambridge University, based on earlier inoculation theory work by William McGuire in the 1960s, prebunking exposes people in advance to weakened versions of manipulative arguments, along with explanation of the technique being used, to build cognitive resistance.

The analogy to medical inoculation is precise: a weakened pathogen stimulates immune response without causing disease. A weakened manipulative argument, paired with identification of the technique, builds recognition skills without causing the persuasion effect. When the full-strength manipulative argument is encountered later, the person has a pre-formed response.

Van der Linden's Bad News game, a browser-based simulation in which players practice the techniques of misinformation production (fake experts, conspiracy narratives, emotional manipulation, discrediting opponents, polarizing content), has been tested in randomized controlled trials and found to improve ability to identify manipulation strategies in real content, weeks after playing. A 2022 study published in Science Advances found that short prebunking videos deployed on YouTube significantly reduced susceptibility to manipulative content.

The practical lesson is that learning to recognize techniques of manipulation is more durable and generalizable than fact-checking specific claims. Claims are infinite; techniques are a manageable catalogue.
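
To make the "manageable catalogue" concrete, here is a sketch that represents techniques as data a reader can check content against. The technique names follow the Bad News list above; the descriptive tells are illustrative, not drawn from the game itself:

    # Manipulation techniques as a small, checkable catalogue.
    # Names follow the Bad News categories described above; the "tells"
    # are illustrative, not taken from the game itself.
    TECHNIQUES = {
        "fake experts": "credentials that do not survive a search; vague "
                        "appeals to unnamed scientists or doctors",
        "conspiracy narratives": "a hidden group, secret coordination, and "
                        "evidence that is suppressed rather than absent",
        "emotional manipulation": "language engineered for outrage, fear, "
                        "or disgust rather than for information",
        "discrediting opponents": "attacking the critic or fact-checker "
                        "instead of answering the criticism",
        "polarizing content": "framing every question as us versus them, "
                        "with no legitimate disagreement possible",
    }


    def review(spotted: list[str]) -> list[str]:
        """Return catalogue entries for the techniques a reader has spotted."""
        return [f"{t}: {TECHNIQUES[t]}" for t in spotted if t in TECHNIQUES]


    for line in review(["fake experts", "emotional manipulation"]):
        print(line)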

The Emotional Arousal Signal

Across multiple research programs, emotional arousal, particularly outrage, disgust, fear, and moral indignation, is a consistent predictor of misinformation. This is not because strong emotions are always associated with false information, but because emotional manipulation is one of the primary techniques used to make false information more compelling and shareable.

Researchers including Gordon Pennycook and David Rand have found that content designed to generate strong emotional reactions is disproportionately likely to be misleading. The design is not accidental: fabricators know that outrage is more shareable than balanced or nuanced information, and they engineer their content accordingly.

The practical implication is a counterintuitive corrective: when content generates a strong emotional reaction, that reaction itself should trigger skepticism rather than sharing. The impulse is to immediately share something outrageous. The productive habit is to treat emotional arousal as a signal to pause and verify before acting.

Gordon Pennycook at the University of Regina and David Rand at MIT have studied accuracy nudges: brief interventions that remind people to consider whether information is accurate before sharing it. In multiple studies, these nudges, as simple as asking "Is this headline accurate?" once at the start of a session, significantly reduced sharing of misinformation. The nudge works by temporarily shifting attention from social sharing motivations to accuracy motivations, which are present but typically not activated.
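
As a sketch of how such a nudge might sit in a sharing flow (the once-per-session rule and all names here are illustrative, not any platform's actual implementation):

    class SharingSession:
        """Illustrative accuracy nudge: one prompt early in a session shifts
        attention toward accuracy, in the spirit of Pennycook and Rand's
        design. Names and logic are hypothetical, not any platform's code."""

        def __init__(self) -> None:
            self.nudged = False

        def share(self, headline: str) -> None:
            # Show the accuracy prompt once, before the first share goes out.
            if not self.nudged:
                print(f"Before you share: is this headline accurate? {headline!r}")
                self.nudged = True
            print(f"Shared: {headline!r}")


    session = SharingSession()
    session.share("Miracle cure suppressed by regulators")   # prompt appears here
    session.share("Local council approves new bike lanes")   # no further prompt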

Deepfakes and the Liar's Dividend

Deepfake technology, AI-generated synthetic video and audio indistinguishable from authentic footage to casual observation, represents a qualitatively new challenge for information verification. As of 2024, deepfake generation has become accessible to non-specialists through readily available tools, while deepfake detection technology, though rapidly improving, remains imperfect.

Bobby Chesney at the University of Texas and Danielle Citron at the University of Virginia described what they called the liar's dividend in a 2019 paper in the California Law Review: even if a piece of authentic video clearly shows a real person committing misconduct, the existence of deepfake technology gives that person a credible basis for denial. "That video was fabricated" becomes a plausible defense regardless of whether it is true, and the technical difficulty of proving authenticity creates doubt that benefits the subject of the footage.

In this way, the most significant harm from deepfakes may not be the direct deception they enable but the ambient doubt they create about the evidentiary status of all video. If any footage might be synthetic, and if detecting synthesis requires expensive technical forensics unavailable to ordinary people, the result is an environment where powerful actors can deny authentic documentation of their own behavior.

Detection approaches currently include analyzing inconsistencies in lighting, blinking patterns, edge artifacts around hair and faces, and audio-visual synchronization. These work reasonably well against current-generation deepfakes but will require updating as generation technology improves. Organizations like Truepic and the Adobe-led Content Authenticity Initiative are developing cryptographic provenance standards that would allow content to carry a verifiable record of when and how it was created, addressing the authenticity problem at the source rather than through after-the-fact detection.
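
A deliberately simplified sketch of the provenance idea: the capture device signs a hash of the content, and a verifier later checks that the content still matches. Real standards such as C2PA use asymmetric signatures and signed metadata manifests; the shared key below is a stand-in to keep the example self-contained:

    import hashlib
    import hmac

    # Stand-in for a key held by the capture device; real systems use
    # asymmetric keys so verifiers never hold the signing secret.
    DEVICE_KEY = b"secret-held-by-capture-device"


    def sign_at_capture(content: bytes) -> str:
        """Produce a provenance tag over the content hash at capture time."""
        digest = hashlib.sha256(content).digest()
        return hmac.new(DEVICE_KEY, digest, hashlib.sha256).hexdigest()


    def verify(content: bytes, tag: str) -> bool:
        """Check that the content is byte-identical to what was signed."""
        return hmac.compare_digest(sign_at_capture(content), tag)


    video = b"...raw video bytes..."
    tag = sign_at_capture(video)
    print(verify(video, tag))           # True: untouched since capture
    print(verify(video + b"x", tag))    # False: any alteration breaks the tag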

Platform Interventions and What Actually Moves the Needle

Content moderation at scale, the challenge of identifying and limiting the spread of misinformation across platforms with billions of users and millions of posts daily, has been one of the most contested areas of research and policy in the past decade.

The evidence for various platform interventions is mixed. Warning labels on disputed content reduce belief and sharing in experimental studies but have modest effects in field conditions, partly because they are applied inconsistently and partly because labels on some content implicitly suggest that unlabeled content has been verified, the implied truth effect. Frances Haugen's 2021 disclosures about internal Facebook research revealed that the company's own studies found that changes designed to reduce the reach of misinformation often reduced overall engagement metrics, creating institutional resistance to implementation.

Pennycook and Rand's accuracy nudge research, described above, shows the most robust effects among relatively low-cost interventions. Simply asking people to consider accuracy, without providing any specific information, reduced misinformation sharing by roughly 50 percent in some studies. This suggests that platform design choices matter substantially: default prompts and what the interface asks users to attend to shape sharing behavior.

Algorithmic downranking of content from sources with established patterns of misinformation has shown effects in platform studies, though measuring these effects is methodologically difficult because platforms do not grant independent researchers full access to their algorithms and data. Researchers at New York University's Center for Social Media and Politics have documented persistent evidence that content from lower-quality sources continues to circulate widely even after announced algorithm changes.
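
As an illustration of the mechanism, here is a sketch in which a feed score is scaled by a weight derived from a source's track record; the weights, names, and formula are hypothetical, since platforms do not publish theirs:

    # Hypothetical source-quality weights; real platforms do not publish theirs.
    SOURCE_QUALITY = {
        "established-outlet.example": 1.0,
        "mixed-record.example": 0.6,
        "repeat-offender.example": 0.2,
    }


    def ranked_score(engagement_score: float, source: str) -> float:
        """Scale predicted engagement by the source's quality weight."""
        return engagement_score * SOURCE_QUALITY.get(source, 0.5)  # 0.5: unknown source


    posts = [
        ("Outrageous claim!", 95.0, "repeat-offender.example"),
        ("Careful report", 40.0, "established-outlet.example"),
    ]
    for title, engagement, source in posts:
        print(title, ranked_score(engagement, source))
    # The high-engagement post from the low-quality source ranks below the
    # lower-engagement post from the reliable one (19.0 vs. 40.0).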

Practical Takeaways

Use lateral reading as your default verification method. When encountering a significant claim or unfamiliar source, open a new tab and search for external information about the source before reading its content. This takes two to three minutes and is dramatically more reliable than internal evaluation.

Apply SIFT systematically. Stop, Investigate the source, Find better coverage, Trace claims to their original context. The method works and is teachable.

Treat emotional arousal as a verification trigger. When content provokes strong outrage, fear, or moral indignation, that reaction is a signal to pause and check before sharing, not a signal that the content must be shared immediately.

Learn manipulation techniques, not just specific false claims. False balance, cherry-picking data, misrepresenting consensus, appeal to emotion as substitute for evidence: recognizing these patterns makes you resistant to new instances of each type, not just the specific claims you have already seen.

Check for corroboration. If a significant claim is covered only by one source, the absence of corroboration from other credible outlets is meaningful. Important true stories get covered by multiple independent outlets.

Trace claims upstream. When possible, find the original source for a statistic, image, or quote. The context of the original frequently changes the interpretation provided by the secondary source reporting on it.


References

  1. Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146–1151.
  2. Wardle, C., & Derakhshan, H. (2017). Information Disorder: Toward an Interdisciplinary Framework. Council of Europe.
  3. Wineburg, S., & McGrew, S. (2019). Lateral reading and the nature of expertise. Teachers College Record, 121(11), 1–40.
  4. Cook, J., Lewandowsky, S., & Ecker, U. K. H. (2017). Neutralizing misinformation through inoculation. PLOS ONE, 12(5), e0175799.
  5. Van der Linden, S., Leiserowitz, A., Rosenthal, S., & Maibach, E. (2017). Inoculating the public against misinformation about climate change. Global Challenges, 1(2), 1600008.
  6. Pennycook, G., & Rand, D. G. (2019). Fighting misinformation on social media using crowdsourced judgments of news source quality. Proceedings of the National Academy of Sciences, 116(7), 2521–2526.
  7. Chesney, B., & Citron, D. (2019). Deep fakes: A looming challenge for privacy, democracy, and national security. California Law Review, 107(6), 1753–1820.
  8. Caulfield, M. (2019). SIFT (The Four Moves). Pressbooks.
  9. Starbird, K., Maddock, J., Orand, M., Achterman, P., & Mason, R. M. (2014). Rumors, false flags, and digital vigilantes. Proceedings of iConference.
  10. Pennycook, G., et al. (2021). Shifting attention to accuracy can reduce misinformation online. Nature, 592(7855), 590–595.
  11. Roozenbeek, J., Schneider, C. R., et al. (2020). Susceptibility to misinformation about COVID-19 around the world. Royal Society Open Science, 7(10), 201199.
  12. Benkler, Y., Faris, R., & Roberts, H. (2018). Network Propaganda. Oxford University Press.


Frequently Asked Questions

What is the difference between misinformation and disinformation?

The standard distinction, formalized by Claire Wardle's information disorder framework and adopted by most researchers, is that misinformation is false or misleading content shared without the intent to deceive, while disinformation is false or misleading content deliberately created and shared with the intent to cause harm or deceive. A third category, malinformation, is truthful content shared with the intent to harm, such as leaking someone's private information. The distinction matters because the interventions differ: misinformation calls for better information and media literacy; disinformation calls for those plus counter-measures against deliberate manipulation campaigns.

Why does false information spread faster than true information?

Soroush Vosoughi, Deb Roy, and Sinan Aral's 2018 study in Science, which analyzed 126,000 Twitter stories shared by roughly 3 million people from 2006 to 2017, found that false news spread significantly faster, farther, and more broadly than true news. The effect was driven by human behavior rather than automated bots. The researchers found that false news was more novel and generated more surprise and disgust than true news, which makes it more emotionally compelling and more shareable. The social currency of sharing interesting, rather than accurate, information drives the dynamics. This structural incentive operates independently of any individual's intention to deceive.

What practical techniques help identify misinformation?

Lateral reading is the most reliably effective technique identified in research: before reading a source deeply, open new tabs and search for what others say about the source. The SIFT method provides a practical framework: Stop before sharing, Investigate the source, Find better coverage elsewhere, and Trace claims back to their original context. Emotional arousal is a reliable signal that warrants checking: content that generates strong outrage, fear, or moral disgust is statistically more likely to be misleading or manipulated. Looking for primary sources rather than reports of reports, and checking whether other credible outlets have covered the same story, are both effective practices.

How do deepfakes change the misinformation problem?

Deepfakes, synthetic media generated by AI systems trained on real footage, introduce two distinct problems. The first is direct harm: fabricated video or audio that appears to show a real person saying or doing things they did not. The second is what legal scholars Bobby Chesney and Danielle Citron called the liar's dividend in their 2019 paper: even when footage is authentic, deepfakes give anyone who appears in it a credible basis to deny it by claiming the damaging material was fabricated. The liar's dividend may be more consequential than deepfake deception itself, because it erodes the evidentiary status of authentic video and creates plausible deniability for real misconduct.

Do fact-checking websites actually work?

Evidence suggests fact-checking has modest but real effects. Gordon Pennycook and David Rand found in a series of studies that reminding people to consider the accuracy of headlines, rather than simply whether they would share them, reduced sharing of misinformation by roughly 50 percent in experimental conditions. Studies of fact-checking corrections find that corrections reduce belief in false claims, though the effect is smaller than the original misinformation's impact and rarely eliminates the false belief entirely. Fact-checking appears more effective as prevention (reading a correction before encountering the false claim) than as cure (reading a correction after the false claim has been absorbed).

What is lateral reading and how do you do it?

Lateral reading is the practice, developed by Sam Wineburg and Sarah McGrew at the Stanford History Education Group, of evaluating a source by searching for external information about it rather than reading it deeply for internal credibility cues. The practical steps are: when you encounter a source making a significant claim, before spending time reading it, open a new browser tab and search for the source name plus terms like 'bias,' 'funding,' 'who runs,' or 'review.' Look at what Wikipedia, established news organizations, or domain experts say about the source. This takes roughly two to three minutes and is dramatically more accurate than trying to evaluate a source from its own content.

How do you talk to someone who believes misinformation?

Research on correction and persuasion suggests several evidence-backed approaches. Avoid direct confrontation or labeling the belief 'false,' which triggers identity defense. Instead, ask questions that prompt the person to work through the reasoning themselves, an approach researchers liken to motivational interviewing. Provide a credible alternative explanation that fills the explanatory gap the misinformation occupied, rather than simply removing the false belief without replacement. Focus on the technique of manipulation rather than the specific claim, because recognizing the rhetorical tactic inoculates against future similar claims. Gordon Pennycook's research suggests that simply prompting reflection on accuracy, rather than arguing about the content, is more effective than direct correction.