In 1930s Germany, Joseph Goebbels did not need to convince the majority of Germans that Jews were subhuman. He needed only to make that belief available, to normalise its expression, and to create social environments in which holding it was costless and challenging it was dangerous. The distinction matters enormously. The popular image of propaganda as a factory of lies that simply pours false beliefs into passive minds misses nearly everything important about how mass persuasion actually works.
Propaganda operates primarily on emotion, social identity, and the architecture of what feels normal to believe and say in a given social context. It works not by creating false beliefs so much as by making certain true beliefs feel disproportionately important, by associating out-groups with disgust and fear, and by establishing the terms within which public discourse occurs. The most effective propaganda does not fill people's heads with falsehoods. It fills their hearts with loyalties and their social environments with pressures, leaving the falsehoods to grow naturally in the space prepared for them.
Understanding propaganda is not simply a matter of historical interest. The digital information environment has produced conditions that are, in several structural respects, more favourable to propaganda than any that have existed previously. The cost of producing convincing disinformation has collapsed. Distribution is algorithmic and optimised for the emotional activation that is propaganda's primary mechanism. And the social context in which information is received, which has always been the crucial determinant of what people believe, has been engineered by platforms whose incentives are entirely indifferent to truth.
"Propaganda is not concerned with true or false. It is concerned with effectiveness. The propagandist uses truth when it serves his purposes and rejects it when it does not." -- Jacques Ellul, 'Propaganda: The Formation of Men's Attitudes', 1962
| Propaganda Technique | How It Works | Historical Example |
|---|---|---|
| Repetition | Repeated claims become familiar and feel true | Nazi "big lie"; Soviet slogans |
| Emotional appeal | Bypass rational evaluation through fear or pride | War posters; fear-based advertising |
| Scapegoating | Blame social problems on an out-group | Anti-semitism; anti-immigrant campaigns |
| False dichotomy | Present only two options; exclude alternatives | "You're either with us or against us" |
| Appeal to authority | Use prestigious figures to legitimize claims | Celebrity endorsements; fake experts |
| Bandwagon | Everyone is doing it; join the winning side | Polling manipulation; manufactured consensus |
| Censorship / information control | Restrict access to contradicting information | Soviet media; internet censorship |
Key Definitions
Propaganda: Jacques Ellul's definition, the most sociologically rigorous available, describes propaganda as a set of techniques for psychological manipulation aimed at influencing action, rather than thought, through the mobilisation of emotion and social solidarity rather than rational argument.
Illusory truth effect: The cognitive phenomenon, first documented by Hasher, Goldstein, and Toppino in 1977, whereby repeated statements are judged more credible than novel statements regardless of their actual truth value, because familiarity increases processing fluency, which is misattributed to truthfulness.
Social identity theory: Henri Tajfel and John Turner's framework, developed in the 1970s, describing how group membership activates in-group favouritism and out-group hostility, providing the psychological foundation for us-vs-them propaganda framing.
Emotional vs rational appeal: The distinction between persuasive messages that engage fear, disgust, pride, or anger versus those that provide evidence and logical argument, with research consistently finding emotional appeals more immediately persuasive for most audiences.
Disinformation vs misinformation: Disinformation is false or misleading content deliberately created to deceive; misinformation is false or misleading content spread without necessarily malicious intent. Both produce similar effects at scale.
Jacques Ellul and the Sociology of Propaganda
French sociologist Jacques Ellul published 'Propagandes' in 1962, and it remains the most rigorous sociological analysis of the phenomenon. Ellul's central argument was that most prior analyses, which focused on the specific content of propaganda and the cynical intentions of propagandists, missed the more fundamental sociological dynamics that made propaganda possible and effective.
Ellul argued that propaganda is not primarily an imposition by an elite onto a passive mass. It is, rather, a form of social communication that modern industrial societies require and that their members demand. Citizens in complex democracies face a world they cannot directly observe or comprehend and must nevertheless form opinions and take political action. Propaganda, in Ellul's analysis, is the system that fills this gap. It provides prefabricated interpretations of complex events in forms that are emotionally accessible and socially usable.
His distinction between 'agitation propaganda', which mobilises anger and desire for change, and 'integration propaganda', which reinforces existing social norms and produces conformity, remains analytically useful. Much analysis focuses on agitation propaganda because it is visible and dramatic. But integration propaganda, the constant reinforcement of national myths, cultural assumptions, and acceptable opinion ranges, is far more pervasive and arguably more powerful precisely because it is invisible.
Ellul also argued that education, far from being a protection against propaganda, actually increases susceptibility to certain forms of it, because educated people consume more media, develop greater confidence in their political opinions, and are more likely to feel obligated to have views on everything. This claim remains somewhat controversial, but it anticipates later research on motivated reasoning that found similar counterintuitive results.
The Illusory Truth Effect: Why Repetition Works
Lynn Hasher, David Goldstein, and Thomas Toppino published their foundational study of the illusory truth effect in 1977. Their methodology was straightforward: participants were presented with plausible but unverifiable trivia statements across multiple sessions. Items that had appeared in earlier sessions were subsequently rated as more credible, even when participants could not consciously recall having seen them before.
The mechanism is processing fluency. Familiar stimuli are cognitively easier to process, and that ease is misattributed to truth. In stable environments, where most of what one has encountered before is accurate, familiarity is a serviceable heuristic for truthfulness. It fails badly in information environments deliberately engineered to flood the zone with repeated false claims.
The practical implication for propaganda is direct and alarming. Political slogans repeated sufficiently often acquire an aura of obvious truth. Brand names repeated in advertising become associated with quality through mere familiarity. Claims about out-groups, repeated consistently across media ecosystems, come to feel like common knowledge regardless of their evidentiary basis. Adolf Hitler wrote in 'Mein Kampf' that the art of propaganda lies in understanding the emotional ideas of the great masses and finding a psychologically correct form in which to present them. The regime's technical insight about repetition has since been validated by cognitive science.
A 2015 study by Lisa Fazio and colleagues extended the original findings, demonstrating that the illusory truth effect persists even for statements that contradict knowledge participants already hold. Repetition can increase the perceived credibility of claims that people are equipped to recognise as false. This finding has sobering implications for fact-checking as a counter-propaganda strategy.
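The shape of the paradigm is simple enough to simulate. The Python sketch below is purely illustrative: the six-point scale, item count, fluency boost, and noise level are invented parameters rather than values from the 1977 study. It assumes only that prior exposure adds a small constant to a statement's perceived truth, which is enough to reproduce the qualitative pattern.

```python
import random
import statistics

random.seed(42)

N_ITEMS = 60          # statements per condition (invented)
BASE_MEAN = 3.5       # baseline truth rating on a 1-6 scale (invented)
FLUENCY_BOOST = 0.4   # assumed bump from prior exposure (invented)
NOISE_SD = 1.0        # rating noise (invented)

def rate(mean: float) -> float:
    """One truth rating: baseline plus Gaussian noise, clamped to the 1-6 scale."""
    return min(6.0, max(1.0, random.gauss(mean, NOISE_SD)))

# Novel statements are rated from the baseline alone; previously seen
# statements get the fluency boost whether or not they are actually true.
novel = [rate(BASE_MEAN) for _ in range(N_ITEMS)]
repeated = [rate(BASE_MEAN + FLUENCY_BOOST) for _ in range(N_ITEMS)]

print(f"mean truth rating, novel:    {statistics.mean(novel):.2f}")
print(f"mean truth rating, repeated: {statistics.mean(repeated):.2f}")
```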
Emotional Architecture: Fear, Disgust, and Group Identity
Propaganda's primary mechanism is emotional rather than rational. This is not accidental or merely cynical. It reflects an accurate understanding of how human persuasion actually works. Jonathan Haidt's 'social intuitionist model', developed in a 2001 paper and extended in 'The Righteous Mind' (2012), argues that moral and political judgments are primarily intuitive and emotional, with rational argument serving mainly to justify, after the fact, conclusions already reached.
Fear is the most reliably effective emotional mechanism in political propaganda because it activates threat-detection systems that evolved to prioritise immediate action over deliberative analysis. Studies have found that fear appeals are particularly effective when they combine a vivid threat with a specific, feasible action that can be taken to mitigate it, since pure threat without efficacy produces paralysis rather than mobilisation.
Disgust is an especially potent tool in propaganda targeting out-groups. Research by Jonathan Haidt and colleagues has documented the 'moral disgust' system, which evolved to protect against contamination but extends readily to social and moral domains. Political propaganda that frames out-groups in contamination terms, as 'vermin', 'parasites', 'filth', or 'disease', activates this system. Historical analyses of genocidal regimes have consistently documented dehumanising disease and vermin metaphors in their propaganda, and experimental work by Nour Kteily and colleagues has found that dehumanisation is a measurable predictor of support for discriminatory policies.
Pride and in-group solidarity are the positive emotional complements to fear and disgust. Effective propaganda does not merely demonise the out-group; it celebrates the in-group's history, virtue, and unity. The combination creates a comprehensive emotional package: threat, dignity, solidarity, and a clear enemy.
Historical Examples: From Total War to Social Media
World War I produced what remains perhaps the most intensively studied propaganda campaign in history. The British effort, coordinated initially by the War Propaganda Bureau under Charles Masterman, established many techniques that would be refined over the following century: the use of atrocity stories (some real, some exaggerated, some fabricated) to mobilise public sentiment, the cultivation of journalists and cultural figures as unwitting amplifiers, and the targeting of neutral-country audiences, especially the United States, with carefully crafted material designed to resemble independent reporting.
Edward Bernays, often described as the father of public relations, worked with the Wilson administration's Committee on Public Information during World War I and drew explicit lessons from that experience. His 1928 book 'Propaganda' argued openly that the manipulation of public opinion was both inevitable and, if done by the right people toward the right ends, desirable in a democracy too complex for its citizens to comprehend. His techniques for what he would later call 'the engineering of consent' included the creation of front organisations to launder corporate or political messaging as independent civil society opinion.
The digital era has not invented new propaganda techniques. It has removed the resource constraints that previously limited who could deploy them. A 2017 Oxford Internet Institute study of organised social media manipulation found that coordinated inauthentic behaviour, the use of bots, fake accounts, and automated amplification, could shift the apparent social consensus on political questions within hours. The Internet Research Agency's operations in the 2016 US election were not distinguished by the sophistication of their messaging, which was often crude, but by the industrial scale of production and the precision of targeting.
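The arithmetic behind manufactured consensus is easy to make concrete. The toy model below uses invented figures, not data from the Oxford study; it shows only how a small pool of high-volume automated accounts can make an evenly split opinion look like an overwhelming majority in raw post counts.

```python
# Toy model of consensus manufactured by coordinated amplification.
# Every figure here is invented for illustration.

ORGANIC_USERS = 10_000   # genuine accounts, split 50/50 on an issue
POSTS_PER_USER = 1       # each organic account posts once
BOTS = 300               # coordinated accounts all pushing side A
POSTS_PER_BOT = 50       # automated accounts post at industrial rates

posts_a = ORGANIC_USERS // 2 * POSTS_PER_USER + BOTS * POSTS_PER_BOT
posts_b = ORGANIC_USERS // 2 * POSTS_PER_USER
total_posts = posts_a + posts_b

print("actual opinion split: 50.0% favour A")
print(f"apparent post share:  {100 * posts_a / total_posts:.1f}% favour A")
# 300 bots among 10,300 visible accounts (about 3%) are enough to turn
# a genuine 50/50 split into an apparent 80/20 consensus.
```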
Media Literacy: Promises and Limitations
The standard response to propaganda is media literacy education: teach people to evaluate sources, recognise manipulation techniques, and think critically about the information they consume. The evidence for this approach is more complicated than its advocates tend to acknowledge.
Laboratory studies consistently find that media literacy training improves performance on tests of misinformation detection in controlled settings. Real-world transfer is weaker. Several mechanisms explain this gap. First, media consumption in practice is fast, social, and emotionally engaged, conditions under which analytical skills developed in classroom settings do not automatically activate. Second, media literacy education often focuses on evaluating individual claims, while propaganda operates primarily at the level of emotional framing and social identity, which individual fact-checking does not address.
Third, and most importantly, media literacy is not ideologically neutral in practice. Studies have found that high scores on media literacy measures are associated with greater confidence in people's existing political views rather than greater accuracy. The skills of source evaluation and argument analysis can be and routinely are deployed in the service of motivated reasoning, providing more sophisticated post-hoc rationalisations for beliefs already held for emotional and social reasons.
This does not mean media literacy is worthless. It means that media literacy education needs to be explicit about emotional mechanisms, social identity dynamics, and the specific cognitive vulnerabilities that propaganda exploits, rather than focusing primarily on logical and factual evaluation skills. Work by the Stanford History Education Group and others on 'civic online reasoning' has moved in this direction, emphasising the specific practices that professional fact-checkers use, including lateral reading, which involves immediately checking external sources rather than evaluating a site's own claims about itself.
Structural Solutions: When Architecture Matters More Than Education
Research increasingly suggests that structural interventions in information environments may be more effective than individual-level education. Studies by David Rand and colleagues at MIT have found that adding simple accuracy prompts, asking users whether a headline is accurate before sharing, significantly reduces sharing of misinformation without reducing sharing overall. The mechanism is attention: most misinformation sharing is inattentive rather than deliberate, and a simple prompt shifts processing from automatic to reflective.
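A stylised model of that attention mechanism, with both attention probabilities invented rather than taken from the MIT studies, illustrates how a prompt can cut misinformation sharing while leaving overall sharing volume unchanged:

```python
import random

random.seed(1)

def share(is_accurate: bool, prompted: bool) -> bool:
    """Toy sharing decision. A user who attends to accuracy shares only
    accurate items; an inattentive user shares on engagement alone.
    Both attention probabilities are invented for illustration."""
    p_attend = 0.6 if prompted else 0.2
    if random.random() < p_attend:
        return is_accurate            # reflective path: accuracy matters
    return random.random() < 0.5      # automatic path: engagement coin-flip

def rates(prompted: bool, n: int = 100_000) -> tuple[float, float]:
    true_rate = sum(share(True, prompted) for _ in range(n)) / n
    false_rate = sum(share(False, prompted) for _ in range(n)) / n
    return true_rate, false_rate

for prompted in (False, True):
    t, f = rates(prompted)
    print(f"prompted={prompted}: true shared {t:.2f}, false shared {f:.2f}, "
          f"overall {(t + f) / 2:.2f}")
```

In this toy model the prompt redistributes sharing from false items to true ones without changing the total rate, which matches the qualitative pattern reported in the accuracy-prompt experiments.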
Source labelling, friction in sharing flows, and algorithmic reweighting toward authoritative sources have shown measurable effects in platform studies. The challenge is that these interventions directly conflict with engagement-maximisation objectives. Emotionally provocative content generates higher engagement metrics. Platforms that introduce friction into sharing flows or downweight outrage-generating content accept a commercial cost. In the absence of regulatory frameworks that create costs for amplifying misinformation, commercial incentives remain structurally aligned with propaganda's mechanisms.
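What 'algorithmic reweighting toward authoritative sources' might look like can be sketched in a few lines. This is not any platform's actual ranking function; the fields, scores, and exponent are all hypothetical. The sketch simply makes the trade-off concrete: as the credibility weight rises, the engagement-maximising item loses the top slot, which is exactly the commercial cost described above.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_engagement: float   # output of an engagement model (hypothetical)
    source_credibility: float     # 0.0-1.0 external source rating (hypothetical)

def rank_score(post: Post, credibility_exponent: float) -> float:
    """Engagement score reweighted by source credibility.
    An exponent of 0 reproduces pure engagement ranking; higher values
    trade engagement for authoritativeness."""
    return post.predicted_engagement * post.source_credibility ** credibility_exponent

feed = [
    Post("outrage-bait from a low-credibility site", 9.0, 0.2),
    Post("sober report from an authoritative outlet", 4.0, 0.9),
]

for exponent in (0.0, 1.0, 2.0):
    top = max(feed, key=lambda p: rank_score(p, exponent))
    print(f"credibility exponent {exponent}: top of feed -> {top.text}")
```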
The Counter-Narrative Problem
One of the most consistent findings in propaganda research is the difficulty of effective counter-narrative work. When a propaganda message has achieved significant reach and emotional resonance, simply providing a factually accurate alternative narrative tends to be less effective than its practitioners expect. Several mechanisms contribute to this.
First, the illusory truth effect works against corrections as much as it works for original false claims. Repeated corrections that mention the original false claim in the process of debunking it can inadvertently increase the false claim's familiarity and therefore its perceived credibility. Early research on what was dubbed 'the backfire effect' suggested this could make corrections actively counterproductive, though subsequent studies and meta-analyses, including work by Ullrich Ecker and colleagues at the University of Western Australia, have found the effect is not as robust or universal as originally reported. The more reliable finding is that corrections work better when they avoid repeating the false claim prominently, when they provide an alternative causal explanation for the events the propaganda explained, and when they are delivered by trusted in-group sources rather than external experts perceived as ideologically hostile.
Second, and more fundamentally, counter-narrative work that focuses on factual correction misunderstands the primary function of propaganda. Propaganda is not primarily in the business of creating false beliefs. It is in the business of creating and reinforcing emotional and social commitments. Correcting the facts leaves the emotional and social structures intact. A supporter of an authoritarian political movement who encounters a factual correction to a specific claim does not typically abandon the movement; they incorporate the correction as evidence that the mainstream is nitpicking details while missing the larger truth the movement represents.
Counter-narrative work that addresses emotional and social dimensions, by offering belonging, dignity, and an engaging positive identity rather than simply debunking, shows substantially better outcomes in rigorous research. This is one reason why community-based interventions and peer messenger approaches outperform expert-led fact-checking in most randomised controlled trials.
Practical Takeaways
Developing resistance to propaganda is not primarily about becoming more sceptical of specific claims. It is about becoming aware of emotional activation as a signal to pause rather than to respond. When a piece of content makes you angry, afraid, or contemptuous of a specific group, that emotional activation is worth noticing. It does not mean the content is wrong, but it is the precise state in which critical evaluation is hardest and most necessary.
Lateral reading, the habit of immediately searching for what independent sources say about a claim or its source rather than evaluating the claim on its own terms, is the single most evidence-supported individual practice for improving information evaluation. It works because it replaces internal evaluation, which is vulnerable to motivated reasoning, with external calibration.
Recognising the social and emotional dimensions of propaganda is as important as developing factual evaluation skills. The strongest protection against propaganda's emotional architecture is not cynicism, which tends to collapse into nihilism and leaves people equally sceptical of true and false claims. It is genuine engagement with the question of what legitimate evidence for a claim looks like, combined with awareness of the conditions under which emotional activation is most likely to distort evaluation. Asking 'what would it take to change my mind about this?' is a more generative diagnostic question than 'is this true or false?', because it forces engagement with the epistemic standards that distinguish belief from evidence.
References
- Ellul, J. (1965). 'Propaganda: The Formation of Men's Attitudes'. Knopf. (Original: 'Propagandes', 1962)
- Hasher, L., Goldstein, D., & Toppino, T. (1977). Frequency and the conference of referential validity. 'Journal of Verbal Learning and Verbal Behavior', 16(1), 107-112.
- Bernays, E. L. (1928). 'Propaganda'. Horace Liveright.
- Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. 'Psychological Review', 108(4), 814-834.
- Haidt, J. (2012). 'The Righteous Mind'. Pantheon Books.
- Fazio, L. K., Brashier, N. M., Payne, B. K., & Marsh, E. J. (2015). Knowledge does not protect against illusory truth. 'Journal of Experimental Psychology: General', 144(5), 993-1002.
- Kteily, N., Bruneau, E., Waytz, A., & Cotterill, S. (2015). The ascent of man: Theoretical and empirical evidence for blatant dehumanization. 'Journal of Personality and Social Psychology', 109(5), 901-931.
- Bago, B., Rand, D. G., & Pennycook, G. (2020). Fake news, fast and slow: Deliberation reduces belief in false (but not true) news headlines. 'Journal of Experimental Psychology: General', 149(8), 1608-1613.
- Howard, P. N., & Bradshaw, S. (2017). Troops, trolls and troublemakers: A global inventory of organized social media manipulation. Oxford Internet Institute Working Paper.
- Tajfel, H., & Turner, J. C. (1979). An integrative theory of intergroup conflict. In W. G. Austin & S. Worchel (Eds.), 'The Social Psychology of Intergroup Relations', 33-47.
- Vraga, E. K., & Bode, L. (2017). Using expert sources to correct health misinformation in social media. 'Science Communication', 39(5), 621-645.
- McGinniss, J. (1969). 'The Selling of the President 1968'. Trident Press.
Frequently Asked Questions
Is propaganda the same as lying?
Jacques Ellul, who wrote the defining sociological study of propaganda in 1962, argued explicitly that the most effective propaganda is largely true. Propaganda selects and frames facts rather than fabricating them. By choosing which true facts to emphasise, which to omit, and what emotional context to place them in, propagandists can create a profoundly misleading picture without stating anything technically false. This is why fact-checking, while valuable, is an insufficient counter to propaganda: correcting individual false claims does not necessarily disrupt the emotional and social framework in which those claims are embedded.
What is the illusory truth effect?
The illusory truth effect, first documented by Lynn Hasher, David Goldstein, and Thomas Toppino in 1977, describes the finding that people rate repeated statements as more likely to be true than novel statements, regardless of actual truth value. In their original study, participants rated the truth of trivia statements over multiple sessions. Items that had been presented in earlier sessions were rated as more credible even when participants did not consciously remember having seen them before. The mechanism appears to be processing fluency: familiar statements feel easier to process, and that ease is misattributed to truthfulness. This provides a direct cognitive basis for the propaganda principle of repetition.
How does 'us vs them' framing work psychologically?
Social identity theory, developed by Henri Tajfel and John Turner in the 1970s, shows that group membership activates powerful motivations for in-group favouritism and out-group derogation. Propaganda exploits this by constructing clear, emotionally loaded group boundaries. Once an audience accepts a particular group identity and the associated out-group as threatening, their processing of new information is systematically biased toward confirming that framework. Counter-evidence is attributed to enemy manipulation. Favourable events are seen as proof of in-group virtue. The framing creates a self-sustaining epistemic filter that is difficult to disrupt from outside.
Does media literacy education work?
The evidence is mixed and context-dependent. Studies consistently find that media literacy training can improve the detection of specific manipulative techniques in laboratory settings. However, real-world studies show weaker effects, partly because media literacy skills do not automatically transfer from training contexts to real-time media consumption, which is typically fast, emotionally engaged, and social. Research by Emily Vraga and Leticia Bode (2017) found that correction of misinformation was more effective when delivered by trusted sources in the immediate context of the false claim. Structural interventions, such as default source labelling and algorithmic friction for sensationalist content, show more robust effects than education alone.
What makes modern digital propaganda different from historical propaganda?
Three structural differences stand out. First, the cost of production has collapsed: creating professional-quality video, audio, and written propaganda now requires minimal resources. Second, distribution is algorithmic: platforms optimise for engagement rather than accuracy, and outrage-provoking content generates stronger engagement signals, giving emotionally manipulative content a structural advantage. Third, targeting is personalised: historical propaganda necessarily used mass messaging, while digital advertising infrastructure allows propaganda to be precisely targeted to individuals identified as susceptible to specific emotional appeals. This combination of low cost, algorithmic amplification, and precision targeting has significantly changed the operational landscape of influence operations.