John Stuart Mill sat down to write "On Liberty" in the winter of 1854, at the urging of his wife Harriet Taylor, whose influence on the work he would later describe as so profound that it was as much hers as his. The London they inhabited was a city of immense social conformity — a Victorian culture in which dissent from religious orthodoxy could end a career, in which respectable opinion converged on a narrow set of acceptable positions, and in which the pressures of social disapproval functioned as a censorship mechanism every bit as effective as law. Mill found this conformity more dangerous than explicit government prohibition, because its victims rarely knew they were being suppressed. A government ban on a book at least announced itself. Social pressure operated invisibly, shaping thought before it became speech.
The argument Mill made in "On Liberty," published in 1859, was radical for its time and remains contested today: that even false speech must be protected, because we can never be certain we are right. Even the most obviously wrong opinion, Mill argued, might contain a fragment of truth we have overlooked. Even a correct opinion, unchallenged, becomes a dead dogma — held as a prejudice rather than understood as a reasoned conclusion. The freedom to think and speak is not merely a political right; it is an epistemological necessity. We cannot know what we believe if we have never had to defend it against the best contrary argument.
Mill's arguments were written in a very specific historical moment, about a very specific kind of social conformity. What neither he nor Harriet Taylor could have anticipated was that the same arguments would be deployed, one hundred and sixty years later, to defend Nazis marching through a Jewish neighborhood, to fight content moderation algorithms, to defend Holocaust denial on social media platforms, and to oppose campus speech codes. The principle Mill articulated has proven more durable than the specific context it emerged from — and that durability is itself a measure of both the argument's strength and its limits. Free speech is one of liberalism's most contested principles not because it is poorly argued, but because it sits in genuine tension with other values — dignity, equality, safety — that are also well argued.
"If all mankind minus one were of one opinion, and only one person were of the contrary opinion, mankind would be no more justified in silencing that one person, than he, if he had the power, would be justified in silencing mankind." — John Stuart Mill, On Liberty (1859)
| Free Speech Doctrine | Core Claim | Associated Thinker |
|---|---|---|
| Marketplace of ideas | Truth emerges from open competition of ideas | Mill; US First Amendment tradition |
| Autonomy argument | Expression is essential to self-realization | Kant; Scanlon |
| Democratic self-governance | Free speech necessary for democratic deliberation | Meiklejohn |
| Harm principle | Only speech causing clear harm may be restricted | Mill's On Liberty |
| Hate speech restrictions | Harmful speech targeting groups may be limited | European human rights law; Waldron |
Key Definitions
Free speech: The principle that expression should not be subject to prior restraint or punishment by government or other authorities, subject to a narrow set of exceptions for speech that causes demonstrable, direct harm to others.
First Amendment: The provision of the US Constitution, incorporated against state and local governments through the Fourteenth Amendment, prohibiting government restriction of speech, press, assembly, and religion based on content or viewpoint.
Harm principle: Mill's principle that the only legitimate justification for limiting liberty is to prevent harm to others. Central to liberal free speech theory, and contested because of the difficulty of defining "harm."
Marketplace of ideas: The metaphor, rooted in Justice Oliver Wendell Holmes's 1919 dissent in Abrams v. United States (Holmes spoke of "free trade in ideas"; the exact phrase entered Supreme Court opinions later), that truth is best found through open competition among ideas rather than through authoritative pronouncement.
Chilling effect: The inhibition of protected expression caused by laws or enforcement practices that are overbroad or vague, leading speakers to self-censor to avoid legal risk.
Hate speech: Expression that attacks or demeans people based on characteristics such as race, religion, ethnicity, gender, or sexual orientation. Not a recognized legal category in the US; explicitly regulated in most European democracies.
Content moderation: The practice by private platforms of reviewing and removing user-generated content that violates their terms of service or community standards.
Section 230: The provision of the US Communications Decency Act (1996) that immunizes internet platforms from liability for user-generated content and for good-faith moderation decisions.
Mill's Four Arguments for Free Speech
Mill's case for free speech in "On Liberty" rests on four distinct arguments that are worth separating, because they have different strengths and vulnerabilities.
The first is the fallibilist argument: the suppressed opinion might be true. History is full of positions that were once considered obviously false, dangerous, or heretical and are now recognized as correct. The Catholic Church suppressed heliocentrism. Doctors ridiculed Ignaz Semmelweis's claim that handwashing prevented infection. The converse failure is just as instructive: the legal case for segregation was once defended by respectable authorities as well-reasoned constitutional law. No authority — religious, governmental, or social — has a reliable track record of distinguishing correct heterodox opinion from incorrect heterodox opinion in advance. The only reliable process for sorting true from false beliefs is open contest, which requires that false beliefs be permitted to compete.
The second is the partial truth argument: even a false opinion may contain partial truth. Few contested questions have one side entirely right and the other entirely wrong. More commonly, each side captures something real. The suppression of one side does not merely eliminate error; it may eliminate the partial truth that side represents. Mill used the example of political controversies where both conservative and progressive positions capture genuine goods — stability and change, tradition and reform — whose balance cannot be struck if one voice is silenced.
The third is the dead dogma argument: true opinion, unchallenged, becomes meaningless. A person who has held a correct belief their whole life without ever being required to defend it against serious challenge does not, in any meaningful sense, understand why it is correct. They hold it as a prejudice, not a reasoned conclusion. Mill's concern was not just with the propagation of correct beliefs but with the quality of understanding — a culture in which correct opinions are officially endorsed and never challenged produces a kind of intellectual death.
The fourth is the vital understanding argument: the collision of ideas is necessary for any living, revisable comprehension of truth. Beliefs held in competition with their alternatives are held consciously; beliefs held in isolation are held reflexively. Mill was writing partly against the conformism of Victorian culture, which enforced agreement through social pressure rather than argument — which was, in his view, worse than official censorship precisely because its victims were unaware of their constraint.
The Marketplace of Ideas: From Philosophy to Law
Mill's philosophy entered American constitutional law through a famous dissent. In Abrams v. United States (1919), the Supreme Court upheld convictions under the Espionage Act for distributing pamphlets opposing US intervention in Russia. Justice Oliver Wendell Holmes, who had himself helped develop the "clear and present danger" test used to sustain such convictions, dissented, joined by Justice Brandeis. His dissent contains the sentence that would define American free speech jurisprudence for a century: "the best test of truth is the power of the thought to get itself accepted in the competition of the market."
The marketplace of ideas metaphor is evocative but imperfect. Real markets fail: they produce monopolies, externalities, information asymmetries, and outcomes that diverge from the public good. The metaphor implies that in an open contest of ideas, truth will win — but the evidence from disinformation research, from the history of propaganda, and from cognitive science about motivated reasoning suggests that the relationship between open competition and truth is considerably messier. False beliefs can spread faster than corrections; emotionally resonant misinformation can outcompete accurate but less vivid information. The marketplace of ideas metaphor tends to underweight the role of power, money, and institutional position in determining which ideas get amplified and which are suppressed through non-legal means.
Yet the alternative — giving authorities the power to determine which ideas are true enough to be permitted — has its own, more obvious problems. The track record of authorities deploying censorship to suppress inconvenient truth is much longer and more consistent than the track record of open debate producing bad outcomes that better censorship would have prevented.
First Amendment Doctrine: What the Law Actually Says
The American First Amendment framework is distinctive in the world for the breadth of protection it provides and the narrowness of the exceptions it recognizes. The Supreme Court has developed, through more than a century of decisions, a doctrine that is complex in its details but coherent in its logic.
The fundamental principle is that government may not restrict speech based on its content or viewpoint — may not prefer one idea over another. Content-based restrictions receive "strict scrutiny" — the most demanding constitutional standard — and are almost always struck down. Content-neutral restrictions (time, place, and manner regulations that apply to all speech equally) receive lower scrutiny and are more often upheld.
The exceptions to First Amendment protection have been carefully narrowed over time. "Incitement to imminent lawless action" — the Brandenburg standard established in Brandenburg v. Ohio (1969) — replaced the earlier, much broader "clear and present danger" test that had been used to suppress socialist and anarchist speech in the early twentieth century. Under Brandenburg, advocacy of illegal action is protected unless the speech is (1) directed to inciting imminent lawless action AND (2) likely to produce such action. Abstract advocacy of violence or revolution — "someday the revolution will come" — is protected. Only direct calls to immediate violence in circumstances where immediate violence is likely are not.
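The conjunctive structure of the Brandenburg test (advocacy loses protection only when both prongs are satisfied) can be made explicit in a toy sketch. This is illustration only, not legal analysis: incitement determinations are contextual judgments by courts, and the names below are hypothetical.

```python
# Toy encoding of the two-prong Brandenburg conjunction. A real incitement
# analysis is a contextual legal judgment, not a boolean function.
from dataclasses import dataclass

@dataclass
class Speech:
    directed_to_imminent_lawless_action: bool  # prong 1: directed to incite
    likely_to_produce_such_action: bool        # prong 2: likely to produce it

def protected_under_brandenburg(s: Speech) -> bool:
    """Advocacy is unprotected only if BOTH prongs are met."""
    unprotected = (s.directed_to_imminent_lawless_action
                   and s.likely_to_produce_such_action)
    return not unprotected

# Abstract advocacy ("someday the revolution will come"): protected,
# since neither prong is satisfied.
assert protected_under_brandenburg(Speech(False, False))

# Direction without likelihood (a lone speaker no one heeds): still protected.
assert protected_under_brandenburg(Speech(True, False))

# Both prongs met: unprotected.
assert not protected_under_brandenburg(Speech(True, True))
```

The conjunction is the doctrinal point: failing either prong, however inflammatory the rhetoric, leaves the speech protected.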
"True threats" — statements expressing serious intent to commit violence against a specific person or identifiable group — are not protected. The Supreme Court refined this in Counterman v. Colorado (2023), holding that to convict someone of making a true threat, prosecutors must show at least reckless disregard as to whether the communication would be perceived as threatening — not just that a reasonable person would find it threatening.
Defamation law, governed primarily by the Supreme Court's landmark decision in New York Times v. Sullivan (1964), requires public officials (and, under later decisions such as Curtis Publishing Co. v. Butts (1967), public figures generally) suing for defamation to prove "actual malice" — knowledge of falsity or reckless disregard for truth. This high standard reflects the Court's concern about defamation law's chilling effect on political commentary and press coverage of public officials. Private figures face a lower standard, but the framework still prioritizes speech protection over reputation protection in most cases.
The Absent Category: Hate Speech
Perhaps the most striking feature of American free speech law, from the perspective of comparative law, is the absence of a hate speech exception. In Matal v. Tam (2017), the Supreme Court unanimously struck down a provision of trademark law that prohibited registering "disparaging" trademarks, holding that the government may not regulate speech simply because it expresses ideas that offend. The government cannot, under current doctrine, prohibit speech that demeans or attacks people on the basis of race, religion, gender, or sexual orientation, no matter how vicious — unless that speech independently falls into one of the narrow prohibited categories (true threat, incitement to imminent lawless action, etc.).
This is not a universal liberal position. Most liberal democracies — including Germany, France, the United Kingdom, Canada, and all EU member states — prohibit some form of hate speech. The philosophical and practical arguments for doing so are not trivial.
The European Alternative: Dignity vs Liberty
The European approach to hate speech reflects a different weighting of constitutional values. Whereas American constitutional law places liberty of expression at the apex of its value hierarchy, European constitutional traditions — shaped by the experience of Nazi propaganda, genocide, and systematic dehumanization of minorities — treat human dignity as a constitutional value that may, in some circumstances, constrain free expression.
Jeremy Waldron's "The Harm in Hate Speech" (2012) provides the most sophisticated philosophical defense of hate speech regulation within a liberal framework. Waldron argues that hate speech is harmful not primarily because of its emotional impact on individual targets but because it undermines the "assurance" that members of targeted groups can reasonably expect — the publicly expressed recognition that they are full and equal members of the community entitled to the same dignity and treatment as anyone else. Hate speech poisons this assurance by broadcasting, publicly and repeatedly, that certain groups are less than fully human or do not deserve equal treatment. This harm is social rather than merely psychological: it is damage to the social environment in which targeted groups must live, work, and participate in democratic life.
Waldron's argument is explicitly comparative: he asks not whether we value free expression (of course we do) but whether hate speech's specific contribution to the marketplace of ideas justifies the dignitary harm it causes. His answer is that the incremental contribution of hate speech to genuine public discourse does not outweigh the harm it causes to the social standing of its targets.
The American counter-argument, in its strongest form, rests less on skepticism about the harm Waldron identifies and more on skepticism about the institutional capacity to regulate hate speech without the cure becoming worse than the disease. Who decides what speech is hateful enough to prohibit? Governments have consistently used broad anti-hate-speech powers against the speech of minority communities and political dissidents rather than against the powerful. In the United States specifically, given the history of government suppression of civil rights speech, labor organizing, and anti-war protest using content-based restrictions, the skepticism about empowering government to determine hateful content seems well-grounded.
Platform Content Moderation: The New Battleground
The First Amendment applies to governments, not private actors. Facebook, YouTube, X, and TikTok are private companies legally free to set their own content standards. Section 230 of the Communications Decency Act (1996) immunizes them from liability for user-generated content and for good-faith moderation decisions, creating a legal framework designed to encourage both the hosting of user content and its moderation.
But the scale at which these platforms operate has transformed the free speech debate in ways that formal legal frameworks do not fully capture. When Facebook has more than three billion users, when YouTube is the primary video platform for most of the world, when X is a significant venue for political speech, these platforms are not merely private companies setting house rules — they are the infrastructure of public discourse. Decisions about what speech is permitted on these platforms have effects on democratic deliberation that are comparable to, and in some cases exceed, the effects of government speech regulation.
Elon Musk's acquisition of Twitter, completed in October 2022, and its subsequent rebranding as X in 2023, was the most significant test of what happens when a major platform dramatically reduces content moderation. Musk reinstated many previously banned accounts, reduced the moderation workforce by approximately 80%, and positioned the changes as a free speech restoration. The documented effects included measurable increases in hate speech and harassment, withdrawal of major advertisers concerned about brand association, and reduced use by some journalist and activist communities. The platform did not collapse; nor did it become a reliable forum for high-quality political discourse. The experiment illustrated the real trade-offs in content moderation without resolving the normative debate about how those trade-offs should be struck.
Disinformation and the Limits of the Marketplace
COVID-19 misinformation, election denial, and anti-vaccine content created acute versions of a question that Mill's framework does not fully answer: what should be done about speech that is demonstrably, verifiably false and causes measurable harm by spreading through a networked information environment?
The marketplace of ideas metaphor assumes that false beliefs lose in open competition with true ones, and that the best response to bad speech is more speech — correction, rebuttal, argument. The empirical literature on misinformation's spread in digital environments is considerably less optimistic. Research by Soroush Vosoughi, Deb Roy, and Sinan Aral at MIT has documented that false news spreads faster and further on social media than accurate news, partly because novelty and emotional arousal drive sharing. Corrections reach fewer people than the original false information, spread more slowly, and have more modest effects on belief change. In this environment, the "more speech" remedy for disinformation is weaker than the traditional framework assumes.
The strongest form of the free speech argument can acknowledge this and still resist content moderation as a solution: the question is not whether disinformation is harmful, but whether the institutional mechanisms for suppressing it can reliably distinguish disinformation from inconvenient truth, and whether the costs of empowering those mechanisms exceed the costs of tolerating disinformation. The history of government and platform attempts to suppress "misinformation" includes genuine misinformation but also substantial suppression of correct heterodox claims that violated elite consensus — including, notably, early COVID-19 discussion of lab leak hypotheses that were subsequently deemed more plausible than initial official dismissals suggested.
Campus Speech and the Chilling Effect
The debate about campus speech in the United States has generated more heat than light, partly because it often conflates several distinct questions: what speakers should universities invite; what students may be required to hear; what constitutes protected speech under the First Amendment; and what creates a hostile educational environment under Title IX or Title VI. These are different questions with different answers.
Public universities are bound by the First Amendment and may not restrict speech based on viewpoint or content. Private universities are contractually bound by their own stated commitments to free inquiry and academic freedom, which vary considerably. The "campus free speech crisis" narrative — dominant in certain media environments — overstates both the prevalence and the severity of speech suppression, as documented in surveys of faculty and student self-censorship that, while showing some chilling effects, also show continued robust debate on most campuses.
The genuine tension is between viewpoint diversity — the Mill argument that students benefit from encountering serious versions of positions they disagree with — and concerns about the wellbeing of students from targeted communities who may experience certain speakers' presence as a statement that their equality is subject to debate. This tension is not resolvable by declaring one value supreme. Universities are educational institutions committed to free inquiry; they are also communities whose members have legitimate interests in not being systematically demeaned. The practical question of how to balance these is genuinely difficult and context-dependent.
The Limits of the Principle
Mill's free speech argument is strong but has limits that he did not fully acknowledge. His Victorian context was one in which suppressed speech was typically speech by dissenters against orthodoxy — heterodox religious views, political radicalism, unconventional moral positions. His intuition was that authorities would use censorship powers to suppress correct heterodox opinion. This was historically accurate.
But the architecture of information in the twenty-first century includes dynamics that Mill could not have anticipated. Algorithmic amplification on social media does not merely permit all speech to compete equally; it systematically amplifies emotionally resonant, divisive, and false content because such content drives engagement. The marketplace of ideas in a networked digital environment is not a level playing field among ideas; it is a market in which certain kinds of harmful speech have structural advantages. Treating algorithmic amplification and editorial suppression as equivalent threats to free expression — as some critics of content moderation do — conflates very different phenomena.
Catharine MacKinnon's argument about pornography, made in "Only Words" (1993), pointed to a different structural concern: that some speech does not merely advocate for the subordination of women but enacts it — constitutes the harm rather than causing it at a distance. This framework has been most influential in the regulation of sexual harassment, where courts have recognized that pervasive hostile environment harassment constitutes actionable discrimination even when no single statement would be prohibited as a stand-alone matter. The speech is the harm.
Free speech is a principle that captures something genuinely important about the conditions for democratic self-governance, individual autonomy, and the pursuit of knowledge. It is not a principle that answers every hard case, because the cases that are hard are hard precisely because real values are in genuine conflict — because the expression that must be limited if dignity is to be protected is the same expression that must be permitted if liberty is to be protected. The work of free speech theory is not to declare one value supreme but to think carefully about when and how the genuine tension between them should be resolved.
For related analysis of the justice frameworks that underlie rights debates, see What Is Justice. For how false information spreads in environments where free speech principles constrain intervention, see Why Disinformation Spreads. For the broader analysis of how power shapes whose speech gets heard, see What Is Power.
References
- Mill, John Stuart. On Liberty. John W. Parker and Son, 1859. Available at: https://www.gutenberg.org/ebooks/34901
- Holmes, Oliver Wendell. Dissent in Abrams v. United States, 250 U.S. 616 (1919).
- Waldron, Jeremy. The Harm in Hate Speech. Harvard University Press, 2012.
- MacKinnon, Catharine A. Only Words. Harvard University Press, 1993.
- Brandenburg v. Ohio, 395 U.S. 444 (1969). https://supreme.justia.com/cases/federal/us/395/444/
- New York Times Co. v. Sullivan, 376 U.S. 254 (1964). https://supreme.justia.com/cases/federal/us/376/254/
- Virginia v. Black, 538 U.S. 343 (2003). https://supreme.justia.com/cases/federal/us/538/343/
- Matal v. Tam, 582 U.S. 218 (2017). https://supreme.justia.com/cases/federal/us/582/218/
- Counterman v. Colorado, 600 U.S. 66 (2023). https://supreme.justia.com/cases/federal/us/600/22-138/
- Sunstein, Cass R. #Republic: Divided Democracy in the Age of Social Media. Princeton University Press, 2017.
- Vosoughi, Soroush, Deb Roy, and Sinan Aral. "The Spread of True and False News Online." Science 359(6380): 1146-1151, 2018. https://doi.org/10.1126/science.aap9559
- Post, Robert. Democracy, Expertise, and Academic Freedom: A First Amendment Jurisprudence for the Modern State. Yale University Press, 2012.