In the spring of 2003, a photographer named Kenneth Adelman was doing unglamorous but important work: flying along the California coastline photographing its cliffs, beaches, and properties for the California Coastal Records Project, an environmental archive documenting erosion. He uploaded the resulting 12,000 images to a public website. The archive was used by scientists, researchers, and occasionally curious members of the public.

One photograph -- number 3850 in the archive -- showed a large cliffside mansion in Malibu. Before 2003 ended, that photograph would be seen by approximately 420,000 people.

Not because it was interesting. Because Barbra Streisand sued to have it removed.

The Incident That Named a Phenomenon

Streisand's legal team filed a $50 million lawsuit against Adelman and the Coastal Records Project, demanding removal of the image on privacy grounds. The suit was reported by journalists and quickly circulated online. People who had never heard of the Coastal Records Project -- and who had no particular interest in aerial photographs of California -- suddenly wanted to see exactly what Streisand did not want them to see.

Before the lawsuit, the photograph had been downloaded exactly six times. Two of those downloads were by Streisand's own lawyers.

The court dismissed the lawsuit and ordered Streisand to pay Adelman's legal fees. The photograph remained. And in trying to make it invisible, Streisand had made it one of the most viewed images on the internet.

Two years later, technology journalist Mike Masnick wrote about a similar incident -- a beach resort that had threatened legal action over a niche website's photograph of its urinals -- and reached for the most apt recent example of the dynamic. He called it the Streisand Effect, and the name stuck. The coinage, which appeared on his blog Techdirt in January 2005, was quickly adopted by technology journalists and legal scholars as the most economical name for a phenomenon that kept recurring with remarkable consistency.

What makes the Streisand Effect more than a curiosity is how reliably it repeats. The specific technology changes -- lawsuits, DMCA notices, super-injunctions, government blocking orders -- but the underlying dynamic is structurally identical each time. A powerful actor attempts to use institutional force to remove embarrassing or damaging information. The attempt is more visible than the information itself. The information spreads to an audience the suppressor could never have reached.

Defining the Streisand Effect

The Streisand Effect is the phenomenon in which attempts to suppress, hide, censor, or remove information cause that information to spread far more widely than it would have if left alone. The suppression attempt itself becomes the story, drawing attention to the very content the suppressor wanted buried.

The effect has several consistent features:

The suppressor has resources: The Streisand Effect typically involves a powerful actor -- a corporation, a wealthy individual, a government, a public figure -- using legal, technical, or institutional means to remove content. The asymmetry of power makes the suppression attempt itself newsworthy.

The information was previously obscure: If the information was already widely known, suppression cannot amplify it much. The classic Streisand Effect amplifies information that would have remained obscure without the intervention.

The internet provides amplification: Pre-internet, a lawsuit to suppress a newspaper story might succeed in limiting distribution. In the networked environment, a news story about a suppression attempt is itself a story, and it travels to audiences the suppressor cannot reach.

The content is truthful or at least publicly verifiable: Streisand Effects are most dramatic when suppression targets factual information that cannot be retroactively changed, unlike defamatory content that might be corrected.

The suppressor underestimates network speed: Legal systems operate on timescales of weeks, months, and years. Information in a networked environment can replicate and distribute globally within hours. By the time a legal order is issued, the information is already beyond reach.

The Psychology: Why Suppression Creates Desire

The Streisand Effect is not simply irony. It has a robust psychological mechanism: psychological reactance.

In 1966, psychologist Jack Brehm developed reactance theory to explain a consistent observation: when people believe their freedom to access information, make choices, or engage in behavior is being threatened or eliminated, they experience a motivational state characterized by an increased desire for the threatened freedom. Brehm's original framework, published as A Theory of Psychological Reactance, proposed that people place value on having behavioral freedoms, and that any threat to those freedoms triggers a specific motivational state -- reactance -- aimed at restoring them.

The classic demonstration involved children who were shown two identical toys and told one was unavailable. The children consistently rated the unavailable toy as more desirable than before the restriction was imposed -- not because the toy had changed, but because access to it had been restricted.

This mechanism generalizes far beyond toys. When a government bans a book, sales increase. When a health authority recommends against a supplement, demand often spikes. When a company tries to remove an article, the article gets shared more widely. The prohibition signal is, counterintuitively, an advertisement.

In a foundational review of reactance research, Miron and Brehm (2006) found the effect replicated across dozens of experimental contexts, from information access to consumer choice to health messaging. Notably, reactance tends to be stronger when the threatened freedom was previously available and when the restriction comes from an external authority rather than a self-imposed constraint. Both conditions are precisely met in the archetypal Streisand Effect scenario: the information was publicly available (at least in principle), and the restriction comes from an institution wielding power over an individual.

A related psychological mechanism is the scarcity principle, studied extensively by Robert Cialdini in Influence: The Psychology of Persuasion (1984). Cialdini documented that items and information perceived as scarce or difficult to obtain are systematically valued more highly than equivalent items that are freely available. Restrictions on information access trigger both reactance (a motivational response to restore freedom) and scarcity perception (an upward revision of the information's perceived value). The two mechanisms compound each other.

Three Compounding Factors

Reactance is the foundation, but three additional factors amplify it in the internet context:

Curiosity about forbidden information: Restriction signals that something important is being hidden. In a world of freely available information, the thing that someone is willing to sue to suppress must be significant. This inference -- which is not always correct but is not unreasonable -- increases the perceived value of the suppressed content.

The news value of the suppression attempt itself: Journalists are professionally interested in powerful actors attempting to prevent the public from accessing information. The lawsuit or takedown notice is itself a story, independent of the underlying content. This meta-story often reaches far larger audiences than the original content would have.

Network amplification: Each person who shares the story of the suppression attempt reaches their network, who may share further. This cascade effect is asymmetric: once information begins to spread, suppression efforts typically cannot keep pace.

A fourth factor, less often discussed, is motivated sharing. When content is under threat of suppression, individuals who might not otherwise share it are motivated to do so as a political or principled act -- an assertion of the right to access information. The suppression attempt recruits a distributed coalition of people who feel invested in the principle of information freedom, independent of any interest in the specific content.
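The cascade dynamic described above can be sketched as a toy branching-process simulation. This is purely illustrative: the function name and every parameter (reach per copy, reshare probability, takedowns per step) are arbitrary assumptions, not empirical estimates. The point it demonstrates is structural -- once new copies appear faster than a fixed takedown rate can remove them, suppression loses the race.

```python
import random

def simulate_cascade(steps=10, reach=20, reshare_prob=0.3,
                     removals_per_step=5, seed=42):
    """Toy model of a sharing cascade racing a takedown effort.

    Each live copy of the content is seen by `reach` people per step;
    each viewer independently reshares with probability `reshare_prob`,
    creating a new copy. A takedown effort removes a fixed number of
    copies per step. All parameters are illustrative assumptions.
    """
    random.seed(seed)
    live_copies = 1       # copies currently accessible
    total_exposed = 0     # cumulative views across all copies
    for _ in range(steps):
        new_copies = 0
        for _ in range(live_copies):
            total_exposed += reach
            # count viewers of this copy who reshare it
            new_copies += sum(random.random() < reshare_prob
                              for _ in range(reach))
        # takedowns remove a fixed number of copies per step
        live_copies = max(0, live_copies + new_copies - removals_per_step)
        if live_copies == 0:
            break
    return total_exposed, live_copies

# With resharing, exposure grows faster than takedowns can contain it;
# with resharing disabled, the takedown effort wins immediately.
exposed, remaining = simulate_cascade()
print("cascade:", exposed, "exposed,", remaining, "copies live")
print("no resharing:", simulate_cascade(reshare_prob=0.0))
```

Under these toy parameters each copy spawns about six new copies per step against five removals, so the cascade escapes containment; setting `reshare_prob` to zero lets the takedown rate win in a single step.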

Notable Examples

Carter-Ruck and the Trafigura Affair (2009)

In October 2009, the British law firm Carter-Ruck obtained a super-injunction -- a court order preventing not just reporting on a story but preventing any reporting on the existence of the injunction itself -- on behalf of the commodities trading company Trafigura. The injunction related to reporting about toxic waste dumping in West Africa, specifically the Probo Koala incident in which waste dumped in Abidjan, Ivory Coast, in 2006 was alleged to have caused widespread illness.

The Guardian newspaper published a cryptic item noting that a parliamentary question had been suppressed by a legal order but could not, by order of the court, reveal the name of the company involved.

Within hours, the name "Carter-Ruck" and then "Trafigura" were trending on Twitter. Internet users collectively identified the suppressed party and distributed the information the injunction was designed to prevent. The Guardian was released from the injunction the following day. The story received global coverage it would never have attracted absent the suppression attempt, and the episode became a case study in legal overreach in the digital age, taught in media law courses for years afterward.

The Trafigura case illustrated something specific about the digital age: a legal system designed to suppress information in a pre-internet era, when the number of distribution points was limited, cannot suppress information in a networked world where millions of individuals are potential publishers.

Sony vs. George Hotz (2011)

In 2010, electronics hacker George Hotz published a jailbreak for the PlayStation 3, enabling users to run unauthorized software. The exploit was notable among hobbyists and circulated in technical communities, but it was not mainstream news. Sony filed a lawsuit against him, including a demand for the IP addresses of everyone who had visited his website -- a demand that generated significant public outrage about overreach.

The lawsuit prompted the hacker collective Anonymous to launch a sustained series of cyberattacks against Sony's online services, including the PlayStation Network. The PSN was taken offline for 23 days, affecting approximately 77 million user accounts and costing Sony an estimated $171 million in lost revenue, remediation costs, and regulatory penalties. The case settled out of court, with Hotz agreeing not to circumvent Sony's security measures. An exploit that had drawn modest attention in hobbyist communities became a global story, and the lawsuit provoked a response that caused far more damage than the original hack -- a quantifiable Streisand Effect with a concrete price tag.

The Right to Be Forgotten and Search Results

The European Union's right to be forgotten principle, established in the landmark Google Spain SL v. Agencia Espanola de Proteccion de Datos ruling by the EU Court of Justice in 2014 and codified in Article 17 of the GDPR, gives individuals the right to request removal of certain links from search results. The right is intended to allow people to escape the indefinite persistence of embarrassing or outdated information.

The unintended consequence is a recurring miniature Streisand Effect. When these requests are reported -- and journalists covering the GDPR routinely report on notable removal requests -- they frequently draw more attention to the suppressed information than the original search result would have. The meta-story about what is being suppressed can reach larger audiences than the suppressed content itself. Researchers at Harvard's Berkman Klein Center for Internet and Society have documented cases where reporting on right-to-be-forgotten requests substantially increased search interest in the very information the requester sought to bury.

Government Suppression and WikiLeaks (2010)

The US government's reaction to WikiLeaks' publication of classified diplomatic cables beginning in November 2010 became a sustained multi-year demonstration of the Streisand Effect at geopolitical scale. Government pressure caused Amazon to remove WikiLeaks from its hosting services, Visa and Mastercard to suspend payment processing, and PayPal to freeze accounts. Each of these actions generated international news coverage, political debate, and motivated a global coalition of supporters to mirror the WikiLeaks content across thousands of servers, in dozens of jurisdictions, far beyond any practical government reach. The cables were downloaded and redistributed more widely as a direct consequence of the suppression campaign than they would have been without it.

DMCA Takedowns as Recommendation Engine

At a lower but more quotidian level, DMCA takedown notices for online content -- music videos, film clips, television segments -- have become so consistently associated with subsequent viewership spikes that some media analysts track takedowns as a leading indicator of attention. The mechanism is well-established: a takedown notice generates news coverage among technology and media journalists, triggers social media sharing of mirrors, and creates the curiosity signal that drives search. The DMCA takedown process has become so familiar that many internet users treat a takedown notice as a recommendation.

Year | Actor | Action | Result
2003 | Barbra Streisand | Sued to remove coastal photograph | ~420,000 views vs. 6 before lawsuit
2006/2009 | Trafigura / Carter-Ruck | Super-injunction on toxic waste story | Trended on Twitter; global coverage
2010 | US government | Pressured removal of WikiLeaks content | Content mirrored across thousands of servers
2011 | Sony | Sued PS3 hacker George Hotz | 23-day PSN outage; ~$171M in damages
2014 | Various EU individuals | Right-to-be-forgotten requests | Reporting on requests drove searches
2019 | Multiple corporations | DMCA video takedowns | Spike in views; mirrors proliferate

The Organizational Failure That Produces Streisand Effects

The Streisand Effect is not only a psychological phenomenon -- it is also an organizational failure. Specifically, it arises from an organization's inability to model how information environments actually work.

Organizations that successfully avoid Streisand Effects tend to have at least one person in the decision chain who can ask a specific question before any suppression action is taken: "Is this information currently obscure? If we draw attention to it, what is the realistic distribution of outcomes?" Organizations that produce Streisand Effects tend to operate on the intuitive model that more force equals more control -- that if information is causing harm, applying legal or technical force to remove it is the correct response.

This model was reasonably accurate in pre-internet media environments. A large corporation could suppress a small newspaper story through legal pressure, economic leverage, or simple speed advantage. The corporation controlled more distribution channels than the newspaper and could contain the spread.

The networked internet broke this model in two ways. First, it created an enormous number of redundant distribution points -- any individual with a social media account is a potential distributor. Second, it created a category of meta-information -- information about the suppression attempt itself -- that has news value entirely independent of the underlying content. The Streisand Effect is driven not only by people seeking the suppressed content but by people sharing the story of the suppression.

Andrew Chadwick, a professor of political communication at Loughborough University, analyzed this dynamic in The Hybrid Media System (2013), arguing that digital media environments are characterized by what he calls "hybridity" -- the interaction of fast-moving digital information flows with slower but more institutionally powerful traditional media. Suppression attempts that might succeed in one part of this hybrid system (removing content from a particular platform) can fail because the action itself is amplified by other parts of the system (news coverage, social media sharing of the suppression story).

Measuring the Effect

Quantifying Streisand Effects is methodologically challenging because you need a counterfactual: how much attention would the suppressed information have received without the suppression attempt? This is inherently unknowable with precision.

However, several researchers have attempted systematic measurement using natural experiments -- cases where similar information was suppressed in some contexts and not others.
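A first-order way to operationalize such a comparison is an attention-amplification ratio: mean attention after the suppression event divided by mean attention before it, with the pre-event series standing in for the counterfactual. The function name and the daily view counts below are hypothetical; a serious analysis would also need a control series to rule out confounds, as the studies discussed here attempt.

```python
def amplification_ratio(pre_counts, post_counts):
    """Naive before/after attention ratio around a suppression event.

    Uses the pre-event mean as a crude counterfactual baseline.
    Returns infinity if the content had zero attention beforehand.
    Illustrative sketch only -- not a causal estimate.
    """
    pre_mean = sum(pre_counts) / len(pre_counts)
    post_mean = sum(post_counts) / len(post_counts)
    if pre_mean == 0:
        return float("inf")
    return post_mean / pre_mean

# Hypothetical daily view counts for the week before and the week
# after a widely reported takedown notice.
pre  = [4, 6, 5, 5, 4, 6, 5]
post = [5200, 31000, 18000, 9400, 4100, 2600, 1900]
print(f"amplification: {amplification_ratio(pre, post):.0f}x")
```

Even this crude ratio makes the measurement problem concrete: the estimate is only as good as the assumption that the pre-event trend would have continued unchanged, which is exactly what natural experiments try to validate.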

Jonah Berger at the Wharton School of Business studied book bans as a natural experiment in suppression and found that banning a book consistently increased sales and readership, with effects that were stronger when the ban was widely reported and when the banned book was available through alternative channels (Berger and Milkman, 2012). The study provides clean causal evidence because book banning is a discrete, documentable intervention with measurable sales data before and after.

A 2020 study by Jure Leskovec and colleagues at Stanford examined the Wikipedia "blackout" protest against SOPA/PIPA in 2012 -- when Wikipedia made itself unavailable for 24 hours -- and found that the blackout drove an enormous increase in search for "SOPA" and "PIPA," terms that had been relatively obscure before the highly visible suppression of access. The blackout was not technically a Streisand Effect (it was self-imposed rather than externally imposed), but it demonstrated the relationship between access restriction and curiosity response.

When Suppression Does Work

The Streisand Effect is not universal. Suppression efforts succeed regularly -- but their success tends to correlate with specific conditions:

The target audience is small and the suppressor can reach all distribution points: A cease-and-desist to a small local newspaper that has not yet published online may prevent the story from reaching a wider audience. This becomes harder as more distribution points exist.

The content is genuinely defamatory or illegal: Content that platforms will remove on legal grounds (defamation, CSAM, copyright infringement) can be effectively suppressed because every major distribution platform cooperates with valid legal demands. The Streisand Effect applies most strongly to content that is true, embarrassing, but legal.

The suppression is not itself newsworthy: If a removal request is routine, handled quietly, and not worth reporting, it may succeed without amplification. The Streisand Effect requires that the suppression attempt itself is either dramatic enough to be reported or visible enough to create curiosity.

The content's primary distribution channel can be effectively cut: If information exists primarily in one place that can be compelled to remove it, suppression can succeed. In a networked environment with multiple redundant copies, this is increasingly rare.

The suppressor acts before publication: Prior restraint -- preventing publication before it occurs -- is more effective than post-publication removal because the information has not yet been distributed and mirrored. However, prior restraint carries its own legal and reputational risks, particularly in common-law jurisdictions where it is disfavored by courts.

Condition | Effect on suppression success | Example
Few distribution points | Increases success probability | Pre-internet local newspaper
Content is illegal/defamatory | Increases success probability | Platforms cooperate with valid legal demands
Suppression is quiet and routine | Increases success probability | Unreported content removal
Information is obscure and niche | Increases risk of Streisand Effect | Streisand's coastal photograph
Many journalists aware of suppression | Increases risk of Streisand Effect | Trafigura super-injunction
Content already widely copied | Makes suppression nearly impossible | WikiLeaks cables

How to Manage the Risk

For organizations and individuals managing sensitive information, the practical implications of the Streisand Effect suggest several principles:

Assess before acting: Before any suppression attempt, ask: is this information currently obscure? If the answer is yes, the suppression attempt itself may become the primary driver of its spread. If it is already widely known, removal attempts have less amplification potential.

Model the meta-story: Before taking any action that could generate news coverage, ask whether the action itself -- the lawsuit, the DMCA notice, the cease-and-desist -- would be reported. If yes, assume the reported action will reach a larger audience than the underlying content.

Reserve legal action for genuinely harmful content: The cost-benefit of legal action is most favorable when the content is false, defamatory, or privacy-violating in a legally cognizable way -- and when the expected audience without intervention is significant. A $50 million lawsuit over a photograph viewed six times is the textbook example of misjudging this calculus.

Get ahead of the story: When damaging information is likely to emerge, proactive disclosure with context is almost always preferable to reactive suppression. Releasing information yourself, with your framing, typically produces far better outcomes than having it revealed by someone else after a failed suppression attempt. Crisis communications professionals call this "getting ahead of the news cycle" -- acknowledging issues before they are forced out by external reporting.

Use informal channels first: A polite, private request to remove content often succeeds where a legal threat fails -- and without the Streisand Effect. Many website operators and journalists will accommodate reasonable requests that do not involve legal escalation.

Seek quiet settlements when appropriate: In defamation and privacy cases, confidential settlements that do not generate public legal filings avoid the amplification that public litigation produces.

Accept that some unflattering information will exist: For individuals and organizations, the internet has permanently changed the ecology of reputation. Attempting to suppress all negative content about yourself is both impossible and counterproductive. The alternative is building enough positive presence that the negative content is contextually less significant. Reputation management practitioners call this approach "content displacement" -- filling the available attention space with accurate, favorable material rather than pursuing the removal of unfavorable material.

The Streisand Effect in Authoritarian Contexts

The dynamics of the Streisand Effect operate differently but no less powerfully in authoritarian political systems, where governments attempt systematic suppression of information.

China's "Great Firewall" -- its system of internet censorship -- is among the most technologically sophisticated suppression systems ever built. It blocks access to major Western social platforms, censors search results, and employs tens of thousands of human content monitors. Studies by Gary King, Jennifer Pan, and Margaret Roberts at Harvard (2013, 2014) examined what Chinese censors actually suppress and found something counterintuitive: the system does not primarily suppress criticism of the government or the Communist Party. It suppresses collective action potential -- content that could organize people to act together -- regardless of whether that content is critical or complimentary.

The reason is precisely the Streisand dynamic at national scale: suppressing criticism draws attention to the criticism and signals that the government fears it. The Chinese censorship system has therefore evolved toward a more sophisticated approach -- suppressing organization while allowing criticism -- in part to avoid the amplification that visible suppression produces.

Even within this system, Streisand Effects occur regularly. When Chinese censors remove a phrase, that removal is often detected and reported by researchers and journalists monitoring censorship patterns, which draws global attention to whatever the censors were trying to suppress. The censorship monitoring organization China Digital Times maintains the "Grass-Mud Horse Lexicon" -- a catalog of terms and phrases that have been censored -- which functions, ironically, as an indexed record of exactly what the Chinese government has tried to hide.

The Streisand Effect as a Feature of Networked Information

The Streisand Effect reveals something fundamental about the architecture of the internet. In a world where information can be copied perfectly, distributed instantly, and stored permanently at near-zero cost, the tools that powerful actors traditionally used to control information -- legal systems, physical control of printing presses, economic influence over media -- operate very differently.

A legal system that suppresses information operates at the speed of courts. Information travels at the speed of social networks. The asymmetry is structural, not incidental, and it is not going away.

"The Net interprets censorship as damage and routes around it." -- John Gilmore, internet activist, 1993

This observation, made in the infancy of the public internet, has proven more structurally accurate than Gilmore may have intended. The internet's distributed architecture was, in fact, designed to be resilient against node failures -- to route information around blocked paths. The Streisand Effect is this architectural property applied to information suppression: every attempt to block information creates pressure that finds another route.

This does not mean suppression never works or that all information should be freely accessible. It means that the calculus for suppression decisions has fundamentally changed. The question is no longer "can we prevent people from seeing this?" but "will attempting to prevent them from seeing it cause more people to seek it out?" In most cases involving true, embarrassing, widely duplicable information, the answer is yes.

The organizational, psychological, and technological dimensions of the Streisand Effect converge on a single practical lesson: in a networked information environment, the cost of attempting to suppress true information almost always exceeds the cost of the information itself. The act of hiding something has become, structurally, the most effective advertisement for it.

Understanding the Streisand Effect is, at its core, understanding how attention works in networked environments. It is also, for organizations that face difficult information crises, among the most practically important phenomena in modern communications strategy. The lawyers who advised Streisand to sue had not read Brehm's reactance theory. They had not modeled the meta-story. They had not asked what the photograph had cost her before they filed. They asked only whether they could win in court.

They could not. And even if they could have, the lesson was the same: the legal question and the communications question are not the same question. In the information economy, you can win a lawsuit and still catastrophically lose the attention war.

Frequently Asked Questions

What is the Streisand Effect?

The Streisand Effect is the phenomenon in which attempts to suppress, remove, or censor information cause that information to spread far more widely than it would have if left alone. The act of suppression draws attention to the very content the suppressor wants hidden. The term was coined by blogger Mike Masnick in 2005, named after a 2003 incident involving Barbra Streisand's attempt to remove an aerial photograph of her home from a coastal survey.

What happened in the original Streisand incident?

In 2003, photographer Kenneth Adelman photographed the California coastline as part of an environmental survey documenting coastal erosion. The archive of 12,000 photographs included an aerial image of a cliffside mansion in Malibu. Barbra Streisand filed a $50 million lawsuit demanding its removal. Before the lawsuit, the image had been downloaded exactly six times, two of which were by Streisand's own lawyers. After the lawsuit was reported, the image was viewed by approximately 420,000 people. The lawsuit was dismissed.

What is the psychology behind the Streisand Effect?

The core mechanism is psychological reactance, a theory developed by psychologist Jack Brehm in 1966. Reactance describes the motivational state that arises when a person perceives their freedom to access information or make choices is being threatened. When told they cannot or should not see something, people often develop an increased desire to see it. This combines with curiosity, the signaling function of prohibition, and the internet's amplification of controversy to make suppression systematically counterproductive.

Can the Streisand Effect be avoided?

The most reliable way to avoid the Streisand Effect is to not attempt suppression of widely accessible public information. For organizations or individuals managing sensitive information, the practical alternatives include getting ahead of a story by releasing information proactively with context, seeking quiet informal resolution before legal threats, or simply accepting that some unflattering information will exist online without amplifying it through removal attempts. Legal action should be reserved for genuinely private or defamatory content, not merely embarrassing truth.

What are some notable corporate Streisand Effect examples?

In 2009, the British law firm Carter-Ruck, acting for the commodities trader Trafigura, obtained a super-injunction preventing any reporting of the company's name in connection with toxic waste dumping in West Africa -- which caused 'Carter-Ruck' and then 'Trafigura' to trend on Twitter within hours. In 2011, Sony sued hacker George Hotz over a PlayStation 3 exploit, prompting the hacker collective Anonymous to attack Sony's network repeatedly. The lawsuit drew far more attention to the exploit than it would otherwise have received. Both cases illustrate how legal suppression can transform minor stories into major ones.