In the 1800s, as the story is usually told, the British colonial government in India offered a bounty for every dead cobra delivered to authorities. The intent was to reduce the snake population. The result: enterprising locals began breeding cobras to collect the reward. When the program was cancelled, the breeders released their now-worthless snakes, leaving India with more cobras than before.
This is the cobra effect — a policy designed to solve a problem that makes it worse by creating perverse incentives. Social media platforms have been running a version of this experiment for fifteen years, and the results are now visible in public discourse.
The Metric That Ate the Internet
When Facebook, Twitter, and YouTube were young, their engineers needed a way to measure whether people were enjoying their products. Engagement — likes, shares, comments, time spent on the platform — seemed like a perfect proxy. If someone is engaging, they must be having a good time. Build the thing that maximizes engagement, and you maximize human happiness.
The logic held, roughly, for the first few years. But as algorithmic amplification became more sophisticated and the platforms grew to billions of users, a darker pattern emerged: the content that drove the most engagement was not the content that left people feeling informed, connected, or fulfilled. It was content that made them angry.
Why Negative Emotions Win
The relationship between emotion and sharing behavior has been studied extensively. A landmark 2011 analysis by Jonah Berger and Katherine Milkman, published in the Journal of Marketing Research, examined 7,000 articles from The New York Times and found that content inducing high-arousal emotions — awe, anger, anxiety — was far more likely to go viral than content inducing low-arousal emotions like sadness.
Anger, in particular, has an almost unfair structural advantage. It is:
- Fast to form: A provocative headline triggers anger before the brain fully processes context
- Shareable without reading: Outrage-sharing requires almost no effort or comprehension
- Socially reinforcing: Sharing outrage signals group membership and moral positioning
- Algorithmically rewarded: Every share generates more engagement signals, which causes the algorithm to amplify the post further (the toy simulation below shows how quickly this compounds)
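To see how fast that last advantage compounds, here is a toy simulation in Python. The share probabilities and weights are invented purely for illustration, not any platform's real numbers; the only difference between the two hypothetical posts is a slightly higher share rate, and the feedback loop does the rest.

```python
import random

random.seed(42)

# Two hypothetical posts competing for distribution. The only difference:
# the outrage post gets shared slightly more often per impression.
posts = {
    "calm explainer": {"share_prob": 0.02, "score": 1.0},
    "outrage take":   {"share_prob": 0.05, "score": 1.0},
}

IMPRESSIONS_PER_POINT = 100  # impressions granted per unit of ranking score

for round_ in range(12):
    for post in posts.values():
        impressions = int(post["score"] * IMPRESSIONS_PER_POINT)
        shares = sum(random.random() < post["share_prob"] for _ in range(impressions))
        # The feedback loop: every share raises the ranking score,
        # which buys more impressions in the next round.
        post["score"] += shares * 0.1

for name, post in posts.items():
    print(f"{name:15s} final ranking score: {post['score']:.1f}")
```

After a dozen ranking rounds, the outrage post has roughly an order of magnitude more distribution than the calm one, even though both started from the same place.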
A 2018 study by researchers at MIT found that false news stories spread faster, farther, and more broadly than true ones on Twitter — partly because false news was more novel, and novelty triggers curiosity and surprise, emotions that, like anger, drive engagement.
How Platform Algorithms Learned to Serve Outrage
Recommendation algorithms are not programmed to promote outrage. They are programmed to maximize a measurable proxy (engagement) for an unmeasurable goal (user value). The problem is that engagement and value diverged.
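A stylized sketch of the kind of objective involved makes this concrete. None of this is any platform's actual code; the feature names and weights below are invented. The point is structural: nothing in the scoring function refers to whether the reader is better off afterwards, so nothing in the optimization can protect it.

```python
from dataclasses import dataclass

@dataclass
class Post:
    predicted_clicks: float     # probability the user clicks
    predicted_comments: float   # expected number of comments
    predicted_shares: float     # expected number of reshares
    predicted_dwell_sec: float  # expected seconds spent on the post

def engagement_score(post: Post) -> float:
    """Rank posts by a weighted sum of engagement predictions.

    Note what is absent: accuracy, credibility, and how the reader
    feels an hour later are not inputs, so they cannot be optimized.
    """
    return (
        2.0 * post.predicted_clicks
        + 5.0 * post.predicted_comments
        + 8.0 * post.predicted_shares
        + 0.01 * post.predicted_dwell_sec
    )

def rank_feed(candidates: list[Post]) -> list[Post]:
    # The feed is simply the candidates sorted by predicted engagement.
    return sorted(candidates, key=engagement_score, reverse=True)
```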
The YouTube Radicalization Funnel
YouTube's recommendation algorithm, designed to maximize watch time, developed a well-documented tendency to serve progressively more extreme content. A viewer who watched a mainstream political video would be recommended a more partisan one, and then a more extreme one, because each step slightly increased the probability of continued watching.
Guillaume Chaslot, a former YouTube engineer, and later researchers including Brendan Nyhan at Dartmouth studied this pattern and found that the algorithm systematically overrepresented fringe content relative to its actual production volume. The algorithm had no concept of accuracy, credibility, or long-term user welfare — only next-video click rate.
YouTube began modifying its algorithm in 2019, reducing recommendations of what it called "borderline content." The company reported that the change cut watch time on such content coming from recommendations by approximately 70%. But the intervention required YouTube to accept a reduction in engagement — something platforms are structurally reluctant to do.
Facebook's Meaningful Social Interactions Miscalculation
In January 2018, Facebook CEO Mark Zuckerberg announced a major shift: the News Feed algorithm would deprioritize passive content consumption and instead emphasize "meaningful social interactions" — posts that generated comments and shares.
The logic was superficially reasonable. A post you comment on is one you care about. A post you share matters to you.
The results were disastrous for civil discourse. As The Wall Street Journal later reported, Facebook's own internal research showed that the "meaningful interactions" update caused a significant increase in the virality of misinformation and divisive political content. Comments and shares are not neutral proxies for meaning — they are generated at high rates by content that provokes, offends, or frightens.
| Content Type | Average Reactions | Average Comments | Average Shares |
|---|---|---|---|
| Calm, informational posts | Low | Low | Low |
| Inspiring or uplifting posts | Medium | Low | Medium |
| Partisan political content | High | High | High |
| Outrage-inducing content | Very High | Very High | Very High |
The table above reflects the general pattern documented by academic researchers and internal platform studies: the content that performs best on engagement metrics is systematically different from content that leaves users feeling better off.
The News Publishers Who Got Caught in the Middle
The consequences of engagement optimization have been most acute for journalism. Throughout the 2010s, news publishers became increasingly dependent on Facebook for traffic. At the peak, some publishers received more than 40% of their web traffic via Facebook referrals.
This dependency created a powerful incentive to produce content that performed well in Facebook's feed. Outlets began optimizing headlines for emotional arousal rather than accuracy. The phenomenon of clickbait — headlines that exploit curiosity gaps, trigger outrage, or make exaggerated promises — became so common that it transformed reader expectations across the entire industry.
When Facebook's 2018 algorithm change deprioritized publisher content in favor of personal posts from friends and family, publishers who had invested in the platform were devastated. LittleThings shut down within months, citing the change directly, and it was only the latest casualty: Upworthy in its original form had already seen its reach collapse after earlier Facebook algorithm changes aimed at clickbait.
"We built our entire distribution strategy around Facebook. When they changed the rules, we didn't have a fallback. We had been so focused on the metric that we forgot to build a direct relationship with our readers." — A former editor at a mid-sized digital publication, speaking to Reuters in 2018
The irony is sharp: publishers had debased their content to win an engagement game, and then the engagement game changed, leaving them with neither traffic nor credibility.
The Measurement Trap
Part of the problem is that engagement is easy to measure and user value is not. Platforms and publishers have built entire operations around optimizing for the measurable at the expense of the meaningful. This is a specific instance of Goodhart's Law, which states: "When a measure becomes a target, it ceases to be a good measure."
Engagement was once a reasonable signal of value. Once it became the target, it attracted content specifically engineered to game it.
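Goodhart's Law can be demonstrated with a toy simulation. Assume, purely for illustration, that producers can trade informational value for provocation, that provocation buys engagement, and that each generation of producers imitates whatever the metric rewarded last time:

```python
import random

random.seed(0)

def make_post(provocation: float) -> dict:
    """A hypothetical post: provocation buys engagement at the expense
    of informational value (all numbers are invented)."""
    return {
        "provocation": provocation,
        "engagement": provocation * 10 + random.gauss(0, 2),
        "value": (1 - provocation) * 10 + random.gauss(0, 2),
    }

norm = 0.1  # producers start out mostly informative
for generation in range(6):
    # Each producer experiments around the current norm.
    posts = [
        make_post(min(1.0, max(0.0, norm + random.uniform(-0.2, 0.2))))
        for _ in range(300)
    ]
    # The platform surfaces whatever scores highest on the engagement metric.
    winners = sorted(posts, key=lambda p: p["engagement"], reverse=True)[:30]
    avg_engagement = sum(p["engagement"] for p in winners) / len(winners)
    avg_value = sum(p["value"] for p in winners) / len(winners)
    print(f"generation {generation}: engagement={avg_engagement:5.1f}  value={avg_value:5.1f}")
    # Producers copy the winners, so the norm drifts toward provocation.
    norm = sum(p["provocation"] for p in winners) / len(winners)
```

Measured engagement among the winning posts climbs every generation while their underlying value falls, without anyone ever deciding to make things worse.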
Time Spent Is Not Time Well Spent
Beyond individual posts and articles, there is the question of what platform engagement metrics do to users over time.
The metric of time-on-site or daily active users has driven platform product decisions for years. More time spent equals more ad impressions equals more revenue. The incentive is to keep users scrolling as long as possible.
Research on the actual effects of high social media consumption paints a complicated picture. A 2018 study published in the Journal of Social and Clinical Psychology found that limiting social media use to 30 minutes per day led to significant reductions in loneliness and depression over three weeks. A separate 2020 study from Harvard found that passive social media consumption — scrolling without actively posting or connecting — was associated with decreased well-being.
Time-on-site is not equivalent to satisfaction. A person kept awake at 2am by an anxiety-inducing feed is generating engagement metrics while experiencing harm.
Tristan Harris, a former Google design ethicist and co-founder of the Center for Humane Technology, coined the phrase "time well spent" to describe a different design philosophy. The question should not be whether users stayed longer but whether they left feeling better.
The Self-Perpetuating Content Machine
The cobra effect is not merely a platform problem — it has reshaped the incentives of everyone who produces content.
Individual creators on YouTube, TikTok, and Instagram learn quickly that certain content types outperform others. A video expressing outrage about a public figure will typically outperform a calm explainer on the same topic. Over time, creators who want to grow their audiences are trained — by the feedback of metrics — to produce more emotionally provocative content.
News organizations face pressure from digital editors who track real-time analytics. A headline that generates thousands of shares within the first hour will be promoted; a nuanced, carefully reported piece that generates thoughtful but fewer shares may get buried. The analytics dashboard becomes the editor.
Political actors have learned that controversy, provocation, and conflict generate more attention than policy substance. This was observable across political parties and ideologies throughout the 2010s — not as a calculated strategy in every case, but as an evolutionary adaptation to the media environment.
What Better Metrics Look Like
The good news is that better metrics exist. Several platforms and researchers have proposed alternatives that attempt to capture value rather than mere attention.
Alternatives to Raw Engagement
Satisfaction surveys: Platforms can randomly sample users after exposing them to content and ask whether they feel better or worse informed, whether the content was worth their time, and whether they would recommend the platform to others. YouTube has used variations of this approach to train its recommendation algorithm.
Saves and bookmarks: When a user saves content, they are signaling intent to return to it — a signal of value that does not reward purely reactive content. Instagram and Pinterest have leaned into saves as a metric.
Completion rate with quality signal: Did the user read or watch the full piece? And did they subsequently engage with related content, or did they immediately close the app? The second behavior may indicate dissatisfaction even after completion.
Diverse information exposure: Researchers at MIT Media Lab developed a measure called the "Exposure Diversity Score," which tracks whether a user's feed exposes them to different perspectives. Platforms designed around this metric would penalize echo chambers rather than reward them.
Friction before resharing: Twitter introduced a prompt asking users whether they had read an article before resharing it, and reported that people opened articles 40% more often after seeing the prompt. Small friction points can reduce impulsive sharing of content users have not actually processed.
| Metric Type | What It Measures | Perverse Incentive | Better Alternative |
|---|---|---|---|
| Likes | Immediate emotional reaction | Approval-seeking content | Satisfaction survey |
| Shares | Viral amplification | Outrage and controversy | Bookmarks and saves |
| Comments | Volume of response | Conflict and debate | Proportion of constructive replies |
| Time on site | Session length | Infinite scroll addiction | Session quality rating |
| Daily active users | Habitual return | FOMO-inducing design | Weekly active users with intent |
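As a sketch of how several of these alternatives might be blended into a single content score, here is one possible formulation. The field names, weights, and the 1-to-5 survey scale are assumptions for illustration, not any platform's real formula.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ContentSignals:
    completion_rate: float           # 0.0-1.0, fraction who finished the piece
    save_rate: float                 # saves / impressions
    reshare_rate: float              # reshares / impressions
    constructive_reply_share: float  # fraction of replies judged constructive
    survey_score: Optional[float]    # 1-5 "was this worth your time?", if sampled

def quality_score(s: ContentSignals) -> float:
    """Blend durable-value signals instead of raw engagement volume.

    Saves and completions are weighted above reshares, and surveyed
    satisfaction (when available) outweighs the behavioural proxies.
    """
    score = (
        0.30 * s.completion_rate
        + 0.30 * s.save_rate * 10      # save rates are small; rescale
        + 0.10 * s.reshare_rate * 10
        + 0.30 * s.constructive_reply_share
    )
    if s.survey_score is not None:
        # Normalise the 1-5 survey answer to 0-1 and let it carry half the weight.
        score = 0.5 * score + 0.5 * (s.survey_score - 1) / 4
    return score
```

The design choice that matters is not the exact weights but what gets a weight at all: signals of durable value (saves, completions, surveyed satisfaction, constructive replies) count, while raw reaction volume does not.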
Platform Accountability and the Regulatory Horizon
Regulation of social media algorithms is an active area of policy debate in the United States, European Union, and United Kingdom. The EU's Digital Services Act, which became fully applicable in 2024, requires very large online platforms to conduct and publish risk assessments of their recommendation systems, including assessments of effects on civic discourse and mental health.
This represents a significant shift: platforms can no longer treat engagement metrics as purely internal business matters. They are now required to demonstrate awareness of the harms those metrics may create.
In the US, the Kids Online Safety Act and various state-level legislation have targeted the specific application of engagement-maximizing design to minors. The argument — supported by internal research leaked from Facebook in 2021 — is that platforms knew their engagement-optimized products caused harm to teenagers and did not act on that knowledge.
"The platform knew its algorithm was amplifying content that made teenage girls feel worse about their bodies. It knew this from its own research. And the metric was still clicks." — Frances Haugen, Facebook whistleblower, testifying before the US Senate in October 2021
Lessons for Content Creators and Publishers
If you are creating content for the web, understanding the cobra effect in engagement metrics is strategically important.
Chasing engagement metrics without questioning what they measure is building on sand. Platforms change their algorithms. Metrics that worked in 2019 may be actively penalized in 2025. Publishers who built sustainable audiences around trust, quality, and direct relationships — newsletters, podcasts, membership programs — weathered algorithm changes far better than those dependent on platform amplification.
Your audience's long-term trust is worth more than any individual post's engagement. A reader who returns to your publication because they consistently find it useful is more valuable than a thousand one-time visitors driven by an outrage-bait headline.
Measure what you actually care about. If your goal is to inform, measure comprehension and return rates. If your goal is to inspire action, measure downstream behavior. If your goal is to build community, measure the quality of interactions, not just the volume.
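Return rate, in particular, is cheap to compute from logs most publishers already have. A minimal sketch, assuming a hypothetical page-view log of (visitor_id, visit_date) pairs:

```python
from collections import defaultdict
from datetime import date, timedelta

# Hypothetical page-view log: (visitor_id, visit_date)
pageviews = [
    ("a", date(2025, 3, 1)), ("a", date(2025, 3, 4)),
    ("b", date(2025, 3, 1)),
    ("c", date(2025, 3, 2)), ("c", date(2025, 3, 8)),
]

def seven_day_return_rate(log, window=timedelta(days=7)):
    """Fraction of visitors who came back within `window` of a prior visit."""
    visits = defaultdict(list)
    for visitor, day in log:
        visits[visitor].append(day)
    returned = 0
    for days in visits.values():
        days.sort()
        if any(later - earlier <= window
               for earlier, later in zip(days, days[1:])):
            returned += 1
    return returned / len(visits)

print(f"7-day return rate: {seven_day_return_rate(pageviews):.0%}")
```

A rising return rate tells you readers found the last visit worth repeating; a viral spike in one-off traffic tells you nothing of the kind.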
Conclusion
The cobra effect in social media is one of the clearest examples of what happens when a metric designed to proxy for human value is treated as if it were human value itself. Engagement was never the goal; it was a shortcut to measuring something harder to quantify.
The incentive structures created by engagement-first platforms have reshaped content, journalism, politics, and social behavior in ways that most people — including many of the engineers who built these systems — did not intend and do not endorse.
The path forward is not simply to measure more things, but to measure the right things. That requires platforms willing to accept lower engagement numbers in exchange for healthier ecosystems, publishers willing to invest in trust rather than traffic, and users willing to examine what they actually get from the time they spend online.
The cobras are everywhere. The question is whether we are still paying the bounty.
Frequently Asked Questions
What is the cobra effect in social media?
The cobra effect in social media describes how optimizing for a metric (like engagement or likes) creates incentives that produce the opposite of the intended outcome. Platforms designed to connect people instead amplify conflict and outrage because negative, divisive content generates more clicks, shares, and comments than calm or nuanced posts.
Why does outrage drive social media engagement?
Outrage triggers strong emotional responses that lower the threshold for sharing and commenting. Research from New York University found that each moral-emotional word in a tweet increased its retweet rate by approximately 20%. Platforms reward this behavior algorithmically because high engagement signals relevance, creating a feedback loop that favors inflammatory content.
How did Facebook's algorithm changes affect news publishers?
In 2018, Facebook shifted its News Feed algorithm to prioritize 'meaningful social interactions' — comments and shares over passive likes. News publishers saw organic reach drop by 50-70% for informational content, while emotionally charged, partisan stories performed better. Many publishers were forced to adopt more sensational framing to survive on the platform.
What are better alternatives to engagement as a social media metric?
Healthier alternatives include 'satisfied' or 'inspired' reactions rather than just likes, return visitor rates, saves and bookmarks (which indicate intent to revisit), and surveys measuring whether content made users feel informed or uplifted. Some platforms experiment with hiding public like counts to reduce social comparison pressure.
Can platforms be redesigned to reduce perverse incentives?
Yes. Research by the Center for Humane Technology and academics like Renée DiResta suggests interventions including friction before resharing (prompting users to read before sharing), removing real-time like counts, and ranking feeds by diverse information exposure rather than pure engagement. Twitter's Birdwatch program, later renamed Community Notes, represents one attempt to add accuracy as a dimension alongside engagement.