Some of the most instructive failures in policy, management, and organizational design share a common structure: someone identified a problem, designed a logical-sounding solution, implemented it — and the problem got worse, often directly because of the solution.
This phenomenon has a name: the cobra effect. It is one of the most important concepts for anyone trying to understand why well-intentioned interventions so frequently backfire, and why the gap between a policy's intent and its actual effects can be so wide.
The Origin Story
The term comes from a story set in colonial British India, most often attributed to the economist Horst Siebert, who used it in his 2001 book on the German economy. The story goes like this:
The British colonial government in Delhi was alarmed by the large number of venomous cobras in the city. To address the problem, they implemented what seemed like a sensible policy: they would pay a bounty for every dead cobra brought to the authorities. Kill a cobra, collect a reward. Simple, direct, measurable.
Initially, the program worked as intended. People went out and killed cobras. The bounty payments went up. Then something changed. Enterprising locals realized there was money to be made — and that the easiest way to maximize cobra income was not to hunt cobras across the city, but to breed them. Cobra farms appeared. The bounty revenue kept flowing.
When the government eventually discovered what was happening, they cancelled the program. Now the cobra breeders had a problem: they were left with large numbers of cobras that no longer earned a bounty. The obvious solution was to release them. The city ended up with more cobras than it had started with.
The British government set out to reduce cobras. Their incentive program created a cobra-breeding industry. The program's cancellation then released thousands of new cobras into the city. At every step, logical responses to incentives made the underlying problem worse.
Whether the historical details are precisely accurate is less important than what the story illustrates. It captures something structurally real about how incentive systems fail.
The Hanoi Rat Problem
A nearly identical story unfolded under French colonial rule in Hanoi around 1902. The colonial administration was troubled by a rat infestation in the city's sewer system and offered a bounty for rat tails — proof that a rat had been killed.
The policy generated an enormous volume of rat tails. It also generated, in time, an extraordinary sight: rats with no tails, released back into the sewers. Rat catchers had realized that cutting off the tail and releasing the rat was more efficient than killing it — the rat would continue to reproduce, providing a renewable source of future tail income.
Entrepreneurs also established rat farms on the outskirts of the city, breeding rats specifically to collect their tails for the bounty. The rat population in Hanoi did not decrease. By some accounts, it increased.
Why Incentives Produce Perverse Outcomes
The cobra effect and the Hanoi rat problem share a structural feature: they created a financial reward for producing the very thing the policy was trying to eliminate. But most incentive failures are subtler. They do not require outright deception or what we might call "gaming" — they arise from rational people responding reasonably to the system as designed.
The underlying problem in both stories is that the measure being incentivized (dead snakes, rat tails) was a proxy for the actual goal (fewer dangerous animals). The proxy was correlated with the goal in the short term under normal conditions, but it was not the same thing as the goal. Once people began responding to the incentive, they found ways to hit the proxy that did not serve the underlying goal.
This distinction between proxies and goals is crucial. Institutions almost always have to manage using proxies, because the underlying goals — "a safer city," "better student outcomes," "a more productive workforce" — are often hard to measure directly. The danger is forgetting that the proxy is a proxy.
Goodhart's Law
The general principle behind the cobra effect was formalized by the British economist Charles Goodhart, who observed in 1975:
"Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes."
This principle, now known as Goodhart's Law, is usually paraphrased as: "When a measure becomes a target, it ceases to be a good measure."
The underlying dynamic: measures are only useful as proxies for an underlying reality. When you begin rewarding people for hitting the measure, you change their behavior in ways that optimize for the measure specifically — and those behavioral changes break the correlation between the measure and the underlying reality you actually care about.
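This dynamic can be made concrete with a toy simulation. The sketch below is purely illustrative (the "genuine work vs. gaming effort" model and all parameter values are assumptions for demonstration, not data from any of the cases discussed): each actor's measured output tracks the underlying goal until the measure becomes a rewarded target, at which point gaming effort inflates the measure without serving the goal, and the correlation collapses.

```python
import random

random.seed(0)

def correlation(xs, ys):
    # Pearson correlation, computed from scratch so the sketch stays stdlib-only
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

N = 1000

# Phase 1: the measure is observed but not targeted. Measured output (the
# proxy) is genuine work plus noise, so proxy and goal move together.
genuine = [random.uniform(0, 10) for _ in range(N)]
proxy_before = [g + random.gauss(0, 1) for g in genuine]
goal_before = genuine

# Phase 2: the proxy becomes a rewarded target. Actors add "gaming" effort
# that raises the proxy without contributing to the goal (breeding cobras
# instead of hunting them). Gaming is independent of genuine work.
gaming = [random.uniform(0, 50) for _ in range(N)]
proxy_after = [g + gm + random.gauss(0, 1) for g, gm in zip(genuine, gaming)]
goal_after = genuine  # the goal is still served only by genuine work

print(f"correlation before targeting: {correlation(proxy_before, goal_before):.2f}")
print(f"correlation after targeting:  {correlation(proxy_after, goal_after):.2f}")
```

Run as written, the first correlation is high and the second drops sharply: nothing about the goal changed, but optimizing the proxy directly destroyed its value as a signal.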
Goodhart's Law is among the most broadly applicable principles in economics and organizational behavior. It applies any time someone sets a measurable target and attaches meaningful consequences to it.
The Soviet Nail Factory
The Soviet planned economy produced some of the most thoroughly documented examples of Goodhart's Law in action, because it combined rigid centralized targets with powerful incentives to meet them.
In the 1950s, Soviet factories producing nails were given production quotas expressed in units — number of nails. The predictable response: factories produced enormous quantities of tiny, thin nails that were useless for construction. The nail count was maximized; the shortage of usable construction nails remained.
The planners adjusted the quota to weight rather than count. The factories shifted to producing a small number of extremely heavy, oversized nails — sometimes described as anchor-sized spikes. Weight targets were met; useful nails remained scarce.
The Soviet nail story recurred in various forms across industries. Shoe factories with quota targets in pairs produced vast quantities of small shoes. Glass factories with square-footage quotas produced extremely thin glass that shattered on contact. The workers and managers were not saboteurs — they were rational actors responding to the incentives they were given.
The Wells Fargo Accounts Scandal
One of the most prominent corporate examples of the cobra effect in recent decades is the Wells Fargo cross-selling scandal, which became public in 2016.
Wells Fargo had developed a strategy around cross-selling — selling multiple products to existing customers. Management set aggressive targets: employees were expected to sell, on average, eight products per customer household. These targets were drilled into the culture, tracked relentlessly, and tied to employee compensation and job security.
The targets created an obvious incentive problem: meeting them legitimately was extraordinarily difficult. Employees responded by opening accounts customers had not requested, sometimes forging signatures, sometimes enrolling customers in fee-generating services without their knowledge.
By the time the scandal was fully investigated, Wells Fargo had opened approximately 3.5 million fraudulent or unauthorized accounts and was forced to pay $3 billion in settlements. The cross-selling targets designed to improve customer relationships had instead created an institution-wide culture of fraud.
The cobra effect here was precise: the incentive to increase customer accounts increased customer accounts — through fabrication rather than genuine need. The measure was hit. The underlying goal was obliterated.
Malaria Nets and Unintended Repurposing
Aid programs distributing insecticide-treated bed nets to prevent malaria in sub-Saharan Africa have encountered their own version of the problem. Multiple studies documented cases where distributed nets, intended to be hung over sleeping areas to prevent mosquito bites, were instead used for fishing.
The nets are fine-meshed, durable, and in communities with subsistence fishing needs, they are useful for catching small fish. A 2015 study in PLOS One and subsequent investigations found measurable increases in net use for fishing in some recipient communities, sometimes catching juvenile fish at scales that caused local fishery problems — creating an environmental harm while reducing the malaria protection the program was designed to provide.
This case has a somewhat different structure from the cobra or rat examples: the people repurposing the nets were not gaming a bounty system; they were simply using a resource for what was, to them, the highest-priority purpose. The program designers had not adequately modeled the material needs and priorities of recipients. The intervention did not account for the full context of users' lives.
How the Cobra Effect Happens Institutionally
The cobra effect is not primarily a problem of bad actors or corrupt systems. It is a problem of model mismatch — the mental model of how an incentive will work does not match how people actually respond to it.
There are several common failure modes:
Focusing on the metric instead of the goal: Organizations begin to treat hitting the number as the end in itself, rather than as evidence that the underlying situation has improved. The number becomes the reality.
Underestimating rational response: Incentive designers sometimes assume people will respond in the intended way because that is the "right" thing to do, without sufficiently modeling the full range of behaviors the incentive makes rational.
Single-metric optimization: Using one measure creates a single point of exploitation. If only one thing is being tracked, only one thing will be optimized — and the unmeasured dimensions of the underlying goal will be neglected or sacrificed.
Feedback lag: In complex systems, the perverse effects of an incentive often take time to emerge. By the time the problem is visible, the incentive structure is entrenched, the behaviors have become habitual, and the institutional resistance to changing the program is substantial.
Political incentives to claim success: Governments, executives, and managers have their own incentives to report that programs are working. This creates pressure to interpret metric improvements as goal achievement, even when the relationship between the two has broken down.
Designing Incentives That Do Not Backfire
There is no foolproof method, but several principles meaningfully reduce the risk of cobra-effect failures:
Reward outcomes, not proxies, where possible. Pay for actual cobra reduction in the city (measured by bite incidents, verified snake sightings), not for dead snakes. This is harder to measure but much harder to game.
Use multiple metrics rather than a single target. A diversified measurement portfolio makes it harder to optimize for any one proxy while neglecting the underlying goal. If Wells Fargo had tracked customer satisfaction and account retention alongside new accounts opened, the gaming might have been harder to sustain.
Anticipate second-order effects. Before launching an incentive program, explicitly ask: "How might a rational person maximize their reward while not serving the underlying goal? What is the easiest way to game this?" This adversarial thinking is uncomfortable but essential.
Monitor for unexpected behavioral responses. Implement measurement systems that can detect early signs of gaming or perverse response before they become entrenched. Track the proxy metric and a set of independent indicators of the underlying goal, and watch for divergence.
Build in regular review and adjustment. No incentive program should be treated as permanent. Circumstances change, gaming strategies evolve, and measures that were good proxies may drift. Build institutional mechanisms to revisit and revise incentive structures regularly.
Match incentive granularity to measurement granularity. The cobra bounty failed partly because it was binary — a dead cobra was worth a fixed sum regardless of context. More nuanced structures that reward genuine service to the goal are harder to game.
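The monitoring principle above — track the proxy alongside independent indicators of the goal and watch for divergence — can be sketched in code. This is a minimal stdlib-Python illustration, not an established method; the function name, window size, and threshold are all assumptions chosen for demonstration:

```python
def divergence_monitor(proxy_series, outcome_series, window=12, threshold=0.3):
    """Flag periods where a proxy metric stops tracking the underlying goal.

    proxy_series / outcome_series: equal-length lists of periodic measurements
    (e.g. monthly accounts opened vs. an independent customer-satisfaction
    indicator -- the pairing and the threshold are illustrative assumptions).
    Returns indices of periods ending a window whose proxy/outcome
    correlation falls below `threshold`.
    """
    def corr(xs, ys):
        # Pearson correlation; treat a flat series as zero correlation
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs) ** 0.5
        vy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (vx * vy) if vx and vy else 0.0

    flagged = []
    for end in range(window, len(proxy_series) + 1):
        window_corr = corr(proxy_series[end - window:end],
                           outcome_series[end - window:end])
        if window_corr < threshold:
            flagged.append(end - 1)  # last period of the suspect window
    return flagged

# Illustrative series: the proxy keeps climbing while the independent
# outcome indicator flatlines halfway through. The monitor flags the
# later windows, where the two have diverged.
proxy = list(range(24))
outcome = list(range(12)) + [11] * 12
print(divergence_monitor(proxy, outcome))
```

The design choice worth noting is that the monitor needs an outcome indicator that is *not* itself a managed target; if both series are being optimized, divergence between them tells you nothing.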
The Broader Lesson
The cobra effect is a specific pattern of incentive failure, but the principle behind it is one of the most general in social science: systems respond to incentives in ways that maximize outcomes from the perspective of the actors within them, not from the perspective of the designer's goals.
This is not cynicism. It is an accurate description of how people and organizations work. Understanding it does not mean abandoning incentive-based solutions — it means designing them with sufficient care to anticipate how rational actors will respond.
The history of management, public policy, and regulation is littered with cobra effects. The pattern recurs not because the people involved are unusually foolish or venal, but because designing a good incentive is genuinely hard. It requires modeling human behavior accurately, thinking adversarially, and maintaining the intellectual discipline to distinguish between the metric you are tracking and the reality you care about.
The cobra effect is most useful not as a cautionary tale about past failures, but as a standing reminder to ask, before any incentive is implemented: "What is the most rational way to maximize this reward while not serving the goal I actually care about?" If that question has an uncomfortable answer, the incentive needs redesigning before it is deployed.
| Example | Intended Goal | Incentive Used | Perverse Outcome |
|---|---|---|---|
| British India cobras | Fewer cobras | Bounty per dead cobra | Cobra breeding, more cobras released |
| French Hanoi rats | Fewer rats | Bounty per rat tail | Rat farming, tail-only harvesting |
| Soviet nail factories | Adequate nail supply | Quota per unit count | Useless tiny nails produced |
| Wells Fargo | Better customer relationships | Cross-sell targets per employee | 3.5 million fraudulent accounts |
| Malaria net programs | Malaria prevention | Net distribution | Nets repurposed for fishing |
The Cobra Effect in Technology
The technology industry has produced its own distinctive versions of the cobra effect, particularly in the design of metrics-driven systems.
App store ratings: App stores incentivize high ratings by ranking apps partly on their average star rating. The result: app developers built systematic mechanisms to prompt satisfied users to leave a rating while routing dissatisfied users to private feedback forms rather than the public review system. Ratings as a signal of quality became increasingly unreliable precisely because they were used as a target.
Social media engagement metrics: Platforms that optimize for engagement — measuring success by clicks, shares, reactions, and time spent — created incentives for content that provoked strong reactions. The highest-engagement content turned out to include outrage, extreme claims, and divisive content. Platforms designed to connect people and share information instead became optimized for emotional provocation, because that was what the engagement metric rewarded. The metric was an effective proxy for "something people respond to" — but a poor proxy for "something that is good for users or society."
Ad click-through rates: Digital advertising's optimization for click-through rates — a reasonable proxy for ad effectiveness — produced click fraud, misleading thumbnails designed to generate accidental clicks, and an entire industry of low-quality content designed to attract clicks rather than serve readers. The proxy metric was attacked so successfully that it lost most of its value as a proxy.
In each case, the metric was a reasonable proxy for the underlying goal under normal conditions. Once optimized against directly, the correlation between the metric and the goal collapsed.
Why Fixing the Cobra Effect Is Harder Than It Looks
Even when a cobra effect has been clearly identified, fixing it is typically harder than designing the original incentive. Several reasons:
Entrenched interests: By the time a cobra effect is visible, people and organizations have invested in the game. The cobra breeders, the rat farmers, the employees meeting fraudulent quotas — they have built their livelihoods around the perverse incentive. Removing the incentive creates immediate, concentrated harm to those who benefited from it, even as it delivers diffuse benefits to the public.
Institutional inertia: The metrics and incentives have often been embedded in reporting systems, performance evaluations, and organizational culture. Changing them requires changing not just the policy but the measurement infrastructure and the shared understanding of what counts as success.
Political economy of credit and blame: The people who implemented the original incentive typically have an interest in not admitting it was a failure. Governments, corporations, and organizations face incentives to interpret metric achievements as goal achievement, even when the two have diverged. This creates a delay between when the cobra effect is visible and when it is acknowledged and addressed.
The proxy problem recurs: When you fix one perverse proxy by replacing it with another, the new proxy is subject to the same optimization pressure. The history of performance management, public policy, and platform design is in part a history of chasing better proxies as each previous one is gamed into uselessness. The only long-term solution is to close the gap between the metric and the underlying goal — which often requires measuring things that are harder to measure.
The cobra effect is ultimately a story about the gap between what we can measure and what we actually care about. As long as that gap exists — and it always will — there will be incentive structures whose optimization produces outcomes very different from those intended. The goal of good incentive design is not to eliminate the gap, which is impossible, but to make it small enough and monitored closely enough that perverse dynamics can be caught and corrected before they become entrenched.
Frequently Asked Questions
What is the cobra effect?
The cobra effect describes situations where an incentive or solution designed to fix a problem inadvertently makes it worse. The term comes from a story set in colonial British India, where the British government offered bounties for dead cobras to reduce snake populations, only to find that locals began breeding cobras to collect the reward — ultimately increasing the snake population.
Is the cobra effect the same as Goodhart's Law?
They are closely related but not identical. Goodhart's Law states that "when a measure becomes a target, it ceases to be a good measure." The cobra effect is a specific and vivid instance of the same underlying principle: optimizing for a proxy metric rather than the true goal produces perverse outcomes. Goodhart's Law is the general principle; the cobra effect is one class of examples.
What are some real-world examples of the cobra effect?
Notable examples include the British colonial bounty on cobras in India and on rats in Hanoi (which led to rat farming), Soviet factory quotas that produced goods of useless sizes, Wells Fargo's aggressive cross-selling targets that led to millions of fraudulent accounts, and some mosquito net distribution programs where nets were repurposed rather than used for malaria prevention.
How can you design incentives that avoid the cobra effect?
Key principles include tying rewards to outcomes rather than proxies, anticipating how rational actors will game the system, monitoring for unexpected behavioral responses after implementation, using multiple metrics rather than a single measure, and building in regular review mechanisms to catch and correct perverse dynamics before they become entrenched.
Why does the cobra effect keep happening despite being well known?
Because designing incentives is cognitively hard. It requires predicting second-order effects — how people will respond to the incentive, not just how they should respond — and most institutions lack processes for this kind of adversarial thinking. Political and organizational pressures also favor visible action over careful design, creating incentives to implement programs quickly rather than thoroughly.