In the 1990s, Wells Fargo began expanding its cross-selling program — the practice of selling multiple financial products to existing customers. The logic was sound: existing customers are cheaper to serve, already trust the bank, and are receptive to additional products. Management set targets. Managers passed those targets to branch employees. The incentive structure was straightforward: sell more accounts, meet your quota, keep your job.
What followed was not what any executive intended. Employees, under pressure to meet quotas that most analysts later concluded were nearly impossible to achieve legitimately, began opening accounts without customers' knowledge or consent. Between 2002 and 2016, employees opened approximately 3.5 million fraudulent accounts. The program designed to deepen customer relationships instead produced one of the largest consumer banking scandals in American history: $185 million in regulatory fines, thousands of employee terminations, the resignation of CEO John Stumpf, and permanent damage to the bank's reputation.
The cross-selling target was a measure. When it became the goal, employees optimized for the measure rather than the underlying objective — and the solution made the underlying problem (shallow customer relationships) substantially worse than if the program had never been launched.
This is the Cobra Effect.
The Original Story
The Cobra Effect takes its name from an incident in British colonial India — the precise date is disputed but is generally placed in the 19th century, associated with the colonial administration in Delhi. British authorities, concerned about the danger posed by venomous cobras to the local population, introduced a bounty program: residents could collect payment for every dead cobra they delivered to the authorities.
The program initially worked. Cobras were killed, bounties collected, the snake population declined. Then the economics shifted. Enterprising residents realized that breeding cobras was more efficient than hunting them. Cobras are not difficult to breed in captivity; the bounty made each snake economically valuable. Cobra farming became a cottage industry.
When authorities recognized what was happening and cancelled the bounty program, the now-worthless captive cobras were released. The cobra population was higher after the program than before it — the program had funded the expansion of the very problem it was designed to eliminate.
German economist Horst Siebert popularized the term "Cobra Effect" in his 2001 book Der Kobra-Effekt: Wie man Irrwege der Wirtschaftspolitik vermeidet (The Cobra Effect: How to Avoid Wrong Turns in Economic Policy), using the story to illustrate perverse incentive structures in economic policy design. The story may be apocryphal in its details — there is limited historical documentation of the specific incident — but the mechanism it illustrates is extensively documented in contexts ranging from colonial administration to modern corporate governance.
The cobra bounty failed not because anyone behaved irrationally. It failed because rational behavior, in the presence of the incentive, produced outcomes opposite to the intended ones.
Why It Happens: Incentive Misalignment
The Cobra Effect is fundamentally an incentive misalignment problem. It occurs when three conditions are simultaneously present:
- A proxy measure is used in place of the actual goal
- Rational actors can optimize for the proxy in ways that diverge from the actual goal
- The optimization behavior makes the underlying problem worse
The cobra bounty illustrates each condition. The actual goal was reducing the cobra population. The proxy was dead cobras delivered. Rational actors could optimize for dead cobras (by breeding them) in ways that diverged from reducing the wild cobra population. The optimization made the problem worse when the program ended.
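The three conditions can be made concrete with a toy simulation of the bounty itself. All numbers below are illustrative assumptions, not historical data: the point is only that once the bounty exceeds the cost of breeding a cobra, breeding out-earns hunting, and cancelling the program releases the farmed stock.

```python
def best_strategy(bounty, hunt_cost, breed_cost):
    """Return the payoff-maximizing strategy for a rational resident."""
    hunt_profit = bounty - hunt_cost      # one wild cobra killed
    breed_profit = bounty - breed_cost    # one captive cobra raised
    if max(hunt_profit, breed_profit) <= 0:
        return "ignore"
    return "hunt" if hunt_profit >= breed_profit else "breed"

def run_program(wild_pop, bounty, hunt_cost, breed_cost, residents, years):
    """Simulate the program, then cancellation (farmed cobras released)."""
    farmed = 0
    for _ in range(years):
        strategy = best_strategy(bounty, hunt_cost, breed_cost)
        if strategy == "hunt":
            wild_pop = max(0, wild_pop - residents)  # each resident kills one
        elif strategy == "breed":
            farmed += residents                      # each resident raises one
    return wild_pop + farmed  # cancellation: captive cobras are released

# Hunting is expensive (search time, risk); breeding is cheap at scale,
# so rational residents farm -- and the final population exceeds the start.
final_pop = run_program(wild_pop=1000, bounty=10, hunt_cost=8,
                        breed_cost=2, residents=200, years=5)
print(final_pop)  # → 2000, up from the starting 1000
```

Note that no parameter here encodes bad faith: the inversion falls out of the payoff comparison alone.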
This structure appears across domains with remarkable consistency. The measure is never the goal itself — it is always a representation of the goal, chosen because it is observable and countable when the goal is not. The gap between measure and goal is where the Cobra Effect lives.
Goodhart's Law
The relationship between the Cobra Effect and Goodhart's Law is direct. Charles Goodhart, a professor at the London School of Economics and former Chief Adviser to the Bank of England, formulated the principle in a 1975 paper on monetary policy. Its popular phrasing — "When a measure becomes a target, it ceases to be a good measure" — is a later paraphrase, usually attributed to anthropologist Marilyn Strathern; Goodhart's original formulation was that any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.
Goodhart was describing a specific problem in central banking: using money supply as an intermediate target for monetary policy broke down because financial institutions changed their behavior in response to the target, making the money supply no longer a reliable indicator of the things it had previously tracked. But the principle generalizes: any measure used as a target will be optimized for in ways that disconnect the measure from what it was measuring.
The Cobra Effect is Goodhart's Law with a specific severity condition: not merely does the measure degrade as an indicator, but the optimization behavior actively worsens the underlying problem. Every Cobra Effect involves Goodhart dynamics, but not every Goodhart problem rises to Cobra Effect severity.
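The decoupling Goodhart described can be sketched in a few lines of Python. This is a toy model with made-up numbers, not an analysis from the text: the measure tracks the goal while it is merely observed, and the correlation collapses once actors are rewarded for producing the measure directly.

```python
import random

random.seed(0)

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Phase 1: the measure passively reflects the goal, plus noise.
goal = [random.gauss(50, 10) for _ in range(500)]
measure = [g + random.gauss(0, 3) for g in goal]

# Phase 2: the measure becomes a target; actors game it up to the
# quota (100) regardless of the goal, which stays where it was.
gamed = [100.0 + random.gauss(0, 1) for _ in goal]

print(round(pearson(goal, measure), 2))  # high: measure tracks the goal
print(round(pearson(goal, gamed), 2))    # near zero: measure decoupled
```

The Cobra Effect would be the further step where phase 2 behavior not only decouples the measure but pushes the goal itself in the wrong direction.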
A Taxonomy of Cobra Effects
| Example | Original Measure | Rational Gaming Behavior | Outcome |
|---|---|---|---|
| British India cobra bounty | Dead cobras submitted | Breed cobras to collect bounties | Cobra population increased |
| French Vietnam rat bounty | Rat tails submitted | Cut tails from live rats, import rats | Rat population not reduced |
| Soviet nail quota (by count) | Number of nails | Produce tiny useless nails | Useless inventory |
| Soviet nail quota (by weight) | Weight of nails | Produce enormous unusable nails | Useless inventory |
| Wells Fargo cross-sell targets | Accounts opened | Open fraudulent accounts | Scandal, $185M fines |
| Vietnam War body counts | Enemy casualties reported | Inflate counts, harm civilians | Strategic goal undermined |
Colonial and Policy Examples
The cobra story is one of several similar colonial-era incidents. In 1902, French colonial authorities in Hanoi attempted to reduce the rat population in the city's sewers by offering a bounty for rat tails — tails being easier to transport and authenticate than whole dead rats. By June 1902, reportedly more than 4 million tails had been collected, yet observers noted that the rat population was not visibly declining. Investigation revealed that residents were cutting tails from live rats and releasing them to continue breeding, that rats were being imported from rural areas to collect the bounty near collection points, and that some residents had taken up rat breeding outright. The rat population in areas near collection points increased rather than decreased, and the program was cancelled.
These colonial examples share a common structure: authorities used an observable proxy (body parts) for an unobservable goal (population reduction) without modeling how rational actors would respond to the incentive.
Corporate Incentive Failures
The Wells Fargo cross-selling scandal is the most prominent modern corporate example, but the pattern is common.
Soviet factory quotas. Soviet central planning set output targets for factories based on observable metrics: number of units produced, total weight of output, production volume. Factory managers, under pressure to meet quotas, optimized for the metric. Nail factories producing by count made very large numbers of tiny nails; when the quota switched to weight, factories produced very large nails. Chandelier factories measured by weight produced heavy chandeliers that could not be hung. Glass factories measured by square meters produced very thin glass that shattered. The measure was hit; the goal — producing useful goods — was systematically missed.
Vietnam War body counts. The US military in Vietnam used enemy body counts as its primary metric of progress, the "measure of success" in Secretary of Defense Robert McNamara's framework. Commanders under pressure to show progress had incentive to inflate body counts, count civilian casualties as enemy combatants, and optimize for reported kills rather than actual strategic objectives. The measure became the mission; the actual mission — achieving a stable political outcome in South Vietnam — was not served by maximizing body count, and was arguably worsened by the atrocities that body count pressure incentivized.
Bug bounty programs. Software companies offer rewards to security researchers who discover and responsibly disclose security vulnerabilities. These programs generally improve security by incentivizing external review. But poorly designed programs have created incentive to introduce vulnerabilities quietly and then "discover" them for the reward. More commonly, researchers have found and reported large numbers of low-severity vulnerabilities to collect maximum bounty payments, overwhelming security teams and distracting from genuinely critical issues.
Technology and Algorithmic Examples
Engagement optimization. Social media platforms optimized recommendation algorithms for engagement metrics — likes, shares, comments, time on platform. These are measurable proxies for user satisfaction and value. But engagement can be maximized by content that generates strong emotional reactions, including outrage, fear, and conflict. Internal research at Facebook (later released through whistleblower Frances Haugen), along with independent academic studies, suggested that engagement optimization increased the reach of inflammatory content and reduced user wellbeing even as it increased time on platform. The metric was hit; the underlying goal — delivering value to users — was worsened.
Academic citation metrics. Research institutions began using citation counts and journal impact factors to evaluate researcher productivity and allocate funding. These proxies for research quality produced predictable optimizations: paper-splitting (dividing one study's results into multiple publications), citation rings (researchers agreeing to cite each other), selection of incremental research over high-risk innovative research (incremental papers are safer publication bets), and publication bias (negative results are not publishable, so negative results are withheld). The San Francisco Declaration on Research Assessment (DORA, 2012) was a direct response to these Cobra Effect dynamics in academic incentive structures.
The Mechanism: Why Rational People Create Cobra Effects
The Cobra Effect is not produced by irrational actors or malicious intent. It is produced by rational actors responding to the incentives they actually face, rather than the incentives designers intended.
This is the core insight that makes the Cobra Effect analytically important: the failure mode is predictable from first principles. If you know what incentive a system creates, and you assume actors will respond to that incentive rationally, you can predict whether a Cobra Effect will emerge. The British authorities who designed the cobra bounty did not model how a rational agent would respond to "payment for dead cobras." If they had, the problem would have been apparent.
The Designer's Bias
Incentive designers typically imagine that participants will pursue the stated goal and use the incentive as motivation. The cobra bounty was designed by people who imagined residents hunting cobras and collecting payment. They did not imagine residents farming cobras. This failure of imagination is systematic: incentive designers have the actual goal salient in their minds and project that salience onto participants who have only the incentive structure.
Behavioral economist Richard Thaler (University of Chicago, Nobel Prize in Economics 2017) and legal scholar Cass Sunstein's work on "choice architecture" addresses this directly: the way choices and incentives are structured profoundly shapes behavior, often in ways designers do not anticipate. Designing good incentive structures requires explicitly modeling how people will actually respond, not how they ideally would respond.
Research on Perverse Incentives
The Campbell's Law Connection
Donald T. Campbell, a social scientist who conducted extensive research on social experimentation and evaluation, formulated what became known as Campbell's Law in a 1979 paper in American Behavioral Scientist: "The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor."
Campbell's Law is a generalization of the Cobra Effect mechanism to social indicator systems. When academic test scores become the primary metric for school quality, schools teach to the test. When crime statistics become the metric for police effectiveness, police engage in statistical manipulation. When hospital readmission rates become the metric for care quality, hospitals find ways to avoid recording readmissions — sometimes in ways that don't serve patient health.
A comprehensive 2002 study by Daniel Koretz (Harvard Graduate School of Education) documented Campbell's Law in action across multiple high-stakes testing programs in American public education, finding consistent patterns of score inflation that did not reflect genuine improvements in learning.
Experimental Evidence on Incentive Gaming
Uri Gneezy (Rady School of Management, UC San Diego) and Aldo Rustichini's landmark 2000 study, published in the Journal of Legal Studies, examined the effects of introducing a fine for parents who picked up their children late from Israeli day care centers. Classic economic intuition would predict that the fine — making the cost of late pickup explicit — would reduce late pickups.
The opposite occurred: late pickups increased significantly after the fine was introduced. The researchers' interpretation: the fine replaced a social norm (don't inconvenience the teachers) with a market transaction (pay for extra time). Once it was a transaction, parents felt no social obligation beyond the payment. The attempt to reduce a problem made it worse. The fine was a Cobra Effect.
How to Avoid It
Design for Outcomes, Not Proxies
Where possible, measure and incentivize the actual goal rather than a proxy. This is often harder than measuring the proxy but dramatically reduces Cobra Effect risk. In performance management, this means measuring business outcomes (customer satisfaction, revenue, retention) rather than activity metrics (calls made, emails sent, hours worked).
Red-Team the Incentive Structure
Before deploying an incentive system, explicitly ask: how would a rational, self-interested actor game this system? Assign someone the role of finding exploits. If the exploits are easy to find in a red-team exercise, they will be found in deployment.
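The red-team exercise can be made concrete as a simple enumeration over candidate strategies. This is a hypothetical sketch — the strategy names, payouts, and goal effects are illustrative placeholders, here seeded with the cobra bounty's own economics:

```python
# Each strategy maps to the payout it earns from the metric and its
# effect on the real goal (negative = the underlying problem worsens).
strategies = {
    "hunt cobras": {"payout": 2, "goal_effect": +1},
    "breed cobras": {"payout": 8, "goal_effect": -1},
    "do nothing": {"payout": 0, "goal_effect": 0},
}

def find_exploits(strategies, intended="hunt cobras"):
    """Return strategies a rational actor prefers over the intended
    behavior that also make the underlying problem worse."""
    baseline = strategies[intended]["payout"]
    return [name for name, s in strategies.items()
            if s["payout"] > baseline and s["goal_effect"] < 0]

print(find_exploits(strategies))  # → ['breed cobras']
```

If this function returns anything non-empty for a proposed incentive, the scheme fails the red-team test before deployment rather than after.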
Use Multiple Metrics
Single metrics are maximally vulnerable to Goodhart dynamics. When success requires satisfying multiple metrics simultaneously — and the metrics collectively approximate the actual goal from different directions — it becomes harder to optimize for any one measure in ways that harm the underlying objective. The Balanced Scorecard methodology developed by Robert Kaplan (Harvard Business School) and David Norton addresses this by requiring organizations to track financial, customer, internal process, and learning metrics simultaneously.
Maintain Human Judgment
Purely algorithmic incentive systems are more vulnerable than hybrid systems that include human qualitative judgment. A sales manager who can observe whether account relationships are genuine can catch fraudulent account openings that purely quantitative systems miss. Formal metrics should inform but not entirely replace judgment about whether the underlying goal is being achieved.
Build in Feedback Loops
Design systems to detect divergence between measure and goal early. The cobra bounty program lacked any mechanism to detect that the cobra population was not declining as bounties were being collected. Regular audits that measure the underlying goal directly — even if infrequently and at higher cost — allow early identification of Cobra Effect dynamics before they become catastrophic.
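One possible shape for such a feedback loop (an assumed design, not one described above): track the cheap proxy continuously, audit the underlying problem directly each period, and raise an alarm whenever the proxy improves while the audited problem grows.

```python
def divergence_alarm(proxy, problem_size):
    """Return periods where the proxy improved while a direct audit
    showed the underlying problem getting worse."""
    return [t for t in range(1, len(proxy))
            if proxy[t] > proxy[t - 1]
            and problem_size[t] > problem_size[t - 1]]

# Bounties paid rise every quarter (the proxy looks great)...
bounties_paid = [100, 180, 260, 400]
# ...while periodic field surveys show the wild population rebounding.
surveyed_population = [1000, 980, 1100, 1300]

print(divergence_alarm(bounties_paid, surveyed_population))  # → [2, 3]
```

Applied to the cobra program, even an annual survey of the wild population would have flagged the divergence long before cancellation released the farmed snakes.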
Environmental Policy and the Cobra Effect
Environmental regulation is particularly vulnerable to Cobra Effect dynamics because environmental goals are often observable only at significant cost, making proxy measures essential and the gap between measure and goal large.
Carbon offsetting. Carbon offset markets allow companies to pay for projects — tree planting, avoided deforestation, renewable energy development — in exchange for credits that offset their own emissions. The system was designed to reduce the cost of achieving climate targets by allowing emissions reductions where they are cheapest. Multiple investigations have found significant problems: offset projects that would have happened anyway being sold as additional reductions, forest protection credits for forests that were never threatened, and in several documented cases, the certification of offset projects that were abandoned after payment. A 2023 investigation by The Guardian, Die Zeit, and SourceMaterial, drawing on analyses that included work by University of Cambridge researchers, concluded that more than 90% of rainforest carbon credits certified by Verra (the largest offset certification organization) were likely "phantom credits" that did not represent genuine carbon reductions.
The mechanism is Cobra Effect: the carbon credit system, by making emission reductions tradeable, created incentives to produce the appearance of emission reductions without the substance. The metric (certified offset credit) diverged from the goal (actual reduction in atmospheric carbon).
Invasive species bounties. Several jurisdictions have experimented with bounties for invasive species — rats, pythons, lionfish — with mixed results. Florida's Python Challenge, which offers cash prizes for Burmese pythons captured in the Everglades, has been largely immune to farming because pythons are difficult and dangerous to breed in captivity. But programs targeting more farmable species have encountered the classic cobra problem: when the bounty is high enough relative to breeding costs, breeding for the bounty becomes economically rational.
Recycling contamination. Recycling programs designed to reduce waste to landfill often use collection volume as their primary metric — how much material is placed in recycling bins. This creates incentive for households and municipalities to report high recycling rates without ensuring that collected material is actually recyclable. When contamination rates (non-recyclable materials mixed into recycling streams) are high enough, the entire batch is landfilled. China's 2018 National Sword policy — which rejected most recycled material imports due to contamination — revealed that significant proportions of materials collected as "recycled" in Western countries were going to landfill anyway. The high collection-rate metric had masked a low actual-recycling-rate reality.
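The arithmetic behind this masking effect is simple; the rates below are illustrative assumptions, not figures from the policy record. If a contaminated batch is landfilled wholesale, the true recycling rate is the headline collection rate discounted by the batch rejection rate.

```python
def true_recycling_rate(collection_rate, batch_rejection_rate):
    """Fraction of total waste actually recycled, given that a rejected
    (contaminated) batch is landfilled in its entirety."""
    return collection_rate * (1 - batch_rejection_rate)

reported = 0.60  # 60% of waste placed in recycling bins (the metric)
rejected = 0.40  # 40% of collected batches fail contamination checks

print(round(true_recycling_rate(reported, rejected), 2))  # → 0.36
```

A program reporting 60% collection while 40% of batches are rejected is recycling just over a third of its waste — and the collection metric alone can never reveal that.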
Healthcare Cobra Effects
Healthcare systems create particularly consequential Cobra Effects because the stakes — patient health — are high and the systems are complex enough to contain many proxy-goal gaps.
Readmission penalties. The US Affordable Care Act introduced financial penalties for hospitals with high readmission rates — patients discharged and then readmitted within 30 days. The target was improving care quality; readmissions were seen as evidence of premature discharge or inadequate post-discharge support. Research published in the Journal of the American Medical Association (2018, Wadhera et al.) found evidence that the penalties were associated with increases in mortality among heart failure patients. The proposed mechanism: hospitals became more reluctant to readmit patients who might benefit from readmission, treating the readmission metric as the target rather than patient health outcomes.
Surgical volume and specialization. Research from the Dartmouth Atlas of Health Care (Dartmouth Institute for Health Policy and Clinical Practice) documented that regions with more specialists per capita — which would seem to imply better care — often had worse health outcomes than regions with fewer specialists and more primary care physicians. The driver appears to be the incentive structure: specialists are paid per procedure; more specialists creates more procedures; more procedures creates more opportunities for complications. The metric (access to specialized care) diverged from the goal (better health outcomes).
Hospital ratings and patient satisfaction. Patient satisfaction surveys became important metrics for hospital funding and reputation. Research by Joshua Fenton and colleagues (UC Davis, published in Archives of Internal Medicine, 2012) found that patients with the highest satisfaction scores had significantly higher odds of inpatient admission, higher pharmaceutical expenditure, and — most strikingly — higher mortality than patients with lower satisfaction scores. The proposed mechanism: physicians under pressure to maximize satisfaction scores were more likely to prescribe requested medications, order requested tests, and admit patients who requested admission, even when these interventions were not medically indicated or were mildly harmful.
Cobra Effects in Education
Grade inflation. As grades became the primary visible output of education systems — used for college admission, scholarship allocation, and job screening — incentives accumulated to inflate grades. When students and parents treat grades as the target rather than as indicators of learning, and when teacher performance is evaluated partly through student satisfaction and grade outcomes, systematic grade inflation results. Average GPAs at American four-year colleges rose from approximately 2.5 in the 1950s to approximately 3.15 by 2020 (Jewell, McPherson, and Tieslau, University of North Texas, 2013), while standardized test scores showed no equivalent gains. The metric (grade) inflated while the underlying goal (demonstrated learning) did not keep pace.
Publication metrics in academia. Research productivity measured by publication count produced the "publish or perish" culture in universities, which the San Francisco Declaration on Research Assessment (DORA, 2012) identified as a fundamental flaw in academic incentive structures. Researchers optimizing for publication count engaged in practices including: minimum publishable units (dividing one study into multiple publications), selective reporting of positive results, and focus on incremental low-risk research rather than ambitious high-risk investigation. The result was a literature with known reproducibility problems — the "replication crisis" documented across psychology (Nosek et al., Open Science Collaboration, Science, 2015), economics, and medicine — that emerged partly from incentive structures that optimized for publication rather than for truth.
Relationship to Adjacent Concepts
Goodhart's Law is the immediate parent: when a measure becomes a target, it ceases to be a good measure. The Cobra Effect is the severe case where Goodhart dynamics actively worsen the underlying problem.
Second-order thinking is the preventive: systematically asking what happens after the first-order effect of the incentive (people respond to the bounty by hunting cobras) exposes the second-order effect (people breed cobras to collect bounties) before the program is deployed.
Hanlon's Razor is relevant to how Cobra Effects are analyzed after the fact: the bad outcomes typically emerge from rational actors gaming an incentive system, not from malicious intent — a distinction that matters for designing better systems rather than assigning blame.
The Overton Window interacts with the Cobra Effect: when a policy produces a Cobra Effect severe enough to attract attention, it can shift the policy window — making previously unthinkable regulatory approaches acceptable as corrective responses. The financial crisis of 2008 produced Cobra Effects (incentive structures that rewarded mortgage origination regardless of borrower quality) severe enough to shift the regulatory window significantly.
References
- Siebert, Horst. Der Kobra-Effekt: Wie man Irrwege der Wirtschaftspolitik vermeidet. Deutsche Verlags-Anstalt, 2001.
- Goodhart, Charles A. E. "Problems of Monetary Management: The U.K. Experience." Papers in Monetary Economics, Reserve Bank of Australia, 1975.
- Campbell, Donald T. "Assessing the Impact of Planned Social Change." Evaluation and Program Planning, vol. 2, no. 1, 1979, pp. 67-90.
- Gneezy, Uri, and Aldo Rustichini. "A Fine Is a Price." Journal of Legal Studies, vol. 29, no. 1, 2000, pp. 1-17.
- Koretz, Daniel. "Limitations in the Use of Achievement Tests as Measures of Educators' Productivity." Journal of Human Resources, vol. 37, no. 4, 2002, pp. 752-777.
- Thaler, Richard H., and Cass R. Sunstein. Nudge: Improving Decisions About Health, Wealth, and Happiness. Yale University Press, 2008.
- Kaplan, Robert S., and David P. Norton. "The Balanced Scorecard: Measures That Drive Performance." Harvard Business Review, January-February 1992.
- Consumer Financial Protection Bureau. "CFPB Fines Wells Fargo $100 Million for Widespread Illegal Practice of Secretly Opening Unauthorized Accounts." Press release, September 8, 2016.
Frequently Asked Questions
What is the Cobra Effect?
The Cobra Effect describes a situation in which an attempted solution to a problem inadvertently makes the problem worse. It is named after an incident in colonial India in which a British government bounty on dead cobras caused residents to breed cobras for the reward — increasing the snake population rather than reducing it.
What is the original cobra story?
Under British colonial rule in India (the specific date is disputed, but the incident is associated with 19th-century Delhi), authorities concerned about venomous cobras offered a bounty for every dead cobra delivered. Initially, residents killed cobras and collected the reward. Enterprising people soon began breeding cobras to collect larger bounties. When the program was cancelled, the now-worthless snakes were released, making the cobra problem worse than before the program started.
Who coined the term 'Cobra Effect'?
The term was popularized by German economist Horst Siebert in his 2001 book 'Der Kobra-Effekt: Wie man Irrwege der Wirtschaftspolitik vermeidet' (The Cobra Effect: How to Avoid Wrong Turns in Economic Policy). Siebert used the story to illustrate perverse incentive structures in economic policy.
What is the difference between the Cobra Effect and Goodhart's Law?
They are closely related. Goodhart's Law states that when a measure becomes a target, it ceases to be a good measure. The Cobra Effect is a specific manifestation of Goodhart's Law: the measure (cobra bounties as a proxy for cobra reduction) became the target, and people optimized for the measure rather than the underlying goal. Every Cobra Effect involves Goodhart dynamics, but Goodhart's Law is the broader principle.
What are modern examples of the Cobra Effect?
Common examples include: Wells Fargo's sales incentive program (2016) that led employees to create millions of fake accounts to meet targets; Vietnam War body count metrics that incentivized inflated kill counts over strategic objectives; Soviet factory output quotas that produced goods meeting weight targets but failing quality standards; and bug bounty programs that can incentivize creating bugs to report. Each case involves an incentive system that caused people to optimize for the metric rather than the underlying goal.
How is the Cobra Effect related to perverse incentives?
The Cobra Effect is a subset of perverse incentive problems. A perverse incentive is any incentive structure that produces outcomes contrary to its designers' intentions. The Cobra Effect specifically involves the incentive causing the target problem to worsen — not just failing to improve it, but actively making it worse.
How can organizations avoid the Cobra Effect?
Key strategies include: measuring outcomes rather than proxies where possible; testing incentive systems at small scale before broad deployment; modeling how rational actors will respond to the incentive (not how designers hope they will); monitoring for unexpected behavioral changes; and maintaining multiple indicators rather than single metrics that can be gamed.
Is the Cobra Effect the same as an unintended consequence?
The Cobra Effect is a specific type of unintended consequence — one in which the attempt to solve a problem causes that problem to worsen. Not all unintended consequences make problems worse; some are neutral or even positive. The Cobra Effect is specifically about solutions that backfire and create more of the problem they were designed to eliminate.
Does the Cobra Effect apply to technology?
Extensively. Recommendation algorithms optimized for engagement have been linked to increased outrage and misinformation — an attempt to deliver relevant content that worsened information quality. Content moderation systems optimized for detection accuracy sometimes increase the production of borderline content as creators game the system. Any algorithmic system optimizing a proxy metric is vulnerable to Cobra Effect dynamics.