Why Good Intentions Still Lead to Bad Outcomes

The Gap Between Meaning Well and Doing Well

Colonial India, 1800s: British government concerned about venomous cobras. Solution: Offer bounty for dead cobras. Result: People breed cobras to collect bounties. Program ends → Breeders release cobras → More cobras than before. (The Cobra Effect)

US welfare reform: Require work for benefits to encourage employment. Intention: Help people become self-sufficient. Result: Single mothers forced to take any job (often worse than welfare), childcare issues, health problems, still poor. (Unintended hardship)

Social media: Connect the world, give everyone a voice, democratize information. Intention: Empower people. Result: Misinformation spreads, echo chambers form, mental health declines, polarization increases, democracy threatened. (System-level backfire)

"The road to hell is paved with good intentions" isn't cynicism—it's systems thinking.

Good intentions are necessary but not sufficient. The world is complex. Actions ripple through feedback loops, create perverse incentives, trigger adaptations, and produce second-order effects impossible to predict from first principles alone.

Understanding why good intentions fail helps you:

  • Recognize patterns of backfire
  • Design interventions that account for complexity
  • Monitor for unintended harm
  • Stay humble about predictions
  • Distinguish foreseeable from unforeseeable harm

This is essential vocabulary for anyone trying to make the world better—which includes most people with power to act.

Why Good Intentions Aren't Enough

Intentions vs. Outcomes

Intentions: What you mean to accomplish; your goals and motivations.

Outcomes: What actually happens; the consequences of your actions.

The gap: The world is complex. Actions interact with existing systems, incentives, behaviors, and constraints in ways that produce outcomes different from (often opposite to) intentions.

Moral philosophy debate:

  • Consequentialism: Ethics determined by outcomes (good intentions with bad outcomes = bad)
  • Deontology: Ethics determined by duties and rules (acting on the right principles counts as good, even if outcomes are bad)
  • Virtue ethics: Ethics about character (a good person intends good and tries to foresee consequences)

Practical reality: Both matter.

  • Intentions matter (distinguishes negligence from malice, accident from murder)
  • Outcomes matter (harm is harm, regardless of intent)
  • Responsibility requires both: Good intentions + reasonable effort to foresee consequences

Key insight: You're not judged only on intentions, but you are judged partly on whether you tried to understand likely outcomes.

Knowledge Limits

Problem: Can't know all consequences of actions.

Why:

  • Complexity: Systems have too many interacting parts
  • Emergence: System behavior arises from interactions, not predictable from parts alone
  • Non-linearity: Small changes can have huge effects (tipping points)
  • Delays: Effects appear long after action (causal link obscured)
  • Hidden variables: Factors you don't know about influence outcomes

Example - Prohibition (US, 1920-1933):

  • Intention: Reduce alcohol consumption, improve public health and morality
  • Unforeseen consequences:
    • Rise of organized crime (mafia controlled illegal alcohol trade)
    • Dangerous moonshine (unregulated alcohol poisoned people)
    • Corruption (police, politicians bribed)
    • Disrespect for law (widespread violation normalized lawbreaking)
  • Knowledge limit: Didn't anticipate how banning popular product would create massive black market

Implication: Humility required. Can't foresee everything, so build in feedback and adaptation.

Complexity and Second-Order Effects

First-order effects: Direct, immediate consequences of action.

Second-order effects: Consequences of the consequences—downstream impacts through system.

Third-order effects: Consequences of second-order effects... (ripples continue)

Example - Antibiotics:

  • First-order effect: Kill bacteria, cure infections (good)
  • Second-order effect: Overuse leads to antibiotic-resistant bacteria (bad)
  • Third-order effect: Resistant bacteria spread, making infections harder to treat (very bad)

Why second-order effects are missed:

  • Less obvious (not direct connection)
  • Delayed (appear later)
  • Distributed (affect different people/places)
  • Require systems thinking (see interconnections)

Example - Highway expansion to reduce traffic:

  • Intention: Add lanes to relieve congestion
  • First-order: More capacity (good)
  • Second-order: More capacity → Driving becomes faster → More people drive (induced demand) → Congestion returns
  • Result: Same traffic, more pollution, more sprawl

Application: Ask: "And then what? And then what?" Keep following ripples.
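
To make the ripple-following concrete, here is a minimal Python sketch of the induced-demand loop above. The numbers and the demand-adjustment rule are illustrative assumptions, not empirical estimates.

```python
# Toy model of induced demand (all numbers are illustrative, not empirical).
# Congestion index = demand / capacity. When roads feel faster than the level
# drivers will tolerate, latent demand converts into new trips.

def simulate(capacity, demand, tolerated_congestion=1.0, adjust_rate=0.3, years=10):
    history = []
    for year in range(years):
        congestion = demand / capacity
        history.append(round(congestion, 2))
        # Second-order effect: easier driving attracts more drivers.
        demand += adjust_rate * (tolerated_congestion - congestion) * capacity
    return history

print(simulate(capacity=100, demand=100))  # baseline: stuck at congestion 1.0
print(simulate(capacity=150, demand=100))  # add 50% capacity: drops to ~0.67,
                                           # then drifts back toward 1.0 as
                                           # induced demand fills the new lanes
```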

Classic Patterns of Backfire

Perverse Incentives (Cobra Effect)

Definition: When incentive structure encourages opposite of intended behavior.

Mechanism: People respond to incentives, not intentions. If incentive misaligned, behavior follows incentive.

Classic examples (intervention → incentive created → backfire result):

  • Cobra bounty (India): Money for dead cobras → Breed cobras for profit
  • Rat bounty (Hanoi): Money for rat tails → Cut tails, release rats (tail farming)
  • Soviet factories: Bonus for weight of nails produced → Make few, very heavy, useless nails
  • Hospital wait times: Penalty for long ER waits → Ambulances kept circling until patient can be seen "quickly"
  • Student test scores: Teacher bonuses for high scores → Teachers help students cheat

Example - Wells Fargo sales targets:

  • Intention: Motivate employees to sell more accounts
  • Incentive: Bonus for accounts opened
  • Perverse behavior: Open fake accounts without customer consent
  • Result: 3.5 million fraudulent accounts, massive scandal

Prevention:

  • Model incentives (what will people actually do?)
  • Test on small scale
  • Monitor for gaming
  • Balance metrics (not just quantity—quality, ethics, customer satisfaction)
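
The first prevention step—modeling incentives—can be as simple as comparing payoffs. A minimal sketch with made-up numbers, showing why a bounty can end up funding a cobra farm:

```python
# Back-of-the-envelope incentive check for a bounty scheme (all numbers made
# up for illustration). The useful question is not "what do we want people to
# do?" but "what is the most profitable response to the rules as written?"

BOUNTY = 10.0               # paid per dead cobra
COST_TO_CATCH_WILD = 12.0   # effort and risk of hunting a wild cobra
COST_TO_BREED = 2.0         # cost of raising a cobra in captivity

options = {
    "hunt wild cobras": BOUNTY - COST_TO_CATCH_WILD,  # -2.0: loses money
    "breed cobras": BOUNTY - COST_TO_BREED,           # +8.0: profitable
    "do nothing": 0.0,
}

best_response = max(options, key=options.get)
print(best_response, options[best_response])
# -> breed cobras 8.0 — the bounty itself funds more cobras
```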

Unintended Consequences

Definition: Outcomes that weren't predicted or intended.

Types:

1. Unexpected benefit (happy accident):

  • Penicillin discovered by accident
  • Post-it notes (failed glue became useful product)

2. Unexpected drawback (perverse consequence):

  • Social media → Mental health crisis
  • Plastics → Environmental catastrophe

3. Perverse result (opposite of intention):

  • Abstinence-only education → Higher teen pregnancy rates (lack of contraception knowledge)
  • Prohibition → More crime

Example - Streisand Effect:

  • Intention: Suppress unflattering information
  • Action: Sue to remove photo/information
  • Result: Lawsuit draws attention; information spreads more widely
  • Named after: Barbra Streisand sued to remove photo of her house → Made photo famous

Why they happen:

  • Incomplete mental model (don't understand system)
  • Focus on direct effects (ignore ripples)
  • Ignore incentives (assume people act as intended)
  • Unexpected interactions (multiple interventions interact)

Prevention:

  • Study analogous situations (what happened before?)
  • Pre-mortem (imagine failure, work backward)
  • Monitor and adapt (catch early, adjust)

Moral Hazard

Definition: When protection from consequences encourages risky behavior.

Mechanism: If downside is reduced/eliminated, people take more risks.

Examples:

Insurance: If car insured, less careful (someone else pays for damage).

  • Mitigation: Deductibles (you pay first $500), premium increases (poor driving costs you)

Too Big to Fail: Banks take excessive risks knowing government will bail them out.

  • 2008 financial crisis: Banks made reckless loans (profits if succeed, bailout if fail)
  • Moral hazard: Risk/reward asymmetry encouraged gambling

Welfare: Concerns that safety net discourages work.

  • Debate: Evidence mixed; some effect but smaller than feared
  • Actual problem: "Welfare cliff" (losing benefits when you earn slightly more creates a perverse incentive not to work more)

Safety equipment: Wearing seatbelt makes drivers feel safer → Drive more recklessly.

  • Peltzman effect: Safety improvements offset by behavior changes
  • Result: Still net positive (lives saved) but less than predicted

Prevention:

  • Maintain some skin in the game (partial consequences remain)
  • Monitor behavior (detect risk-taking increases)
  • Align incentives (reward caution, penalize recklessness)
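
"Skin in the game" can be shown as a one-line expected-cost comparison. A minimal sketch with illustrative numbers (not actuarial data): with full coverage, carelessness costs the driver nothing; with a deductible, care becomes the cheaper choice.

```python
# Illustrative expected-cost comparison for a driver (numbers are assumptions,
# not actuarial data). Assumes any accident costs more than the deductible,
# so the driver's out-of-pocket loss is the deductible and the insurer pays
# the rest.

P_ACCIDENT_CAREFUL = 0.02
P_ACCIDENT_CARELESS = 0.08
EFFORT_OF_CARE = 50          # attention, slower trips, etc. (in dollars)

def expected_cost(p_accident, deductible, careful):
    out_of_pocket = p_accident * deductible
    return out_of_pocket + (EFFORT_OF_CARE if careful else 0)

for deductible in (0, 1_000):
    careful = expected_cost(P_ACCIDENT_CAREFUL, deductible, careful=True)
    careless = expected_cost(P_ACCIDENT_CARELESS, deductible, careful=False)
    print(f"deductible={deductible}: careful={careful:.0f}, careless={careless:.0f}")

# deductible=0:    careful=50, careless=0  -> being careless is "free" (moral hazard)
# deductible=1000: careful=70, careless=80 -> care is now the cheaper choice
```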

Goodhart's Law

Definition: "When a measure becomes a target, it ceases to be a good measure."

Mechanism: People optimize for metric rather than underlying goal. Metric becomes disconnected from what it's supposed to measure.

Examples:

Teaching to the test:

  • Measure: Standardized test scores (proxy for learning)
  • Target: Schools evaluated by test scores
  • Gaming: Teach test-taking, narrow curriculum, focus on tested subjects
  • Result: Scores rise but actual learning doesn't

Police clearance rates:

  • Measure: % of crimes "solved"
  • Target: Pressure to increase clearance
  • Gaming: Arrest innocent people, coerce confessions, downgrade crimes
  • Result: Metric looks good, justice doesn't improve

Academic publications:

  • Measure: Publication count (proxy for research quality)
  • Target: Tenure/promotion based on publication count
  • Gaming: Salami slicing (one study → multiple minimal papers), publish junk in predatory journals
  • Result: More papers, less quality

Prevention: See "Measurement and Metrics Terms" article (use multiple metrics, rotate, check for metric-goal correlation).
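
A toy version of the teaching-to-the-test case (hypothetical weights, purely for illustration): once the proxy becomes the optimization target, it decouples from the goal it was meant to track.

```python
# Toy model of Goodhart's Law (hypothetical weights, for illustration only).
# A school splits 100 hours between real instruction and test-prep tricks;
# only instruction produces learning, but both raise the test score.

def outcomes(hours_instruction):
    hours_test_prep = 100 - hours_instruction
    learning = hours_instruction                          # the underlying goal
    test_score = hours_instruction + 3 * hours_test_prep  # the measured proxy
    return learning, test_score

# Before the measure becomes a target: teach normally, score tracks learning.
print(outcomes(100))   # (100, 100)

# After the measure becomes a target: choose hours to maximize the score.
best_hours = max(range(101), key=lambda h: outcomes(h)[1])
print(best_hours, outcomes(best_hours))   # 0 (0, 300) — score up, learning gone
```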

System Adaptation

Definition: Systems respond to interventions, often in ways that undermine intervention's effectiveness.

Mechanism: People, organizations, ecosystems adapt. Adaptation can resist change.

Examples:

Antibiotic resistance:

  • Intervention: Antibiotics kill bacteria
  • Adaptation: Bacteria evolve resistance
  • Result: Arms race (need new antibiotics continuously)

Pesticide resistance:

  • Intervention: Pesticides kill pests
  • Adaptation: Pests evolve resistance
  • Result: Stronger pesticides needed (toxicity escalates)

Predictive policing:

  • Intention: Deploy police where crime predicted
  • Adaptation: Criminals shift to unpatrolled areas, or become more cautious/sophisticated
  • Result: Crime displacement, escalation

Rent control:

  • Intention: Keep housing affordable
  • Adaptation: Landlords reduce maintenance, convert to condos, developers build less rental housing
  • Result: Housing quality declines, supply shrinks (long-term affordability worse)

Prevention:

  • Anticipate adaptation (what will people/systems do in response?)
  • Design for co-evolution (interventions that adapt to resistance)
  • Monitor and adjust (catch adaptation early)

Domain-Specific Backfires

Development and Aid

Intention: Help poor countries develop, reduce poverty.

Common backfires:

Food aid displacing local agriculture:

  • Free food arrives → Local farmers can't compete → Stop farming → Dependency created
  • Better: Cash transfers, support for local farmers

Building infrastructure without maintenance:

  • Wells drilled, schools built → Equipment breaks, no funds to repair → Abandoned
  • Better: Include training, funding for ongoing maintenance

Corruption and aid:

  • Aid money stolen by corrupt officials → Enriches elites, not poor → Reinforces bad governance
  • Better: Direct delivery, conditional aid, transparent monitoring

Example - Malaria bed nets:

  • Intention: Distribute free bed nets to prevent malaria
  • Unintended use: Fishing nets (mesh too fine; catches juvenile fish and damages fish populations)
  • Lesson: People repurpose resources based on immediate needs, not donors' intentions

Education Reform

Intention: Improve learning outcomes.

Common backfires:

High-stakes testing:

  • Intention: Measure learning, hold schools accountable
  • Backfire: Teaching to test, curriculum narrowing, cheating scandals
  • Result: Scores rise, actual learning questionable

Grade inflation:

  • Intention: Encourage students, reduce stress
  • Backfire: Grades lose meaning, can't distinguish performance levels
  • Result: Everyone gets A, no information conveyed

Zero-tolerance policies:

  • Intention: Ensure safety, consistency
  • Backfire: Absurd outcomes (kids suspended for toy guns or aspirin), disproportionate impact on minorities
  • Result: Injustice, distrust, no improvement in safety

Example - Homework bans:

  • Intention: Reduce student stress, free time for play/family
  • Possible backfire: Privileged families hire tutors (advantage), disadvantaged students fall behind
  • Lesson: Same policy affects different groups differently

Criminal Justice

Intention: Reduce crime, increase safety, deliver justice.

Common backfires:

Mandatory minimum sentences:

  • Intention: Deter crime, ensure consistency
  • Backfire: Mass incarceration, disproportionate minorities affected, judges can't consider context
  • Result: Prisons overflow, families destroyed, communities harmed

Three strikes laws:

  • Intention: Incapacitate repeat offenders
  • Backfire: Life sentences for minor crimes (theft), incentive to kill witnesses (already facing life)
  • Result: Injustice, perverse incentives

War on Drugs:

  • Intention: Reduce drug use, protect communities
  • Backfire: Mass incarceration, militarization of police, organized crime thrives, stigma prevents treatment
  • Result: Drug use continues, enormous collateral damage

Example - Sex offender registries:

  • Intention: Protect children, inform communities
  • Backfire: Registrants face housing/employment barriers → Homelessness, unemployment (increase recidivism risk), vigilante attacks
  • Evidence: Little evidence registries reduce reoffending; may increase risk

Technology and Social Media

Intention: Connect people, share information, democratize communication.

Backfires:

Echo chambers:

  • Mechanism: Algorithms show content you engage with → See only views you agree with → Polarization
  • Result: Society fragments, can't agree on facts

Misinformation spread:

  • Mechanism: False content often more engaging (outrage, fear) → Algorithms amplify
  • Result: Conspiracy theories, vaccine hesitancy, election denialism

Mental health crisis:

  • Mechanism: Social comparison, cyberbullying, addiction to validation (likes)
  • Result: Teen depression, anxiety, suicide rates rise

Manipulation and radicalization:

  • Mechanism: Algorithms optimize engagement → Extreme content keeps people on platform → Radicalization pipeline
  • Result: Real-world violence (Christchurch, Capitol riot)

Example - Facebook "real names" policy:

  • Intention: Reduce trolling, increase accountability
  • Backfire: Endangers activists in repressive countries, LGBTQ+ people who aren't out, and abuse survivors hiding from stalkers
  • Result: Vulnerable people banned or exposed

How to Do Better

Think in Systems

Problem: Linear thinking misses feedback loops, adaptations, emergent effects.

Solution: Systems thinking—understand interconnections, feedback, delays.

Key questions:

  • What are feedback loops? (Reinforcing or balancing?)
  • How will system adapt to intervention?
  • What are second-order effects?
  • Who benefits? Who's harmed?
  • What are incentives (not just intentions)?

Application: Before intervening, map the system. Identify key actors, incentives, feedback loops.
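
One reason mapping matters: even a single balancing loop behaves unintuitively once there is a delay between action and effect. A minimal sketch of an illustrative system (not a model of any real policy), showing the overshoot that linear thinking misses:

```python
# Minimal sketch of a balancing feedback loop with a delay (illustrative
# system only). The decision-maker reacts to the visible gap each period,
# but the effect of each action lands only after `delay_steps` periods.

from collections import deque

def run(delay_steps, gain=0.3, target=100.0, steps=20):
    state = 0.0
    in_transit = deque([0.0] * delay_steps)   # actions whose effects haven't landed yet
    trajectory = []
    for _ in range(steps):
        action = gain * (target - state)      # respond to what you can see now
        in_transit.append(action)
        state += in_transit.popleft()         # only the delayed action lands
        trajectory.append(round(state, 1))
    return trajectory

print(run(delay_steps=0))  # smooth, monotonic approach to 100
print(run(delay_steps=3))  # overshoots well past 100, swings back, oscillates
```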

Model Incentives, Not Just Goals

Problem: People respond to incentives, not your goals.

Solution: Design incentives that align with goals.

Process:

  1. Ask: "If I were trying to game this system, how would I do it?"
  2. Identify perverse incentives: What unintended behaviors are encouraged?
  3. Test: Small-scale pilot before full rollout
  4. Monitor: Watch for gaming
  5. Adjust: Change incentives when gaming detected

Example - Tax policy:

  • Goal: Reduce carbon emissions
  • Mechanism: Carbon tax (incentive to reduce emissions, not just hope people will)
  • Key: Incentive aligned with goal
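
A stylized version of the carbon-tax logic (the cost curve and numbers are assumptions for illustration): the firm minimizes its own bill, and that private optimization now also cuts emissions.

```python
# Stylized carbon-tax sketch (cost curve and numbers are assumptions for
# illustration). The firm minimizes its own cost; the tax makes emitting
# costly, so the privately optimal choice now includes abatement.

BASELINE_EMISSIONS = 100  # tons

def firm_cost(abatement_tons, tax_per_ton):
    abatement_cost = 0.5 * abatement_tons ** 2       # rising marginal cost of cuts
    taxed_emissions = BASELINE_EMISSIONS - abatement_tons
    return abatement_cost + tax_per_ton * taxed_emissions

def chosen_abatement(tax_per_ton):
    return min(range(BASELINE_EMISSIONS + 1), key=lambda a: firm_cost(a, tax_per_ton))

print(chosen_abatement(0))    # 0  — with no price on carbon, cut nothing
print(chosen_abatement(20))   # 20 — cut until the next ton costs more than $20
print(chosen_abatement(40))   # 40 — a higher price buys deeper cuts
```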

Start Small, Monitor, Adapt

Problem: Large-scale interventions have large-scale unintended consequences.

Solution: Iterative approach—test, learn, adjust.

Process:

  1. Pilot: Small-scale test
  2. Monitor: Watch for unintended effects
  3. Learn: What worked? What backfired?
  4. Adjust: Modify based on evidence
  5. Scale: Expand only if working

Example - Conditional Cash Transfers (Progresa/Oportunidades, Mexico):

  • Approach: Piloted in small communities, monitored effects, scaled gradually
  • Result: Successful poverty reduction (because adapted based on evidence)

Contrast - "Big push" development:

  • Approach: Massive intervention without testing
  • Result: Often fails (because couldn't adapt quickly enough)

Pre-Mortem

Definition: Imagine intervention failed. Work backward to figure out why.

Process:

  1. Assume failure: "It's 5 years from now. Our intervention failed catastrophically. What happened?"
  2. Brainstorm failure modes: How could it backfire?
  3. Identify warning signs: What would we see if failure were beginning?
  4. Design mitigations: How can we prevent or detect these failure modes?

Value: Surfaces assumptions, identifies risks, reduces overconfidence.

Example - Pre-mortem for new regulation:

  • Imagined failure: "Compliance became expensive, small businesses shut down, market dominated by big players"
  • Warning sign: Compliance costs spike
  • Mitigation: Scale requirements to business size, phase in gradually

Study History and Analogies

Problem: "This time is different" thinking ignores patterns.

Solution: Study similar interventions. What happened?

Questions:

  • Has this been tried before? Where?
  • What happened?
  • Why did it succeed/fail?
  • What's different now?

Example - Universal Basic Income (UBI) proposals:

  • Study pilots (Kenya, Finland, Alaska dividend)
  • What effects were observed? (Labor supply, spending patterns, wellbeing)
  • What unintended consequences?
  • Inform design of new programs

Caution: Analogy isn't identity. Context matters. But history provides patterns.

Maintain Humility

Problem: Overconfidence in predictions.

Solution: Epistemic humility—acknowledge limits of knowledge.

Practices:

  • Admit uncertainty: "We don't know all consequences"
  • Build in reversibility: Can you undo it if it backfires?
  • Monitor continuously: Catch problems early
  • Be willing to admit failure: "This isn't working; let's stop or change course"

Example - Scientific method:

  • Hypotheses are provisional
  • Evidence can overturn conclusions
  • Updating beliefs is strength, not weakness

Contrast - Ideology:

  • Certainty about solutions
  • Ignore contradictory evidence
  • Never admit failure
  • Result: Disasters (Soviet central planning, etc.)

When Good Intentions Should Be Judged

Key distinction: Foreseeable vs. unforeseeable harm.

Foreseeable Harm

Definition: Consequences that reasonable analysis would have predicted.

Judgment: You're responsible. Should have known.

Examples:

  • Perverse incentives (modeling incentives predicts gaming)
  • Well-documented backfires (history shows pattern)
  • Obvious second-order effects (basic systems thinking reveals)

Example - Opioid overprescription:

  • Pharmaceutical companies, doctors claimed opioids not addictive
  • Foreseeable: History of opioid addiction well-documented; dismissing risk was negligent
  • Responsibility: Companies, doctors share responsibility for crisis

Unforeseeable Harm

Definition: Consequences that emerge from genuinely novel interactions, couldn't be predicted with available knowledge.

Judgment: Responsibility more limited. But still obligated to monitor and respond.

Examples:

  • True emergence (system behavior unpredictable from parts)
  • Novel contexts (genuinely first-time situation)
  • Hidden variables (information unavailable at time)

Example - Thalidomide:

  • Drug prescribed for morning sickness (1950s-60s)
  • Caused severe birth defects
  • Unforeseeable (arguably): Testing protocols at the time didn't detect it; the mechanism was unknown
  • Responsibility: Once harm discovered, companies/regulators responsible for swift action

Gray area: Many "unforeseeable" harms were foreseeable with better analysis. Calling something unforeseeable can be an excuse for not trying to foresee.

Ongoing Responsibility

Key point: Even if harm was unforeseeable initially, you're responsible for monitoring and responding once you become aware of it.

Obligations:

  • Monitor for harm
  • Take early warning signs seriously
  • Adapt or stop if harming
  • Compensate victims if possible

Example - Social media companies:

  • Initial: Couldn't fully foresee mental health effects, misinformation, polarization
  • Now: Evidence clear; inaction inexcusable
  • Responsibility: Must address known harms (not just say "oops, unforeseen")

Conclusion

"The road to hell is paved with good intentions" because:

  • Complexity produces unintended consequences
  • Incentives matter more than intentions
  • Systems adapt in ways that undermine interventions
  • Knowledge limits prevent perfect foresight

But this doesn't mean:

  • That intentions are irrelevant (they matter morally)
  • That trying to help is futile (many interventions succeed)
  • That all outcomes are equally unpredictable (many are foreseeable)

It means:

  • Good intentions are necessary but insufficient
  • Good outcomes require systems thinking, incentive modeling, humility, and monitoring
  • Responsibility includes effort to foresee consequences

To do better:

  • Think in systems (feedback, adaptation, second-order effects)
  • Model incentives (not just goals)
  • Start small, monitor, adapt
  • Use pre-mortems (imagine failure)
  • Study history (patterns repeat)
  • Maintain humility (admit limits)

Good intentions matter. But acting responsibly requires more than meaning well—it requires trying to understand consequences, designing for complexity, monitoring for backfire, and adapting when wrong.

Intend good. Think systemically. Act humbly. Monitor continuously. Adapt quickly.


Essential Readings

Unintended Consequences:

  • Merton, R. K. (1936). "The Unanticipated Consequences of Purposive Social Action." American Sociological Review, 1(6), 894-904. [Classic treatment]
  • Tenner, E. (1996). Why Things Bite Back: Technology and the Revenge of Unintended Consequences. New York: Knopf.
  • Harding, R. (2009). "Ecologically Sustainable Development: Origins, Implementation and Challenges." Desalination, 187(1-3), 229-239.

Systems Thinking and Complexity:

  • Meadows, D. H. (2008). Thinking in Systems: A Primer. White River Junction, VT: Chelsea Green.
  • Sterman, J. D. (2000). Business Dynamics: Systems Thinking and Modeling for a Complex World. Boston: McGraw-Hill.
  • Forrester, J. W. (1969). Urban Dynamics. Cambridge, MA: MIT Press. [Counterintuitive behavior of social systems]

Development and Aid:

  • Easterly, W. (2006). The White Man's Burden. New York: Penguin Press. [Critique of top-down development]
  • Banerjee, A. V., & Duflo, E. (2011). Poor Economics. New York: PublicAffairs. [Evidence-based development]
  • Moyo, D. (2009). Dead Aid. New York: Farrar, Straus and Giroux. [Aid's perverse effects]

Incentives and Behavior:

  • Gneezy, U., & Rustichini, A. (2000). "A Fine Is a Price." Journal of Legal Studies, 29(1), 1-17. [Perverse incentives]
  • Levitt, S. D., & Dubner, S. J. (2005). Freakonomics. New York: William Morrow. [Incentive analysis]
  • Kerr, S. (1975). "On the Folly of Rewarding A, While Hoping for B." Academy of Management Journal, 18(4), 769-783.

Moral Hazard:

  • Peltzman, S. (1975). "The Effects of Automobile Safety Regulation." Journal of Political Economy, 83(4), 677-725. [Peltzman effect]
  • Baker, T. (1996). "On the Genealogy of Moral Hazard." Texas Law Review, 75(2), 237-292.

Policy and Governance:

  • Scott, J. C. (1998). Seeing Like a State. New Haven: Yale University Press. [Why grand schemes fail]
  • Lindblom, C. E. (1959). "The Science of 'Muddling Through'." Public Administration Review, 19(2), 79-88. [Incrementalism]
  • Hirschman, A. O. (1991). The Rhetoric of Reaction. Cambridge, MA: Harvard University Press. [Perversity thesis]

Prohibition and Drug Policy:

  • Levine, H. G., & Reinarman, C. (1991). "From Prohibition to Regulation: Lessons from Alcohol Policy for Drug Policy." Milbank Quarterly, 69(3), 461-494.
  • Alexander, M. (2010). The New Jim Crow. New York: The New Press. [War on Drugs consequences]

Technology and Social Media:

  • Lanier, J. (2018). Ten Arguments for Deleting Your Social Media Accounts Right Now. New York: Henry Holt.
  • Zuboff, S. (2019). The Age of Surveillance Capitalism. New York: PublicAffairs.
  • Haidt, J., & Allen, N. (2020). "Scrutinizing the Effects of Digital Technology on Mental Health." Nature, 578, 226-227.

Experimentation and Learning:

  • Gawande, A. (2009). The Checklist Manifesto. New York: Metropolitan Books. [Systematic approaches]
  • Klein, G. (2007). "Performing a Project Premortem." Harvard Business Review, 85(9), 18-19.
  • Manzi, J. (2012). Uncontrolled: The Surprising Payoff of Trial-and-Error for Business, Politics, and Society. New York: Basic Books.

Historical Examples:

  • Diamond, J. (2005). Collapse: How Societies Choose to Fail or Succeed. New York: Viking. [Unintended consequences in history]
  • Boyle, D. (2001). The Tyranny of Numbers. London: HarperCollins. [Metrics gone wrong]