In the summer of 1988, the Yellowstone National Park administration faced a fire management decision that would test the limits of institutional knowledge. But to find an earlier and more precise illustration of what happens when humans remove constraints they do not understand, we need to travel to China in 1958.
Mao Zedong's Great Leap Forward included the Four Pests Campaign (chú sìhài, "eliminate the four pests"), targeting rats, flies, mosquitoes, and Eurasian tree sparrows. The sparrow, central planners determined, ate grain seeds and was therefore an enemy of agricultural production. The logic was clean, linear, and catastrophically incomplete.
What followed was one of the largest deliberate ecological interventions in human history. Hundreds of millions of Chinese citizens — students, soldiers, workers — were mobilized to destroy sparrow nests, smash eggs, beat drums to keep birds airborne until they died of exhaustion, and shoot the birds from the sky. By 1960, the Eurasian tree sparrow had been nearly eradicated from mainland China.
The fence had been removed.
What the central planners did not understand — what ornithologist Zheng Zuoxin had tried, at considerable personal risk, to warn them of before being silenced — was that sparrows were not net consumers of grain. They were also predators of locusts. With the sparrow population collapsed, locust populations exploded without natural check. The locusts devastated crops across the country. Combined with the Great Leap Forward's other catastrophic mismanagement, the result was the deadliest famine in recorded human history: scholarly estimates of the death toll range from 15 to 55 million, with historian Frank Dikötter, in Mao's Great Famine (2010), putting it at no fewer than 45 million for 1958–1962.
The planners understood the fence as an obstacle. They did not understand it as a structure.
What Chesterton Actually Said
In 1929, G.K. Chesterton published The Thing: Why I Am a Catholic, a collection of essays. In a chapter titled "The Drift from Domesticity," he articulated what has since become one of the most cited principles in systems thinking and organizational theory:
"In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, 'I don't see the use of this; let us clear it away.' To which the more intelligent type of reformer will reply: 'If you don't see the use of it, I certainly won't let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.'" — G.K. Chesterton, The Thing: Why I Am a Catholic, 1929
Chesterton was not arguing for conservatism. He was not saying the fence should never come down. He was arguing against a specific epistemic failure: removing something before you understand why it exists. The principle makes no claim about whether the fence is good or bad, useful or useless. It only demands that you find out before you act.
This distinction matters enormously. Chesterton's Fence is frequently misquoted as a defense of tradition or the status quo. It is not. It is a demand for understanding as a prerequisite for change. A reformer who fully understands the fence's function and concludes it should be removed anyway is operating within the principle, not against it. The reformer who removes the fence because they cannot see its purpose is the one Chesterton indicts.
The principle also carries an embedded diagnostic: if you cannot explain why a constraint exists, you probably should not trust your own assessment that it is unnecessary.
The Systematic Violation: Why Humans Remove Fences They Don't Understand
The sparrow campaign was not an aberration. It reflects something deep in human cognition. Several overlapping biases drive people toward premature removal of constraints.
Naive interventionism is the tendency to assume that acting is better than not acting, and that visible action constitutes progress. Nassim Nicholas Taleb developed this concept extensively in Antifragile: Things That Gain from Disorder (2012), arguing that "naive interventionism" is responsible for a significant portion of iatrogenic harm — harm caused by the treatment itself rather than the disease. Medical history is littered with interventions (bloodletting, lobotomy, routine tonsillectomy) that were applied with confidence before their effects were understood.
Action bias has been documented in controlled experimental settings. In a landmark 2007 study published in the Journal of Economic Psychology, Michael Bar-Eli and colleagues analyzed 286 penalty kicks in top soccer leagues and found that goalkeepers dove left or right 94% of the time, despite the fact that staying in the center was the statistically optimal strategy. The researchers attributed this to action bias: the goalkeeper felt compelled to do something visible, even when doing nothing was more likely to succeed. The fence-removing reformer is the goalkeeper diving left.
The illusion of explanatory depth, documented by psychologists Leonid Rozenblit and Frank Keil in a 2002 paper in Cognitive Science, shows that people systematically overestimate how well they understand complex systems. When asked to rate their understanding of everyday objects (a zipper, a toilet, a helicopter) and then asked to provide a detailed mechanistic explanation, subjects' confidence collapsed. They knew the outputs of these systems without understanding the mechanisms. The same effect applies to institutions, policies, and social structures: we know what they do without knowing how or why.
Short-horizon thinking is the tendency to see near-term effects clearly while discounting or missing second- and third-order consequences. The sparrow's role in regulating locust populations was a second-order effect, invisible to planners focused on the first-order observation that sparrows ate grain. This connects directly to the distinction between first-order and second-order effects: the fence's visible function and its structural function are often different things.
The legibility problem, described by James C. Scott in Seeing Like a State (1998), is the tendency of institutions to only perceive and value what is visible and measurable. Scott documents how 20th-century states repeatedly destroyed resilient, complex local systems by replacing them with legible, simplified ones. The fence that local farmers built is invisible in the ledger. The grain yield is visible. So the fence gets removed.
The Comparison: Two Modes of Reform
| Dimension | Remove without understanding | Understand, then decide |
|---|---|---|
| Starting assumption | The fence is probably useless | The fence probably exists for a reason |
| Burden of proof | On those who want to keep the fence | On those who want to remove the fence |
| Epistemic requirement | None — absence of visible purpose is sufficient | Must explain the fence's function before acting |
| Risk posture | Assumes low cost of error | Acknowledges unknown cost of error |
| Time horizon | Immediate — first-order effects | Extended — second and third-order effects |
| Reversibility check | Rarely performed | Explicit consideration of whether the action is reversible |
| Outcome on error | Irreversible harm possible | Error contained — understanding preserved |
Four Case Studies in Fence Removal
1. The Yellowstone Wolf Eradication (and Its Reversal)
The fence: The gray wolf population of Yellowstone National Park — the ecosystem's apex predator.
Why it seemed removable: Wolves were predators of livestock and were perceived as threats to ranching communities. By 1926, they had been systematically hunted to local extinction in Yellowstone.
What it was actually doing: Wolves were a keystone predator. Their presence regulated not just elk population numbers but elk behavior. Elk avoided grazing in open valleys and riverbanks where wolves could easily hunt them. With wolves gone, elk grazed freely along riverbanks, stripping vegetation. This caused riverbank erosion, altered stream flow patterns, and reduced biodiversity across the entire ecosystem.
Consequence of removal: After wolves were reintroduced in 1995–1996 under a U.S. Fish and Wildlife Service program, the cascade of recovery documented by ecologists William Ripple and Robert Beschta in Biological Conservation (2012) was remarkable. Elk began avoiding open areas, riverbank vegetation recovered, beaver populations rebounded (beavers need the willows that had been overgrazed), streams narrowed and deepened, water temperatures dropped, and fish populations recovered. Removing the fence had not just reduced the wolf population — it had restructured the entire watershed.
The original fence-removers in 1926 thought they were solving a livestock problem. They were actually dismantling a hydrological system.
2. The Thalidomide Approval (and Frances Kelsey's Fence)
The fence: The U.S. Food and Drug Administration's requirement for clinical evidence of drug safety before approval, administered in this case by reviewer Frances Oldham Kelsey.
Why it seemed removable: Thalidomide had been approved in 46 countries by 1960. The drug's manufacturer, William S. Merrell Company, applied for U.S. approval and expressed frustration that the FDA was delaying the process with excessive caution. The drug appeared safe based on available European data. Kelsey's repeated requests for additional evidence of neurological effects were characterized by the company as obstructionist.
What it was actually doing: The FDA approval requirement was a fence built after earlier drug disasters. Kelsey's scrutiny was not procedural pedantry — she had noted that the drug's published safety data had methodological gaps and that peripheral neuritis had been reported in some European patients. She suspected the drug might cross the placental barrier.
Consequence of removal: In countries where thalidomide was approved, approximately 10,000 children were born with severe limb deformities (phocomelia) between 1957 and 1962. In the United States, where Kelsey held the fence, the number was approximately 17. Kelsey received the President's Award for Distinguished Federal Civilian Service from John F. Kennedy in 1962. The fence, in this case, was not conservatism — it was the accumulated knowledge that drug approval requires evidence, not market confidence.
3. The Enron "Mark-to-Market" Accounting Removal
The fence: Traditional historical-cost accounting rules, which required companies to record assets at their original purchase price rather than estimated future value.
Why it seemed removable: Enron lobbied the Securities and Exchange Commission in 1991 to allow mark-to-market accounting for its natural gas trading operations, arguing that it better reflected the true current value of long-term contracts. The SEC, persuaded by this argument, granted approval. The logic was not irrational: historical cost accounting does have real limitations in reflecting current economic reality.
What it was actually doing: Historical-cost accounting was a fence against manipulation. Because assets were recorded at verifiable historical prices rather than estimated future values, there was a natural check on inflating reported earnings. The discipline was not elegant, but it was auditable.
Consequence of removal: With mark-to-market accounting, Enron could book the total estimated future profits of a contract on the day it was signed, regardless of whether those profits would ever materialize. When actual profits failed to match projections, new contracts were signed to generate new paper profits, obscuring the gap. The accounting change provided the mechanism for one of the largest corporate frauds in U.S. history. When Enron collapsed in December 2001, $74 billion in shareholder value had been destroyed and 20,000 employees lost their jobs and pensions. The fence had not been useless. It had been the only thing keeping the structure standing.
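The mechanism fits in a few lines of arithmetic. Below is a simplified sketch with invented numbers (real mark-to-market bookings used discounted present value, which the sketch omits; the simplification does not change the point):

```python
# Invented example: a 10-year energy contract projected to earn $10M/year.
projected_annual_profit = 10_000_000
years = 10

# Historical-cost style: profit is recognized as it is actually earned,
# so year-one reported earnings reflect only year-one reality.
realized_year_one = 9_200_000  # what the contract actually earned (invented)
historical_cost_earnings = realized_year_one

# Mark-to-market style: all projected future profit can be booked the day
# the contract is signed, regardless of whether it ever materializes.
mark_to_market_earnings = projected_annual_profit * years  # $100M on day one

# When reality falls short of projection, no loss appears in year one --
# instead there is pressure to sign new contracts to book new paper profits.
print(f"historical cost: ${historical_cost_earnings:,}")
print(f"mark-to-market:  ${mark_to_market_earnings:,}")
```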
4. The Removal of the Glass-Steagall Act
The fence: The Glass-Steagall Act of 1933 (the Banking Act of 1933), which separated commercial banking from investment banking in the United States.
Why it seemed removable: By the 1990s, the act was widely characterized as an anachronism — a Depression-era overreaction that disadvantaged American banks relative to international competitors operating without such restrictions. The argument, made by Citigroup's Sandy Weill and others, was that modern financial institutions were sophisticated enough to manage conflicts of interest internally. The Gramm-Leach-Bliley Act repealed Glass-Steagall's core provisions in 1999.
What it was actually doing: Glass-Steagall had been erected after the 1929 crash, during which conflicts of interest between commercial and investment banking had allowed banks to underwrite securities their commercial divisions then sold to uninformed depositors, using customer deposits to fund speculative investments. The fence separated the risk-taking function from the deposit-holding function.
Consequence of removal: Within a decade, the deregulated financial system had created mortgage-backed securities of such complexity that the institutions holding them did not understand their own exposure. The 2008 financial crisis resulted in the largest bank failures in U.S. history, a global recession, an estimated $22 trillion in household wealth destruction in the United States alone (Federal Reserve, 2012), and a federal bailout program (TARP) authorized at $700 billion. Economists debate the precise causal weight of Glass-Steagall's repeal, but the structural argument is clear: the fence had separated functions that, when combined, created systemic risk that was not visible until it materialized.
Where Chesterton's Fence Applies
Software Engineering and Technical Debt
Software is perhaps the domain where Chesterton's Fence is most frequently violated and most immediately punished.
Every codebase accumulates code that new engineers cannot explain. The instinct is to remove it. "Dead code," "legacy logic," "redundant checks" — these labels carry an implicit judgment that the code serves no function. But codebases are historical artifacts. They encode decisions made in response to failures that may no longer be visible.
*Example*: A senior engineer at a major e-commerce company once removed what appeared to be a redundant null-check in a payment processing function — the upstream system "always" provided valid data. Six months later, during a migration, that upstream system briefly sent null values. Without the check, the payment processor began charging customers $0. The null-check had been written after a nearly identical incident years earlier, by an engineer who had since left the company. The documentation existed only in the code itself.
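A minimal sketch of what such a fence looks like when it carries its own documentation — the function, incident ID, and service details here are invented for illustration:

```python
from decimal import Decimal
from typing import Optional

def charge_customer(order_amount: Optional[Decimal]) -> Decimal:
    """Return the amount to charge for an order."""
    # WHY this guard exists (incident ID invented for illustration):
    # during a pricing-service migration (INC-1042), the upstream system
    # briefly returned null amounts. Without this check, customers were
    # charged $0 and the failure was silent. The upstream contract says
    # nulls "cannot happen" -- they did.
    if order_amount is None or order_amount <= 0:
        raise ValueError("refusing to charge a null or non-positive amount")
    return order_amount
```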
The Goodhart's Law failure mode in software is related: when engineers optimize for measurable metrics (lines of code, function count, cyclomatic complexity), they will remove code that inflates those metrics without understanding what it does. The metric becomes the target; the fence gets removed to improve the score.
The practice of writing explanatory commit messages, architectural decision records (ADRs), and inline comments explaining why rather than what is essentially an attempt to make fences legible to future engineers. Without this documentation, the only information preserved is that the fence exists — not why it was built.
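The difference is easy to see side by side. In the hedged sketch below, the vendor behavior and ticket ID are invented:

```python
# A "what" comment restates the code and preserves nothing:
#     retry the request up to 3 times
#
# A "why" comment makes the fence legible to the next engineer:
#     Retry up to 3 times: the vendor's API returns spurious 503s for a
#     few seconds after its nightly failover (ticket VEND-88). Removing
#     the retries reintroduces a nightly page for the on-call rotation.
MAX_RETRIES = 3
```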
Organizational Policy and Regulation
In organizations, policies accumulate like geological strata, each layer deposited in response to a specific event that may have been forgotten. A travel expense approval requirement that seems bureaucratic may have been introduced after a specific fraud incident. A mandatory code review requirement may trace to a production outage. A three-signature process may encode a hard-won lesson about a category of error that has not occurred since the process was introduced.
The organizational reformer who eliminates these constraints in the name of efficiency is operating on incomplete information. The process exists precisely because, without it, something bad happened. The bad thing has not happened since — often not because conditions have changed, but because the process is working.
This does not mean all policies should be preserved. It means that the burden of proof for removal should include explaining the failure mode the policy was designed to prevent and establishing why that failure mode is no longer relevant.
Social Institutions and Laws
Social institutions are, in many cases, the accumulated solutions to coordination problems that earlier societies had to solve through trial and error at significant cost. Property rights, inheritance law, contract enforcement, marriage registration — each encodes a response to a category of conflict or coordination failure.
Edmund Burke, writing in Reflections on the Revolution in France (1790), made this argument more than a century before Chesterton:
"The science of constructing a commonwealth, or renovating it, or reforming it, is, like every other experimental science, not to be taught a priori. Nor is it a short experience that can instruct us in that practical science; because the real effects of moral causes are not always immediate." — Edmund Burke, Reflections on the Revolution in France, 1790
Burke was making a Chestertonian argument before Chesterton: that social institutions encode knowledge that cannot be read off their surface, and that confident reformers who cannot explain that knowledge are dangerous regardless of their intentions.
*Example*: Rent control is a policy frequently cited in this context. Economists across the ideological spectrum — from Milton Friedman to Paul Krugman — have argued that rent control, while addressing the visible problem of affordability for existing tenants, disrupts the mechanisms (price signals, investment incentives, housing turnover) that maintain and expand housing supply. The fence of market pricing in housing was removed to solve one problem; the unintended consequences were supply contraction and informal-market distortions. Whether this trade-off is worth making is a legitimate policy debate. The error is removing the fence without understanding what it was doing.
Personal Habits and Routines
Chesterton's Fence applies at the individual level with equal force. Personal routines that appear inefficient often encode adaptations to recurring challenges. The person who always reviews their calendar the night before, who keeps a paper backup of their phone contacts, who follows a specific morning routine before starting cognitively demanding work — these habits may look like compulsions or inefficiencies to an outside observer. They are, in many cases, responses to past failures.
The productivity literature's instinct to "audit your habits" and eliminate the ones that seem low-value is the fence-removal Chesterton warned against, at scale. The better question before eliminating a habit is: what failure does this prevent? What was happening before I started doing this?
The Intellectual Lineage
Edmund Burke (1729-1797) is the philosophical ancestor. Burke's argument in Reflections on the Revolution in France (1790) was that political institutions are not rational constructs but historical accumulations of practical wisdom. Dismantling them based on abstract principles, without understanding what they do, is not reform — it is destruction.
Friedrich Hayek (1899-1992) extended this into economic theory. In "The Use of Knowledge in Society" (American Economic Review, 1945), Hayek argued that the knowledge relevant to economic coordination is distributed across millions of individuals and encoded in prices, practices, and institutions — it cannot be aggregated or fully known by any central planner. Removing or overriding these distributed knowledge-encoding mechanisms destroys information that cannot be reconstructed. Chesterton's Fence, in Hayekian terms, is the recognition that institutions encode knowledge that may not be articulable.
Nassim Nicholas Taleb (b. 1960) developed the concept of iatrogenics — harm caused by the healer — in The Black Swan (2007) and, most fully, Antifragile (2012). Taleb's argument is that interventions in complex systems have hidden costs that only become visible under stress. The fragile system appears robust until the tail event that the removed fence was protecting against actually occurs.
James C. Scott (1936–2024) contributed the concept of "mētis" in Seeing Like a State (1998): the practical, local, embodied knowledge that high-modernist planners systematically destroyed in their attempts to improve on evolved systems. Scott's case studies — Soviet collectivization, Tanzanian villagization, the destruction of organic cities in favor of planned ones — are Chesterton's Fence violations at civilizational scale.
Together, these thinkers form a coherent tradition: evolved systems encode knowledge. That knowledge is often not legible. Removing constraints before reading that knowledge destroys it.
What the Research Shows
The action bias study by Bar-Eli, Azar, Ritov, Keidar-Levin, and Schein (2007), published in the Journal of Economic Psychology, analyzed 286 penalty kicks across top professional leagues. Goalkeepers dived left 49.3% of the time, dived right 44.4% of the time, and stayed in the center only 6.3% of the time — yet kicks were aimed left 32.2% of the time, center 28.7%, and right 39.2%. Staying center was the highest-conversion defensive strategy. Goalkeepers systematically chose the lower-probability action because inaction under pressure felt inadequate. The study concluded that "the action bias leads people to prefer action over inaction even when the actions have negative value."
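To see how a distribution like this can make staying put optimal, here is a toy expected-value sketch. The kick frequencies are the study's; the conditional save probabilities are invented placeholders (the paper's actual finding was that stop rates were highest for keepers who stayed center), so treat the numbers as illustrative only:

```python
# Kick direction frequencies reported by Bar-Eli et al. (2007).
kick_dist = {"left": 0.322, "center": 0.287, "right": 0.392}

# Probability of a save *given* the keeper's choice matches the kick.
# These values are ILLUSTRATIVE assumptions, not the paper's estimates:
# a keeper standing center stops a center kick far more reliably than
# a diving keeper stops a well-placed corner kick.
p_save_if_match = {"left": 0.30, "center": 0.60, "right": 0.25}

def expected_save_rate(keeper_choice: str) -> float:
    # Simplification: assume a save is only possible when the keeper's
    # choice matches the kick's direction.
    return kick_dist[keeper_choice] * p_save_if_match[keeper_choice]

for choice in kick_dist:
    print(f"{choice:>6}: {expected_save_rate(choice):.3f}")
# Under these assumptions, staying center maximizes expected saves
# (0.172 vs roughly 0.10 for either dive), despite center kicks being
# the least common.
```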
The illusion of explanatory depth (Rozenblit and Keil, 2002, Cognitive Science) showed that people consistently overestimate their mechanistic understanding of systems. Subjects rated their understanding of how a zipper works at an average of 4.08 out of 7 before explanation and 2.44 after attempting to explain it. This overconfidence is the cognitive mechanism by which fence-removers convince themselves they understand what they are removing.
Research on naive interventionism has been developed most systematically in the medical literature. A 2013 analysis in Mayo Clinic Proceedings by Vinay Prasad, Adam Cifu, and colleagues examined 146 medical practices that had been reversed by subsequent evidence. These reversals spanned major interventional areas: hormone replacement therapy, antiarrhythmic drugs for heart disease, routine episiotomy in childbirth. In each case, confident intervention based on incomplete understanding caused harm.
The cobra effect, documented by economist Horst Siebert in Der Kobra-Effekt (2001), describes how interventions designed to solve a problem can create incentives that worsen it. The colonial British government in India offered a bounty for dead cobras. Entrepreneurial locals began breeding cobras for the bounty. When the government canceled the program, the breeders released their now-worthless cobras, increasing the cobra population beyond its original level. The fence (no bounty) had kept cobra-breeding unprofitable. Its removal made it profitable.
The Limits: When to Remove the Fence Without Full Understanding
Chesterton's Fence is not an absolute prohibition on action under uncertainty. There are conditions under which the principle should be superseded.
When the fence is actively harmful and urgent. A fence that is currently causing clear harm should not be left in place while researchers develop a complete understanding of its origins. The urgency calculus changes when the harm of keeping the fence exceeds the expected harm of removing it.
When the cost of understanding exceeds the cost of error. For low-stakes, reversible decisions, the research investment Chesterton demands may exceed the value of the decision itself. The principle applies most strongly to high-stakes, low-reversibility interventions.
When understanding is systematically unavailable. Sometimes institutional knowledge has been lost and the people who built the fence are gone. In these cases, the Chestertonian approach is not to remove the fence and walk away but to remove it slowly, watch carefully, and be prepared to rebuild.
When the fence serves the interests of the fence-builders at the expense of others. Not all fences encode distributed wisdom. Some encode captured interests, historical injustices, or incumbent advantages. Jim Crow laws were fences. The principle that fences encode knowledge does not imply that all knowledge is good or that all power arrangements are legitimate.
When the system is already failing. Chesterton's Fence is most powerful in stable systems where the fence is performing its function. In failing systems — where existing constraints are clearly not preventing harm — the calculation shifts.
The meta-principle that emerges from these exceptions: understand the fence's function before removing it, and if you cannot understand it, proceed slowly and reversibly while watching for the effects you could not predict.
Practical Application: How to Honor the Principle
Before removing any constraint, policy, process, code, habit, or institution, ask and answer these questions:
1. Why was this created? Not "what does it do?" but "what problem was it a response to?" Look for the incident report, the meeting minutes, the git commit message, the historical record.
2. What failure does it prevent? The fence may be preventing a failure so effectively that the failure has become invisible. Absence of evidence of the failure is not evidence that the failure could not occur.
3. Is the failure mode it prevents still relevant? Conditions change. A fence built against a threat that no longer exists may genuinely be unnecessary. But you must first establish that the threat no longer exists — not merely that it is not currently visible.
4. What is the cost and reversibility of being wrong? Removing a redundant check in a low-traffic internal tool is not the same as removing a regulatory requirement governing systemic financial risk. Stakes of error should calibrate the rigor of understanding required.
5. Who will notice the effects, and when? Some consequences of fence removal are immediate. Others are latent — they appear only when a specific set of conditions occurs, which may be months or years away. The longer the latency, the higher the bar for understanding before removal.
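For teams that want to operationalize the checklist, the five questions can be encoded as a gate that a proposed removal must pass — a toy sketch with invented names and thresholds, not a substitute for judgment:

```python
from dataclasses import dataclass

@dataclass
class FenceAssessment:
    """One field per question in the checklist above (names invented)."""
    origin_known: bool            # 1. Do we know why it was created?
    prevented_failure: str        # 2. What failure does it prevent?
    failure_still_relevant: bool  # 3. Is that failure mode still live?
    removal_reversible: bool      # 4. Can we cheaply undo the removal?
    latency_months: int           # 5. How long until effects would surface?

def may_remove(a: FenceAssessment) -> bool:
    # The principle's core demand: no removal without an explained origin.
    if not a.origin_known or not a.prevented_failure:
        return False
    # A still-relevant failure mode means the fence is doing its job.
    if a.failure_still_relevant:
        return False
    # Irreversible removals with long feedback latency deserve more than
    # a checklist -- escalate for deeper review instead of proceeding.
    if not a.removal_reversible and a.latency_months > 6:
        return False
    return True
```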
References
- Chesterton, G.K. The Thing: Why I Am a Catholic. Sheed & Ward, 1929. https://www.gutenberg.org/ebooks/27536
- Burke, Edmund. Reflections on the Revolution in France. 1790. https://www.gutenberg.org/ebooks/15679
- Hayek, Friedrich A. "The Use of Knowledge in Society." American Economic Review, 35(4), 519–530, 1945. https://www.econlib.org/library/Essays/hykKnw.html
- Taleb, Nassim Nicholas. Antifragile: Things That Gain from Disorder. Random House, 2012. https://www.penguinrandomhouse.com/books/176227/antifragile-by-nassim-nicholas-taleb/
- Scott, James C. Seeing Like a State. Yale University Press, 1998. https://yalebooks.yale.edu/book/9780300078152/seeing-like-a-state/
- Dikötter, Frank. Mao's Great Famine. Walker & Company, 2010. https://us.macmillan.com/books/9780802777799/maosgreatfamine
- Bar-Eli, M., Azar, O.H., Ritov, I., Keidar-Levin, Y., & Schein, G. "Action Bias among Elite Soccer Goalkeepers: The Case of Penalty Kicks." Journal of Economic Psychology, 28(5), 606–621, 2007. https://doi.org/10.1016/j.joep.2006.12.001
- Rozenblit, L., & Keil, F. "The Misunderstood Limits of Folk Science: An Illusion of Explanatory Depth." Cognitive Science, 26(5), 521–562, 2002. https://doi.org/10.1207/s15516709cog2605_1
- Prasad, V., & Cifu, A. "Medical Reversal: Why We Must Raise the Bar Before Adopting New Technologies." Mayo Clinic Proceedings, 86(10), 2011. https://doi.org/10.4065/mcp.2011.0155
- Ripple, W.J., & Beschta, R.L. "Trophic Cascades in Yellowstone: The First 15 Years After Wolf Reintroduction." Biological Conservation, 145(1), 205–213, 2012. https://doi.org/10.1016/j.biocon.2011.11.005
- Federal Reserve Board. "Changes in U.S. Family Finances from 2007 to 2010." Federal Reserve Bulletin, June 2012. https://www.federalreserve.gov/pubs/bulletin/2012/pdf/scf12.pdf
- McLean, Bethany, & Elkind, Peter. The Smartest Guys in the Room. Portfolio, 2003. https://www.penguinrandomhouse.com/books/288064/the-smartest-guys-in-the-room-by-bethany-mclean-and-peter-elkind/
Frequently Asked Questions
What is Chesterton's Fence?
Chesterton's Fence is the principle that you should not remove or change something until you understand why it exists. If you cannot explain the purpose of a constraint, you are not qualified to remove it.
Where does Chesterton's Fence come from?
G.K. Chesterton articulated the principle in his 1929 book The Thing: Why I Am a Catholic, using the metaphor of a fence across a road that a reformer wants to remove without understanding why it was built.
Is Chesterton's Fence an argument against change?
No. It is an argument for understanding before change. A reformer who understands a fence's function and concludes it should be removed is acting within the principle. The principle only prohibits removal based on failure to see a purpose.
What is an example of violating Chesterton's Fence?
Mao's Four Pests Campaign (1958) eradicated sparrows because they ate grain, not understanding that sparrows controlled locust populations. The locust explosion that followed contributed to a famine killing tens of millions.
How does Chesterton's Fence apply to software engineering?
Engineers who remove code they cannot explain — null checks, validation logic, seemingly redundant conditions — often discover it was preventing a failure case that had not occurred in years precisely because the code existed.
When is it acceptable to remove a fence you don't fully understand?
When the fence is actively causing harm, when the cost of further research exceeds the cost of error, when the system is already failing, or when the fence exists to protect incumbent interests rather than general welfare.
What is the relationship between Chesterton's Fence and action bias?
Action bias — the documented tendency to prefer visible action over inaction — drives fence removal. Research by Bar-Eli et al. showed goalkeepers dive unnecessarily during penalty kicks for the same reason: inaction under pressure feels inadequate.
How does Chesterton's Fence relate to second-order thinking?
Fences often exist to prevent second- or third-order consequences that are invisible from a first-order perspective. Understanding the fence requires tracing the causal chain that the fence was built to interrupt.