# Ockham's Razor vs Hanlon's Razor vs Chesterton's Fence Explained

**Meta Description:** Expert-written explanation of three classic reasoning heuristics with when to apply each, common misapplications, and decision examples.

**Keywords:** ockhams razor vs hanlons razor, chestertons fence, reasoning heuristics, william of ockham razor, hanlons razor malice stupidity, chesterton fence principle, mental models for reasoning, cognitive shortcuts, razor principles, charlie munger mental models

**Tags:** #mental-models #ockhams-razor #hanlons-razor #chestertons-fence #reasoning #heuristics

---

## Three Heuristics for Messy Situations

Reasoning under uncertainty is where most decisions happen. Three heuristics have survived centuries because they compress experience-tested patterns into short, applicable rules. Ockham's Razor picks between competing explanations. Hanlon's Razor picks between malicious and accidental interpretations of other people's behavior. Chesterton's Fence governs when to remove an existing structure.

The three are frequently confused and frequently misapplied. Each addresses a different kind of decision.

> "Entities should not be multiplied without necessity."
> -- William of Ockham, paraphrased from his 14th-century writings

> "Never attribute to malice that which is adequately explained by stupidity."
> -- Robert Hanlon, Murphy's Law Book Two, 1980

> "Do not take away a fence until you know the reason why it was put up."
> -- G. K. Chesterton, The Thing, 1929

---

## Ockham's Razor

William of Ockham, a 14th-century English Franciscan friar and philosopher, did not actually coin the phrase later named after him, but his writings consistently favored parsimonious explanations. The modern formulation: when multiple explanations fit the evidence, prefer the one that requires the fewest assumptions.

### What Ockham's Razor Actually Means

The razor is a tool for selecting among hypotheses, not for establishing truth.
Given equal explanatory power, simpler hypotheses are more likely to be correct because each additional assumption introduces an additional opportunity for error.

The razor does not say the simplest explanation is always right. It says that in the absence of evidence favoring complexity, simplicity is the better bet. New evidence can justify additional assumptions. The razor only applies when evidence is ambiguous.

### When to Apply Ockham's Razor

- Comparing medical diagnoses when the symptoms could fit multiple conditions.
- Debugging software when a bug could have multiple causes.
- Evaluating conspiracy theories against coincidence explanations.
- Choosing a business hypothesis when multiple strategies could explain current customer behavior.
- Forming first impressions about a person's behavior.

### When Ockham's Razor Misleads

The razor fails when the domain is actually complex. Biology, ecology, and social systems often require multi-causal explanations. Quantum physics, for example, is not simple. Applying the razor to reject quantum mechanics in favor of classical physics would be wrong because the evidence favors the more complex explanation.

The razor also fails when "simple" is defined wrong. A single cause may look simpler than multiple causes, but if the single cause requires exotic mechanisms that the multiple causes do not, the multi-cause explanation may be simpler overall.

---

## Hanlon's Razor

Robert Hanlon submitted the phrase to a 1980 joke book called Murphy's Law Book Two, attributing it to himself. Earlier formulations of the same idea appear in Goethe's 1774 novel The Sorrows of Young Werther and in Napoleon Bonaparte's military correspondence. Hanlon packaged it memorably, and the phrase stuck.

### What Hanlon's Razor Actually Means

When someone's behavior looks malicious, check first whether it could be explained by stupidity, ignorance, forgetfulness, incompetence, or simple error.
Malice is cognitively expensive to execute; incompetence is effortless and ubiquitous.

The razor is a corrective heuristic. Humans have evolved to detect malice because undetected malice is dangerous. The evolutionary design produces false positives: we see malice where none exists. The razor shifts the default toward the more common cause.

### When to Apply Hanlon's Razor

- Interpreting coworker behavior that feels targeted.
- Reading email from someone you do not know well.
- Evaluating policies that appear designed to harm a group.
- Diagnosing customer complaints about your product.
- Interpreting your partner's annoying behavior.
- Reading bureaucratic decisions that feel arbitrary.

### When Hanlon's Razor Misleads

The razor has clear failure modes. Repeated patterns of harm, especially when the actor benefits, are not well explained by stupidity. Malice concealed as incompetence is a known strategy. Hanlon's Razor should not be applied indefinitely to patterns that look increasingly deliberate.

A related failure is that Hanlon's Razor can become an excuse for other people's real harm. A partner whose forgetfulness consistently burdens you is acting within a pattern even if each individual lapse is genuinely forgetful. The razor handles individual instances well and patterns poorly.

Large systems can also produce harm without any individual acting with malice. Institutional racism, regulatory capture, and algorithmic bias often produce harmful outcomes through mechanisms no single participant willed. Hanlon's Razor applied to systems can mask these structural effects.

---

## Chesterton's Fence

G. K. Chesterton, the British writer and Christian apologist, introduced the principle in his 1929 book The Thing: Why I Am a Catholic. The full passage is essential to understand the principle properly.

"In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox.
There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, 'I don't see the use of this; let us clear it away.' To which the more intelligent type of reformer will do well to answer: 'If you don't see the use of it, I certainly won't let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.'"

### What Chesterton's Fence Actually Means

Before removing a rule, institution, or structure whose purpose you do not understand, investigate why it exists. The absence of obvious utility is not evidence of absent utility. The fence may protect against a hazard you cannot see from where you stand.

The principle is reformist, not conservative. Chesterton is not arguing that fences should never be removed. He is arguing that the right to remove a fence is earned by understanding it first.

### When to Apply Chesterton's Fence

- Joining a new company and encountering processes that seem unnecessary.
- Updating legacy code whose purpose is not documented.
- Revising laws or regulations.
- Changing family traditions.
- Refactoring tests.
- Modifying personal habits that your past self built for reasons you have forgotten.

### When Chesterton's Fence Misleads

The principle does not require unlimited investigation. Some fences are genuinely obsolete, and the investigation can become an excuse for inaction. The principle is a hurdle, not a wall.

Chesterton's Fence can also be weaponized against legitimate reform. Defenders of harmful institutions can always claim that removing them requires understanding reasons that predate the critics. Used in bad faith, the principle indefinitely preserves the status quo.
The modern interpretation, used carefully, adds a time limit: investigate the fence for a proportionate amount of time given the stakes, then act even if the investigation is incomplete.

---

## Side-by-Side Comparison

| Dimension | Ockham's Razor | Hanlon's Razor | Chesterton's Fence |
|---|---|---|---|
| Domain | Competing explanations | Interpreting behavior | Removing existing structures |
| Default shifts toward | Simpler explanation | Incompetence over malice | Keeping the fence until understood |
| Origin | 14th-century philosophy | 1980 joke book (ideas older) | 1929 essay |
| Primary risk | Oversimplifying complex systems | Excusing real harm | Paralyzing reform |
| Best for | Diagnostic reasoning | Social interpretation | Organizational change |
| Worst for | Biology, quantum physics, complex social systems | Patterns of repeated harm | Obsolete or harmful institutions |

---

## When to Combine the Three

Real decisions often require all three. Consider a scenario: you join a new team and notice that a specific manager appears to be undermining a colleague with a pattern of confusing feedback.

- Ockham's Razor: the simplest explanations are often communication style differences or unclear expectations rather than deliberate undermining.
- Hanlon's Razor: the behavior might be explained by the manager's poor communication skills rather than intent to harm.
- Chesterton's Fence: the apparent "rule" that the colleague follows the manager's feedback may have a history you do not know.

Applying all three tempers the initial interpretation. If evidence accumulates (repeated pattern, manager benefits, colleague is clearly harmed), the razors update, and Hanlon in particular releases its hold.

### Example: Software Refactoring

A developer joins a team and sees a 2,000-line file that handles edge cases in ways that look unnecessary. Ockham's Razor says the simpler refactor is probably correct.
Chesterton's Fence says investigate why the edge cases exist; they may stem from production incidents of years past. Hanlon's Razor does not directly apply.

The disciplined response: check commit history, talk to existing team members, read incident logs. If the edge cases protect against known failures, keep them and document why. If they are cruft from deprecated features, remove them. The investigation takes hours; the refactor without investigation can cost weeks of production issues.

### Example: Corporate Policy

A new CEO sees a budget rule requiring two signatures for expenses over $10,000. It looks like bureaucratic drag. Ockham's Razor might suggest the simpler rule (single signature) is better. Chesterton's Fence asks why the rule exists. The history may include fraud incidents that shaped the policy. The history may be that a previous CFO wanted to feel important. The correct path: investigate, then decide. The rule may survive, change, or be removed, but the decision is informed.

---

## Research and Theoretical Support

### Ockham's Razor Research

Ockham's Razor has formal mathematical justification in Bayesian inference and minimum description length theory. Solomonoff's theory of inductive inference (1964) formalizes the intuition: among models that fit the data equally well, shorter models assign higher prior probability to the data actually observed and therefore have better posterior odds.

In machine learning, the razor appears as regularization. Models are penalized for complexity to prevent overfitting. The regularization hyperparameter is essentially a formal implementation of Ockham's Razor.

In medicine, the razor is formalized as Occam's principle in diagnostics: prefer a single diagnosis that explains all symptoms over multiple diagnoses. The principle is valid statistically but has been critiqued for over-application in complex cases where multiple conditions coexist.
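The Bayesian version of the argument can be made concrete with a toy sketch. The coin-flip example below is our own illustration, not drawn from the cited sources: a zero-parameter "fair coin" model competes with a one-parameter "biased coin" model, and the marginal likelihood automatically penalizes the extra parameter unless the data earn it.

```python
from math import comb

def evidence_fair(n: int, k: int) -> float:
    """Fair-coin model: no free parameters, so P(k heads in n flips) = 0.5**n."""
    return 0.5 ** n

def evidence_biased(n: int, k: int) -> float:
    """Biased-coin model: unknown bias p with a uniform prior.
    The marginal likelihood integrates over p:
        integral of p^k * (1-p)^(n-k) dp = 1 / ((n + 1) * C(n, k))
    Spreading prior mass over all values of p is the built-in complexity penalty.
    """
    return 1.0 / ((n + 1) * comb(n, k))

# Roughly balanced data: the simpler model has higher evidence.
print(evidence_fair(20, 11) > evidence_biased(20, 11))   # True

# Strongly skewed data: the extra parameter is now justified.
print(evidence_fair(20, 18) < evidence_biased(20, 18))   # True
```

This is the razor's behavior described earlier in miniature: with ambiguous evidence the simpler hypothesis wins, and new evidence can justify the additional assumption.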
### Hanlon's Razor Research

Attribution theory in social psychology, pioneered by Fritz Heider in the 1950s and extended by Harold Kelley in the 1960s, describes how people assign causes to behavior. The fundamental attribution error, documented by Lee Ross in 1977, is the human tendency to overattribute others' behavior to disposition (character, intent) and underattribute it to situation (circumstance, accident). Hanlon's Razor is a corrective heuristic against the fundamental attribution error.

Daniel Kahneman's work on fast and slow thinking, published in Thinking, Fast and Slow (2011), documents the System 1 tendency to jump to dispositional explanations. Hanlon's Razor functions as a System 2 correction.

### Chesterton's Fence Research

Systems thinking and organizational research support the principle's core claim: organizations encode lessons from past failures in rules, processes, and structures. Nassim Taleb's work on antifragility extends the insight, arguing that long-surviving structures have passed selection pressure and carry information that is invisible from the current vantage point.

Evolutionary biology provides an analogy. Genes with no apparent function may have been preserved by selection for functions that only manifested in conditions that have not recently occurred. Removing them may be fine for many generations and then catastrophic under the right stress.

> "Time is the best editor. Long-surviving structures have already passed the test that new ideas have not yet taken."
> -- Nassim Taleb, Antifragile, 2012

---

## Common Misapplications

### Ockham's Razor Misapplied

**Rejecting complexity that the evidence supports.** Climate science, quantum mechanics, and economic systems are complex. The razor is misapplied when used to prefer simple explanations that cannot account for the evidence.

**Confusing simple with familiar.** Familiar explanations often feel simple because they do not require new thinking.
Genuine simplicity requires counting assumptions, not comfort level.

### Hanlon's Razor Misapplied

**Excusing repeated harm.** A single incident explained by stupidity is reasonable. Twenty incidents that all happen to benefit the person while harming you are a pattern. Hanlon's Razor is not a get-out-of-jail-free card.

**Ignoring systemic effects.** Systems can produce harmful outcomes without any individual acting maliciously. Applying Hanlon at the individual level can miss the structural pattern.

### Chesterton's Fence Misapplied

**Indefinite investigation.** Some fences deserve a quick look and a decision. Treating every existing structure as requiring deep historical research is a form of paralysis.

**Preserving harmful structures.** Historical institutions that caused real harm do not deserve preservation merely because they existed. The principle is about understanding, not deference.

---

## Applications by Context

### In Business Strategy

All three are useful. Ockham for diagnosing customer behavior. Hanlon for interpreting competitor or employee actions. Chesterton for legacy processes. Executives building international operations through [corpy.xyz](https://corpy.xyz) often encounter country-specific regulations that look arbitrary; Chesterton's Fence is the appropriate tool for understanding why they exist before advocating for reform.

### In Personal Relationships

Hanlon dominates. The fundamental attribution error causes most interpersonal conflict. Reading annoying behavior as deliberate when the other person is actually tired, distracted, or misinformed produces unnecessary resentment.

### In Software Engineering

All three. Ockham for debugging. Hanlon for reading coworker commit messages. Chesterton for legacy code. The combination is sometimes called the "before you refactor" checklist.

### In Writing and Research

Ockham's Razor favors concise arguments over elaborate ones.
Chesterton's Fence applies to established stylistic conventions whose reasons may not be obvious. The writing templates at [evolang.info](https://evolang.info) document conventions that have survived because they work; the templates are Chesterton-aware structures rather than arbitrary rules.

### In Career Decisions

Ockham for choosing among career paths with overlapping appeal. Hanlon for interpreting feedback from managers. Chesterton for organizational practices whose value is not yet obvious to you. Career capital research synthesized in the professional assessments at [whats-your-iq.com](https://whats-your-iq.com) and related platforms applies these principles implicitly.

---

## Frequently Asked Questions

**What is the difference between Ockham's Razor and parsimony?**

Parsimony is a synonym. Both refer to preferring the least complex explanation that fits the evidence. Ockham's Razor is the informal name; parsimony is the technical term used in philosophy of science and statistics.

**Is Hanlon's Razor too generous to bad actors?**

It can be. The razor is a starting default, not a final verdict. Patterns of repeated harm, especially patterns that benefit the actor, override the default. Hanlon's Razor is best applied to single or rare interactions, not to long patterns.

**Is Chesterton's Fence conservative?**

Not inherently. Chesterton's own politics were complicated, but the principle applies equally to conservative and progressive reform. The claim is that any reform should understand what it is replacing, regardless of direction.

**Can a fence be removed if you still do not understand it?**

Yes, if the stakes are low and reversible. The principle scales investigation to stakes. Low-stakes fences can be removed with minimal investigation. High-stakes fences deserve serious investigation before removal.

**Are these principles used in law?**

Yes, implicitly. Legal reasoning about rules and precedent follows Chesterton's logic.
Criminal attribution uses something like Hanlon's Razor (mens rea standards). Evidence evaluation follows Ockham in preferring simpler narratives.

**What mental model is the opposite of Chesterton's Fence?**

There is no single opposite, but Taleb's Lindy effect is a related complement: long-surviving structures have earned the benefit of the doubt that new structures have not. Both principles respect the information content of age.

**Which of the three is most important for leaders?**

Chesterton's Fence. Leaders have the power to remove structures, and their biggest regrets typically come from removing things they did not understand rather than preserving things they did not like.

---

## References

1. Chesterton, G. K. (1929). *The Thing: Why I Am a Catholic*. Dodd, Mead & Company.
2. Hanlon, R. J. (1980). *Murphy's Law Book Two: More Reasons Why Things Go Wrong*. Price/Stern/Sloan.
3. Ross, L. (1977). The intuitive psychologist and his shortcomings: Distortions in the attribution process. *Advances in Experimental Social Psychology*, 10, 173-220. https://doi.org/10.1016/S0065-2601(08)60357-3
4. Solomonoff, R. J. (1964). A formal theory of inductive inference. *Information and Control*, 7(1), 1-22. https://doi.org/10.1016/S0019-9958(64)90223-2
5. Kahneman, D. (2011). *Thinking, Fast and Slow*. Farrar, Straus and Giroux.
6. Taleb, N. N. (2012). *Antifragile: Things That Gain from Disorder*. Random House.
7. Parrish, S. (2019). *The Great Mental Models, Volume 1: General Thinking Concepts*. Farnam Street. https://fs.blog/mental-models-hanlons-razor/
8. Munger, C. T. (2005). *Poor Charlie's Almanack: The Wit and Wisdom of Charles T. Munger*. Donning Company Publishers.

## Frequently Asked Questions

**What is the difference between Ockham's Razor and Hanlon's Razor?**

Ockham's Razor selects between competing explanations of evidence: when multiple hypotheses fit the data equally well, prefer the one that requires the fewest assumptions. It applies to factual and diagnostic questions (medical diagnoses, debugging, scientific hypotheses). Hanlon's Razor selects between interpretations of other people's behavior: when behavior looks malicious, check first whether stupidity, ignorance, or simple error could explain it. It applies to social interpretation rather than factual explanation. The two are sometimes confused because both are called razors and both prefer the 'less dramatic' interpretation, but they operate in different domains. Ockham is about reality; Hanlon is about motive. Applying Ockham to someone's rude email is a category error; applying Hanlon to a medical diagnosis is also a category error. The razors are tools for specific decision types, not universal preferences for simplicity.

**When should I NOT use Ockham's Razor?**

Ockham's Razor misleads when the actual domain is complex. Biological systems, quantum physics, economic dynamics, and human cultures often require multi-causal explanations that the razor would reject. Climate science is the clearest modern example: the observed warming has multiple contributing mechanisms (CO2, methane, albedo effects, feedback loops), and attempts to explain it with single-cause models consistently fail to match the data. The razor is appropriate when evidence is ambiguous and multiple explanations are roughly equivalent; it is inappropriate when evidence specifically supports the more complex explanation. A second failure mode is confusing 'simple' with 'familiar.' An explanation that relies on mechanisms you already know feels simple even if it requires many assumptions; an unfamiliar explanation can feel complex even if it requires fewer assumptions. True application of the razor requires counting assumptions, not preferring comfort. When in doubt, check whether the evidence actually distinguishes between the hypotheses. If it does, follow the evidence. If it does not, the razor applies.

**Does Hanlon's Razor mean I should ignore mistreatment?**

No. Hanlon's Razor is a default for initial interpretation of single instances, not a policy for ignoring patterns. A coworker who forgets your name once is probably tired or distracted; forgetting everyone's name in your department but remembering the names of the senior team is a pattern that warrants a different interpretation. The razor is a corrective heuristic against the fundamental attribution error (the human tendency to overattribute others' behavior to deliberate character traits), but it does not override evidence of deliberate harm. Three signals that override the default: repetition over time, benefit to the actor, and escalation when the behavior is pointed out. When all three are present, incompetence is no longer an adequate explanation, and the razor should release its hold. The razor is particularly dangerous when used to excuse systems that produce harm without any single actor being deliberately malicious. Structural discrimination, regulatory capture, and algorithmic bias are systemic phenomena that Hanlon's individual-actor framing can obscure.

**How much research does Chesterton's Fence require?**

Proportional to the stakes and reversibility of removing the fence. Low-stakes, easily reversible changes (removing a decorative rule from a team handbook) deserve minimal investigation. High-stakes, irreversible changes (eliminating a regulatory requirement, removing a legal precedent, tearing down a physical structure) deserve deep investigation. The principle scales with consequences. For most personal and professional decisions, Chesterton's Fence is satisfied by a single clarifying question: what was this structure originally intended to solve? If the answer emerges quickly and the original problem is no longer relevant, proceed with removal. If the answer requires extended research or if the original problem is still live, keep the fence or investigate further. The principle is not designed to paralyze reform; it is designed to prevent the specific failure mode of removing structures whose purpose only becomes visible after they are gone. Users who find themselves investigating every existing rule are misapplying the principle. Users who remove structures without any investigation are exposing themselves to the original harm the structure prevented.

**Can these heuristics be combined?**

Yes, and combining them often produces better decisions than any single one. A typical combination sequence: apply Ockham's Razor first to clarify what the evidence actually supports, filtering out elaborate explanations that do not add predictive power. Apply Hanlon's Razor next to social interpretations embedded in the situation, preventing premature attribution of malice to actors whose behavior could be explained by simpler mechanisms. Apply Chesterton's Fence last if any decision involves removing existing structure, making sure the structure's purpose is understood before it is modified. In software engineering, for example, a debugging session might use all three: Ockham for the most likely cause, Hanlon for the commit author's intent, and Chesterton for the mysterious code that looks removable but exists for a reason. The combination is sometimes called the 'triple check' and is particularly valuable for high-stakes decisions where confidence is expensive if wrong. The combination also prevents each razor from being misapplied in isolation: Ockham without Chesterton can oversimplify legacy systems, Hanlon without Ockham can excuse mechanical problems as human error, and Chesterton without either can become a defense of bureaucracy.
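The sequencing above can be written down as a small checklist. This is purely an illustration: the function name and boolean flags below are invented for this sketch, not an established procedure.

```python
def triple_check(evidence_supports_complexity: bool,
                 harm_is_repeated_and_benefits_actor: bool,
                 fence_purpose_understood: bool) -> list[str]:
    """Walk a decision through the three heuristics in order.
    Hypothetical helper: the flag names are made up for this example."""
    steps = []
    # 1. Ockham: keep added assumptions only if the evidence demands them.
    steps.append("accept the complex explanation"
                 if evidence_supports_complexity
                 else "prefer the simpler explanation")
    # 2. Hanlon: default to error over malice unless a benefiting pattern exists.
    steps.append("treat the behavior as deliberate"
                 if harm_is_repeated_and_benefits_actor
                 else "assume error, not malice")
    # 3. Chesterton: modify structure only once its purpose is understood.
    steps.append("proceed with the change"
                 if fence_purpose_understood
                 else "investigate the structure before removing it")
    return steps

# A fresh situation with no supporting evidence on any front:
print(triple_check(False, False, False))
```

The ordering matters: clarifying the evidence first (Ockham) prevents the later checks from operating on an inflated story, which is the same point the answer above makes in prose.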

**Are these heuristics used in professional fields?**

Yes, across many. Medicine uses Ockham's Razor formally as 'Occam's principle' in diagnostic reasoning: prefer a single diagnosis that explains all symptoms over multiple diagnoses. Law uses Hanlon's logic implicitly in mens rea standards, which distinguish deliberate wrongdoing from negligence. Software engineering teaches Chesterton's Fence as the 'before you refactor' checklist: understand the code before removing it. Public policy uses all three in regulatory impact analysis, though inconsistently. Journalism uses Ockham when evaluating competing explanations for events and Hanlon when interpreting official behavior that could be corrupt or incompetent. Organizational design uses Chesterton when inheriting company policies whose origin is unclear. The professional applications are not always labeled with the heuristic names, but the underlying logic is standard practice. Charlie Munger has argued that a well-designed mental toolkit should include these three alongside other core models (probabilistic thinking, incentive analysis, backward reasoning) because each captures a pattern of situations that appear frequently enough to justify a dedicated rule.

**Which mental models should I learn first?**

Start with these three and add others as needed. Ockham's Razor, Hanlon's Razor, and Chesterton's Fence cover a disproportionate share of the everyday reasoning situations most people face: interpreting evidence, interpreting behavior, and making changes. After these, the next additions depend on what you do. For scientists and analysts, add probabilistic thinking, base rates, and confirmation bias. For business leaders, add incentive analysis, survivorship bias, and second-order thinking. For anyone managing complexity, add Meadows's systems thinking vocabulary (stocks, flows, feedback loops). Shane Parrish's Great Mental Models series, published through Farnam Street, organizes about 100 such models across four volumes. Charlie Munger has argued that 80 to 100 major mental models suffice for most decision-making, but the marginal return drops steeply after the first 20 to 30. The three covered in this article are in that first tier, along with cost-benefit analysis, opportunity cost, margin of safety, and inversion thinking. Investing 20 minutes learning each pays compounding returns over years of decisions.