On the night of September 26, 1983, Soviet Lieutenant Colonel Stanislav Petrov sat alone in the Serpukhov-15 bunker south of Moscow, watching a nuclear early-warning system scream at him that the United States had just launched five intercontinental ballistic missiles toward the Soviet Union. The Cold War was at one of its most brittle points — the USSR had shot down Korean Air Lines Flight 007 just three weeks earlier, killing 269 people, and NATO was preparing to deploy Pershing II missiles in Western Europe. Every institutional incentive, every political pressure, every conditioning of the Soviet military apparatus pointed toward a single conclusion: this was real. This was the attack.
Petrov did not report an attack.
He reasoned — in the span of a few minutes, under conditions of enormous stress — that if the United States were initiating nuclear war, it would not begin with five missiles. It would begin with hundreds. The small number, combined with his distrust of a newly deployed satellite system still working out its flaws, pointed not to American aggression but to a malfunctioning sensor. He was right. The system had mistaken the sun's reflection off high-altitude clouds for missile launches. The software had a bug. The hardware had limitations no one had adequately tested. No one in the Soviet military hierarchy was malicious. No one in the American government was attacking. The machinery of a superpower's early-warning system had simply made a catastrophic error, and one man's decision to attribute it to incompetence rather than intent may have prevented thermonuclear war.
This is Hanlon's Razor at its most consequential: the discipline of reaching for the simplest, least sinister explanation before escalating to the catastrophic one.
"Never attribute to malice that which is adequately explained by stupidity — or more charitably, by ignorance." — Robert J. Hanlon, 1980
What Hanlon's Razor Actually Says
The aphorism is precise: "Never attribute to malice that which is adequately explained by stupidity." It first appeared in print in Robert J. Hanlon's submission to Arthur Bloch's 1980 anthology Murphy's Law Book Two: More Reasons Why Things Go Wrong, where Hanlon — a software developer from Scranton, Pennsylvania — contributed the formulation as an original saying. The name stuck. The idea, as we will see, did not originate with Hanlon, but the popularization and the eponym belong to him.
The razor operates as an epistemological tool — a heuristic for pruning hypothesis space when something has gone wrong. When faced with an adverse outcome, human cognition instinctively reaches for agency: someone must have wanted this. Hanlon's Razor interrupts that reach and asks a prior question: could this outcome have occurred without anyone intending it?
Critically, the razor is often misread. It does not say malice is rare or impossible. It does not counsel naivety about the existence of bad actors. It says something subtler and more useful: that in the absence of evidence specifically pointing to intentional wrongdoing, the hypothesis of incompetence, negligence, indifference, or error is almost always more parsimonious — and therefore should be investigated first. It is a tool for prioritizing hypotheses, not for ruling out conclusions.
The connection to Occam's Razor is intentional. William of Ockham's 14th-century principle — that among competing hypotheses, the one requiring the fewest unnecessary assumptions should be preferred — maps directly onto the malice-versus-incompetence question. Malice requires more assumptions: that an actor identified a goal, formulated a plan, executed it deliberately, and concealed their intent. Incompetence requires fewer: that a system, process, or person failed in one of the many ordinary ways that systems, processes, and people fail. Hanlon's Razor is, at its core, Occam's Razor applied to human motivation.
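To make the parsimony argument concrete, here is a minimal sketch that treats the razor as Bayesian hypothesis prioritization. The priors and likelihoods are invented purely for illustration, not drawn from any study; the point is the structure of the reasoning: the bare fact of a harmful outcome barely distinguishes malice from error, whereas targeting, concealment, and benefit do.

```python
# Illustrative only: invented numbers, showing the razor as hypothesis prioritization.

def posterior(prior: float, p_evidence_given_h: float, p_evidence_given_not_h: float) -> float:
    """Posterior probability of a hypothesis after observing a piece of evidence."""
    numerator = prior * p_evidence_given_h
    denominator = numerator + (1 - prior) * p_evidence_given_not_h
    return numerator / denominator

# Hypothetical base rates: most bad outcomes are error, not deliberate harm.
p_malice = 0.05

# Evidence 1: "the outcome harmed me." Both hypotheses predict harm well,
# so it barely discriminates between them.
print(posterior(p_malice, 0.9, 0.7))   # ~0.06 -- harm alone barely favors malice

# Evidence 2: targeting + concealment + benefit to the actor. Malice predicts
# this pattern far better than ordinary failure does.
print(posterior(p_malice, 0.6, 0.02))  # ~0.61 -- now malice has earned investigation
```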
A Taxonomy of Bad Outcomes: Malice, Negligence, and Indifference
Not all failures are the same, and Hanlon's Razor does not flatten them into one category. When something goes wrong, three distinct hypotheses usually compete:
| Hypothesis | Core Claim | Evidence That Supports It | When It Is Most Likely |
|---|---|---|---|
| Malice | Someone wanted this outcome and engineered it deliberately | Pattern of targeting, concealment, prior motive, benefit to actor | Organized crime, fraud, systematic discrimination, warfare, sabotage |
| Incompetence / Negligence | Someone failed to prevent this outcome through error, poor judgment, or inadequate skill | No clear benefit to any actor, process failures, training gaps, communication breakdowns | Bureaucratic disasters, product failures, accidents, miscommunications |
| Indifference | Someone knew the risk but did not prioritize addressing it | Documented awareness, lack of action despite warnings, cost-cutting decisions | Corporate negligence, regulatory failure, deferred maintenance, systemic inequality |
The distinction between these categories has profound practical consequences. Treating negligence as malice leads to punishment where remediation is needed, adversarialism where collaboration is possible, and escalation where de-escalation would serve everyone better. Treating malice as indifference, on the other hand, allows genuine bad actors to evade accountability. Getting the category right is a prerequisite for getting the response right.
When the Boeing 737 MAX crashed in Indonesia in October 2018 and again in Ethiopia in March 2019, killing 346 people, the initial public response often framed the question as one of corporate malice — did Boeing deliberately hide a dangerous system to protect profits? The fuller picture that emerged through investigation was more damning in some ways and more exculpatory in others: a culture of cost-cutting, institutional pressure to avoid pilot retraining, flawed safety assumptions, and regulatory capture by the FAA. The MCAS system's flaws were less a secret kept by malicious individuals than a collective failure of incentives, oversight, and organizational communication. That distinction mattered enormously for how aviation safety regulators redesigned certification processes afterward.
Why Humans Default to Malice
The tendency to attribute bad outcomes to intentional agency is not a random error. It is the predictable output of cognitive systems shaped by millions of years of evolution in environments where failing to detect a predator or a hostile competitor was far more costly than falsely detecting one. The asymmetry of errors — one kills you, the other wastes some energy — created a strong bias toward threat detection.
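A toy expected-cost calculation, with invented numbers, makes that asymmetry concrete:

```python
# Illustrative only: why asymmetric error costs favor over-detection of threats.
p_threat = 0.01            # an ambiguous cue is actually a threat 1% of the time
cost_missed_threat = 1000  # a miss is fatal or near-fatal
cost_false_alarm = 1       # a false alarm just wastes some energy

expected_cost_if_ignore = p_threat * cost_missed_threat       # 10.0
expected_cost_if_react = (1 - p_threat) * cost_false_alarm    # 0.99

print(expected_cost_if_ignore, expected_cost_if_react)
# Reacting to every ambiguous cue is ~10x cheaper in expectation, even though
# 99% of those reactions are "wrong" -- the evolutionary logic behind hostile
# attribution bias.
```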
Kenneth Dodge and John Coie, in their landmark 1987 study published in the Journal of Personality and Social Psychology, documented what they called hostile attribution bias — the systematic tendency to interpret ambiguous social stimuli as intentionally hostile. Their research, conducted with children exhibiting aggressive behavior patterns, found that children classified as "reactive aggressors" consistently attributed hostile intent to peers in situations where intent was genuinely ambiguous (a bump in a hallway, a spilled drink, a taken toy). The bias was not a feature of stupidity or low social intelligence in isolation — it was a learned threat-detection pattern that had been shaped by environments where assuming hostility was a reasonable protective strategy.
This research was extended by Daniel Nagin and Richard Tremblay in the 1990s, who traced hostile attribution bias into adulthood and showed its connection to escalating conflict cycles. When Actor A interprets Actor B's accidental slight as deliberate, A responds with hostility, which B — reading A's response through the same bias — now interprets as deliberate aggression, justifying counter-hostility. The cycle requires no malice from either party to generate outcomes that look, from the outside, like sustained conflict.
The fundamental attribution error, documented by Lee Ross in his 1977 paper "The Intuitive Psychologist and His Shortcomings" in Advances in Experimental Social Psychology, adds another layer. Ross showed that humans systematically over-attribute behavior to dispositional factors (who someone is, what they intend) and under-attribute it to situational factors (the constraints, pressures, and systems they operate within). When a colleague misses a deadline, we see a lazy or hostile person; we do not automatically see an overwhelmed person operating within a broken project management system. The situation is invisible; the person is visible; the visible explanation wins.
These biases feed naturally into conspiracy thinking — the most extreme form of malice attribution. Researchers Jan-Willem van Prooijen and Mark van Vugt, writing in Perspectives on Psychological Science in 2018, argued that conspiracy thinking is an evolutionarily conserved pattern arising from the same agent-detection systems that help humans identify predators and social cheaters. The pattern-matching that kept ancestors alive in genuinely dangerous social environments now generates false positives in complex modern systems where bad outcomes are more often the product of distributed incompetence than coordinated malice.
Five Historical Case Studies
1. The Challenger Disaster (1986)
On January 28, 1986, NASA's Space Shuttle Challenger disintegrated 73 seconds after launch, killing all seven crew members. The cause was the failure of an O-ring seal in one of the solid rocket boosters, which had been compromised by cold temperatures at launch. The Rogers Commission report, and Richard Feynman's famous dissent within that report, revealed a picture not of malicious concealment but of institutional dysfunction: engineers at Morton Thiokol had raised concerns about O-ring performance in cold weather the night before launch. Their concerns were overridden by managers operating under schedule pressure, with a communication culture that made it difficult for technical objections to rise above middle management.
Diane Vaughan, in her 1996 book The Challenger Launch Decision, coined the phrase "normalization of deviance" to describe what had actually happened: NASA personnel had repeatedly flown with known O-ring anomalies, each time interpreting the anomaly as within acceptable bounds. Over time, the deviant condition became the normal one. No one set out to kill astronauts. The disaster was the product of systematic organizational processes that made a catastrophic outcome structurally possible.
Attributing the Challenger disaster to malice — to executives knowingly sacrificing astronauts for money or schedule — would have generated the wrong remedies. The actual remedies that emerged, imperfect as they were, focused on communication culture, the position of safety officers within institutional hierarchies, and the documentation of dissent.
2. The Vasa Warship (1628)
On August 10, 1628, the Swedish warship Vasa set sail on her maiden voyage from Stockholm harbor and sank after sailing barely 1,300 meters, killing 30 to 50 of the crew. The Vasa was the pride of King Gustav II Adolf's naval expansion program, heavily armed with 64 bronze cannons on two gun decks — a design that proved fatally top-heavy.
The subsequent inquiry into the disaster is one of history's earliest documented accident investigations. Officers were interrogated about whether any negligence or incompetence could be identified. The master shipwright, Henrik Hybertsson, had died before the ship was completed. His successor, Hein Jacobsson, testified that he had built the ship to the specifications he had been given. Investigators found no one to punish: the disaster was the product of a design process operating without adequate stability calculations (which were not yet part of maritime engineering knowledge), political pressure to arm the ship more heavily than originally planned, and the death of the lead designer before the errors could be identified and corrected.
No one sabotaged the Vasa. No one wanted it to sink. The most expensive warship Sweden had ever built was destroyed by the compounding of institutional pressure, incomplete knowledge, and the kind of catastrophic gap between design intent and physical reality that no one in 1628 had the mathematical tools to detect in advance.
3. The 1918 Spanish Flu Misinformation
The "Spanish Flu" pandemic of 1918-1919, which killed an estimated 50-100 million people globally, was not Spanish in origin — it likely emerged from the United States or China. Spain was the source of the name because it was one of the few countries not subject to wartime censorship, and Spanish newspapers reported freely on the outbreak while allied nations suppressed information to maintain wartime morale.
There was no conspiracy here, no deliberate effort to blame Spain or to hide the pandemic's true origins. The wartime censorship that created the information vacuum was motivated by straightforward military calculation — maintaining civilian and soldier morale — combined with the era's genuinely limited understanding of viral transmission. The outcome — a catastrophic delay in coordinated public health response and the mislabeling of a pandemic that still confuses historians — was the product of institutional indifference to truth in wartime combined with the ordinary limitations of early 20th-century epidemiology.
4. The 2003 Iraq War Intelligence Failure
The intelligence assessment that Saddam Hussein possessed weapons of mass destruction, which formed the primary public justification for the 2003 invasion of Iraq, was wrong. The Senate Intelligence Committee's 2004 report and the subsequent Robb-Silberman Commission report in 2005 documented a systemic failure of the American intelligence community: analysts had been operating in an environment of confirmation bias, where the assumption of Iraqi WMD programs was so deeply embedded in analytical culture that evidence pointing the other way was systematically discounted.
The fuller picture, as documented by journalist Bob Drogin in his 2007 book Curveball: Spies, Lies, and the Con Man Who Caused a War, is one of institutional cascade failure: an intelligence community under political pressure, operating with inadequate source verification procedures, whose incentive structures rewarded alarming findings over cautious ones. Deliberate fabrication by senior officials cannot be entirely ruled out in the most extreme readings, but the baseline explanation — that a broken institutional process produced a false conclusion — does not require postulating a coordinated conspiracy to fit the evidence.
5. The Therac-25 Radiation Incidents (1985-1987)
Between 1985 and 1987, a Canadian radiation therapy machine called the Therac-25 delivered massive overdoses of radiation to at least six patients, killing three of them. The machine was a computer-controlled linear accelerator that had replaced hardware safety interlocks with software controls — a decision that seemed rational in an era when software was increasingly trusted for critical systems.
The software contained a race condition — a subtle timing-dependent bug in which a user who entered commands faster than the system expected could accidentally switch the machine into a high-power mode without the beam-attenuating filter in place, delivering radiation doses hundreds of times above therapeutic levels. No one at Atomic Energy of Canada Limited, the machine's manufacturer, designed the Therac-25 to kill patients. The engineers were operating under the assumption that software was more reliable than hardware, an assumption that was reasonable given the knowledge of the time.
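The mechanics of this class of failure are easy to reproduce. The sketch below is a deliberately simplified, hypothetical check-then-act race in Python; it is not the Therac-25's actual logic, and the names are invented, but it shows how a fast input can slip between a safety check and the action that check was meant to guard, with no one intending harm.

```python
# Hypothetical sketch of a check-then-act race -- not the Therac-25's real code.
import threading
import time

beam_power = "low"        # shared state read by both threads
filter_in_place = False   # the attenuating filter is only required at high power

def fire_beam():
    # Safety check passes while power is still "low"...
    if beam_power == "high" and not filter_in_place:
        print("interlock: refusing to fire")
        return
    time.sleep(0.05)      # setup delay: the window in which an edit can land
    # ...but by the time the beam actually fires, the state has changed.
    if beam_power == "high" and not filter_in_place:
        print("OVERDOSE: high power delivered without the filter")
    else:
        print("dose delivered safely")

def fast_operator_edit():
    global beam_power
    time.sleep(0.01)      # the edit arrives after the check, before the beam fires
    beam_power = "high"

t1 = threading.Thread(target=fire_beam)
t2 = threading.Thread(target=fast_operator_edit)
t1.start(); t2.start()
t1.join(); t2.join()
```

Locking or re-validating state immediately before acting closes that window; finding the window at all requires asking how the system failed rather than who failed it.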
Nancy Leveson and Clark Turner's exhaustive 1993 case study, published in IEEE Computer, identified the failure as rooted in inadequate testing methodology, poor incident response when the first anomalies appeared, and an organizational culture that resisted patient-harm explanations for machine malfunctions. The patients were killed by a software bug and an institutional failure to respond to early warning signs — not by anyone's malicious intent.
Applications Across Domains
Organizational Conflict and Workplace Dynamics
In workplace environments, Hanlon's Razor is one of the most practically valuable tools available to managers and individual contributors alike. The colleague who does not reply to an email is not, by default, snubbing you — they are managing an inbox of 200 messages and your email was not flagged. The manager who assigns you to an undesirable project is not, by default, trying to marginalize you — they may be operating with incomplete information about your skills and preferences, or with constraints on their decision-making that they have not communicated.
Research by Amy Edmondson at Harvard Business School, particularly her work on psychological safety published in Administrative Science Quarterly in 1999, has shown that teams operating in low-psychological-safety environments — where mistakes are treated as moral failures rather than information — consistently underperform compared to teams where errors are treated as learning opportunities. The organizational culture that treats incompetence as malice creates the conditions for the very failures it fears: people hide problems rather than surfacing them, and small errors cascade into catastrophic ones.
Political Discourse and Conspiracy Thinking
Political discourse has always attracted malice attribution, but the dynamics have intensified in the era of social media, where outrage-inducing content travels faster than corrective information. A government agency that fails to process benefit applications efficiently becomes, in malice-attribution framing, a deliberate attempt to starve vulnerable populations. A bureaucratic delay becomes a coordinated suppression campaign.
Legal scholars Cass Sunstein and Adrian Vermeule, in their 2009 paper "Conspiracy Theories" in the Journal of Political Philosophy, argued that conspiracy theories are self-sealing in a characteristic way: any evidence against the theory can be incorporated as further evidence of the conspiracy's reach. Hanlon's Razor provides a useful counter-pressure — not by ruling out conspiracies, but by requiring that the incompetence hypothesis be tested first, and that the conspiracy hypothesis carry a proportionally higher evidential burden.
Software Engineering and Debugging
In software engineering, Hanlon's Razor is operationalized in debugging culture as the default assumption that unexpected behavior has an explanation in the code, the environment, or the data — not in the adversarial intent of a colleague who wrote the function three years ago. The blameless postmortem, developed as a practice in software engineering culture (particularly at Etsy, as documented by John Allspaw in a 2012 essay "Blameless PostMortems and a Just Culture") and now widespread in Site Reliability Engineering, is a formal institutionalization of Hanlon's Razor applied to technical failures. It does not eliminate accountability — people are still responsible for their work — but it shifts the frame from punishment to understanding, which produces better outcomes for the organization.
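As an illustration of what that institutionalization can look like, here is a minimal sketch of a postmortem record. The field names are invented for this example rather than taken from Etsy's or any other organization's template; the structural point is what is absent: there is no field for assigning blame.

```python
# Illustrative sketch of a blameless postmortem record; field names are invented.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ContributingFactor:
    description: str   # e.g. "deploy script had no rollback step"
    factor_type: str   # "process", "tooling", "communication", ...

@dataclass
class BlamelessPostmortem:
    incident_summary: str
    timeline: List[str]                                          # what happened, in order
    contributing_factors: List[ContributingFactor] = field(default_factory=list)
    what_went_well: List[str] = field(default_factory=list)
    remediation_items: List[str] = field(default_factory=list)   # owned actions, not punishments

pm = BlamelessPostmortem(
    incident_summary="Checkout service returned errors for 40 minutes",
    timeline=["14:02 deploy", "14:05 error rate spikes", "14:42 rollback complete"],
    contributing_factors=[
        ContributingFactor("canary stage skipped under deadline pressure", "process"),
        ContributingFactor("alert threshold too high to page on-call promptly", "tooling"),
    ],
    remediation_items=["make canary stage non-skippable", "lower paging threshold"],
)
```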
International Relations
At the level of nation-states, malice attribution can be literally catastrophic. The security dilemma — one of the central concepts of international relations theory, formalized by John Herz in his 1950 paper in World Politics — is, at its core, a failure of Hanlon's Razor applied at the state level. When State A builds up its military for defensive reasons, State B interprets the buildup as offensive preparation and responds in kind. Neither state is malicious; both are operating from genuine security concerns; but the mutual attribution of hostile intent produces an arms race that makes both states less secure.
Robert Jervis's 1976 book Perception and Misperception in International Politics remains the most thorough treatment of how cognitive biases, including the tendency toward malice attribution, produce foreign policy disasters. Catastrophe in the Cuban Missile Crisis was narrowly averted not only by diplomatic skill but by the willingness of key decision-makers — particularly Robert Kennedy and Anatoly Dobrynin — to consider that the other side might be operating under constraints and miscommunications rather than from a single coherent malicious intent.
The Intellectual Lineage
Hanlon's 1980 formulation was not an original insight but the crystallization of a much older principle.
The earliest close formulation in the Western tradition is often attributed to Johann Wolfgang von Goethe, who wrote in his 1774 novel The Sorrows of Young Werther: "Misunderstandings and neglect create more confusion in this world than trickery and malice." Goethe was not making a narrow psychological claim but a broad observation about the mechanics of human social disaster: the ordinary failures of communication and attention do more damage than the extraordinary project of deliberate harm.
Napoleon Bonaparte is credited with a related maxim — "Never ascribe to malice that which is adequately explained by incompetence" — though the sourcing is disputed and likely apocryphal in this exact form. What is documented is that Napoleon, in his military correspondence, repeatedly warned against assuming coordinated enemy cleverness where disorganization was the simpler explanation.
The most striking pre-Hanlon formulation appears in Robert A. Heinlein's 1941 science fiction novella "Logic of Empire": "You have attributed conditions to villainy that simply result from stupidity." Heinlein's formulation is notable for extending the principle beyond individual motivation to systemic analysis: institutions, policies, and social structures can produce terrible outcomes without requiring any individual who intended those outcomes.
The philosophical tradition of the Principle of Charity — the interpretive norm that one should assume the strongest, most reasonable version of another's position before critiquing it — runs in parallel. Neil Wilson introduced the principle formally in his 1959 paper in The Review of Metaphysics, and it was developed by Donald Davidson in his work on radical interpretation in the 1970s.
The connection between these traditions — the aphoristic (Goethe, Napoleon, Heinlein, Hanlon), the psychological (hostile attribution bias, fundamental attribution error), and the philosophical (Principle of Charity) — suggests that Hanlon's Razor is not merely a clever saying but the convergent product of careful thinking about human error across disciplines and centuries.
The Limits of the Razor: When Malice Is the Right Explanation
Hanlon's Razor is a heuristic, not a universal law, and the failure mode of over-applying it is as real as the failure mode of under-applying it.
There are circumstances in which malice is, in fact, the most parsimonious explanation for an adverse outcome. When an outcome benefits the actor, when the same pattern of harm recurs across different victims, when documentation shows awareness combined with concealment, when targets are selected in ways that cannot be explained by random or systemic failure — these are the evidential conditions under which the malice hypothesis earns priority.
Tobacco industry litigation provides the clearest modern case study. For decades, the major American tobacco companies denied the link between smoking and cancer, funded research designed to produce ambiguous results, and marketed products they knew to be lethal while withholding that knowledge from consumers. The internal documents that emerged through litigation — most comprehensively analyzed by Stanton Glantz, John Slade, and colleagues in The Cigarette Papers (1996) — showed explicit awareness of the health risks at the highest corporate levels, combined with deliberate strategies of concealment. This was not incompetence. Applying Hanlon's Razor uncritically to tobacco industry communications was a catastrophic mistake, one that delayed effective public health regulation for decades.
The practical heuristic is this: Hanlon's Razor should be the first hypothesis tested, not the last conclusion reached. Apply it to generate your opening interpretation, then remain genuinely open to revising that interpretation as evidence accumulates. If the evidence consistently points toward intent — pattern, concealment, benefit, targeting — follow the evidence. The razor is for cutting through the noise of initial reaction, not for shielding wrongdoers from accountability.
The Practice of Non-Malicious Interpretation
What does it look like to actually apply Hanlon's Razor in daily life?
It begins with a deliberate pause before interpretation. When an adverse outcome occurs — a colleague's failure, a system's breakdown, a government's error, a stranger's slight — the first cognitive move is to ask not "who wanted this?" but "what processes could have produced this?" This is not a passive or indulgent question. It is a disciplined inquiry into the actual causal chain of the event.
Daniel Kahneman's framework in Thinking, Fast and Slow (2011) maps this onto the System 1 / System 2 distinction. Malice attribution is a System 1 response — fast, automatic, emotionally satisfying, requiring little deliberate effort. The structured alternative — generating an incompetence hypothesis, evaluating its fit against available evidence, considering what additional information would distinguish malice from error — is System 2 work, requiring deliberate effort and time.
Research by Sandra Murray and John Holmes at the University of Waterloo, published in Psychological Review in 1994, found that partners who consistently attributed ambiguous relationship behaviors to positive rather than negative intent reported higher relationship satisfaction and were more resilient to conflict. The interpretation came first; the reality, partly, followed.
References
- Bloch, Arthur. Murphy's Law Book Two: More Reasons Why Things Go Wrong. Price/Stern/Sloan, 1980.
- Dodge, Kenneth A., and John D. Coie. "Social-Information-Processing Factors in Reactive and Proactive Aggression in Children's Peer Groups." Journal of Personality and Social Psychology 53, no. 6 (1987): 1146-1158.
- Ross, Lee. "The Intuitive Psychologist and His Shortcomings: Distortions in the Attribution Process." Advances in Experimental Social Psychology 10 (1977): 173-220.
- Vaughan, Diane. The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA. University of Chicago Press, 1996.
- Leveson, Nancy G., and Clark S. Turner. "An Investigation of the Therac-25 Accidents." IEEE Computer 26, no. 7 (1993): 18-41.
- Edmondson, Amy C. "Psychological Safety and Learning Behavior in Work Teams." Administrative Science Quarterly 44, no. 2 (1999): 350-383.
- Jervis, Robert. Perception and Misperception in International Politics. Princeton University Press, 1976.
- Herz, John H. "Idealist Internationalism and the Security Dilemma." World Politics 2, no. 2 (1950): 157-180.
- van Prooijen, Jan-Willem, and Mark van Vugt. "Conspiracy Theories: Evolved Functions and Psychological Mechanisms." Perspectives on Psychological Science 13, no. 6 (2018): 770-788.
- Sunstein, Cass R., and Adrian Vermeule. "Conspiracy Theories: Causes and Cures." Journal of Political Philosophy 17, no. 2 (2009): 202-227.
- Kahneman, Daniel. Thinking, Fast and Slow. Farrar, Straus and Giroux, 2011.
- Drogin, Bob. Curveball: Spies, Lies, and the Con Man Who Caused a War. Random House, 2007.
- Blight, James G., and David A. Welch. On the Brink: Americans and Soviets Reexamine the Cuban Missile Crisis. Hill and Wang, 1989.
- Heinlein, Robert A. "Logic of Empire." Astounding Science Fiction, March 1941.
- Goethe, Johann Wolfgang von. The Sorrows of Young Werther. 1774.
- Glantz, Stanton A., John Slade, et al. The Cigarette Papers. University of California Press, 1996.
- Murray, Sandra L., and John G. Holmes. "Storytelling in Close Relationships: The Construction of Confidence." Psychological Review 101, no. 4 (1994): 586-608.
- Hunt, Andrew, and David Thomas. The Pragmatic Programmer: From Journeyman to Master. Addison-Wesley, 1999.
Frequently Asked Questions
What is Hanlon's Razor?
Hanlon's Razor states: 'Never attribute to malice that which is adequately explained by stupidity.' It is a heuristic for hypothesis prioritization — when something goes wrong, investigate incompetence, negligence, or error before assuming deliberate harmful intent. It first appeared in Robert J. Hanlon's submission to Arthur Bloch's 1980 anthology Murphy's Law Book Two.
Is Hanlon's Razor related to Occam's Razor?
Yes. Hanlon's Razor is Occam's Razor applied to human motivation. Malice requires more assumptions: that an actor identified a goal, formulated a plan, executed it deliberately, and concealed their intent. Incompetence requires fewer: that a system or person failed in one of the many ordinary ways that systems and people fail. The simpler explanation — incompetence — should be tested first.
Where did Hanlon's Razor originate?
Robert J. Hanlon, a software developer from Scranton, Pennsylvania, submitted the aphorism to Arthur Bloch's 1980 Murphy's Law compilation. But the idea is much older: Goethe wrote in 1774 that 'misunderstandings and neglect create more confusion in this world than trickery and malice.' Heinlein formulated a version in 1941. The sentiment also appears in Napoleon's military correspondence.
Why do humans default to attributing malice?
Because evolutionary selection favored threat detection. Failing to detect a genuine threat (a predator, a hostile competitor) was far more costly than falsely detecting one. This produces hostile attribution bias — the systematic tendency to interpret ambiguous events as intentionally hostile. The fundamental attribution error (Ross 1977) compounds this: we over-attribute behavior to people's dispositions and under-attribute it to the situations and systems they operate within.
When should you NOT apply Hanlon's Razor?
When evidence specifically points to deliberate intent: when an outcome benefits the actor, when the same pattern of harm recurs across multiple victims, when documentation shows awareness combined with active concealment, or when targeting is too specific to be explained by random failure. The tobacco industry's documented concealment of health risks is a case where incompetence was not an adequate explanation. Hanlon's Razor is the first hypothesis to test, not the last conclusion to reach.
How did the Stanislav Petrov incident demonstrate Hanlon's Razor?
On September 26, 1983, Soviet Lt. Col. Stanislav Petrov's nuclear early-warning system reported five US missiles incoming. Petrov reasoned that a real American first strike would not begin with five missiles — the number was inconsistent with a deliberate attack. He attributed the alert to a system malfunction rather than hostile intent. He was right: software had mistaken sunlight reflecting off clouds for missile launches. His application of Hanlon's Razor may have prevented nuclear war.
What is the connection between Hanlon's Razor and blameless postmortems?
Blameless postmortems — a practice in software engineering and Site Reliability Engineering — institutionalize Hanlon's Razor by asking 'what process failure allowed this outcome?' before 'who is responsible?' The practice, developed notably at Etsy, produces better learning from failures because it shifts the frame from punishment to systemic understanding, which generates more actionable remediation.