The Myth of Bad Apples
Enron collapses. Narrative: Bad people (Ken Lay, Jeffrey Skilling) committed fraud.
Reality: Thousands of employees knew something was wrong. Most were normal people. The system encouraged, rewarded, and protected fraud.
Wells Fargo scandal. Narrative: Bad employees opened fake accounts.
Reality: Unrealistic sales targets, punishment for dissent, reward for results regardless of methods. Good people did bad things under pressure.
Boeing 737 MAX crashes. Narrative: Bad engineers cut corners.
Reality: Culture prioritizing cost/schedule over safety, management pressuring engineers, a board failing to provide oversight. Systemic failure.
Common belief: Ethical failures happen because bad people do bad things.
Reality: Most ethical failures involve normal people in bad systems. As Hannah Arendt observed after witnessing the Eichmann trial, ordinary people commit extraordinary harm not through malice but through thoughtlessness — a failure to think critically about what they are participating in. She called it "the banality of evil."
Understanding how ethical failures happen helps you:
- Recognize early warning signs
- Design systems that prevent failure
- Avoid becoming part of ethical collapse
- Hold the right people and systems accountable
This isn't about excusing misconduct—it's about preventing it by understanding actual mechanisms, not comforting myths about "bad apples." Understanding ethical decision making — the deliberate process of applying moral reasoning to choices — is one of the most effective defenses against the failures described here. See also: Good Intentions, Bad Outcomes.
Core Mechanisms of Ethical Failure
Incremental Compromise (Slippery Slope)
Definition: Gradual erosion of ethical standards through small compromises, each individually justified.
Mechanism:
- Minor ethical compromise (seems trivial)
- No immediate negative consequences
- Small compromise becomes new baseline
- Next compromise slightly larger (but relative to new baseline, still "small")
- Repeat → Eventually major violations feel normal (a compounding dynamic sketched below)
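The compounding arithmetic behind this ratchet can be made concrete with a toy calculation. The sketch below is purely illustrative (the 10% step size and 20 iterations are assumptions for the example, not figures from any of the cases discussed here): when each compromise is only a small percentage beyond the current baseline, and the baseline resets after every step, the drift from the original standard grows geometrically even though every individual step still feels small.

```python
# A minimal sketch (illustrative assumptions, not data from any cited case):
# each compromise is a fixed small percentage beyond the *current* baseline,
# and the baseline then resets to that compromise. The relative step never
# changes, but the absolute drift from the original standard compounds.

def simulate_drift(steps: int, step_pct: float = 0.10, start: float = 1.0) -> list[float]:
    """Return the baseline after each compromise, starting from `start`."""
    baseline = start
    history = []
    for _ in range(steps):
        baseline *= (1 + step_pct)  # each step feels like "just 10% more"
        history.append(baseline)
    return history

if __name__ == "__main__":
    for i, level in enumerate(simulate_drift(steps=20), start=1):
        print(f"compromise {i:2d}: {level:5.2f}x the original standard")
    # After 20 "small" 10% steps the baseline sits at roughly 6.7x the original
    # standard, yet no single step ever looked like a dramatic line-crossing.
```

The specific numbers do not matter; the shape does. Relative steps stay constant while absolute drift compounds, which is why there is never a single moment that feels like crossing the line.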
Why it works:
- Each step seems justifiable in context
- Changes happen slowly (boiling frog)
- No dramatic line-crossing moment
- Cognitive dissonance resolved through rationalization
Example - Theranos (Elizabeth Holmes):
- Start: Overpromise technology capabilities (common in startups)
- Next: Use fake prototypes in demos (just for investors, will catch up)
- Next: Use competitors' machines, claim they're proprietary (temporary workaround)
- Next: Report false test results to patients (numbers will improve, avoiding panic)
- End: Massive fraud endangering lives
Each step: Rationalized as necessary, temporary, no big deal.
Collective result: Criminal fraud.
Prevention:
- Clear red lines (non-negotiable boundaries)
- Regular external audits (fresh eyes catch drift)
- Periodic ethics reset (explicitly recommit to standards)
- "Frog out of water" (imagine explaining each step to outsider)
Normalized Deviance
Definition (Diane Vaughan, 1996): Rule violations become routine because nothing bad happened initially, creating a false sense of safety.
"What I found at NASA was not deviance but conformity — conformity to an [organizational] culture that normalized risk." — Diane Vaughan, sociologist and author of The Challenger Launch Decision
Mechanism:
- Rule exists for safety/ethics
- Rule violated (accident, convenience, pressure)
- No negative consequence (luck, circumstances)
- Violation repeats
- Becomes standard practice ("how we do things")
- Risk accumulates until catastrophe
Classic example - Space Shuttle Challenger (1986):
- Design: O-rings seal joints; shouldn't erode
- Reality: O-rings eroding on flights (rule violation)
- Response: Each time no explosion → "Acceptable risk"
- Normalization: Erosion became expected, not alarm
- Result: Cold temperature (29°F) → O-ring failed → Explosion → 7 astronauts killed
Vaughan's analysis: Engineers knew risk. But incremental normalization made danger feel routine. "We've launched with erosion before; it was fine."
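The "it was fine before" reasoning can be checked with a simple probability sketch. The 5% per-launch figure below is an illustrative assumption, not an estimate of the actual Challenger risk: even a modest per-launch failure probability compounds quickly across repeated launches, so surviving earlier violations is weak evidence that the risk is acceptable.

```python
# A minimal sketch, not NASA data: assume each launch flown with a known
# anomaly carries some small, independent probability of catastrophic
# failure. Surviving early launches says very little about that probability,
# while the cumulative chance of disaster keeps climbing.

def cumulative_failure_probability(per_launch_risk: float, launches: int) -> float:
    """P(at least one catastrophic failure) over `launches` independent tries."""
    return 1 - (1 - per_launch_risk) ** launches

if __name__ == "__main__":
    risk = 0.05  # assumed 5% per-launch risk; illustrative only
    for n in (1, 5, 10, 24):
        p = cumulative_failure_probability(risk, n)
        print(f"{n:2d} launches at {risk:.0%} each -> {p:.0%} chance of disaster")
    # 1 launch   ->  5%
    # 5 launches -> 23%
    # 10 launches -> 40%
    # 24 launches -> 71%
    # "We've launched with erosion before; it was fine" mistakes survival
    # for evidence that the per-launch risk is near zero.
```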
Other examples:
Financial fraud:
- Minor accounting irregularity
- Auditors don't catch (or ignore)
- Becomes practice
- Irregularities grow
- Eventually massive fraud (Enron, WorldCom)
Safety violations:
- Skip safety check (saves time)
- Nothing happens
- Becomes routine
- Eventually: disaster
Prevention:
- Zero tolerance for critical rules (no normalization allowed)
- Near-miss reporting (treat close calls as warnings, not luck)
- External review (internal culture can't see own normalization)
- Mindset: "We were lucky" not "It's fine"
Moral Disengagement
Definition (Albert Bandura, 1986): Psychological processes allowing people to act unethically without feeling immoral.
Mechanisms:
1. Moral justification: Reframe harmful act as serving higher purpose.
- Example: "Lying to investors is necessary to save the company (and jobs)."
2. Euphemistic labeling: Use sanitized language to obscure harm.
- Example: "Enhanced interrogation" (torture), "let go" (fired), "revenue optimization" (price gouging)
3. Advantageous comparison: Compare to worse alternatives to seem less bad.
- Example: "Everyone else commits tax fraud; we're less egregious."
4. Displacement of responsibility: Blame authority or orders.
- Example: "I was told to do it; not my decision."
5. Diffusion of responsibility: Many people involved, no one feels responsible.
- Example: "I just provided data; someone else made the decision."
6. Disregard/distort consequences: Minimize, ignore, or deny harm.
- Example: "The fake accounts didn't really hurt customers."
7. Dehumanization: View victims as objects, not people.
- Example: "Customers are just numbers; they don't care."
8. Attribution of blame: Victims deserved it or provoked it.
- Example: "If they were stupid enough to fall for it, not our fault."
Albert Bandura, who developed the theory of moral disengagement, summarized the problem bluntly: "People do not ordinarily engage in reprehensible conduct until they have justified to themselves the rightness of their actions."
Example - Milgram experiment (1961):
- Participants administered (fake) electric shocks to people
- 65% went to maximum voltage (450V, labeled "XXX")
- Most were uncomfortable but continued when told "you must continue"
- Moral disengagement: Authority (experimenter) took responsibility; subjects felt "just following orders"
Prevention:
- Personal accountability (can't hide in collective)
- Direct exposure to consequences (see harm caused)
- Language discipline (avoid euphemisms)
- Encourage dissent (question orders/norms)
Incentive Corruption
Definition: Reward structure encourages unethical behavior.
Mechanism: When doing the wrong thing is rewarded and doing the right thing is punished, people comply despite their reservations. Misaligned incentives are one of the most reliable drivers of systemic misconduct.
Forms:
Rewards for bad behavior:
- Bonuses for sales (regardless of how achieved)
- Promotions for results (don't ask about methods)
- Status for rule-breaking ("get things done")
Punishments for good behavior:
- Whistleblowers fired or marginalized
- Those who raise concerns labeled "not team players"
- Ethical employees miss promotions (slower results)
Example - Wells Fargo:
- Incentive: Bonuses for accounts opened, risk of being fired for low numbers
- Pressure: Impossible targets (8 products per customer)
- Result: Roughly 5,300 employees were fired for opening about 3.5 million unauthorized accounts
- Why so many complied: Survival (keep job) vs. ethics (don't commit fraud)
Example - Sears auto repair scandal (1992):
- Incentive: Mechanics paid commission on repairs sold
- Result: Recommended unnecessary repairs (to earn commission)
- Why: Mechanic income dependent on upselling
Prevention:
- Align incentives with values (reward ethical behavior)
- Remove perverse incentives (don't reward outcomes achieved through wrong means)
- Balance metrics (not just sales—also ethics, customer satisfaction, long-term health)
- Protect whistleblowers (reward, don't punish, raising concerns)
Authority and Obedience
Definition: People obey authority figures even when orders conflict with ethics.
"The social psychology of this century reveals a major lesson: often it is not so much the kind of person a man is as the kind of situation in which he finds himself that determines how he will act." — Stanley Milgram, Obedience to Authority (1974)
Mechanism (Stanley Milgram):
- Authority perceived as legitimate
- Responsibility transferred to authority ("they told me to")
- Social pressure to comply (don't challenge, don't question)
- Incremental escalation (small steps, each justified)
Factors increasing obedience:
- Legitimate authority (title, credentials, institution)
- Physical proximity (authority present)
- Distance from victim (can't see harm)
- Gradual escalation (no clear line)
- Institutional context (official setting)
Example - Abu Ghraib prison abuse (Iraq, 2004):
- Low-ranking soldiers tortured prisoners
- Defense: "Following orders" (from intelligence officers, contractors)
- Reality: Orders may have been implicit or inferred, but authority context enabled abuse
- Contributing factors: Dehumanization (enemy prisoners), distance from accountability, group dynamics
Example - Corporate whistleblower suppression:
- Employee raises ethical concern
- Manager says "Don't worry about it; I'll handle it" (authority)
- Employee backs down (obedience to authority)
- Problem festers
Prevention:
- Question authority (culture encourages dissent)
- Personal accountability (can't pass all responsibility to authority)
- Proximity to consequences (see results of actions)
- Multiple reporting channels (alternative authorities)
Groupthink and Conformity
Definition: Group pressure leads to consensus without critical evaluation, suppressing dissent. This is one of the most well-documented traps in decision-making.
Mechanisms:
Groupthink (Irving Janis, 1972):
- Illusion of invulnerability (overconfidence)
- Rationalization (dismiss warnings)
- Belief in group's morality (we're the good guys)
- Stereotyping out-groups (critics are fools/enemies)
- Self-censorship (don't voice doubts)
- Illusion of unanimity (silence interpreted as agreement)
- Pressure on dissenters (conform or be excluded)
- Mindguards (protect group from dissenting information)
Conformity (Solomon Asch, 1951):
- People conform to group even when group is obviously wrong
- Social pressure stronger than individual judgment
- 75% of participants conformed at least once (even for clearly wrong answer)
Example - NASA Challenger decision (1986):
- Engineers warned: too cold to launch safely
- Management pressure to launch (political, schedule)
- Groupthink: "We've launched in cold before" (normalized deviance + groupthink)
- Dissent suppressed
- Result: Disaster
Example - Financial crisis (2008):
- Banks all pursuing same risky strategies (subprime mortgages)
- Groupthink: "Everyone's doing it; must be safe"
- Dissenters dismissed as not understanding new paradigm
- Result: Global financial collapse
Prevention:
- Encourage dissent (reward devil's advocate role)
- Anonymous feedback (reduce social pressure)
- External review (break echo chamber)
- Leadership models openness (admits doubts, welcomes challenges)
Information Asymmetry and Opacity
Definition: Those making decisions lack information about consequences; those seeing consequences lack power to stop them.
Mechanism:
- Decision-makers insulated from harm
- Those affected have no voice
- Complexity obscures causal links
- Opacity prevents accountability
Example - Opioid crisis:
- Pharmaceutical executives: Decided to aggressively market opioids, downplay addiction risk
- Consequences: Addiction, overdoses, deaths (in different communities)
- Information gap: Executives didn't see (or chose not to see) devastation
- Result: Epidemic (500,000+ deaths)
Example - Subprime mortgage crisis:
- Bankers: Created complex financial products (CDOs, synthetic CDOs)
- Complexity: Even creators didn't fully understand risk
- Distance: Bankers far from homeowners losing homes
- Result: Didn't grasp (or care about) harm until system collapsed
Prevention:
- Transparency (make information visible)
- Proximity (decision-makers see consequences)
- Simplicity (reduce obscuring complexity)
- Feedback loops (connect actions to outcomes)
Situational Factors
Pressure and Time Constraints
Mechanism: Urgency overwhelms ethical deliberation.
Forms:
- Financial pressure (must hit targets)
- Competitive pressure (rivals moving faster)
- Crisis (no time to think)
- Career pressure (fear of failure/firing)
Effect:
- Shortcuts taken
- Ethical concerns dismissed as luxuries
- Long-term consequences ignored
Example - Volkswagen emissions scandal:
- Pressure to compete with hybrids (technology, cost)
- Can't meet emissions standards without expensive engineering
- Time/budget pressure
- Result: Install "defeat device" (cheat on tests)
Prevention:
- Build slack (don't operate at edge of capacity constantly)
- Protect long-term thinking (don't sacrifice ethics for speed)
- Crisis protocols (ethics don't disappear in emergencies)
Competition and Survival
Mechanism: "If we don't, competitor will" or "If I don't, I'll be fired."
Logic:
- Ethical choice = competitive disadvantage
- Unethical choice = survival
- Therefore: unethical choice justified
Problem: Race to bottom (everyone violates ethics to compete).
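This race-to-the-bottom logic has the structure of a prisoner's dilemma, which a toy payoff matrix makes explicit (the numbers below are illustrative assumptions, not market data): defecting is each firm's best response no matter what the rival does, so both defect and both end up worse off than if they had held the line.

```python
# A minimal sketch of the race-to-the-bottom logic as a two-firm payoff
# matrix (the numbers are illustrative assumptions, not measurements).
# "defect" = adopt the unethical practice; "hold" = keep the ethical line.

PAYOFFS = {
    # (our_choice, rival_choice): (our_payoff, rival_payoff)
    ("hold",   "hold"):   (3, 3),  # both keep the ethical line: healthy market
    ("hold",   "defect"): (0, 4),  # we lose share to the rival who defects
    ("defect", "hold"):   (4, 0),  # we gain a short-term advantage
    ("defect", "defect"): (1, 1),  # race to the bottom: everyone worse off
}

def best_response(rival_choice: str) -> str:
    """Pick the choice that maximizes our payoff against a fixed rival choice."""
    return max(("hold", "defect"), key=lambda c: PAYOFFS[(c, rival_choice)][0])

if __name__ == "__main__":
    for rival in ("hold", "defect"):
        print(f"if the rival chooses {rival!r}, our best response is {best_response(rival)!r}")
    # Defecting is the best response either way, so both firms defect and end up
    # at (1, 1) instead of (3, 3) -- which is why coordination (industry standards,
    # a regulatory floor) appears below as the way out.
```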
Example - Tech company data practices:
- "If we don't collect user data, competitor will"
- "Users will use competitor; we'll die"
- Result: All companies violate privacy (collective action problem)
Better framing:
- Some competition worth losing (if winning requires ethical compromise)
- Ethical differentiator can be competitive advantage (trust)
- Collective action (industry standards, regulation) prevents race to bottom
Prevention:
- Define non-negotiables (won't compromise regardless of competition)
- Coordinate (industry standards)
- Regulatory floor (prevents race to bottom)
Isolation and Echo Chambers
Mechanism: Insulated from outside perspectives, an organization's internal norms drift.
Contributing factors:
- Geographic isolation (head office far from operations)
- Social isolation (executives socialize only with each other)
- Intellectual isolation (dismiss external critics)
- Cultural isolation (unique organizational culture disconnected from broader norms)
Effect: Internal norms seem reasonable; external perspective would reveal problems.
Example - Enron:
- Insular culture ("smartest guys in room")
- Dismissed critics as not understanding
- Internal norms drifted far from legal/ethical standards
- By the time it collapsed, massive fraud seemed normal internally
Prevention:
- External board members (outside perspective)
- Regular rotation (fresh eyes)
- Engage critics (don't dismiss)
- Broad reference group (compare to multiple benchmarks, not just peers)
Warning Signs of Ethical Drift
Early indicators (intervene before collapse):
Increasing Rationalization
Sign: More frequent justifications for questionable behavior.
Examples:
- "Everyone does it"
- "It's not technically illegal"
- "Just this once"
- "Greater good justifies it"
- "No choice"
What it means: Cognitive dissonance increasing (people know it's wrong but doing it anyway).
Suppressed Dissent
Sign: Those raising concerns ignored, marginalized, or punished. Understanding accountability structures helps explain why suppression of dissent is often the rule rather than the exception.
Examples:
- Whistleblowers fired
- Dissenters labeled "not team players"
- Concerns dismissed without investigation
- Retaliation against those who speak up
What it means: System protecting itself from accountability.
Metrics Obsession
Sign: Hitting numbers at all costs; means don't matter.
Examples:
- Celebrate results, don't ask how achieved
- Fire those who miss targets, ignore ethical violations
- Manipulation of metrics common and tolerated
What it means: Goodhart's Law (metric becomes target, ceases to be good measure); perverse incentives dominant.
Blame Culture
Sign: Errors punished, not learned from; cover-ups encouraged.
Examples:
- Shoot messenger
- Scapegoat individuals, ignore systemic causes
- Failure not tolerated (so hidden)
What it means: Information won't surface; problems will fester.
Leadership Hypocrisy
Sign: Leaders violate values they espouse; "do as I say, not as I do."
Examples:
- CEO preaches integrity, commits fraud
- Executives exempt from rules others must follow
- Values poster vs. actual practices
What it means: Values are PR, not real; behavior follows leadership example, not stated values.
Minor Violations Ignored
Sign: Small rule-breaking tolerated (normalized deviance beginning).
Examples:
- Safety shortcuts
- Accounting irregularities
- Policy violations
What it means: Standards eroding; major violations likely to follow.
Prevention Strategies
Design Systems for Ethics
Principle: Make doing the right thing easy and doing the wrong thing hard.
Tactics:
- Default to ethical (require extra steps to violate)
- Remove temptation (don't put people in situations requiring heroism)
- Circuit breakers (automatic stops when thresholds crossed; see the sketch below)
- Transparency (hard to hide misconduct)
Example - Autopilot safety features:
- System won't allow certain dangerous maneuvers
- Doesn't rely on pilot choosing not to
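As a hedged illustration of the "circuit breaker" tactic above (the threshold, field names, and review rule in this sketch are all hypothetical, not any real company's controls), the idea can be expressed as an automatic hold that triggers on anomalous activity and routes it to independent review, so stopping misconduct never depends on an individual choosing to be a hero.

```python
# A toy illustration of the "circuit breaker" tactic, not any real company's
# control system. All thresholds and field names are hypothetical.
from dataclasses import dataclass

@dataclass
class AccountOpening:
    employee_id: str
    customer_consented: bool  # e.g. a signed disclosure on file

MAX_OPENINGS_PER_EMPLOYEE_PER_WEEK = 25  # assumed threshold, illustrative only

def requires_independent_review(openings: list[AccountOpening]) -> set[str]:
    """Flag employees whose activity should be automatically held for review.

    The design point: the hold is automatic and transparent, so stopping
    does not depend on any individual choosing to be a hero.
    """
    flagged: set[str] = set()
    counts: dict[str, int] = {}
    for o in openings:
        if not o.customer_consented:
            flagged.add(o.employee_id)      # missing consent trips the breaker immediately
            continue
        counts[o.employee_id] = counts.get(o.employee_id, 0) + 1
        if counts[o.employee_id] > MAX_OPENINGS_PER_EMPLOYEE_PER_WEEK:
            flagged.add(o.employee_id)      # unusual volume trips the breaker
    return flagged
```

The design choice mirrors the autopilot example: the system enforces the stop by default, and overriding it requires extra, visible steps.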
Align Incentives
Principle: Reward ethical behavior, not just results.
Tactics:
- Balance metrics (ethics, long-term, stakeholder welfare—not just profit)
- Long-term incentives (vest over years, clawback provisions)
- Punish violations (consequences for unethical behavior, even if profitable)
- Protect whistleblowers (reward, don't punish)
Maintain Oversight
Principle: Independent eyes watching.
Tactics:
- Independent board (not captured by management)
- External audits (not just rubber stamps)
- Multiple reporting channels (bypasses corrupted hierarchy)
- Regular ethics audits
Encourage Dissent
Principle: Make raising concerns safe and expected.
Tactics:
- Psychological safety (no retaliation)
- Devil's advocate role (formalize dissent)
- Anonymous channels (reduce social pressure)
- Leadership models openness (welcomes challenges)
Model from Top
Principle: Leaders set culture through behavior, not words.
Tactics:
- Walk the talk (actions match stated values)
- Admit mistakes (accountability)
- Visible consequences (leaders not exempt)
- Consistent values (no exceptions for high performers)
Research Evidence: The Mechanisms of Ethical Failure Under Controlled Conditions
Academic research on ethical failure has progressively moved from observation of historical cases to controlled experiments and natural experiments that isolate specific causal mechanisms. The resulting literature provides some of the clearest evidence in social science for how normal people come to participate in systematic wrongdoing.
Dan Ariely's research program on dishonesty, summarized in The (Honest) Truth About Dishonesty (2012) and documented across multiple peer-reviewed studies, established several key findings about what predicts ethical drift. Using a cheating-on-tests paradigm where participants could surreptitiously overclaim correct answers, Ariely and colleagues found that (1) the availability of moral reminders immediately before a decision dramatically reduced cheating—reminding people of the Ten Commandments, for example, reduced cheating to near zero even among non-religious participants—suggesting that ethical fading is a primary mechanism of failure; (2) social proof is a powerful driver of dishonesty, with participants cheating significantly more when they observed a confederate cheating conspicuously; and (3) the magnitude of the potential gain had relatively little effect on cheating rates while the perceived detectability had very large effects, suggesting that most ethical violations are opportunistic rather than calculated.
Ann Tenbrunsel and David Messick's concept of "ethical fading," introduced in their 2004 Social Justice Research paper, has received substantial empirical support. The core finding: when decisions are framed in business terms (efficiency, cost-benefit, strategic positioning) rather than ethical terms (harm, fairness, rights), the ethical dimensions literally fade from awareness. Experiment participants who were given business-framed versions of ethically identical decisions were far less likely to identify the decisions as having ethical dimensions and far more likely to make choices that imposed harm on others. This research explains why sophisticated managers in environments that systematically use business language to describe what are fundamentally ethical choices consistently fail to engage their moral reasoning. The mechanism is not hypocrisy—it is that the framing genuinely alters what they perceive.
Francesca Gino, Shahar Ayal, and Dan Ariely (2009) published research in Organizational Behavior and Human Decision Processes demonstrating that moral licensing—the phenomenon where recent ethical behavior increases subsequent unethical behavior—operated through a specific mechanism: the prior ethical act reduced chronic identity threat (the anxiety of not living up to one's self-image as an ethical person), freeing participants to act self-interestedly without experiencing dissonance. This finding has direct implications for corporate ethics programs that celebrate employees as ethical agents while leaving structural incentives unchanged: the celebration may license subsequent ethical violations by satisfying the self-image need without changing the conditions that produce violations.
The most systematic test of normalized deviance as a mechanism came from Diane Vaughan's analysis of 19 years of NASA decision-making records before the Challenger disaster, published as The Challenger Launch Decision (1996). Vaughan documented 7 pre-launch decisions involving O-ring erosion in which engineers observed anomalies, classified them as within acceptable limits, and proceeded with launches. Each successful launch with O-ring erosion was interpreted as evidence that erosion was manageable—creating a progressively lower threshold for what counted as an acceptable level of risk. Vaughan's central contribution was demonstrating that this was not recklessness but conformity to an organizational culture that defined its risk-acceptance process as rigorous even as it systematically incorporated more risk. The structural mechanism—treating survival of risky decisions as evidence that the decisions were sound—is identifiable in retrospect across many subsequent organizational disasters.
The Opioid Crisis as System-Level Ethical Failure: The Sackler Family and Purdue Pharma
The opioid epidemic, which has caused over 500,000 deaths in the United States since 1999 according to the Centers for Disease Control and Prevention, represents one of the most thoroughly documented system-level ethical failures in modern American history. The mechanisms documented in the legal proceedings, state attorney general investigations, and journalistic accounts (notably Patrick Radden Keefe's Empire of Pain, 2021) illustrate nearly every mechanism described in this article operating simultaneously.
Purdue Pharma developed OxyContin and obtained FDA approval in 1995. The drug had genuine medical applications for severe pain management, but its approval was accompanied by a prescribing label that understated addiction risk. Richard Sackler, who became president of Purdue in 1999, implemented an aggressive marketing strategy that paid sales representatives significantly above-industry-average bonuses for prescriptions written by doctors in their territories. The incentive structure created the same dynamic documented in the Wells Fargo case: sales representatives faced financial pressure to maximize prescriptions, creating incentive to minimize communication of addiction risk to prescribers and to target high-prescribing doctors disproportionately, including those whose prescribing patterns were consistent with running opioid mills rather than legitimate pain management practices.
Internal documents produced in state litigation showed that Purdue executives were aware of addiction and diversion (sale of prescribed pills for non-medical use) at an early stage. A 2001 email from Richard Sackler, disclosed in court proceedings, described the plan to respond to emerging criticism of OxyContin: "We have to hammer on abusers in every way possible. They are the culprits and the problem. They are reckless criminals." The moral disengagement mechanism was explicit: responsibility for harm was attributed to the victims rather than the product or the marketing strategy. By constructing addicts as moral failures rather than as people harmed by a product and marketing strategy Purdue controlled, executives avoided confronting the causal role of their own decisions.
The incremental compromise mechanism is visible in Purdue's expansion of OxyContin marketing from its initially approved use (cancer pain and post-surgical pain, where addiction was a less central concern) to primary care pain management for conditions like chronic back pain, where the patient population was much larger and the ratio of benefit to addiction risk was substantially worse. Each expansion of the marketing strategy was internally justified as a small step from the previous practice; the cumulative result was the saturation of communities with highly addictive drugs prescribed by physicians who had received marketing claiming the addiction risk was less than 1% of patients—a claim Purdue's own research contradicted.
The regulatory failure component illustrates the weakness of compliance-based ethics systems. Purdue operated technically within FDA approvals and legal marketing constraints for years while causing documented harm. The FDA's drug approval process is designed to assess safety and efficacy at the population level; it was not designed to assess the ethical implications of aggressive marketing strategies that create financial incentives to maximize prescribing regardless of clinical appropriateness. The gap between what was legal and what was ethical was enormous, and no organizational mechanism was positioned to intervene in that gap.
The outcome included Purdue Pharma's 2019 bankruptcy filing; a settlement of approximately $8 billion with state and federal governments announced in 2020, accompanied by criminal guilty pleas from the company; and extensive civil litigation against the Sackler family that ultimately resulted in payments exceeding $6 billion. The Sackler name, previously honored with named wings at major museums, was stripped within a few years as institutions returned donations and removed it. The case illustrates that system-level ethical failures eventually produce system-level consequences, but the lag between the initiation of the harm and the accountability—in this case roughly two decades—allows enormous damage to accumulate.
Conclusion
Ethical failures don't require bad people—they require bad systems operating on normal people.
| Mechanism | Core Dynamic | Classic Example |
|---|---|---|
| Incremental compromise | Small violations erode standards over time | Theranos: overpromise → fake demos → falsified results |
| Normalized deviance | Repeated rule violations with no consequence become routine | Challenger: O-ring erosion treated as acceptable |
| Moral disengagement | Psychological distancing removes sense of personal responsibility | Wells Fargo: fake accounts blamed on employee ambition |
| Incentive corruption | Reward structure punishes ethical behavior and rewards misconduct | Sears: commission pay drove unnecessary repairs |
| Authority and obedience | Legitimate authority transfers moral responsibility upward | Milgram: 65% administered maximum shocks under instruction |
| Groupthink | Conformity pressure suppresses dissent and critical evaluation | NASA 1986: engineers overridden by schedule pressure |
| Information asymmetry | Decision-makers insulated from consequences of their choices | Opioid crisis: executives distant from addiction and overdose deaths |
Core mechanisms:
- Incremental compromise (slippery slope)
- Normalized deviance (violations become routine)
- Moral disengagement (psychological distancing from harm)
- Incentive corruption (rewarded for wrong behavior)
- Authority and obedience (follow orders despite doubts)
- Groupthink (conformity suppresses dissent)
- Information asymmetry (don't see consequences)
Situational factors:
- Pressure (time, financial, competitive)
- Isolation (echo chambers)
- Opacity (complexity hides harm)
Prevention:
- Design systems that make ethics easy
- Align incentives with values
- Maintain independent oversight
- Encourage and protect dissent
- Model integrity from leadership
Most important: Recognize you're vulnerable. Believing "I'm ethical; I wouldn't fall for this" makes you more vulnerable. Ethical people do unethical things in bad situations. As James Reason, the cognitive psychologist who developed the Swiss Cheese Model of accident causation, put it: "We cannot change the human condition, but we can change the conditions under which humans work."
The question isn't "Am I good?"—it's "Is this system designed to prevent ethical failure?"
Fix the systems. Don't rely on virtue alone.
Essential Readings
Ethical Failure Patterns:
- Vaughan, D. (1996). The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA. Chicago: University of Chicago Press. [Normalized deviance]
- Tenbrunsel, A. E., & Messick, D. M. (2004). "Ethical Fading: The Role of Self-Deception in Unethical Behavior." Social Justice Research, 17(2), 223-236.
- Bazerman, M. H., & Tenbrunsel, A. E. (2011). Blind Spots: Why We Fail to Do What's Right and What to Do about It. Princeton: Princeton University Press.
Moral Disengagement:
- Bandura, A. (1986). Social Foundations of Thought and Action. Englewood Cliffs, NJ: Prentice-Hall.
- Bandura, A. (1999). "Moral Disengagement in the Perpetration of Inhumanities." Personality and Social Psychology Review, 3(3), 193-209.
- Bandura, A., Barbaranelli, C., Caprara, G. V., & Pastorelli, C. (1996). "Mechanisms of Moral Disengagement in the Exercise of Moral Agency." Journal of Personality and Social Psychology, 71(2), 364-374.
Authority and Obedience:
- Milgram, S. (1974). Obedience to Authority. New York: Harper & Row. [Classic experiment]
- Zimbardo, P. (2007). The Lucifer Effect. New York: Random House. [Stanford Prison Experiment; Abu Ghraib analysis]
- Haney, C., Banks, C., & Zimbardo, P. (1973). "Interpersonal Dynamics in a Simulated Prison." International Journal of Criminology and Penology, 1, 69-97.
Groupthink and Conformity:
- Janis, I. L. (1982). Groupthink (2nd ed.). Boston: Houghton Mifflin. [Classic analysis]
- Asch, S. E. (1956). "Studies of Independence and Conformity: A Minority of One Against a Unanimous Majority." Psychological Monographs, 70(9), 1-70.
- Sunstein, C. R., & Hastie, R. (2015). Wiser: Getting Beyond Groupthink to Make Groups Smarter. Boston: Harvard Business Review Press.
Organizational Ethics:
- Ashforth, B. E., & Anand, V. (2003). "The Normalization of Corruption in Organizations." Research in Organizational Behavior, 25, 1-52.
- Palmer, D. (2012). Normal Organizational Wrongdoing. Oxford: Oxford University Press.
- Darley, J. M. (2005). "The Cognitive and Social Psychology of Contagious Organizational Corruption." Brooklyn Law Review, 70(4), 1177-1194.
Case Studies:
- McLean, B., & Elkind, P. (2003). The Smartest Guys in the Room: The Amazing Rise and Scandalous Fall of Enron. New York: Portfolio. [Enron]
- Carreyrou, J. (2018). Bad Blood: Secrets and Lies in a Silicon Valley Startup. New York: Knopf. [Theranos]
- Robison, P. (2021). Flying Blind: The 737 MAX Tragedy and the Fall of Boeing. New York: Doubleday. [Boeing]
- Lewis, M. (2010). The Big Short. New York: W. W. Norton. [Financial crisis]
Prevention and Culture:
- Paine, L. S. (1994). "Managing for Organizational Integrity." Harvard Business Review, 72(2), 106-117.
- Treviño, L. K., Weaver, G. R., & Reynolds, S. J. (2006). "Behavioral Ethics in Organizations: A Review." Journal of Management, 32(6), 951-990.
- Gentile, M. C. (2010). Giving Voice to Values: How to Speak Your Mind When You Know What's Right. New Haven: Yale University Press.
Whistleblowing:
- Near, J. P., & Miceli, M. P. (1985). "Organizational Dissidence: The Case of Whistle-Blowing." Journal of Business Ethics, 4(1), 1-16.
- Rothschild, J., & Miethe, T. D. (1999). "Whistle-Blower Disclosures and Management Retaliation." Work and Occupations, 26(1), 107-128.
Psychology of Good People Doing Bad Things:
- Ariely, D. (2012). The (Honest) Truth About Dishonesty. New York: Harper. [Small dishonesty]
- Gino, F. (2013). Sidetracked: Why Our Decisions Get Derailed, and How We Can Stick to the Plan. Boston: Harvard Business Review Press.
Frequently Asked Questions
Do most ethical failures come from bad people?
No. Most come from normal people in systems with bad incentives, weak oversight, incremental compromise, and rationalization.
What is ethical drift?
Ethical drift is the gradual erosion of standards through small compromises, each individually justified, that accumulate until the collective harm is severe.
What is normalized deviance?
Normalized deviance is when rule violations become routine and accepted because nothing bad happened initially, lowering standards over time.
How do incentives cause ethical failures?
When rewards encourage cutting corners, hiding problems, or prioritizing metrics over values, people often comply despite ethical reservations.
What role does rationalization play?
People maintain self-image as ethical while doing questionable things by creating justifications—everyone does it, no choice, greater good.
Can good people do unethical things?
Yes, easily. Situational pressures, authority, group dynamics, incrementalism, and moral disengagement allow ethical compromises.
How do you prevent ethical failures?
Design systems that make ethics easy, align incentives with doing the right thing, maintain oversight, encourage dissent, and model integrity from the top.
What early warning signs indicate ethical drift?
Increasing rationalizations, suppressed dissent, metrics obsession, blame culture, leadership hypocrisy, and minor violations ignored.