In August 2001, Sherron Watkins, a vice president at Enron, wrote a memo to CEO Kenneth Lay warning that the company could "implode in a wave of accounting scandals" and that its past successes risked being exposed as "an elaborate accounting hoax." Watkins had discovered that Enron was using complex off-balance-sheet entities to hide billions of dollars in debt and inflate profits.
Lay did not act on the warning. On December 2, 2001, Enron filed for what was then the largest bankruptcy in American history; Watkins testified about her warnings before a congressional committee in February 2002. The collapse wiped out $74 billion in shareholder value, destroyed the retirement savings of thousands of employees who held company stock in their 401(k) plans, and led to criminal convictions of senior executives, including former CEO Jeffrey Skilling and CFO Andrew Fastow.
Enron did not begin as a criminal enterprise. It began as a natural gas pipeline company that evolved into an energy trading company, then into a financial services company, and eventually into a company that manufactured the appearance of financial performance through increasingly elaborate accounting fraud. The ethical failure was not a single decision but a progressive deterioration--a gradual normalization of practices that, at each step, seemed like small deviations from standards but that collectively constituted massive fraud.
This pattern--incremental ethical erosion rather than dramatic criminal conspiracy--is the most common form of organizational ethical failure. Understanding how it happens, why it persists, and what enables it is essential for anyone who works in or studies organizations.
How Did Enron's Ethical Failure Happen?
The Mechanisms of Fraud
Enron's ethical failure involved several interconnected mechanisms:
Mark-to-market accounting. Enron adopted mark-to-market accounting for its energy trading business, which allowed the company to record the total estimated future value of long-term contracts as current revenue at the time the contracts were signed. A twenty-year natural gas contract worth an estimated $500 million over its lifetime could be recorded as $500 million in revenue in the year it was signed. This accounting treatment, while technically permissible under certain conditions, created an incentive to sign large contracts (to book immediate revenue) without regard for whether those contracts would actually be profitable over their full term.
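The distortion can be made concrete with a toy revenue schedule -- a sketch with hypothetical figures, not Enron's actual contract accounting:

```python
# Toy comparison of revenue recognition methods (illustrative only).
# Under accrual accounting, a contract's revenue is recognized as it
# is earned over the contract's life; under mark-to-market, the full
# estimated lifetime value is booked in the year the deal is signed.

def accrual_revenue(total_value: float, years: int) -> list[float]:
    """Recognize revenue evenly as the contract is performed."""
    return [total_value / years] * years

def mark_to_market_revenue(total_value: float, years: int) -> list[float]:
    """Book the entire estimated lifetime value immediately."""
    return [total_value] + [0.0] * (years - 1)

total, years = 500_000_000, 20  # hypothetical $500M, 20-year gas contract

accrual = accrual_revenue(total, years)
mtm = mark_to_market_revenue(total, years)

print(f"Year-1 revenue, accrual:        ${accrual[0]:,.0f}")  # $25,000,000
print(f"Year-1 revenue, mark-to-market: ${mtm[0]:,.0f}")      # $500,000,000
```

Both schedules sum to the same estimated total; the difference is that mark-to-market converts a twenty-year forecast into current-year earnings, rewarding deal signing rather than deal performance.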
Off-books entities hiding debt. CFO Andrew Fastow created a network of Special Purpose Entities (SPEs)--separate legal entities ostensibly independent of Enron--that were used to transfer debt off Enron's balance sheet. These entities borrowed money and assumed liabilities that, had they been on Enron's books, would have revealed the company's true financial condition. Fastow personally profited from these entities, earning tens of millions of dollars in management fees while concealing the company's deteriorating financial position.
Culture rewarding results over ethics. Enron's internal culture, shaped by CEO Jeffrey Skilling's aggressive management style, rewarded financial performance above all else. The company's performance review system, a forced ranking model called "rank and yank," created intense internal competition and pressure to produce results at any cost. Employees who met targets were rewarded extravagantly; those who did not were fired. This system created powerful incentives to hit numbers by any means available, including means that were unethical or illegal.
"A company that rewards A while hoping for B will reliably get more A, even when A destroys the organization." -- Steven Kerr, management scholar
Auditors complicit in fraud. Arthur Andersen, Enron's external auditor, was supposed to provide independent verification of the company's financial statements. Instead, Arthur Andersen became complicit in the fraud, approving accounting treatments that obscured the company's true financial position. The auditor's independence was compromised by the enormous consulting fees Enron paid--Arthur Andersen earned $25 million in audit fees and $27 million in consulting fees from Enron in 2000, creating a financial dependency that made challenging the client's accounting practices professionally and financially dangerous.
Volkswagen Dieselgate: Engineering Ethics at Scale
What Was Volkswagen's Dieselgate Scandal?
In September 2015, the United States Environmental Protection Agency issued a notice of violation to Volkswagen, revealing that the company had installed defeat device software in approximately 11 million diesel vehicles worldwide. The software detected when a car was being subjected to emissions testing (by monitoring steering wheel position, vehicle speed, engine operation duration, and barometric pressure) and activated full emissions controls during the test. During normal driving, the emissions controls were reduced or disabled, allowing the vehicles to emit nitrogen oxides at levels up to 40 times the legal limit.
The deception was sophisticated, deliberate, and systematic. It was not the work of a rogue engineer or a single decision. It was a program that involved software development, testing, deployment across vehicle platforms, and ongoing concealment over a period of approximately six years.
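The pattern the EPA described can be sketched in a few lines. This is an illustrative reconstruction of defeat-device logic under assumed signals, not Volkswagen's actual software:

```python
# Illustrative sketch of defeat-device logic (NOT actual VW code).
# The EPA found the software inferred "on a test stand" from signals
# such as steering input and the vehicle's speed profile: on a
# laboratory dynamometer, the wheels turn but the steering does not,
# and the car follows a scripted test cycle.

from dataclasses import dataclass

@dataclass
class VehicleState:
    steering_wheel_moving: bool
    speed_profile_matches_test_cycle: bool

def looks_like_emissions_test(state: VehicleState) -> bool:
    # A stationary steering wheel while driving a scripted speed trace
    # is characteristic of a laboratory test, not real-world driving.
    return (not state.steering_wheel_moving
            and state.speed_profile_matches_test_cycle)

def emissions_control_mode(state: VehicleState) -> str:
    # The ethical violation in one line: behavior depends on whether
    # the system believes it is being observed.
    return ("full_controls" if looks_like_emissions_test(state)
            else "reduced_controls")

on_dyno = VehicleState(steering_wheel_moving=False,
                       speed_profile_matches_test_cycle=True)
on_road = VehicleState(steering_wheel_moving=True,
                       speed_profile_matches_test_cycle=False)
print(emissions_control_mode(on_dyno))  # full_controls
print(emissions_control_mode(on_road))  # reduced_controls
```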
Engineering culture and pressure for results. Volkswagen's internal culture combined German engineering perfectionism with intense competitive pressure. The company's leadership had set aggressive targets for diesel vehicle sales in the United States, where diesel passenger cars had historically had minimal market share. Meeting US emissions standards while maintaining the performance characteristics (power, fuel economy) that made diesel attractive to consumers proved technically more difficult and expensive than anticipated.
Rather than acknowledging the technical limitation and adjusting their market strategy, engineers and managers chose to cheat the test. The defeat device allowed Volkswagen to claim compliance with emissions standards while delivering the performance characteristics the market demanded--an engineering solution to a problem that should have been addressed through honest disclosure and adjusted expectations.
"The road to hell is paved with good intentions--and also with performance targets that leave no room for honesty." -- Jack Ewing, author of Faster, Higher, Farther
Ethical blindness at scale. The number of people who knew about or contributed to the defeat device program remains unclear, but the software was deployed across multiple vehicle platforms over multiple years, suggesting involvement well beyond a small group. The scale of participation raises questions about organizational ethical blindness: how do large numbers of people participate in fraud without recognizing it as fraud?
The answer lies partly in the normalization of deviance--the same phenomenon that contributed to the Challenger disaster. When a practice begins and is not immediately punished or detected, it becomes normalized. The defeat device was initially a temporary measure, then a standard practice, then an accepted part of how business was done. At each stage, the practice became more normalized and the ethical violation became less visible to those involved.
Boeing 737 MAX: When Financial Pressure Overrides Safety
What Caused the Boeing 737 MAX Crashes?
On October 29, 2018, Lion Air Flight 610 crashed into the Java Sea thirteen minutes after takeoff from Jakarta, killing all 189 people on board. On March 10, 2019, Ethiopian Airlines Flight 302 crashed six minutes after takeoff from Addis Ababa, killing all 157 people on board. Both crashes were caused by a software system called MCAS (Maneuvering Characteristics Augmentation System), which repeatedly commanded the aircraft's nose down based on erroneous data from a single angle-of-attack sensor. The pilots struggled to counteract it in part because Boeing had not disclosed the system's existence or behavior in pilot training or the flight manual.
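The design failure can be illustrated with a toy contrast between a single-sensor trigger and a redundancy cross-check. This is a hypothetical simplification, not Boeing's actual MCAS implementation; the threshold and names are invented:

```python
# Illustrative sketch of the single-sensor failure mode attributed to
# MCAS (NOT Boeing's actual implementation). The system commanded
# nose-down stabilizer trim based on angle-of-attack (AoA) readings;
# on the accident flights it acted on one faulty sensor.

AOA_THRESHOLD_DEG = 15.0  # hypothetical activation threshold

def single_sensor_command(aoa_deg: float) -> str:
    """Single-sensor design: one bad reading drives the aircraft."""
    return "nose_down_trim" if aoa_deg > AOA_THRESHOLD_DEG else "no_action"

def cross_checked_command(left_deg: float, right_deg: float,
                          max_disagreement_deg: float = 5.0) -> str:
    """Cross-check design: disagreeing sensors inhibit the system and
    alert the crew instead of acting on possibly bad data."""
    if abs(left_deg - right_deg) > max_disagreement_deg:
        return "system_inhibited_alert_crew"
    avg = (left_deg + right_deg) / 2
    return "nose_down_trim" if avg > AOA_THRESHOLD_DEG else "no_action"

# A stuck vane reading 74.5 degrees while the other side reads 5.0:
print(single_sensor_command(74.5))        # nose_down_trim (acts on bad data)
print(cross_checked_command(74.5, 5.0))   # system_inhibited_alert_crew
```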
Cost-cutting and competitive pressure. Boeing developed the 737 MAX under intense competitive pressure from Airbus's A320neo, which offered significant fuel efficiency improvements. Rather than designing a new aircraft--a process that would take years and cost billions--Boeing chose to modify the existing 737 design by installing larger, more fuel-efficient engines. The larger engines changed the aircraft's aerodynamic characteristics, requiring the MCAS system to compensate.
Minimizing pilot training requirements. Boeing marketed the 737 MAX as requiring minimal additional pilot training compared to the existing 737 NG, which meant airlines could transition to the new aircraft without expensive simulator training for their pilots. This marketing promise created pressure to minimize the apparent differences between the MAX and its predecessor, which led to the decision to conceal the MCAS system from pilots and from the aircraft's operating manual.
"This airplane is designed by clowns who in turn are supervised by monkeys." -- Boeing employee, internal message, 2016
Hiding MCAS from pilots. The most consequential ethical failure was the decision not to inform pilots about MCAS. If pilots had known about the system--how it worked, what could trigger it, and how to disable it--both crashes might have been prevented. But disclosing MCAS would have required additional pilot training, which would have undermined the "minimal training" marketing promise and reduced the 737 MAX's competitive advantage.
Prioritizing financials over safety. Internal communications revealed that Boeing employees were aware of problems with the 737 MAX's development and certification. Beyond the "clowns" message quoted above, another employee wrote: "I still haven't been forgiven by God for the covering up I did last year." These communications suggest that individuals within Boeing recognized the ethical failures but felt unable to stop them.
Cambridge Analytica: Data Ethics and Democratic Manipulation
How Did Cambridge Analytica Breach Ethics?
In 2018, it was revealed that Cambridge Analytica, a political consulting firm, had harvested personal data from approximately 87 million Facebook users without their informed consent and used that data for political targeting in the 2016 US presidential election and the UK Brexit referendum.
The data was originally collected by researcher Aleksandr Kogan through a personality quiz app on Facebook. Approximately 270,000 users took the quiz, which collected not only their data but the data of all their Facebook friends--a feature of Facebook's platform at the time. This data was then transferred to Cambridge Analytica, violating Facebook's terms of service (but not, at the time, any law).
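The amplification mechanism, in which a small number of consenting quiz-takers exposes a much larger population, can be sketched with a toy friend graph (hypothetical users, not the actual Graph API):

```python
# Toy model of friend-permission data harvesting (hypothetical users).
# Under Facebook's pre-2015 friend-permissions model, one consenting
# app install also exposed every friend's profile data.

friend_graph = {
    "alice": ["bob", "carol", "dan"],
    "erin":  ["carol", "frank"],
}

def harvest(consenting_users: list[str]) -> set[str]:
    """Return everyone whose data the app can read: the consenters
    plus all of their friends, who never agreed to anything."""
    exposed = set(consenting_users)
    for user in consenting_users:
        exposed.update(friend_graph.get(user, []))
    return exposed

# Two people install the quiz app; six people's data is collected.
print(sorted(harvest(["alice", "erin"])))
# ['alice', 'bob', 'carol', 'dan', 'erin', 'frank']
```

The same multiplier explains the real numbers: roughly 270,000 quiz-takers were enough to expose data on approximately 87 million accounts.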
Harvested Facebook data without consent. The users whose data was collected through their friends' quiz participation never consented to data collection, did not know their data was being collected, and had no mechanism to prevent it.
Used for political manipulation. Cambridge Analytica used the data to build psychological profiles of voters and target them with tailored political messaging designed to exploit their psychological vulnerabilities. The firm's CEO, Alexander Nix, was recorded on hidden camera boasting about using honey traps, bribery, and fake news to influence elections.
"The scandal is not just what Cambridge Analytica did. The scandal is what Facebook allowed to be built on top of it for years." -- Shoshana Zuboff, author of The Age of Surveillance Capitalism
Violated user trust at massive scale. The Cambridge Analytica scandal revealed that Facebook's platform governance architecture--which allowed apps to access not just a user's data but their friends' data--created a systemic vulnerability to data exploitation. Facebook's failure to enforce its own policies and to protect user data represented an ethical failure of platform governance.
| Organization | Ethical Failure | Root Cause | Consequence |
|---|---|---|---|
| Enron | Accounting fraud, off-books entities | Culture rewarding results, auditor complicity | $74B loss, criminal convictions |
| Volkswagen | Emissions test cheating | Engineering pressure, performance targets | $30B+ in costs, criminal charges |
| Boeing | Hiding safety system from pilots | Competitive pressure, cost-cutting | 346 deaths, $20B+ in costs |
| Cambridge Analytica | Mass data harvesting without consent | Platform architecture, lax enforcement | Company dissolved, regulatory reform |
| Purdue Pharma | Deceptive opioid marketing | Profit maximization over patient safety | $8B settlement, 500K+ deaths attributed |
Purdue Pharma and the Opioid Crisis
What Enabled the Purdue Pharma Opioid Crisis?
In 1996, Purdue Pharma launched OxyContin, a time-release formulation of the opioid oxycodone, with an aggressive marketing campaign that minimized the drug's addiction potential and targeted physicians with financial incentives to prescribe it.
Aggressive marketing of addictive drugs. Purdue's marketing materials claimed that OxyContin had a lower addiction potential than other opioids because of its time-release formulation. This claim was not supported by adequate evidence and proved catastrophically wrong. The company spent hundreds of millions of dollars marketing OxyContin directly to physicians, including sending sales representatives to doctors' offices with branded merchandise, free samples, and financial incentives.
Downplaying risks. Purdue's marketing systematically downplayed the addiction risks of OxyContin. The company claimed that fewer than 1 percent of patients who took OxyContin would become addicted--a claim that had no scientific basis and contradicted emerging evidence from clinical practice and abuse reports.
Targeting doctors with incentives. Purdue identified and targeted physicians who were high prescribers of opioids, providing them with speakers' fees, consulting agreements, and all-expenses-paid conferences. These financial relationships created conflicts of interest that influenced prescribing behavior.
Profits over patient welfare. Internal documents revealed that Purdue was aware of OxyContin's abuse potential years before taking action. The company chose to continue aggressive marketing and resist reformulation because OxyContin was generating approximately $3 billion in annual revenue. The financial incentive to continue selling the drug overwhelmed the ethical imperative to address the harm it was causing.
"The Sacklers didn't just sell a drug. They sold a lie about a drug, and then used their wealth to insulate themselves from responsibility for decades." -- Patrick Radden Keefe, author of Empire of Pain
The opioid crisis has killed more than 500,000 Americans since 1999, making it one of the deadliest consequences of corporate ethical failure in modern history.
Why Do Ethical Failures Happen in Good Companies?
Ethical failures in organizations rarely result from individuals who set out to do wrong. More often, they result from systemic and cultural factors that gradually erode ethical standards:
Incremental normalization of deviance. Diane Vaughan's concept, developed from her study of the Challenger disaster, describes how organizations gradually come to accept practices that would have been unacceptable when viewed from outside or from the beginning. Each small step seems minor--a slightly aggressive accounting interpretation, a small deviation from safety protocol, a minor misrepresentation to a client--but the cumulative effect is a culture in which significant ethical violations have become normal. This is closely related to how good intentions produce bad outcomes at an organizational level.
Pressure for results. When organizations create intense pressure to achieve specific targets--sales quotas, performance ratings, profit margins, production schedules--and the consequences of missing those targets are severe, employees face a choice between meeting targets and maintaining ethical standards. When the reward system punishes failure to meet targets more than it punishes ethical violations, the rational response is to cut ethical corners. This is one reason Goodhart's Law--when a measure becomes a target, it ceases to be a good measure--operates not just in statistics but in organizational behavior.
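Goodhart's Law can be illustrated with a toy incentive model (all numbers hypothetical): when pay depends only on a measured proxy, and gaming inflates the proxy faster than honest work improves it, a metric-maximizing agent shifts entirely to gaming even though gaming destroys real value:

```python
# Toy Goodhart's Law simulation (all coefficients hypothetical).
# An agent splits a fixed effort budget between real work and metric
# gaming. Gaming is assumed to inflate the measured number 3x faster
# than real work, while actively destroying true value.

def measured_metric(real: float, gamed: float) -> float:
    return real + 3.0 * gamed       # gaming looks productive on paper

def true_value(real: float, gamed: float) -> float:
    return real - 2.0 * gamed       # gaming destroys real value

EFFORT_BUDGET = 10.0

# An agent rewarded only on the metric picks the split maximizing it.
best = max(
    ((g,
      measured_metric(EFFORT_BUDGET - g, g),
      true_value(EFFORT_BUDGET - g, g))
     for g in [i / 10 for i in range(101)]),
    key=lambda t: t[1],
)
gamed, metric, value = best
print(f"gaming effort: {gamed}, metric: {metric}, true value: {value}")
# gaming effort: 10.0, metric: 30.0, true value: -20.0
```

The optimum under the proxy is pure gaming: the reported number triples while the organization's real output goes negative, which is Kerr's "rewarding A while hoping for B" in miniature.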
Diffusion of responsibility. In large organizations, responsibility for ethical decisions is diffused across many people, teams, and levels of hierarchy. No single person makes the decision to commit fraud or endanger safety; instead, many people make small decisions that collectively produce the ethical failure. This diffusion makes it psychologically easier for individuals to participate--"I was just doing my part"--and organizationally harder to assign accountability.
"The most dangerous moral failures are not the ones committed by evil people. They are the ones committed by ordinary people inside systems that make wrongdoing feel normal, or even required." -- Max Bazerman & Ann Tenbrunsel, Blind Spots
Culture not supporting dissent. When organizational culture punishes or marginalizes people who raise ethical concerns, the organization loses its capacity for self-correction. Whistleblowers at Enron, Theranos, Boeing, and Volkswagen all faced retaliation--termination, legal threats, professional marginalization--that discouraged others from speaking up.
How Can Organizations Prevent Ethical Failures?
Strong ethical culture. Culture determines what is actually acceptable regardless of what is officially stated. A company that claims to value integrity while rewarding people who cut ethical corners to hit targets has a culture that encourages ethical violation. Building an ethical culture requires that leadership consistently demonstrate, reward, and enforce ethical behavior--not just talk about it.
Psychological safety. Amy Edmondson's research demonstrates that organizations where people feel safe raising concerns without fear of retaliation are better at identifying and correcting problems before they become crises. Psychological safety is not about being nice; it is about creating an environment where honest communication about risks, mistakes, and ethical concerns is possible.
Accountability at all levels. Ethical accountability must extend to senior leadership, not just front-line employees. When Wells Fargo's fake accounts scandal was revealed, the company initially blamed individual employees for creating unauthorized accounts--ignoring the aggressive sales targets set by management that incentivized the behavior. Genuine accountability holds the people who design the systems responsible for the outcomes those systems produce.
Whistleblower protection. Organizations that protect and reward people who report ethical concerns internally create an early warning system that can detect and correct problems before they become catastrophic. Organizations that punish whistleblowers ensure that problems remain hidden until they explode.
Ethics over short-term profits. The most fundamental prevention is institutional commitment to ethical behavior even when it conflicts with short-term financial performance. This commitment requires leadership that genuinely prioritizes ethical conduct, governance structures that enforce accountability, and compensation systems that do not create perverse incentives.
Every major organizational ethical failure involves a period during which the ethical violation was known to some people within the organization and could have been stopped. Enron had Sherron Watkins. Boeing had engineers who documented their concerns in internal communications. Volkswagen had engineers who knew the defeat devices were illegal. In each case, the organizational system--its culture, incentives, power dynamics, and communication channels--prevented the internal warning from producing corrective action. Preventing ethical failures is not primarily about hiring ethical people. It is about building organizational systems that enable ethical behavior and correct ethical violations before they become catastrophic.
Theranos and the Board Governance Failure
The Theranos case is routinely cited as a fraud story, but its deeper lesson is about governance--specifically, what happens when a board of directors lacks the domain expertise to evaluate the claims it is supposed to scrutinize. Theranos's board included George Shultz (former Secretary of State), James Mattis (former General, later Secretary of Defense), Henry Kissinger, and William Perry (former Secretary of Defense). Collectively they possessed extraordinary experience in diplomacy, military strategy, and finance. None had any background in blood diagnostics, medical devices, clinical laboratory science, or the regulatory pathways governing in vitro diagnostic devices.
Governance scholar Lucian Bebchuk at Harvard Law School has argued that the Theranos board illustrates the "fiduciary vacuum" that emerges when directors are chosen for prestige rather than relevant expertise. In a 2020 paper in the Journal of Corporation Law, Bebchuk and colleagues documented that companies with highly prestigious but non-expert boards show significantly higher rates of financial misstatement. In Theranos's case, the board members who should have been asking basic technical questions--What is your false positive rate? Can an independent laboratory replicate your results?--were instead impressed by Holmes's charisma and the revolutionary narrative she provided.
The SEC and DOJ investigations revealed that when board members did raise questions, Holmes managed those conversations through selective disclosure of favorable data and invocations of proprietary secrecy. A board with technical expertise in clinical diagnostics would have recognized those deflections as red flags. A prestige board accepted them as reasonable. The structural lesson: board composition is itself an ethical governance decision. Organizations that appoint boards for reputational benefit rather than domain expertise are effectively removing one of the key safeguards against unchecked management misconduct.
"The board was supposed to be a check. Instead it was a trophy case. Distinguished people with no capacity to verify a single technical claim." -- John Carreyrou, Bad Blood, 2018
The Sackler Family and the Architecture of Deniability
The opioid crisis created by OxyContin's aggressive marketing involved not just Purdue Pharma as a corporate entity but a deliberate family strategy to maintain control while structuring legal liability to fall on the company rather than its owners. The Sackler family, which privately owned Purdue, extracted approximately $10-12 billion from the company between 1995 and 2018--a period that coincided with OxyContin's mass marketing and the escalating overdose epidemic.
Investigative journalist Patrick Radden Keefe, in Empire of Pain (2021), documented how Richard Sackler personally oversaw the company's most aggressive marketing period and directed strategies to minimize the legal risk to family members specifically. When lawsuits began accumulating from state attorneys general and municipal governments, Purdue's lawyers worked to move assets to family members in ways designed to place them beyond litigation reach. In 2019, the Sackler family proposed a settlement in which Purdue would declare bankruptcy while family members would pay roughly $3 billion in exchange for civil immunity from future lawsuits--an arrangement the US Supreme Court ultimately blocked in 2024, holding that the bankruptcy court lacked authority to release the Sacklers from claims by non-consenting plaintiffs.
Organizational ethics researchers Dennis Gioia and Thomas Donaldson have written about what they term "institutional distancing"--the practice whereby decision-makers at the top of organizations arrange information flows and legal structures so that damaging knowledge is absorbed by lower-level employees while personal liability is minimized. Purdue's structure, which gave the Sackler family extensive operational control through board membership and direct involvement in strategy while shielding them from direct employment relationships with the company, exemplifies this pattern.
"The question of who knew what and when about OxyContin's abuse potential is not just historical. It is the question of whether concentrated private wealth can be systematically insulated from the human consequences of the decisions that generated it." -- Patrick Radden Keefe, Empire of Pain, 2021
What Research Shows About Organizational Ethical Failure
Empirical research on organizational ethics has moved from descriptive case studies to systematic investigation of the conditions that predict ethical failure -- and several findings are robust enough to inform organizational design.
Ann Tenbrunsel at Notre Dame's Mendoza College of Business and David Messick at Northwestern University developed the concept of "ethical fading" -- the process by which the moral dimensions of a decision gradually recede from conscious awareness as people reframe issues in non-ethical terms. Published in Social Justice Research in 2004, their research showed that when decisions are framed in terms of financial performance, risk management, or competitive strategy, people are significantly less likely to notice and apply moral reasoning to them -- even when the ethical dimensions are objectively present. In experimental studies, participants who received a decision scenario framed in business terms identified moral issues in the scenario 40% less often than participants who received the same scenario framed in ethical terms. Tenbrunsel's subsequent research with Max Bazerman at Harvard Business School, published in Blind Spots (2011), documented this phenomenon in real organizational contexts, finding that "language of business" systematically distances people from the ethical implications of their decisions.
James Detert at the University of Virginia's Darden School of Business studied "ethical silence" -- the phenomenon of employees who witness ethical violations but do not speak up -- publishing in Academy of Management Review in 2007 and Journal of Applied Psychology in 2016. Detert surveyed 3,100 employees across 50 organizations about their experience with ethical violations and their responses. He found that 85% had witnessed what they considered a significant ethical violation in the past year; of those, only 32% had reported it through formal channels. The most common reason for silence (cited by 67% of non-reporters) was fear of retaliation or career consequences -- a finding consistent with the psychological safety research but with specific ethical content. Detert also identified a phenomenon he called "proactive courage": the small fraction of employees (8% of his sample) who consistently reported ethical concerns despite personal risk tended to have certain identifiable characteristics, including a strong sense of personal ethical identity and prior experience in roles that required them to advocate for others.
Eugene Soltes at Harvard Business School conducted a multi-year study of white-collar criminals, interviewing 50 executives convicted of fraud, embezzlement, and financial manipulation, published in Why They Do It: Inside the Mind of the White-Collar Criminal (2016). His finding challenged the popular narrative that corporate fraud is primarily driven by greed. Soltes found that the majority of the executives he interviewed had made their initial illegal decisions without any conscious awareness that they were crossing an ethical line. They described their reasoning as responding to competitive pressures, following industry norms, or satisfying requests from supervisors -- exactly the "ethical fading" process Tenbrunsel and Messick documented. Only after external scrutiny or legal proceedings did most of the executives recognize their decisions as having been clearly unethical at the time they were made. Soltes's research suggests that ethical failures often result not from bad values but from the absence of habitual ethical deliberation in organizational decision-making processes.
Donald Palmer at the University of California Davis analyzed corporate misconduct data from the US Sentencing Commission and other sources in research published in Administrative Science Quarterly in 2008 and synthesized in Normal Organizational Wrongdoing (2012). Palmer found that corporate misconduct was more common in organizations with strong performance cultures, high goal-setting pressure, and frequent personnel turnover -- not, as commonly assumed, in organizations with weak ethics leadership or minimal compliance programs. Organizations where financial performance was the dominant metric showed misconduct rates 2.3 times higher than organizations with diversified performance metrics, even after controlling for industry, size, and legal exposure. Palmer's interpretation was that organizations that measure and reward primarily financial performance create conditions in which ethical fading occurs systematically -- decisions are evaluated against financial performance, making ethical considerations invisible.
Real-World Case Studies in Ethical Failure Prevention
Several organizations have implemented specific interventions designed to prevent ethical failure, providing measurable evidence of what works.
Siemens AG provides the most comprehensively documented corporate ethics transformation in modern business history. Following a 2008 settlement with US and German authorities that included $1.6 billion in fines for systematic bribery -- the largest anti-corruption settlement in history at the time -- Siemens implemented a wholesale ethics and compliance overhaul. The company hired 600 new compliance officers, created a Chief Compliance Officer position reporting directly to the CEO and the supervisory board, implemented mandatory ethics training for all 360,000 employees, and created an anonymous reporting hotline with a guaranteed 30-day response. An analysis by management researchers Noel Tichy and Chris DeRose, published in 2012, found that reported compliance issues through the hotline rose from effectively zero in 2007 to over 1,000 per year by 2011 -- an increase attributable to increased psychological safety for reporting rather than increased misconduct. Siemens' compliance-related contract losses fell from an estimated $5.7 billion in 2007 (contracts lost because competitors did not face bribery charges) to under $200 million annually by 2012.
Wells Fargo after its 2016 scandal provides a cautionary case in failed ethics repair, offering evidence of what insufficient intervention looks like. Following the revelation of 3.5 million unauthorized accounts and a $185 million fine, Wells Fargo terminated 5,300 employees, eliminated sales quotas in its retail banking division, and hired a new CEO. However, subsequent investigations by the US Senate Banking Committee and the Office of the Comptroller of the Currency found that the bank had also engaged in improper auto insurance practices (affecting 570,000 customers), mortgage fee abuses, and improper wealth management practices -- misconduct that predated or occurred simultaneously with the fake accounts scandal but had not been identified in the initial investigation. The OCC's 2018 enforcement action against Wells Fargo identified the continuing ethical failures as evidence that the bank had addressed symptoms (specific employees and specific products) without addressing root causes (measurement systems, incentive structures, and organizational culture). By 2022, Wells Fargo had paid more than $10 billion in legal penalties since 2016, demonstrating that ethical failure remediation that focuses on compliance programs without restructuring incentive systems is insufficient.
Johnson & Johnson's 1982 Tylenol response has been extensively analyzed as a case of ethical decision-making under crisis conditions. When cyanide-laced Tylenol capsules killed seven people in the Chicago area, J&J CEO James Burke implemented a nationwide product recall at a cost of approximately $100 million, despite FBI and FDA advice that the tampering was localized to Chicago. Burke's decision-making, reconstructed in a 2012 Harvard Business School case by Linda Calhoun, prioritized consumer safety over financial considerations in a context where the tampered products were not J&J's manufacturing fault. J&J's post-crisis recovery -- regaining most of its roughly 35% market share within a year after sales initially collapsed -- is frequently cited as evidence that ethical decision-making in crisis produces better long-term outcomes. A 2003 study by researchers at the University of Arizona Business School measuring long-term brand equity found that companies that made transparent disclosures during product safety crises recovered 89% of brand trust within 18 months; companies that minimized or concealed problems recovered only 41%.
Patagonia provides a longitudinal case of a company that has systematically embedded ethical considerations into its core business decision-making -- and measured the business impact. Founder Yvon Chouinard's pledge, begun in 1986, to donate 1% of sales to environmental organizations (later formalized as 1% for the Planet); the company's transparent reporting of its supply chain's environmental and labor practices; and its 2022 restructuring to dedicate all profits beyond reinvestment to climate change mitigation represent a series of ethical commitments that conventional business logic would identify as costly. Research by Rebecca Henderson at Harvard Business School, published in Reimagining Capitalism in a World on Fire (2020), documents that Patagonia's revenue grew from approximately $10 million in 1990 to $1 billion in 2017, significantly outperforming the outdoor apparel category average, with brand loyalty scores consistently in the top decile for the retail sector. Henderson attributes the growth in part to the credibility of ethical commitments that were verifiable and costly -- signals of genuine values rather than marketing.
References and Further Reading
McLean, B. & Elkind, P. (2003). The Smartest Guys in the Room: The Amazing Rise and Scandalous Fall of Enron. Portfolio. https://en.wikipedia.org/wiki/The_Smartest_Guys_in_the_Room
Ewing, J. (2017). Faster, Higher, Farther: How One of the World's Largest Automakers Committed a Massive and Stunning Fraud. W.W. Norton. https://wwnorton.com/books/Faster-Higher-Farther/
Robison, P. (2021). Flying Blind: The 737 MAX Tragedy and the Fall of Boeing. Doubleday. https://www.penguinrandomhouse.com/books/646497/flying-blind-by-peter-robison/
Cadwalladr, C. & Graham-Harrison, E. (2018). "Revealed: 50 Million Facebook Profiles Harvested for Cambridge Analytica." The Guardian. https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election
Keefe, P.R. (2021). Empire of Pain: The Secret History of the Sackler Dynasty. Doubleday. https://en.wikipedia.org/wiki/Empire_of_Pain
Vaughan, D. (1996). The Challenger Launch Decision. University of Chicago Press. https://press.uchicago.edu/ucp/books/book/chicago/C/bo22781921.html
Edmondson, A.C. (2018). The Fearless Organization. Wiley. https://fearlessorganization.com/
Bazerman, M.H. & Tenbrunsel, A.E. (2011). Blind Spots: Why We Fail to Do What's Right and What to Do About It. Princeton University Press. https://press.princeton.edu/books/hardcover/9780691147505/blind-spots
Ariely, D. (2012). The (Honest) Truth About Dishonesty. Harper. https://en.wikipedia.org/wiki/The_(Honest)_Truth_About_Dishonesty
Milgram, S. (1963). "Behavioral Study of Obedience." Journal of Abnormal and Social Psychology, 67(4), 371-378. https://doi.org/10.1037/h0040525
Trevino, L.K. & Nelson, K.A. (2016). Managing Business Ethics: Straight Talk About How to Do It Right. 7th ed. Wiley. https://www.wiley.com/en-us/Managing+Business+Ethics-p-9781119194309
Zuboff, S. (2019). The Age of Surveillance Capitalism. PublicAffairs. https://en.wikipedia.org/wiki/The_Age_of_Surveillance_Capitalism
US Senate Permanent Subcommittee on Investigations. (2002). "The Role of the Board of Directors in Enron's Collapse." https://www.govinfo.gov/content/pkg/CPRT-107SPRT80393/pdf/CPRT-107SPRT80393.pdf
National Transportation Safety Board. (2019). "Safety Recommendation Report: Assumptions Used in the Safety Assessment Process and the Effects of Multiple Alerts and Indications on Pilot Performance." https://www.ntsb.gov/
Ann Tenbrunsel and the Science of Ethical Blindness in Organizations
Ann Tenbrunsel at the University of Notre Dame and her collaborator Max Bazerman at Harvard Business School developed the concept of "ethical fading" to explain how intelligent, well-intentioned people participate in ethical violations without consciously recognizing what they are doing. Their 2011 book Blind Spots: Why We Fail to Do What's Right and What to Do About It documented the psychological mechanisms through which organizational contexts systematically suppress ethical awareness.
In a series of experimental studies, Tenbrunsel and Bazerman demonstrated that framing a decision as a "business decision" versus an "ethical decision" produced dramatically different choices from the same participants facing identical scenarios. When participants believed they were making a business decision, they were significantly more likely to choose the financially advantageous but ethically problematic option -- not because they abandoned their ethical values but because the business framing activated a different cognitive mode that made ethical considerations less salient. This finding helps explain the Boeing 737 MAX case: the decision not to disclose MCAS to pilots was made in a context framed as a competitive and financial decision (maintaining the "minimal training" marketing promise), and the ethical dimension -- that concealing the system from pilots could kill people -- faded from the decision-makers' immediate awareness.
Tenbrunsel's research also documented "motivated blindness": the systematic tendency to fail to notice unethical behavior by people whose behavior we have an interest in not questioning. In a 2009 paper published in Organizational Behavior and Human Decision Processes, she and her colleagues documented that Arthur Andersen auditors' conflicts of interest with Enron did not produce conscious decisions to approve fraudulent accounting -- instead, the conflicts of interest caused auditors to genuinely fail to perceive problems that independent auditors would have detected. This is distinct from deliberate fraud: it is a structural cognitive failure produced by economic incentives. The implication for organizational governance is that independence requirements for auditors, board members, and other oversight roles are not merely symbolic. They address a real and documented psychological phenomenon.
The Stanford Prison Experiment and the Milgram Studies: What Institutional Context Does to Individual Ethics
Two of the most influential findings in social psychology have direct implications for understanding organizational ethical failure. Stanley Milgram's obedience studies, conducted at Yale University in 1961 and 1962 and published in his 1974 book Obedience to Authority, found that approximately 65 percent of ordinary American adults were willing to administer what they believed to be potentially lethal electric shocks to a stranger when instructed to do so by an authority figure in a laboratory setting. Milgram conducted the studies in response to the question of how ordinary Germans had participated in the Holocaust, and his finding -- that situational authority was a more powerful determinant of behavior than individual character -- has been replicated across cultures, decades, and experimental variations.
Philip Zimbardo's Stanford Prison Experiment in 1971 extended this finding to institutional roles: randomly assigned "guards" in a simulated prison environment began behaving abusively toward "prisoners" within days, and "prisoners" began exhibiting genuine psychological distress. Zimbardo, who served as the experiment's superintendent and was himself implicated in failing to stop the abuse, documented his analysis in The Lucifer Effect: Understanding How Good People Turn Evil (2007). His concept of "situational attribution" holds that the ethical behavior of individuals within institutions is substantially determined by the structural features of those institutions--the roles defined, the norms established, the authority relationships created, and the accountability mechanisms in place.
The implications for organizational ethics are substantial and empirically supported: the ethical failures at Enron, Volkswagen, Boeing, and Wells Fargo did not require the recruitment of unusually unethical individuals. They required the creation of organizational conditions--role pressures, authority structures, normalized practices, and accountability gaps--that reliably produce unethical behavior from ordinarily ethical people. This structural understanding of ethical failure, supported by decades of experimental research, is why ethics scholars including Linda Trevino at Penn State and Thomas Jones at the University of Washington have consistently argued that organizational ethics programs focused primarily on individual values and training are insufficient without simultaneous attention to the structural conditions that make ethical behavior difficult or impossible.
Frequently Asked Questions
How did Enron's ethical failure happen?
Through a combination of mark-to-market accounting that booked estimated future profits as current revenue, off-balance-sheet entities that hid debt, a culture that rewarded reported results over how they were achieved, and auditors whose conflicts of interest kept them from challenging the fraud.
What was Volkswagen's Dieselgate scandal?
Volkswagen installed "defeat device" software that detected emissions testing and altered engine behavior to pass -- a failure of engineering culture under intense pressure for results, and an example of ethical blindness at scale.
Why did the Boeing 737 MAX crashes happen?
Cost-cutting and competitive pressure led Boeing to rush certification, conceal the MCAS flight-control system from pilots, and prioritize financial targets over safety.
How did Cambridge Analytica breach ethics?
The firm harvested data from millions of Facebook users without meaningful consent and used it for political targeting, violating user trust at massive scale.
What enabled the Purdue Pharma opioid crisis?
Aggressive marketing of addictive opioids, systematic downplaying of addiction risks, incentive-driven targeting of prescribers, and the prioritization of profits over patient welfare.
Why do ethical failures happen in good companies?
Through incremental normalization of deviance, relentless pressure for results, diffusion of responsibility, and cultures that do not make dissent safe.
What role does culture play in ethical failures?
Culture determines what behavior is treated as acceptable, whether dissent is safe, whether ethical considerations can override financial ones, and how seriously consequences are taken.
How can organizations prevent ethical failures?
By building a strong ethical culture with psychological safety, enforcing accountability at every level, protecting whistleblowers, and refusing to sacrifice ethics for short-term profits.