Decision-Making Failures Explained Through Real Cases: How Kodak, Nokia, Blockbuster, and Others Made Catastrophic Choices
In 1975, a young engineer at Eastman Kodak named Steven Sasson built the first digital camera. It was the size of a toaster, weighed eight pounds, captured a 0.01-megapixel black-and-white image, and took 23 seconds to record a photograph to a cassette tape. When Sasson presented his invention to Kodak's leadership, the response was discouraging. "That's cute," one executive reportedly said, "but don't tell anyone about it."
Kodak did not ignore digital photography entirely. Over the following decades, the company invested billions in digital imaging research, acquired digital photography companies, and developed digital products. But at every critical decision point, Kodak's leadership chose to protect the existing film business rather than aggressively pursue the digital future. The film business was enormously profitable--at its peak, Kodak's margins on film exceeded 60 percent--and cannibalizing that business to pursue an uncertain digital future seemed like destroying a sure thing for a gamble.
By the time Kodak fully committed to digital photography, it was too late. Competitors without legacy film businesses--Canon, Sony, and Samsung, and eventually Apple and other smartphone makers--had captured the digital market. Kodak filed for bankruptcy in 2012, thirty-seven years after one of its own engineers had built the technology that would ultimately destroy it.
The Kodak story is not a story of ignorance. It is a story of decision-making failure: the inability to make the right choices despite having the right information, the right talent, and the right technology. Decision-making failures of this kind are not anomalies. They are patterns--predictable, recurring patterns driven by cognitive biases, organizational structures, and cultural dynamics that systematically distort how organizations process information and choose courses of action.
What Are Common Causes of Decision Failures?
Decision-making failures in organizations typically result from a combination of cognitive, organizational, and structural factors:
Confirmation bias. Decision-makers seek and interpret evidence in ways that confirm their existing beliefs and preferences. Kodak's leadership believed film would remain dominant, so they interpreted market signals through that lens, underweighting evidence of digital photography's growth potential.
Groupthink. Cohesive groups converge on a shared view and suppress dissenting opinions, producing decisions that reflect social consensus rather than rigorous analysis. Irving Janis's original research on groupthink examined the Bay of Pigs invasion, the escalation of the Vietnam War, and other cases where groups of highly intelligent people made collectively poor decisions because the group's desire for harmony and conformity overrode realistic appraisal of alternatives.
Sunk cost fallacy. Organizations that have invested heavily in a particular strategy, technology, or business model resist abandoning that investment even when new information suggests they should. The investment already made feels like a reason to continue, even though rationally, sunk costs should not influence future decisions.
Poor framing. How a decision is framed--what options are considered, what criteria are used, what reference points are established--profoundly influences the choice made. A decision framed as "Should we protect our profitable film business?" produces different answers than the same situation framed as "How do we position ourselves to lead the next generation of imaging technology?"
Overconfidence. Decision-makers overestimate their ability to predict the future, assess risk, and control outcomes. This overconfidence leads to insufficient contingency planning, underestimation of competitors, and failure to prepare for scenarios that contradict the preferred forecast.
Ignoring base rates. Decision-makers focus on the specific case rather than the general pattern. A pharmaceutical company developing a new drug focuses on the promising results of its compound rather than the base rate that approximately 90 percent of drugs entering clinical trials ultimately fail.
Pressure for consensus. Organizations that value consensus may produce decisions that reflect the lowest common denominator of agreement rather than the best available analysis. The desire to avoid conflict produces decisions that no one loves but everyone can tolerate--decisions that are safe rather than optimal.
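The base-rate point above can be made concrete with a line of Bayesian arithmetic. The numbers below are purely illustrative (not actual clinical-trial statistics): they show how, when the base rate of success is low, even a genuinely promising signal about the specific case leaves success unlikely.

```python
# Illustrative sketch with hypothetical numbers: why base rates matter.
# Assume ~10% of drugs entering trials eventually succeed, a "promising
# early result" appears in 80% of eventual successes, but also in 30%
# of eventual failures (a false-positive signal).

def posterior_success(base_rate, p_signal_given_success, p_signal_given_failure):
    """Bayes' rule: P(success | promising signal)."""
    numerator = p_signal_given_success * base_rate
    denominator = numerator + p_signal_given_failure * (1 - base_rate)
    return numerator / denominator

p = posterior_success(0.10, 0.80, 0.30)
print(f"{p:.2f}")  # about 0.23: the signal helps, but failure is still the likely outcome
```

A decision-maker who focuses only on the promising signal implicitly treats the probability of success as 0.80; folding in the base rate cuts that estimate by more than two-thirds.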
Kodak: Inventing the Future and Choosing Not to Pursue It
What Can We Learn from Kodak's Failure?
Kodak's failure is the canonical case of the innovator's dilemma, a concept formalized by Harvard Business School professor Clayton Christensen in his 1997 book of the same name.
The innovator's dilemma describes a specific pattern: established companies fail to adopt disruptive technologies not because they are unaware of the technologies but because rational, well-managed decision-making leads them to protect existing profitable businesses rather than cannibalize them for uncertain future opportunities.
Kodak's decision-making followed this pattern precisely:
The film business was extraordinarily profitable. In the 1990s, Kodak's film and processing business generated profit margins exceeding 60 percent. No rational business leader willingly destroys a 60 percent margin business to pursue a business with unknown margins.
Digital photography initially served a different market. Early digital cameras were expensive, low-quality, and appealing primarily to niche users (professionals who needed instant review, real estate agents, insurance adjusters). They did not compete directly with consumer film photography. This made digital photography appear to be a separate, smaller, and less attractive market--not a threat to the core business.
Each year's decision was individually rational. In any given year during the 1990s, the decision to continue prioritizing film over digital was defensible. Film was still the larger market, still more profitable, and still growing in much of the world. The digital transition, which seemed inevitable in retrospect, did not seem inevitable at each individual decision point.
Organizational structure reinforced the status quo. Kodak's organizational structure was built around the film business. Revenue, profit, career advancement, resource allocation, and corporate identity were all tied to film. Digital photography was organizationally marginal--underfunded, understaffed, and politically weak relative to the film divisions. The people making resource allocation decisions were the people whose careers and compensation depended on the film business.
The lesson from Kodak is not that companies should always abandon profitable businesses to pursue unproven technologies. The lesson is that decision-making processes designed to optimize the current business are structurally biased against innovations that threaten the current business, and organizations that recognize this bias can create decision-making mechanisms that compensate for it.
Blockbuster: Choosing the Present Over the Future
Why Did Blockbuster Reject Netflix?
In the year 2000, Reed Hastings, co-founder of Netflix, flew to Dallas to meet with John Antioco, CEO of Blockbuster, and his team. Hastings proposed a partnership: Netflix would handle Blockbuster's online presence, while Blockbuster would promote Netflix in its stores. According to accounts from both sides, the Blockbuster team was dismissive. Antioco reportedly found the proposal uncompelling, and members of his team struggled not to laugh.
At the time, the reaction was not irrational. Blockbuster was a $6 billion company with 9,000 stores worldwide and 65 million registered members. Netflix was a struggling startup with $5 million in revenue, losing money, and shipping DVDs by mail--a business model that seemed cumbersome compared to the instant gratification of walking into a Blockbuster store.
Blockbuster's decision-making failure was not the meeting itself. It was the sustained failure to recognize and respond to the strategic threat that Netflix represented as it evolved from DVD-by-mail to streaming:
Underestimated the threat. Blockbuster's leadership evaluated Netflix based on its current state (a small DVD-by-mail company) rather than its trajectory and potential. They failed to ask: "What happens if streaming technology improves? What happens if broadband penetration increases? What happens if Netflix's customer base grows to the point where it can invest heavily in content?"
Focused on the current profitable model. Blockbuster generated approximately $800 million annually from late fees alone. Any strategic shift toward online delivery would threaten this revenue stream. Like Kodak with film, Blockbuster's leadership made decisions that protected the current business model rather than positioning the company for the future.
Couldn't imagine a different future. Blockbuster's leadership suffered from a failure of strategic imagination--the inability to envision a future fundamentally different from the present. The physical retail model was so deeply embedded in Blockbuster's identity, operations, and culture that imagining a world without stores required a cognitive leap that the organization could not make.
| Company | Decision Failure | What They Had | What They Missed | Outcome |
|---|---|---|---|---|
| Kodak | Protecting film over digital | Digital camera invention, R&D investment | Timing of consumer transition | Bankruptcy (2012) |
| Blockbuster | Dismissing Netflix and streaming | 9,000 stores, 65M members | Streaming technology trajectory | Bankruptcy (2010) |
| Nokia | Wrong platform bet, internal silos | 40% global market share | Smartphone as computing platform | Sold to Microsoft (2014) |
| Theranos | Lack of technical oversight | $9B valuation, celebrity board | Technology didn't work | Criminal convictions |
| New Coke | Over-reliance on taste tests | Market leadership, brand loyalty | Emotional attachment to original | Reversal within 79 days |
Nokia: When Internal Culture Kills External Competitiveness
What Was Nokia's Decision-Making Mistake?
In 2007, Nokia held approximately 40 percent of the global mobile phone market. It was the world's largest mobile phone manufacturer, a Finnish national champion, and the dominant force in an industry it had helped create. In 2013, Nokia agreed to sell its mobile phone business to Microsoft for $7.2 billion--a fraction of the company's peak valuation--in a deal that closed in 2014.
Nokia's decline is often attributed to the iPhone, launched in 2007, but the decision-making failures that led to Nokia's collapse were internal:
Bet on the wrong platform. Nokia's smartphone operating system, Symbian, was designed for the feature phone era--a time when phones were primarily communication devices with limited computing capabilities. As smartphones evolved into general-purpose computing platforms, Symbian's architecture could not support the app ecosystems, touch interfaces, and computing power that the market demanded. Nokia chose to continue investing in Symbian long after its architectural limitations became apparent, while Google's Android and Apple's iOS were built for the smartphone era from the ground up.
Organizational silos prevented coordination. Nokia's organizational structure divided the company into units that competed with each other for resources and market share. This internal competition prevented the coordination needed to develop a coherent smartphone strategy. Different teams developed competing products, incompatible technologies, and contradictory roadmaps.
Culture punished bearing bad news. Research by INSEAD professors Timo Vuori and Quy Huy found that Nokia's middle managers were afraid to communicate the full extent of the competitive threat to senior leadership. The company's culture, which valued confidence and optimism, created an environment where raising alarm about competitive threats was professionally risky. Middle managers who reported that Symbian was falling behind iOS and Android feared being seen as negative, uncommitted, or incompetent.
The Nokia failure demonstrates that organizational culture is a decision-making system. The norms, incentives, and power dynamics within an organization determine what information reaches decision-makers, how that information is interpreted, and what range of responses is considered acceptable. When the culture systematically filters out uncomfortable truths and discourages dissent, the decision-making process operates on distorted information and produces distorted outcomes.
Theranos: When Charisma Overrides Evidence
How Did Theranos Persist So Long?
Theranos, founded by Elizabeth Holmes in 2003, claimed to have developed technology that could run hundreds of medical tests from a single drop of blood. The company achieved a peak valuation of $9 billion and attracted investment from some of the most experienced investors and board members in American business and government.
The technology did not work. It had never worked as claimed. The decision-making failure was not that Holmes deceived her investors and board--she did, and was convicted of fraud for doing so. The decision-making failure was that so many experienced, sophisticated people failed to detect the deception despite numerous warning signs:
Charismatic founder overrode technical scrutiny. Holmes was an extraordinarily persuasive communicator who cultivated a compelling personal narrative (young Stanford dropout disrupting healthcare) and carefully managed her public image. The charisma created a halo effect: people who found Holmes personally compelling extended that positive impression to her technology, her company, and her claims.
Lack of technical oversight. Theranos's board included distinguished military leaders, diplomats, and business executives--but no one with expertise in blood diagnostics, medical devices, or the specific science underlying the company's claims. The people evaluating the technology lacked the knowledge to evaluate it critically.
Intimidation of critics. Holmes and her COO, Ramesh "Sunny" Balwani, created a culture of fear and secrecy within the company. Employees who raised concerns about the technology were threatened with lawsuits, fired, or both. The company maintained extreme secrecy about its technology, claiming that proprietary protection required keeping details confidential. This secrecy prevented external experts from evaluating the technology's plausibility.
Confirmation bias from investors wanting to believe. The investors wanted Theranos to succeed because the vision was enormously compelling: cheap, accessible, minimally invasive blood testing could transform healthcare. This desire to believe made investors susceptible to confirmation bias--interpreting ambiguous signals as confirming the company's claims rather than questioning them.
New Coke: Ignoring Emotional Attachment to Data
What Decision Error Caused New Coke?
In April 1985, The Coca-Cola Company announced the most dramatic product change in its 99-year history: the original Coca-Cola formula would be discontinued and replaced with "New Coke," a sweeter formulation that performed better in blind taste tests against rival Pepsi.
The decision was based on extensive market research. Coca-Cola had conducted approximately 200,000 taste tests, and the new formula consistently beat both the original Coca-Cola and Pepsi. The data was clear and overwhelming. The company's leadership, led by CEO Roberto Goizueta, made what appeared to be a data-driven decision.
The result was a catastrophe. Consumers revolted. Coca-Cola's switchboard received 400,000 angry calls and letters. Protest groups formed. The company reversed its decision within 79 days, reintroducing the original formula as "Coca-Cola Classic."
Over-reliance on taste tests ignoring emotional attachment to the brand. The taste tests measured one thing: which liquid tasted better in a blind sip. They did not measure the complex emotional, cultural, and identity-based relationship that consumers had with Coca-Cola. For many Americans, Coca-Cola was not merely a beverage preference. It was a cultural touchstone, a childhood memory, a piece of American identity. Changing the formula felt like a betrayal of something personal.
Not considering switching costs. The taste tests assumed that the relevant comparison was between two liquids. But in the real market, consumers were not choosing between two unfamiliar liquids. They were being asked to give up a product they had a lifetime relationship with and accept a replacement. The psychological cost of losing something familiar and beloved (loss aversion) far exceeded the marginal taste improvement measured in blind tests.
The New Coke failure is a case study in the limits of quantitative data for capturing qualitative human experience. The data was technically correct: the new formula did taste better in blind tests. But the data measured the wrong thing. It measured taste preference in isolation from the cultural, emotional, and identity-based factors that actually drove consumer behavior.
Why Do Smart People Make Bad Decisions?
The cases above share a common thread: the decision-makers were not incompetent. They were experienced, educated, and intelligent. Their failures were not failures of ability but failures of the decision-making process. Several factors explain why intelligence does not protect against bad decisions:
Intelligence does not prevent bias. Cognitive biases are not a function of intelligence. They are features of the cognitive architecture that all humans share, regardless of education, experience, or IQ. Research by Keith Stanovich has demonstrated that cognitive biases are largely uncorrelated with intelligence--smart people are not significantly less biased than average people; they are merely better at constructing rational-sounding justifications for their biased conclusions.
Overconfidence increases with knowledge. Experts in a domain often exhibit more overconfidence than novices, because their deep knowledge gives them confidence that extends beyond the boundaries of their actual predictive ability. The Long-Term Capital Management hedge fund failure, in which Nobel Prize-winning economists overestimated the completeness of their financial models, illustrates this pattern.
Expertise can create blind spots. Deep expertise in one domain can create blind spots in adjacent domains. Kodak's leadership had deep expertise in the film business, which gave them accurate models of how the film market worked but inaccurate models of how the digital market would develop. Their expertise in the current paradigm made it harder, not easier, to understand the next paradigm.
How Can Organizations Prevent Decision Failures?
Encourage dissent. Organizations that systematically encourage and protect dissenting opinions make better decisions than those that reward consensus. Alfred Sloan, the legendary CEO of General Motors, reportedly told his executives: "Gentlemen, I take it we are all in complete agreement on the decision here? Then I propose we postpone further discussion to give ourselves time to develop disagreement and perhaps gain some understanding of what the decision is about."
Red team ideas. Before implementing a major decision, assign a team specifically to argue against it--to find every flaw, every risk, and every alternative that the decision overlooks. The red team's job is not to make friends but to stress-test the decision.
Conduct pre-mortems. Gary Klein's pre-mortem technique asks: "Imagine that we have implemented this decision and it has failed catastrophically. What caused the failure?" This technique harnesses prospective hindsight constructively--treating the failure as certain--making it easier to identify risks that optimism bias would otherwise conceal.
Seek diverse perspectives. Homogeneous groups are more susceptible to groupthink, confirmation bias, and shared blind spots. Groups that include people with different backgrounds, different expertise, and different perspectives are more likely to identify risks and alternatives that a homogeneous group would miss.
Use explicit decision processes. Informal, intuitive decision-making is susceptible to all the biases described above. Structured decision processes--with explicit criteria, systematic evidence review, and formal consideration of alternatives--reduce the influence of cognitive biases by requiring systematic analysis rather than allowing intuitive judgment to dominate.
Learn from mistakes. Organizations that conduct honest post-mortems of failed decisions--without blame, focused on understanding what went wrong and how to prevent similar failures--develop institutional learning that improves future decision-making. Organizations that punish failure produce a culture where failures are concealed rather than examined, ensuring that the same mistakes are repeated.
The most important insight from the study of decision-making failures is that the quality of an organization's decisions is a function of its decision-making process, not of the intelligence of its decision-makers. Smart people with bad processes make bad decisions. Average people with good processes make better decisions. The investment in improving decision-making processes--building dissent into the culture, structuring evaluation procedures, diversifying perspectives, and learning from failures--produces returns that far exceed the investment.
References and Further Reading
Christensen, C.M. (1997). The Innovator's Dilemma: When New Technologies Cause Great Firms to Fail. Harvard Business Review Press. https://en.wikipedia.org/wiki/The_Innovator%27s_Dilemma
Janis, I.L. (1972). Victims of Groupthink: A Psychological Study of Foreign-Policy Decisions and Fiascoes. Houghton Mifflin. https://en.wikipedia.org/wiki/Groupthink
Vuori, T.O. & Huy, Q.N. (2016). "Distributed Attention and Shared Emotions in the Innovation Process: How Nokia Lost the Smartphone Battle." Administrative Science Quarterly, 61(1), 9-51. https://doi.org/10.1177/0001839215606951
Keating, G. (2012). Netflixed: The Epic Battle for America's Eyeballs. Portfolio/Penguin. https://www.penguinrandomhouse.com/books/212850/netflixed-by-gina-keating/
Carreyrou, J. (2018). Bad Blood: Secrets and Lies in a Silicon Valley Startup. Knopf. https://en.wikipedia.org/wiki/Bad_Blood_(book)
Oliver, T. (1986). The Real Coke, the Real Story. Random House. https://en.wikipedia.org/wiki/New_Coke
Stanovich, K.E. (2009). What Intelligence Tests Miss: The Psychology of Rational Thought. Yale University Press. https://yalebooks.yale.edu/book/9780300164626/what-intelligence-tests-miss/
Klein, G. (2007). "Performing a Project Premortem." Harvard Business Review. https://hbr.org/2007/09/performing-a-project-premortem
Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux. https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow
Munger, C.T. (1995). "The Psychology of Human Misjudgment." Speech at Harvard Law School. https://en.wikipedia.org/wiki/Charlie_Munger
Tetlock, P.E. (2005). Expert Political Judgment: How Good Is It? How Can We Know?. Princeton University Press. https://press.princeton.edu/books/paperback/9780691128719/expert-political-judgment
Lucas, H.C. & Goh, J.M. (2009). "Disruptive Technology: How Kodak Missed the Digital Photography Revolution." Journal of Strategic Information Systems, 18(1), 46-55. https://doi.org/10.1016/j.jsis.2009.01.002
Sull, D.N. (1999). "Why Good Companies Go Bad." Harvard Business Review. https://hbr.org/1999/07/why-good-companies-go-bad