How Organizations Make Decisions: The Real Mechanics of Organizational Choice, Why Decisions Take So Long, How Power and Politics Shape Outcomes, and What Separates Good Decision-Making Organizations from Bad Ones
On January 28, 1986, the Space Shuttle Challenger launched from Kennedy Space Center in Florida, broke apart 73 seconds after liftoff, and killed all seven crew members. The technical cause was the failure of an O-ring seal in the right solid rocket booster, which allowed hot combustion gases to escape the joint and burn through the external fuel tank. The temperature at launch was 36 degrees Fahrenheit--15 degrees colder than any previous shuttle launch.
The night before the launch, engineers at Morton Thiokol, the company that manufactured the solid rocket boosters, recommended against launching. They presented data showing that the O-ring seals performed poorly at low temperatures and argued that launching in the forecast cold conditions was too risky. Their recommendation was clear: do not launch.
What happened next was not a rational evaluation of the evidence. It was an organizational decision-making process in which formal authority, informal power dynamics, time pressure, institutional culture, and the framing of the decision interacted to produce a catastrophically wrong outcome.
NASA managers pushed back on the Thiokol engineers' recommendation, asking them to "reconsider." This request, in the context of the customer-contractor relationship between NASA and Thiokol, carried enormous implicit pressure. Thiokol management asked for a private caucus--a five-minute offline discussion that would determine whether the engineers' no-launch recommendation would stand. During this caucus, Jerald Mason, a Thiokol senior vice president, turned to Bob Lund, the vice president of engineering, and said: "Take off your engineering hat and put on your management hat." Lund reversed his position. The recommendation was changed from "do not launch" to "launch."
The Challenger disaster is the most studied organizational decision failure in history because it illustrates, with terrible clarity, every major pathology of organizational decision-making: hierarchy overriding expertise, pressure to conform overriding dissent, the framing of the decision determining the outcome, and the gap between the formal decision process (data-driven engineering analysis) and the actual decision process (power dynamics, customer pressure, career incentives).
How do organizations make decisions? Through formal processes, power dynamics, coalition building, and negotiation--rarely through a single rational actor making an optimal choice. Understanding how organizational decisions actually work--as opposed to how organization charts and decision-rights documents say they should work--is essential for anyone who wants to improve decision quality in their organization or navigate the decision-making processes that affect their work and career.
The Rational Model: How Organizations Are Supposed to Decide
The Textbook Version
Management textbooks present organizational decision-making as a rational process with clearly defined steps:
- Identify the problem or opportunity. Something is wrong that needs fixing, or something is possible that could be pursued.
- Gather relevant information. Collect data, consult experts, analyze the situation.
- Generate alternatives. Identify the possible courses of action.
- Evaluate alternatives. Assess each alternative against defined criteria (cost, risk, benefit, feasibility, strategic alignment).
- Select the best alternative. Choose the option that best satisfies the criteria.
- Implement the decision. Execute the chosen course of action.
- Evaluate the results. Assess whether the decision achieved its intended outcomes and learn from the experience.
This rational model is clean, logical, and wrong--not because organizations never follow these steps, but because the model omits the most powerful forces that actually shape organizational decisions: politics, power, culture, emotion, time pressure, information asymmetry, and the cognitive limitations of the human beings who make the decisions.
Herbert Simon, who won the Nobel Prize in Economics for his research on decision-making in organizations, introduced the concept of bounded rationality in the 1940s. Simon argued that organizational decision-makers cannot be fully rational because they operate under three fundamental constraints:
Limited information. Decision-makers never have complete information about the problem, the alternatives, or the consequences of each alternative. Information is expensive to acquire, takes time to process, and is often ambiguous or contradictory. The decision must be made with whatever information is available within the time and resource constraints, not with the complete information that rational analysis would require.
Limited cognitive capacity. The human brain can hold approximately four to seven items in working memory simultaneously. Complex organizational decisions involve dozens or hundreds of relevant variables, interdependencies, and trade-offs--far more than any individual can process simultaneously. Decision-makers cope with this limitation by simplifying: ignoring some variables, using heuristics (mental shortcuts), focusing on the most salient information, and accepting "good enough" solutions rather than searching for optimal ones.
Limited time. Organizational decisions must be made within deadlines imposed by markets, competitors, regulators, customers, and internal processes. The time available for decision-making is rarely sufficient for the thorough analysis that the rational model prescribes. Decision-makers must balance the desire for more information and analysis against the cost of delay.
Because of these constraints, Simon argued, organizational decision-makers do not optimize (find the best possible solution). They satisfice (find a solution that is good enough). The decision is made when the first acceptable alternative is identified--not when all alternatives have been systematically evaluated and the best one selected. Satisficing is not laziness or incompetence; it is a rational response to the reality of bounded rationality.
How Decisions Actually Get Made
The Garbage Can Model
In 1972, Michael Cohen, James March, and Johan Olsen published "A Garbage Can Model of Organizational Choice," one of the most influential (and initially controversial) papers in organizational theory. Their model described organizational decision-making as far messier than the rational model suggested.
In the garbage can model, organizations are characterized as organized anarchies where:
Preferences are unclear. People often do not know what they want until they see the options. Organizational goals are ambiguous, inconsistent, and constantly shifting. Different stakeholders want different things, and what any individual stakeholder wants may change from one meeting to the next.
Technology is unclear. People do not fully understand how their organization works. The relationship between actions and outcomes is poorly understood. Members learn by trial and error, not by systematic analysis.
Participation is fluid. The people involved in any given decision change depending on who is available, who is interested, and who has time. A decision that is made by one group of people on Tuesday might be made by a completely different group on Thursday.
In this environment, decision-making is not a sequential, rational process. Instead, it is the collision of four independent streams:
- Problems (things that need attention)
- Solutions (answers looking for questions--ideas that people are promoting regardless of what problems exist)
- Participants (people who happen to be available and interested)
- Choice opportunities (occasions when the organization is expected to make a decision, such as meetings, deadlines, and crises)
Decisions happen when these four streams happen to collide: when a problem, a solution, the right participants, and a choice opportunity come together at the same time. The decision that results may not be the rational response to the problem--it may be whatever solution happened to be available when the problem and the choice opportunity coincided with the right participants.
This model sounds absurd until you observe actual organizational decision-making. Meetings where the agenda item is one thing but the actual discussion and decision are about something completely different. Projects that get funded not because they are the highest priority but because a champion happened to be in the room when budget was being allocated. Problems that are "solved" not by the best solution but by whatever solution a particular advocate has been promoting for months.
The garbage can model does not describe all organizational decisions--many routine decisions follow predictable, rational processes. But it captures the messy reality of complex, non-routine decisions in organizations where goals are ambiguous, resources are contested, and participation is fluid.
The Political Model
How does power affect organizational decisions? Power shapes whose preferences matter, what information surfaces, which alternatives are considered, and how decisions are framed. In any organization with limited resources and multiple stakeholders with competing interests, decision-making is inherently political.
The political model of organizational decision-making, developed by scholars including Graham Allison, Jeffrey Pfeffer, and Henry Mintzberg, views decisions not as the output of rational analysis but as the outcome of bargaining among competing interests. The decision that emerges is not the objectively best choice but the choice that the most powerful coalition of stakeholders supports.
Sources of power in organizational decisions:
Formal authority. The organizational hierarchy defines who has the formal right to make which decisions. The CEO can authorize expenditures that a manager cannot. The board of directors can approve strategies that the CEO cannot. Formal authority is the most visible source of power, but it is often not the most influential.
Control of information. The person who controls what information reaches the decision-maker wields enormous power. A chief of staff who decides which reports the CEO sees, which meeting requests are accepted, and which concerns are surfaced shapes the CEO's decisions without making any decisions themselves. Middle managers who filter information as it flows up the hierarchy systematically shape the information environment in which senior leaders decide.
Expert power. People with specialized knowledge that others lack can influence decisions by interpreting information, defining what is technically feasible, and framing the technical constraints within which the decision must be made. When the Thiokol engineers said "the O-rings won't seal at this temperature," they were exercising expert power--their technical knowledge defined the parameters of the decision. When their management overrode that expertise, it was a collision between expert power and hierarchical power.
Network power. People with extensive networks of relationships across the organization can build coalitions, gather intelligence, and mobilize support for their preferred outcomes. The person who knows everyone and is owed favors by many has power that does not appear on any organization chart.
Agenda-setting power. Perhaps the most subtle and powerful form of organizational power is the ability to determine what gets decided. The person who sets the agenda for a meeting determines which decisions are made and which are deferred. The person who frames a problem determines which solutions are considered. A question framed as "Should we enter the Chinese market?" produces different alternatives than the same question framed as "How should we grow revenue by 20 percent?"
Political scientists Peter Bachrach and Morton Baratz identified what they called the "second face of power": the ability to prevent issues from reaching the decision agenda at all. An executive who ensures that a particular strategic option is never discussed in a board meeting has exercised power more effectively than one who argues against the option after it is proposed--because the option was defeated without ever being visible.
Why Are Organizational Decisions Slow?
Why are organizational decisions slow? Multiple stakeholders must be consulted, coordination across units is required, political dynamics must be navigated, risk aversion creates caution, and affected parties must buy in to ensure implementation. Each of these factors adds time to the decision process.
The Stakeholder Problem
Most organizational decisions affect multiple stakeholders with different interests, different information, and different preferences. A product decision affects engineering (who must build it), sales (who must sell it), finance (who must fund it), customer support (who must support it), legal (who must ensure compliance), and marketing (who must position it). Each stakeholder group has legitimate concerns that should be heard, and hearing all of them takes time.
The number of communication pathways increases quadratically with the number of stakeholders: n stakeholders have n(n-1)/2 pairwise pathways. Two stakeholders have one communication pathway. Five stakeholders have ten pathways. Ten stakeholders have forty-five pathways. As the number of people involved in a decision increases, the communication and coordination overhead increases much faster than the headcount--which is why adding more people to a decision process often makes it slower rather than faster.
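The quadratic growth of pairwise pathways can be made concrete with a few lines of arithmetic (the formula n(n-1)/2 counts each unordered pair of stakeholders once):

```python
def communication_pathways(n: int) -> int:
    """Number of pairwise communication pathways among n stakeholders: n(n-1)/2."""
    return n * (n - 1) // 2

# Doubling the group from 5 to 10 people more than quadruples the pathways.
for n in (2, 5, 10, 20):
    print(f"{n} stakeholders -> {communication_pathways(n)} pathways")
```

The coordination overhead a decision group carries is proportional to these pathways, not to its headcount, which is the quantitative core of the stakeholder problem described above.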
Brooks's Law--Fred Brooks's observation that "adding manpower to a late software project makes it later"--applies to decision-making as well: adding more stakeholders to a slow decision process makes it slower, not faster, because the additional communication and coordination overhead exceeds the additional analytical capacity.
The Risk Aversion Problem
Organizations are generally more risk-averse than individuals because the consequences of bad decisions in organizations are borne not just by the decision-maker but by many people who had no role in the decision. A CEO who makes a bad acquisition costs thousands of employees their jobs. A doctor who prescribes the wrong medication harms a patient who trusted the healthcare system. The asymmetry between the personal reward for a successful risky decision (credit and promotion for the decision-maker) and the distributed harm of a failed risky decision (layoffs, losses, and damage for many people) creates institutional caution.
This risk aversion manifests as a preference for decisions by committee rather than individual authority, extended analysis rather than rapid action, and incremental change rather than bold transformation. Each of these tendencies adds time and reduces the probability of extreme outcomes--both extremely good and extremely bad.
The Buy-In Problem
Organizational decisions that lack buy-in from the people who must implement them are frequently sabotaged--not necessarily through deliberate obstruction, but through passive non-compliance, half-hearted execution, and failure to apply the discretionary effort that turns a decision into a result.
Building buy-in takes time. It requires explaining the rationale for the decision, listening to concerns, addressing objections, and sometimes modifying the decision to incorporate stakeholder input. Decision-makers who skip this process in the interest of speed often find that the time they saved on the decision is lost many times over during implementation, as the organization drags its feet on a decision it does not support.
The Japanese concept of nemawashi--the practice of building consensus through informal, one-on-one discussions before a formal decision meeting--explicitly trades decision speed for implementation speed. The formal meeting at which the decision is announced is often a formality; the actual decision was made gradually through dozens of individual conversations over the preceding weeks. This process is slow at the decision stage but fast at the implementation stage because everyone who needs to act has already been consulted, their concerns addressed, and their support secured.
What's the Difference Between Formal and Actual Decision-Making?
What's the difference between formal and actual decision-making? The formal process is the official, documented way decisions are supposed to be made: the decision rights matrix, the approval workflows, the governance committees. The actual process involves politics, informal influence, pre-decisions, and coalitions. The two often diverge significantly.
Pre-Decisions: Where Real Decisions Happen
In many organizations, the formal decision meeting is a ratification ceremony for a decision that was effectively made earlier, informally, by a smaller group. The CEO discusses a strategic option with two or three trusted advisors over lunch. By the end of the conversation, the decision has been made. The subsequent executive committee meeting, with its structured agenda, presentation, and discussion, is an exercise in collective validation--confirming a decision that the most powerful participants have already reached.
Pre-decisions are not inherently bad. They can accelerate decision-making by leveraging the expertise and judgment of key individuals. But they become pathological when they exclude relevant perspectives, when they present a false appearance of inclusive deliberation (decisions that appear to be made by the group were actually made by a subset), or when they produce bad decisions because the pre-decision group lacked information or perspective that a broader process would have provided.
Informal Influence Networks
Organization charts depict formal reporting relationships. They do not depict the informal influence networks that often determine how decisions actually get made. The executive assistant who controls the CEO's schedule. The respected senior engineer whose technical opinion carries more weight than their title suggests. The long-tenured director who has survived three reorganizations and knows where the institutional bodies are buried.
These informal influence networks can be more powerful than formal authority. A proposed change that the formal decision-maker supports but the informal influence network opposes will often fail during implementation, as the network subtly undermines the change through non-compliance, passive resistance, and re-interpretation of directives.
Understanding the informal influence network is essential for anyone who wants to get a decision made in an organization. The formal question is "who has the authority to approve this?" The practical question is "whose support do I need for this to actually happen?" The answers are often very different.
What Is Groupthink?
What is groupthink? Groupthink occurs when the desire for consensus suppresses dissent and alternative views, leading to poor decisions because harmony is valued over critical evaluation. The term was coined by psychologist Irving Janis in 1972, based on his analysis of several catastrophic decisions in American foreign policy, including the Bay of Pigs invasion, the escalation of the Vietnam War, and the failure to anticipate the attack on Pearl Harbor.
How Groupthink Develops
Groupthink develops when a cohesive group with a strong leader faces a decision under pressure. The group's desire to maintain cohesion and avoid conflict produces several observable symptoms:
Illusion of invulnerability. The group develops an overconfident belief that its decisions will succeed. Past successes create a sense of momentum and rightness that discourages critical examination of the current decision. NASA's culture before the Challenger disaster exhibited this symptom: a long string of successful launches created institutional confidence that the shuttle system was fundamentally safe, despite accumulating evidence of O-ring problems.
Collective rationalization. Group members discount warnings and negative feedback that might challenge the group's assumptions. Information that contradicts the preferred course of action is reinterpreted, minimized, or ignored. During the lead-up to the 2003 invasion of Iraq, intelligence that contradicted the assumption that Iraq possessed weapons of mass destruction was systematically discounted by senior officials who had already committed to the invasion.
Self-censorship. Individual members who have doubts suppress them rather than expressing them to the group. They assume that their doubts are unique and that expressing them would be disruptive, disloyal, or unwelcome. The result is that the group appears to be more united than it actually is, because the doubters are silent.
Illusion of unanimity. Because dissenters self-censor, the group perceives unanimous agreement where none actually exists. The leader interprets silence as consent. Members who interpret silence the same way feel even more pressure to suppress their own doubts, creating a self-reinforcing cycle of false consensus.
Direct pressure on dissenters. When someone does express a dissenting view, other group members apply pressure--through ridicule, social exclusion, or explicit criticism--to bring the dissenter back into line. At Morton Thiokol the night before the Challenger launch, the instruction to "take off your engineering hat and put on your management hat" was a direct application of pressure on a dissenter to conform.
Mindguards. Some group members take it upon themselves to protect the group from information that might challenge the consensus. They filter out negative data, discourage outsiders from presenting contradicting viewpoints, and manage the information flow to maintain the group's confidence in its chosen course.
Preventing Groupthink
Janis recommended several practices to prevent groupthink:
Assign a devil's advocate. Explicitly assign one group member the role of challenging the group's assumptions and preferred conclusions. Rotate the role so that the same person does not always play the critic, which would allow the group to dismiss them as "the person who always disagrees."
Encourage independent analysis. Before the group discusses a decision, have each member independently analyze the options and formulate their own recommendation. This prevents anchoring on the first opinion expressed and ensures that diverse perspectives are generated before group discussion compresses them.
Invite outside experts. Bring in people who are not part of the group's social cohesion--external advisors, experts from other departments, consultants--to provide perspectives that the group's internal dynamics might suppress.
The leader speaks last. When the group leader expresses an opinion early in the discussion, other members anchor on that opinion and are reluctant to disagree. If the leader reserves their opinion until all other members have spoken, the discussion produces more diverse input.
Alfred Sloan, the legendary CEO of General Motors, once concluded a meeting of his senior executives by saying: "Gentlemen, I take it we are all in complete agreement on the decision here." Everyone nodded. "Then I propose we postpone further discussion of this matter until our next meeting to give ourselves time to develop disagreement and perhaps gain some understanding of what the decision is all about." Sloan understood that instant unanimity in a group of intelligent people facing a complex decision is not a sign of good decision-making--it is a sign that groupthink has suppressed the critical analysis that complex decisions require.
Why Do Organizations Make Bad Decisions?
Why do organizations make bad decisions? The causes are systematic and predictable:
Information Filtering
As information travels up the organizational hierarchy, it is filtered at each level. Bad news is softened. Complexity is simplified. Uncertainty is reduced. Ambiguity is resolved (often in the direction that confirms what the audience wants to hear). By the time information reaches the senior decision-maker, it may bear little resemblance to the original data.
This filtering is not usually deliberate deception. It is the natural result of each level trying to present clear, actionable summaries to the level above. A front-line manager who reports "there are some concerns about the project timeline, but the team is working hard to address them" is filtering the engineer's blunt assessment that "we're going to miss the deadline by three months because the requirements keep changing." The filtering serves a social function--maintaining positive relationships and organizational harmony--but it degrades the information quality on which decisions depend.
Incentive Misalignment
When the people making a decision face different incentives than the people affected by the decision, bad decisions are predictable. An executive whose bonus depends on quarterly earnings has different incentives than employees whose job security depends on long-term company health. A doctor who earns more for performing procedures has different incentives than a patient who benefits only from necessary procedures. A politician who faces re-election in two years has different incentives than citizens who bear the consequences of policies for decades.
The principal-agent problem--the divergence between the interests of decision-makers (agents) and the interests of those affected by decisions (principals)--is one of the most well-studied phenomena in organizational economics. The solution is aligning incentives, but perfect alignment is difficult because the information asymmetry between agents and principals makes monitoring costly and incomplete.
Disconnect Between Deciders and Consequences
Organizational hierarchies often place decision authority far from the consequences of decisions. A corporate headquarters that decides to close a factory does not experience the unemployment, community disruption, and personal devastation that the closure produces. A military commander who orders an attack from hundreds of miles away does not face the bullets. A healthcare administrator who decides to reduce nursing staff does not provide patient care.
This disconnect is not inherently bad--sometimes decisions must be made at a level that has the perspective, authority, and information to make them well. But when deciders are insulated from consequences, they may underweight costs that they do not personally bear, leading to decisions that are rational from the decider's perspective but harmful from the perspective of those affected.
How Can Organizations Improve Decision-Making?
How can organizations improve decision-making? Through explicit, practical changes to decision structures, processes, and culture:
Clarify Decision Rights
The single most impactful improvement is making explicit who has the authority to make which decisions. Frameworks such as RACI (Responsible, Accountable, Consulted, Informed) and Amazon's "single-threaded owner" model clarify decision rights so that decisions are not delayed by ambiguity about who is supposed to decide, are not undermined by people who believe they should have been consulted, and are not relitigated by people who disagree with the outcome.
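A minimal sketch of what a recorded RACI assignment might look like, with hypothetical roles and decision names chosen purely for illustration. The validation check encodes the framework's central rule: exactly one accountable owner per decision.

```python
# Hypothetical RACI record for a single decision. All names are illustrative.
raci = {
    "decision": "Select Q3 pricing model",
    "responsible": ["Pricing analyst"],        # does the analysis and work
    "accountable": ["VP of Product"],          # owns the call; must be exactly one
    "consulted": ["Sales lead", "Finance"],    # two-way input before the decision
    "informed": ["Customer support"],          # told after the decision is made
}

def raci_problems(record: dict) -> list[str]:
    """Flag the most common RACI mistakes for one decision record."""
    problems = []
    accountable = record.get("accountable", [])
    if len(accountable) == 0:
        problems.append("no accountable owner: the decision will stall")
    elif len(accountable) > 1:
        problems.append("multiple accountable owners: accountability diffuses")
    if not record.get("responsible"):
        problems.append("no one responsible for doing the work")
    return problems
```

Writing assignments down in even this crude a form surfaces the two failure modes the text describes: decisions with no owner, and decisions with so many owners that no one is accountable.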
Encourage Dissent
Create structural mechanisms that make dissent safe and expected. Red teams that argue against proposed strategies. Pre-mortems that imagine failure. Devil's advocate roles in decision meetings. Anonymous feedback channels. The leader-speaks-last norm. All of these mechanisms counteract the natural tendency of organizational groups to converge on the leader's preference and suppress disagreement.
Separate Fact-Finding from Advocacy
When the same person or group is responsible for both analyzing a situation and recommending a course of action, the analysis is inevitably shaped by the advocacy. The person who believes the company should acquire a competitor will unconsciously (or consciously) present information that supports the acquisition and minimize information that contradicts it.
Separating these functions--having one group analyze and present the facts, and a different group advocate for specific courses of action--reduces the bias that advocacy introduces into analysis.
Align Incentives with Decision Quality
Reward people for good decision processes, not just good outcomes. A decision that follows a sound process but produces a bad outcome (because of unforeseeable circumstances) should be treated differently than a decision that follows a poor process but produces a good outcome (because of luck). If organizations punish bad outcomes regardless of process quality, they incentivize risk avoidance rather than good decision-making. If they reward good outcomes regardless of process quality, they incentivize recklessness and luck-seeking.
Annie Duke, in Thinking in Bets, argues that organizations should evaluate decisions based on the quality of the process at the time the decision was made, with the information available at that time--not based on the outcome, which includes factors beyond the decision-maker's control. This "resulting"--judging decision quality by outcome quality--is one of the most pervasive and destructive evaluation errors in organizations.
Make the Decision Process Explicit
Many organizational decisions fail not because the wrong choice was made but because the decision process was never defined. Who is deciding? What information is needed? Who needs to be consulted? What criteria will be used? What is the deadline? When these process questions are answered explicitly before the decision process begins, decisions are faster, more inclusive, and more defensible.
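The process questions above can be captured in a simple decision brief filled out before deliberation begins. This is a sketch under assumptions, not a standard artifact; the field names and the example decision are hypothetical.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DecisionBrief:
    """Answers the process questions before the decision process starts."""
    question: str                # what is being decided
    decider: str                 # who makes the call
    consulted: list[str]         # whose input is required before deciding
    information_needed: list[str]
    criteria: list[str]          # how alternatives will be judged
    deadline: date               # when the decision must be made

# Hypothetical example: a platform team deciding whether to retire an old API.
brief = DecisionBrief(
    question="Should we sunset the legacy reporting API?",
    decider="Head of Platform",
    consulted=["Support lead", "Top-customer account managers"],
    information_needed=["usage by customer", "migration effort estimate"],
    criteria=["migration cost", "support burden", "customer impact"],
    deadline=date(2026, 3, 1),
)
```

The value is not in the data structure but in the forcing function: each field must be filled in explicitly, so ambiguity about who decides, by when, and against what criteria surfaces before the decision rather than during it.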
What's the Difference Between Consensus and Unanimity?
What's the difference between consensus and unanimity? Consensus means everyone can live with the decision, even if it is not their first choice. Unanimity means everyone actively agrees with the decision. Seeking unanimity often prevents decisions entirely because finding a course of action that every stakeholder enthusiastically endorses is frequently impossible.
Effective organizations pursue consent rather than consensus: the question is not "does everyone agree?" but "can everyone live with this?" When a team member says "I have reservations, but I will support the decision and give it my full effort," that is consent--and it is sufficient for organizational action. When a team insists on full agreement before acting, a single holdout can block any decision, producing the organizational paralysis that many consensus-seeking organizations experience.
Jeff Bezos's "disagree and commit" principle captures this distinction: once a decision has been debated and made, those who disagree commit to executing it fully. The commitment is to the team and the process, not to the specific decision. This allows the organization to act decisively while preserving the right of individuals to express disagreement and to advocate for reconsideration if new evidence emerges.
The quality of organizational decision-making is not determined by the intelligence of the individuals involved. It is determined by the systems, structures, processes, and culture within which those individuals operate. Smart people in bad decision-making systems produce bad decisions. Average people in good decision-making systems produce good decisions. Improving organizational decisions is not primarily a matter of hiring smarter people--it is a matter of building better decision-making systems and maintaining the cultural conditions that allow those systems to function honestly.
References and Further Reading
Simon, H.A. (1947). Administrative Behavior: A Study of Decision-Making Processes in Administrative Organizations. Macmillan. https://en.wikipedia.org/wiki/Administrative_Behavior
Cohen, M.D., March, J.G. & Olsen, J.P. (1972). "A Garbage Can Model of Organizational Choice." Administrative Science Quarterly, 17(1), 1-25. https://doi.org/10.2307/2392088
Janis, I.L. (1972). Victims of Groupthink: A Psychological Study of Foreign-Policy Decisions and Fiascoes. Houghton Mifflin. https://en.wikipedia.org/wiki/Groupthink
Allison, G.T. (1971). Essence of Decision: Explaining the Cuban Missile Crisis. Little, Brown. https://en.wikipedia.org/wiki/Essence_of_Decision
Pfeffer, J. (1992). Managing with Power: Politics and Influence in Organizations. Harvard Business School Press. https://www.gsb.stanford.edu/faculty-research/books/managing-power-politics-influence-organizations
Vaughan, D. (1996). The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA. University of Chicago Press. https://press.uchicago.edu/ucp/books/book/chicago/C/bo22781921.html
March, J.G. (1994). A Primer on Decision Making: How Decisions Happen. Free Press. https://en.wikipedia.org/wiki/James_G._March
Kahneman, D., Lovallo, D. & Sibony, O. (2011). "Before You Make That Big Decision." Harvard Business Review. https://hbr.org/2011/06/the-big-idea-before-you-make-that-big-decision
Duke, A. (2018). Thinking in Bets: Making Smarter Decisions When You Don't Have All the Facts. Portfolio. https://www.annieduke.com/books/
Mintzberg, H., Raisinghani, D. & Theoret, A. (1976). "The Structure of 'Unstructured' Decision Processes." Administrative Science Quarterly, 21(2), 246-275. https://doi.org/10.2307/2392045
Bachrach, P. & Baratz, M.S. (1962). "Two Faces of Power." American Political Science Review, 56(4), 947-952. https://doi.org/10.2307/1952796
Edmondson, A.C. (2018). The Fearless Organization. Wiley. https://fearlessorganization.com/
Tetlock, P.E. & Gardner, D. (2015). Superforecasting: The Art and Science of Prediction. Crown. https://en.wikipedia.org/wiki/Superforecasting
Cyert, R.M. & March, J.G. (1963). A Behavioral Theory of the Firm. Prentice Hall. https://en.wikipedia.org/wiki/A_Behavioral_Theory_of_the_Firm
Rogers Commission. (1986). Report of the Presidential Commission on the Space Shuttle Challenger Accident. https://history.nasa.gov/rogersrep/genindex.htm