Decision Support System Ideas
In 1988, the U.S. Navy cruiser USS Vincennes shot down Iran Air Flight 655, killing all 290 people on board. The crew had mistaken a civilian Airbus A300 for a hostile military aircraft. Post-incident analyses pointed to a phenomenon called "scenario fulfillment" -- under stress, the crew sought information that confirmed their existing belief rather than information that might challenge it. The decision support systems aboard the Vincennes provided overwhelming amounts of data but lacked the structured frameworks to prevent cognitive bias from dominating life-or-death conclusions.
The incident is studied not because military decision-making is representative of everyday organizational decisions, but because it illustrates with unusual clarity a failure mode that appears in far less consequential contexts: the decision-making process that felt systematic and defensible in the moment was actually driven by the decision-makers' prior beliefs, with the data serving as confirmation rather than genuine evidence.
Decision support systems -- whether formal tools, structured processes, or personal practices -- exist to counteract this tendency. They impose structure on the decision-making process in ways that make bias visible, ensure relevant information is gathered and weighed, and create accountability for the decision's eventual outcome.
Understanding How Decisions Go Wrong
Before examining specific decision support approaches, it helps to understand the cognitive landscape within which decisions occur. Daniel Kahneman's research on cognitive biases, summarized in "Thinking, Fast and Slow," identifies dozens of systematic errors in human judgment. Several are particularly relevant to organizational decision-making.
Confirmation bias: The tendency to search for, interpret, and recall information in a way that confirms prior beliefs. Decision-makers with a strong view on the right course of action unconsciously weight supporting evidence more heavily and discount contradicting evidence. The USS Vincennes crew exhibited this: once they believed the aircraft was hostile, every ambiguous signal was interpreted as threatening.
Availability heuristic: Judging the likelihood of events by how easily examples come to mind. Decisions made after a dramatic success or failure are heavily influenced by that recent experience, even when it is not statistically representative.
Anchoring: The tendency to rely too heavily on the first piece of information encountered. In salary negotiation, price setting, and project estimation, the initial anchor number has disproportionate influence on the eventual outcome.
Sunk cost fallacy: Continuing a course of action because of past investment rather than future value. Organizations continue failing projects because they have already spent resources, not because continued investment is expected to produce positive returns.
Overconfidence: Most people rate their performance and judgment above the median, which is statistically impossible for a majority. In decision-making, overconfidence leads to insufficient scenario planning, underestimation of uncertainty, and excessive willingness to make irreversible decisions.
Group dynamics effects: In organizational settings, group decisions are additionally distorted by groupthink (conforming to perceived group consensus), authority bias (deferring to the most senior person present), and shared information bias (discussing information that all members already know rather than information that only some members have).
Personal Decision Support Tools
The Decision Journal
A decision journal is a systematic record of important decisions: the context in which the decision was made, the options considered, the information available, the reasoning applied, and the predicted outcome. The journal's value comes not from the documentation itself but from the retrospective review that reveals whether the reasoning process was sound -- independent of whether the outcome was good.
Why outcome review is insufficient: Outcome review without process review creates attribution errors. A good decision with a bad outcome (a strong poker hand played well that still loses) looks like a bad decision. A bad decision with a good outcome (a reckless blind bet that wins by luck) looks like a good decision. Learning from outcomes without examining the underlying reasoning process leads to reinforcing lucky decisions and abandoning sound but unlucky ones.
What a decision journal entry should contain:
- Date and decision being made
- The alternatives considered
- The information available at the time
- The reasoning that led to the chosen option
- What the decision-maker expects to happen and on what timeline
- The level of confidence in the decision (0-100%)
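The journal entry fields above can be captured as a structured record so entries stay consistent and are easy to review programmatically. This is a minimal sketch; the field names and the example values are illustrative, not a standard schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DecisionEntry:
    decided_on: date          # date the decision was made
    decision: str             # what is being decided
    alternatives: list[str]   # options seriously considered
    information: str          # what was known at the time
    reasoning: str            # why the chosen option won
    expected_outcome: str     # what the decision-maker expects to happen
    review_after_months: int  # timeline for checking the prediction
    confidence: int           # confidence in the decision, 0-100%

# Hypothetical example entry
entry = DecisionEntry(
    decided_on=date(2024, 3, 1),
    decision="Migrate billing to the new vendor",
    alternatives=["Stay with current vendor", "Build in-house"],
    information="Current vendor raising prices 30%; new vendor passed security review",
    reasoning="Lower 3-year cost; migration risk judged acceptable",
    expected_outcome="Migration complete within 2 quarters",
    review_after_months=6,
    confidence=70,
)
```

Recording the confidence as a number (rather than "pretty sure") is what makes the calibration review described later in this document possible.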
Review practice: Periodically reviewing past journal entries (quarterly or annually) reveals patterns -- recurring biases, systematic over- or under-confidence, common reasoning errors -- that can be deliberately addressed. Studies of professional athletes, gamblers, and investors who use similar journaling practices show improved calibration over time.
Example: Annie Duke, a professional poker player turned decision scientist, describes using decision journaling practices throughout her poker career to separate decision quality from outcome quality. In her book "Thinking in Bets," she describes how systematic review of past decisions (both good and bad outcomes) produced insights about her own cognitive patterns that intuitive review could not have revealed.
The 10-10-10 Framework
Suzy Welch's 10-10-10 framework asks three questions about any significant decision:
- How will I feel about this decision in 10 minutes?
- How will I feel about this decision in 10 months?
- How will I feel about this decision in 10 years?
The framework is not prescriptive -- it does not dictate what the answers should be. It functions as a perspective expansion tool: forcing consideration of both immediate emotional reactions and long-term consequences, which are systematically underweighted in the heat of difficult decisions.
Applications: The 10-10-10 framework is most useful for emotionally charged decisions where immediate feelings are likely to dominate -- difficult personnel decisions, responses to criticism or conflict, and decisions involving personal relationships. It is less useful for purely analytical decisions with clear data.
The Pre-Mortem Analysis
The pre-mortem, developed by Gary Klein and popularized by Daniel Kahneman, is a structured imagination exercise conducted before a decision or project launch. Participants are asked to imagine that one year has passed since the project launched and that it has failed spectacularly. They then work backward to identify what caused the failure.
Why pre-mortems outperform standard risk analysis: Standard risk analysis asks "what could go wrong?" in an optimistic context where everyone is motivated to support the plan. The pre-mortem shifts the frame to "assume it went wrong -- explain why." This reframe makes it socially acceptable to voice concerns and identify risks that group dynamics would otherwise suppress.
Pre-mortem mechanics:
- Define the decision or plan clearly
- Ask participants to silently generate failure scenarios for 5-10 minutes, writing them down
- Have each participant share one failure scenario, going around the room until all scenarios have been named
- Discuss the most important failure scenarios and what could be done to prevent them
- Revise the plan or decision based on the most actionable identified risks
Example: Pixar's production process includes a version of pre-mortem thinking in their "Braintrust" meetings -- regular reviews where the creative team examines what is not working in a film. The meetings are specifically designed to surface problems early, when they are less expensive to fix. Director notes from these meetings are not orders but rather diagnosis of problems, with the creative team determining how to address them. This structure prevents the groupthink that tends to occur in hierarchical creative feedback.
Criteria-Based Decision Making
Criteria-based decision making structures the decision by explicitly defining the criteria that matter, weighting them by importance, and scoring each option against each criterion before calculating a composite score.
The process:
- Define the decision to be made
- List all relevant criteria (price, quality, speed, risk, strategic alignment)
- Weight the criteria by importance (for example, distribute 100 points across the criteria, or give each criterion a 1-10 weight and normalize so the weights sum to a fixed total)
- Generate all serious options
- Score each option against each criterion (1-10)
- Calculate weighted composite scores for each option
- Use the scores as input to the decision (not necessarily as the final decision, but as a structured foundation for discussion)
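The scoring steps above reduce to a small weighted-average computation. The sketch below assumes hypothetical criteria, weights, and vendor scores purely for illustration.

```python
def composite_scores(weights: dict[str, float],
                     options: dict[str, dict[str, float]]) -> dict[str, float]:
    """Return the weighted composite score for each option.

    weights: criterion -> importance weight
    options: option name -> {criterion -> score (1-10)}
    """
    total = sum(weights.values())
    return {
        name: sum(weights[c] * scores[c] for c in weights) / total
        for name, scores in options.items()
    }

# Illustrative weights summing to a fixed total of 10
weights = {"price": 5, "quality": 3, "speed": 2}
options = {
    "Vendor A": {"price": 8, "quality": 6, "speed": 5},
    "Vendor B": {"price": 5, "quality": 9, "speed": 7},
}
scores = composite_scores(weights, options)
# Vendor A: (5*8 + 3*6 + 2*5)/10 = 6.8
# Vendor B: (5*5 + 3*9 + 2*7)/10 = 6.6
```

Note how sensitive the result is to the weights: moving two points of weight from price to quality flips the ranking, which is exactly the false-precision risk discussed next.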
The limitations of criteria matrices: Criteria matrices can produce false precision -- the appearance of analytical rigor while concealing subjective weighting decisions. The weighting step is where most of the judgment resides, and different weighting assumptions can produce dramatically different scores for the same options. The matrix is most useful as a tool for making the reasoning explicit and debatable, not as a mechanical decision generator.
Organizational Decision Support Systems
The RAPID Framework
Bain & Company's RAPID framework is one of the most widely deployed organizational decision frameworks, designed to clarify who plays which role in any significant decision.
RAPID roles:
- Recommend: Proposes the decision and is responsible for gathering input
- Agree: Has veto power over the recommendation (reserved for specific types of decisions)
- Perform: Executes the decision once made
- Input: Provides information and perspective, but does not have veto power
- Decide: Makes the final decision
The framework's key insight is that "agree" (veto power) should be reserved for decisions that genuinely require consensus -- not used as a default that requires everyone to sign off on everything. Most organizational decisions that become slow and contested are slow because the "agree" role has been assigned too broadly.
Applying RAPID in practice: Organizations using RAPID typically build a decision register -- a document or database of recurring decision types (budget approvals at different thresholds, vendor selections, hiring decisions, product feature decisions) with the RAPID roles pre-assigned. This pre-assignment eliminates the ambiguity that causes decisions to stall.
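A decision register like the one described above can be sketched as a lookup table mapping recurring decision types to pre-assigned RAPID roles. The decision types, dollar thresholds, and role assignments here are hypothetical examples, not Bain's prescribed defaults.

```python
# Hypothetical decision register with RAPID roles pre-assigned per decision type
RAPID_REGISTER = {
    "vendor selection (<$50k)": {
        "recommend": "procurement lead",
        "agree": [],                     # no veto needed at this threshold
        "perform": "procurement team",
        "input": ["engineering", "finance"],
        "decide": "department head",
    },
    "vendor selection (>$50k)": {
        "recommend": "procurement lead",
        "agree": ["legal", "security"],  # veto reserved for genuine compliance stakes
        "perform": "procurement team",
        "input": ["engineering", "finance"],
        "decide": "VP of operations",
    },
}

def who_decides(decision_type: str) -> str:
    """Look up the single accountable decider for a recurring decision type."""
    return RAPID_REGISTER[decision_type]["decide"]
```

The empty "agree" list for the lower threshold reflects the framework's key insight: veto power is assigned deliberately and sparingly, not by default.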
Example: Amazon's use of a similar framework in their "one-way door vs. two-way door" distinction reflects the same underlying principle. Two-way door decisions (reversible, lower-stakes) can be made quickly by whoever is closest to the information. One-way door decisions (irreversible, higher-stakes) require appropriate deliberation with senior involvement. Applying the same deliberation to both types wastes decision-making resources.
Reversibility-Based Decision Routing
Not all decisions are equally consequential, and treating them equally -- with the same deliberation, stakeholder involvement, and documentation -- creates unnecessary overhead for low-stakes decisions while providing insufficient scrutiny for high-stakes ones.
The reversibility spectrum:
- Easily reversible decisions: Can be undone or adjusted quickly and cheaply. Decide fast, delegate broadly.
- Somewhat reversible decisions: Can be changed, but with some cost or delay. Appropriate deliberation proportional to stakes.
- Largely irreversible decisions: Difficult or impossible to undo. Require thorough analysis, appropriate stakeholder involvement, and documentation.
The discipline is routing decisions to the appropriate level of deliberation based on reversibility, rather than applying uniform process to all decisions.
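One way to make this routing concrete is to classify decisions by the estimated cost of undoing them relative to the cost of making them. The thresholds and process descriptions below are illustrative assumptions, not established values.

```python
def classify_reversibility(reversal_cost: float, decision_cost: float) -> str:
    """Classify a decision by how expensive it would be to undo,
    relative to the cost of making it in the first place."""
    if reversal_cost <= 0.1 * decision_cost:
        return "easily_reversible"
    if reversal_cost <= decision_cost:
        return "somewhat_reversible"
    return "largely_irreversible"

# Illustrative mapping from reversibility class to decision process
PROCESS = {
    "easily_reversible": "decide fast; delegate to whoever is closest to the information",
    "somewhat_reversible": "short written proposal; deliberation proportional to stakes",
    "largely_irreversible": "thorough analysis, stakeholder involvement, documentation",
}

def route_decision(reversal_cost: float, decision_cost: float) -> str:
    """Route a decision to the appropriate level of deliberation."""
    return PROCESS[classify_reversibility(reversal_cost, decision_cost)]
```

In practice the cost estimates are rough, but even a rough classification prevents the uniform-process failure mode the paragraph above describes.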
Digital Decision Support Tools
Decision Management Software
Enterprise decision management software -- used primarily in financial services, insurance, and healthcare -- encodes business rules and decision logic into systems that make or recommend decisions automatically based on defined criteria.
Use cases:
- Loan approval systems that score applications against credit criteria
- Insurance underwriting systems that price policies based on risk factors
- Fraud detection systems that flag transactions matching suspicious patterns
- Medical diagnostic support systems that suggest diagnoses based on symptom and test data
These systems do not replace human judgment in complex situations; they are tools for systematically applying well-defined rules at a scale and speed that humans cannot match.
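The loan-approval use case above illustrates the pattern: explicit rules applied consistently, with ambiguous cases routed to a human. The sketch below is hypothetical; all thresholds and field names are invented for illustration and do not reflect any real lender's criteria.

```python
def screen_application(app: dict) -> tuple[str, list[str]]:
    """Pre-screen a loan application against explicit rules.

    Returns (decision, reasons). Applications that fail any automated
    rule are referred to a human underwriter, not auto-rejected.
    """
    reasons = []
    if app["credit_score"] < 620:
        reasons.append("credit score below minimum")
    if app["debt_to_income"] > 0.43:
        reasons.append("debt-to-income ratio too high")
    if app["loan_amount"] > 5 * app["annual_income"]:
        reasons.append("loan amount exceeds 5x annual income")
    if reasons:
        return ("refer_to_underwriter", reasons)  # humans handle edge cases
    return ("approve", ["all automated criteria met"])

decision, reasons = screen_application({
    "credit_score": 700,
    "debt_to_income": 0.30,
    "loan_amount": 200_000,
    "annual_income": 80_000,
})
```

The key design choice is that the rules produce reasons, not just verdicts: the same transparency that makes the system auditable also makes it debuggable.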
Analytics and Data Visualization for Decision Support
Dashboards, analytics platforms, and visualization tools support decisions by making relevant data accessible and comprehensible to decision-makers who would otherwise lack visibility into the state of the system they are deciding about.
The key design principle: Decision support dashboards should surface the specific metrics relevant to the specific decisions that users need to make, not all available data. A sales dashboard for a sales manager deciding which accounts to prioritize should show pipeline stage, deal size, close probability, and last activity date -- not every piece of data in the CRM.
Common failure modes: Dashboards that show too much data overwhelm decision-makers. Dashboards with insufficient real-time data lead to decisions based on stale information. Dashboards that measure outputs rather than leading indicators arrive too late to support the decisions they are meant to inform.
Example: Tableau's success as a business intelligence platform has been driven significantly by its ability to enable non-technical business users to build and interpret their own dashboards. Before Tableau (and competitors like Looker and Metabase), data access was gated through data teams; Tableau democratized data access in ways that enabled faster, better-informed decision-making at more levels of the organization.
Measuring Decision Quality Over Time
The ultimate test of any decision support system is whether it produces better decisions over time. Measuring decision quality requires addressing the outcome-vs-process distinction described earlier.
Calibration measurement: For decisions with numerical forecasts (revenue projections, project timelines, market size estimates), track whether actual outcomes fall within the confidence intervals the decision-maker assigned. Decision-makers with well-calibrated uncertainty should see approximately 80% of their "80% confident" predictions correct, 95% of their "95% confident" predictions correct, and so on.
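Calibration measurement can be sketched as grouping past predictions by their stated confidence and comparing each group's observed hit rate against that confidence. The prediction history below is an invented example.

```python
from collections import defaultdict

def calibration(predictions: list[tuple[float, bool]]) -> dict[float, float]:
    """Compute observed hit rates per stated confidence level.

    predictions: (stated confidence 0-1, whether the prediction came true)
    Returns: {stated confidence -> fraction of those predictions correct}
    """
    buckets = defaultdict(list)
    for confidence, correct in predictions:
        buckets[confidence].append(correct)
    return {c: sum(hits) / len(hits) for c, hits in buckets.items()}

# Hypothetical forecast history from a decision journal
history = [
    (0.8, True), (0.8, True), (0.8, False), (0.8, True),  # 3 of 4 correct
    (0.95, True), (0.95, True),                            # 2 of 2 correct
]
rates = calibration(history)
# A well-calibrated forecaster's 0.8 bucket should converge toward 0.80.
```

With only a handful of predictions per bucket the estimates are noisy; the comparison becomes meaningful once a decision journal accumulates dozens of resolved predictions at each confidence level.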
Process adherence measurement: Track whether the defined decision process was followed -- whether alternatives were genuinely considered, whether criteria were defined before options were evaluated, whether the right stakeholders were involved.
Outcome measurement: Track decision outcomes over multiple time horizons (6 months, 1 year, 3 years) to build a database of actual consequences from different types of decisions made under different processes.
The combination of calibration, process, and outcome measurement creates a feedback loop that improves decision quality over time -- not through individual decisions alone, but through systematic learning from the pattern of many decisions.
See also: Feedback System Design, Process Optimization Strategies, and Lightweight System Design Principles.
References
- Kahneman, Daniel. Thinking, Fast and Slow. Farrar, Straus and Giroux, 2011. https://www.amazon.com/Thinking-Fast-Slow-Daniel-Kahneman/dp/0374533555
- Klein, Gary. Sources of Power: How People Make Decisions. MIT Press, 1998. https://www.amazon.com/Sources-Power-People-Make-Decisions/dp/0262611465
- Duke, Annie. Thinking in Bets: Making Smarter Decisions When You Don't Have All the Facts. Portfolio, 2018. https://www.annieduke.com/books/
- Welch, Suzy. 10-10-10: 10 Minutes, 10 Months, 10 Years. Scribner, 2009. https://www.amazon.com/10-10-10-Minutes-Months-Years/dp/1416591834
- Bain & Company. "RAPID Decision Making." Bain Insights. https://www.bain.com/insights/rapid-tool-to-clarify-decision-accountability/
- Amazon. "Jeff Bezos on Type 1 and Type 2 Decisions." Amazon Shareholder Letters. https://ir.aboutamazon.com/annual-reports-proxies-and-shareholder-letters/annual-reports/default.aspx
- Tableau. "About Tableau." Tableau. https://www.tableau.com/about
- Catmull, Ed, with Amy Wallace. Creativity, Inc.: Overcoming the Unseen Forces That Stand in the Way of True Inspiration. Random House, 2014. https://www.amazon.com/Creativity-Inc-Overcoming-Unseen-Inspiration/dp/0812993012
- Russo, J. Edward and Schoemaker, Paul. Winning Decisions: Getting It Right the First Time. Currency, 2001. https://www.amazon.com/Winning-Decisions-Getting-Right-First/dp/0385502257
- Hammond, John, Keeney, Ralph, and Raiffa, Howard. Smart Choices: A Practical Guide to Making Better Decisions. Harvard Business Review Press, 1999. https://www.amazon.com/Smart-Choices-Practical-Guide-Decisions/dp/0767908864