Introduction
You're choosing between job offers. One pays more but the company's future looks shaky. The other offers stability but might limit growth. You can't know which industry will thrive in five years. You can't predict whether new management will improve or destroy culture. You can't forecast your own preferences as you age.
You have to decide anyway.
"The only certainty is that nothing is certain." — Pliny the Elder
This is decision making under uncertainty—the normal human condition. Not the sanitized version taught in business schools where probabilities are known and outcomes are quantifiable. The messy reality where you lack critical information, can't predict the future, and must choose anyway with consequences riding on your judgment.
Frank Knight (1921) distinguished "risk" from "uncertainty" in his foundational economic work. Risk involves known probabilities—rolling dice, drawing cards, mortality tables. You don't know the specific outcome, but you know the distribution. Uncertainty means you can't even assign meaningful probabilities. Will this startup succeed? Will your career pivot work out? Will this relationship last?
"Risk comes from not knowing what you're doing." — Warren Buffett
Most important decisions live in uncertainty territory, not risk territory. Yet most decision-making advice assumes you're operating with known probabilities. The frameworks that work beautifully when you can calculate expected value often fail completely when you can't even estimate the probabilities.
The Spectrum of Knowability
Pure Risk (Rare in Real Decisions)
Characteristics:
- Known possible outcomes
- Known probability distribution
- Repeatable events that converge toward expected value
- Mathematical optimization is possible
Examples:
- Casino games → Exact probabilities calculable
- Insurance actuarial tables → Large sample sizes, stable distributions
- Quality control in manufacturing → Statistical process control works
These situations are easy for decision-making frameworks. Run the expected value calculation, apply Kelly criterion for bet sizing, done. The problem? Almost no important life decisions have these characteristics.
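When probabilities really are known, the standard tools are a line of arithmetic each. A minimal sketch of expected value and Kelly bet sizing; the bet parameters are invented for illustration:

```python
def expected_value(p_win: float, payout: float, loss: float) -> float:
    """Expected value per unit staked: p*payout - (1-p)*loss."""
    return p_win * payout - (1 - p_win) * loss

def kelly_fraction(p_win: float, b: float) -> float:
    """Kelly criterion for a bet with win probability p_win and net
    odds b (win b per 1 staked): f* = (p*b - q) / b.
    Returns 0 when the edge is negative (don't bet)."""
    q = 1 - p_win
    return max(0.0, (p_win * b - q) / b)

# A 55% coin flip at even odds: a 10% edge, so stake 10% of bankroll.
ev = expected_value(0.55, 1.0, 1.0)   # ~0.10 per unit staked
f = kelly_fraction(0.55, 1.0)          # ~0.10 of bankroll
```

Note that both functions take the probabilities as given. That is exactly the assumption the rest of this section argues rarely holds.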
Quantifiable Uncertainty (Common in Business)
Characteristics:
- Known possible outcomes
- Probability estimates are educated guesses, not facts
- Historical data exists but may not apply
- Bayesian updating is possible as new information arrives
Examples:
| Decision | Why It's Uncertain |
|---|---|
| Product launch success | Market response unpredictable, competitors may react, timing matters, countless unknown factors |
| Hiring outcomes | Person may excel, fail, or quit; culture fit unknowable in advance; role requirements evolve |
| Investment returns | Markets reflect millions of individual decisions, future conditions differ from past |
Here, frameworks help but require humility. You can build models and estimate probabilities, but you're fooling yourself if you think "42% chance of success" is meaningfully different from "38% chance of success." The precision is false.
Deep Uncertainty (Common in Life)
Characteristics:
- Outcomes themselves are unclear
- No reasonable way to assign probabilities
- Unique, non-repeatable situations
- Fundamentally qualitative judgment required
Examples:
- Should I marry this person?
- Should I change careers entirely?
- Should we have children?
- Which city should I live in?
Mathematical frameworks fail here—not because you're doing them wrong, but because they're inapplicable. You can't calculate the expected value of "living in Portland vs. Austin" across dimensions like community, weather, career opportunities, relationship implications, and future preference evolution.
Different tools are needed.
Radical Uncertainty (Rare but Consequential)
Characteristics:
- Unknown unknowns dominate
- Historical patterns may be irrelevant
- Black swan events possible
- Robust strategies matter more than optimal strategies
Examples:
- Technological paradigm shifts (internet, AI)
- Pandemic response in early 2020
- Climate change adaptation planning
- Geopolitical regime changes
Taleb's insight: in domains with radical uncertainty, avoiding catastrophic downside matters more than optimizing expected return. You can't calculate your way to the right answer—you need strategies that survive surprising bad outcomes.
Core Principles for Uncertain Decisions
Accept Irreducible Uncertainty
The first mistake most people make: treating uncertainty as a problem to solve through more research. Sometimes more information helps. Often it doesn't exist, costs too much, or arrives too late.
Kay and King (2020) argue that much of modern decision theory—expected utility, rational choice—fails because it assumes problems that are "small world" (well-defined, known probabilities) when reality is "large world" (open-ended, unknowable).
You're choosing between career paths. How much research resolves the uncertainty? You can:
- Talk to people in each field (selection bias—successful people are visible, failures aren't)
- Project salary trajectories (assumes industry trends continue)
- Consider job satisfaction surveys (measures past satisfaction, your preferences may differ)
None of this makes the decision certain. At some point, you must decide under irreducible uncertainty or not decide at all—which is itself a decision with consequences.
Practical principle: Distinguish "uncertainty that research can reduce" from "uncertainty that's intrinsic to the situation." Invest in reducing the first. Accept the second.
Satisfice Rather Than Optimize
Herbert Simon (1956) introduced "satisficing"—seeking solutions that meet thresholds rather than maximizing expected value. Under uncertainty, optimization is often impossible and sometimes harmful.
Why satisficing works better:
| Optimization Approach | Satisficing Approach |
|---|---|
| "Find the absolute best option" → Requires comparing all options → Impossible under uncertainty | "Find an option that meets my criteria" → Stop searching when threshold met |
| Vulnerable to analysis paralysis | Bounded search with clear stopping rules |
| Assumes you know the value function | Acknowledges you might not know what you want until you experience it |
| Small differences matter | Margin of error exceeds small differences anyway |
Example - Choosing where to live:
Don't search for the optimal city by scoring 47 dimensions. Instead:
- Define minimum requirements (job market in my field, cost of living under $X, climate I can tolerate)
- Explore options sequentially
- When you find one meeting all criteria, choose it
The "optimal" city might be 3% better on average, but you can't know that in advance, and it might be 15% worse on dimensions you haven't identified yet.
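In code, the contrast between optimizing and satisficing is just early stopping against a threshold predicate. A sketch; the city data is invented:

```python
def satisfice(options, meets_threshold):
    """Return the first option meeting all criteria, then stop searching.
    Contrast with optimizing, which must score every option."""
    for option in options:
        if meets_threshold(option):
            return option
    return None  # nothing cleared the bar: revisit thresholds or widen search

# Hypothetical data: (name, jobs_in_field, monthly_cost, climate_ok)
cities = [
    ("City A", 120, 2600, False),
    ("City B", 400, 2100, True),   # first to clear every bar -> chosen
    ("City C", 900, 1900, True),   # never evaluated
]
choice = satisfice(cities, lambda c: c[1] >= 200 and c[2] <= 2400 and c[3])
```

The stopping rule is the point: City C is never scored, and under uncertainty you could not have known in advance whether scoring it was worth the search cost.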
Make Reversible Decisions Quickly
Under uncertainty, information value comes from action, not analysis. Reversible decisions let you discover what you can't predict.
Two-way door decisions under uncertainty:
- Test hypotheses through small experiments
- Launch minimum viable products
- Try relationships before major commitment
- Accept jobs with clear exit options
Bezos's insight applies especially under uncertainty. You can't analyze your way to certainty on reversible choices. You can try something, learn, adjust. The trying itself generates information that no amount of upfront analysis would reveal.
Example - Career uncertainty:
"Should I become a data scientist or product manager?" is unknowable through research alone. Better approach:
- Take a PM role with explicit 12-month evaluation point
- If it's not clicking, switching to DS track is feasible
- You've gained direct experience that beats any amount of informational interviews
The cost of choosing wrong is low if you build in reversibility mechanisms explicitly.
Plan for Multiple Scenarios
If you can't predict which future will occur, prepare for several. Scenario planning acknowledges irreducible uncertainty and builds adaptive capacity.
Royal Dutch Shell's famous application in the 1970s: Rather than forecasting oil prices, the company developed scenarios (stable prices, oil shock, gradual decline). When the 1973 oil crisis hit, Shell was uniquely prepared because it had already thought through that scenario's implications.
Scenario planning process:
- Identify critical uncertainties → What factors most affect outcomes but are least knowable?
- Develop 2-4 distinct scenarios → Not best/worst/most likely, but structurally different futures
- Identify actions that work across scenarios → Robust strategies that don't depend on one future occurring
- Flag early indicators → What would tell you which scenario is unfolding?
Example - Startup strategy under uncertainty:
Critical uncertainty: "Will enterprises adopt our product, or will it remain consumer-focused?"
| Scenario | Strategic Implication |
|---|---|
| Enterprise adoption | Need sales team, compliance infrastructure, longer sales cycles |
| Consumer only | Focus on viral growth, consumer support, community building |
Robust actions (work in both scenarios):
- Build excellent core product
- Develop brand reputation for reliability
- Create API/integration infrastructure
- Maintain financial runway
Early indicators:
- Enterprise pilot conversion rates
- Inbound interest from IT departments
- Competitor moves in either direction
You don't need to predict the future—you need to recognize which future is arriving and adjust accordingly.
Frameworks for Operating Under Uncertainty
Bayesian Updating: Incremental Belief Revision
"Probability is not a mere computation of odds on the dice or more complicated variants; it is the acceptance of the lack of certainty in our knowledge." — Nassim Nicholas Taleb
You can't know the truth with certainty, but you can update your beliefs systematically as evidence accumulates. Probabilistic thinking provides the foundation, and Bayesian reasoning provides the mathematical structure.
Bayes' Theorem (simplified):
P(Hypothesis|Evidence) = P(Evidence|Hypothesis) × P(Hypothesis) / P(Evidence)
In plain language: Your belief after seeing evidence should combine your prior belief with how much the evidence favors your hypothesis over alternatives.
Non-mathematical application:
You're deciding whether to hire a candidate. Initial interview suggests they're strong (your prior is positive). Then you check references:
| Evidence | What It Means | Updated Belief |
|---|---|---|
| Glowing reference | Consistent with "strong candidate" hypothesis | Increase confidence slightly |
| Lukewarm reference | Unexpected if truly strong | Decrease confidence substantially |
| No response from references | Could mean many things | Modest decrease in confidence, flag for investigation |
The key insight: strong evidence against your hypothesis should move your belief more than weak evidence supporting it. Most people do the opposite—they overweight confirming evidence and dismiss contradictory evidence.
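A toy calculation makes that asymmetry visible. All likelihood numbers below are invented; the point is the shape of the update, not the specific values:

```python
def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Posterior P(H|E) via Bayes' theorem for a binary hypothesis."""
    numerator = p_e_given_h * prior
    evidence = numerator + p_e_given_not_h * (1 - prior)
    return numerator / evidence

prior = 0.70  # belief "strong candidate" after a good interview

# Glowing reference: fairly likely either way, so belief moves only slightly.
up = bayes_update(prior, 0.80, 0.50)    # ~0.79

# Lukewarm reference: unexpected if the candidate is truly strong.
down = bayes_update(prior, 0.20, 0.60)  # ~0.44
```

The confirming evidence moves the belief by about nine points; the disconfirming evidence moves it by about twenty-six. Surprising evidence carries more information.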
Tetlock's superforecasters excel at Bayesian updating. They:
- Start with explicit probability estimates (e.g., "55% confident X will happen")
- Update incrementally as new information arrives (adjust to 62% after positive news, 48% after negative)
- Track their confidence levels over time
- Distinguish meaningful updates from noise
Practical technique: Before encountering evidence, ask "What evidence would make me update my belief significantly?" This prevents motivated reasoning—deciding evidence is weak after it contradicts you.
Real Options Thinking
"If in doubt, don't." — Benjamin Franklin
Financial options give you the right (not obligation) to take an action. Under uncertainty, creating options has value independent of whether you exercise them.
Real options apply this logic to non-financial decisions. The value isn't just in the action itself—it's in preserving the ability to choose as uncertainty resolves.
Option value components:
| Component | Meaning |
|---|---|
| Upside potential | If things go well, you benefit |
| Limited downside | You're not obligated to proceed if things go poorly |
| Time value | Longer time to decision point = more information = higher option value |
| Volatility value | Greater uncertainty = higher option value (more chance of extremes) |
Example - Career decision under uncertainty:
Don't ask "Should I commit to this path forever?" Ask "Can I create options that let me defer commitment while learning?"
- Take consulting roles rather than full-time (preserves optionality)
- Build general skills that transfer across roles (increases option value)
- Maintain broad network (keeps multiple options open)
- Save money (creates financial options to take risks later)
Taleb's "barbell strategy": In highly uncertain domains, combine extreme safety (90% of capital) with extreme risk-taking (10% of capital). The safe portion protects downside; the risky portion captures upside in surprise scenarios. Middle-ground "moderate risk" lacks either protection or upside.
Application: In career terms, maintain stable income source while taking bounded risks on high-upside opportunities. Don't bet everything on medium-probability scenarios.
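The barbell's payoff structure can be sketched in a few lines. The returns below are invented; the point is the hard floor on losses:

```python
def barbell_outcome(safe_return: float, risky_multiple: float,
                    safe_frac: float = 0.90) -> float:
    """Portfolio value per $1 under a barbell allocation: safe_frac in a
    near-riskless asset, the remainder in a high-risk/high-upside bet."""
    return safe_frac * (1 + safe_return) + (1 - safe_frac) * risky_multiple

# Illustrative scenarios:
wipeout = barbell_outcome(0.02, 0.0)    # risky sleeve goes to zero -> ~0.92
windfall = barbell_outcome(0.02, 10.0)  # risky sleeve returns 10x -> ~1.92

# A fully "moderate risk" portfolio hit by a 40% drawdown has no such floor:
moderate_crash = 1.0 * (1 - 0.40)       # 0.60
```

Even when the risky sleeve is wiped out entirely, the barbell loses roughly 8%; the moderate portfolio has no comparable bound in a bad surprise.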
Robust Decision-Making
When you can't predict the future, choose strategies that perform acceptably across many futures rather than optimally in one future.
Robust vs. optimal strategies:
| Optimal Strategy | Robust Strategy |
|---|---|
| Maximizes expected value given assumptions | Performs well even when assumptions prove wrong |
| Vulnerable to model error | Resistant to specification error |
| Higher performance if you're right | Lower regret if you're wrong |
| Brittle | Resilient |
Example - Investment strategy under uncertainty:
Optimal approach: Calculate expected returns for each asset class, construct portfolio maximizing Sharpe ratio. Problem: Assumes your return estimates and correlation matrix are accurate.
Robust approach:
- Broad diversification across asset classes, geographies, time horizons
- Avoid leverage (limits downside in surprising scenarios)
- Maintain liquid reserves (preserves options)
- Rebalance mechanically (removes judgment calls about timing)
The robust portfolio underperforms if your return estimates are perfect. It outperforms in reality where your estimates are wrong in unpredictable ways.
Lempert et al.'s (2003) robust decision-making framework for policy decisions:
- Specify decision alternatives
- Identify uncertainties that matter
- Evaluate performance across scenarios (not just expected value)
- Find strategies that perform well in many scenarios
- Trade-off analysis (if strategy A is more robust but lower upside, is it worth it?)
This inverts traditional analysis. Instead of "What's the best decision given my forecast?", ask "Which decision do I regret least across possible forecasts?"
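That inverted question has a standard formalization: minimax regret. A toy sketch with invented payoffs:

```python
# Payoff of each strategy under each scenario (numbers invented).
payoffs = {
    "optimal-for-boom": {"boom": 100, "flat": 20, "bust": -60},
    "robust":           {"boom": 60,  "flat": 40, "bust": 10},
}
scenarios = ["boom", "flat", "bust"]

def max_regret(strategy: str) -> int:
    """Worst-case regret: across scenarios, how far this strategy falls
    short of the best achievable payoff in that scenario."""
    best = {s: max(p[s] for p in payoffs.values()) for s in scenarios}
    return max(best[s] - payoffs[strategy][s] for s in scenarios)

# Minimax-regret choice: the strategy whose worst regret is smallest.
choice = min(payoffs, key=max_regret)  # "robust"
```

The boom-optimized strategy's worst regret is 70 (it is badly wrong in a bust); the robust strategy's is 40 (it merely trails in a boom). Minimax regret picks the robust one without requiring scenario probabilities at all.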
Pre-Mortem Under Uncertainty
Gary Klein's pre-mortem works especially well under uncertainty. When you don't know the probabilities, imagining failure scenarios surfaces risks that analysis misses.
Enhanced pre-mortem for uncertainty:
- Assume the decision failed catastrophically
- Generate explanations from different perspectives:
- "The core assumption was wrong" (not just execution failure)
- "The world changed in ways we didn't anticipate"
- "We misunderstood our own preferences/capabilities"
- "Second-order effects dominated first-order effects" (a systems thinking lens helps surface these)
- For each failure mode, ask: "What early signal would indicate this is unfolding?"
- Build checkpoints where you explicitly reevaluate based on those signals
Example - Deciding to start a business:
Standard risk analysis focuses on "Will customers buy?" and "Can we build it?"
Pre-mortem surfaces deeper uncertainties:
- "We succeeded technically but couldn't hire enough engineers to scale"
- "Regulatory environment changed, making our approach illegal"
- "We were solving a problem people said they had but didn't actually pay to solve"
- "Founder team had incompatible working styles that only emerged under stress"
These aren't quantifiable risks—they're structural uncertainties about whether your model of reality is correct. Pre-mortem forces you to question foundational assumptions, not just execution details.
Common Errors in Uncertain Environments
False Precision
Generating precise-looking probability estimates when you're fundamentally uncertain creates a dangerous illusion of knowledge.
Example error: "I'm 47.3% confident this startup succeeds."
Reality: You have vague intuitions. The difference between 40% and 50% is meaningless noise, but the false precision makes you treat the estimate as fact.
Better approach: Use broad confidence bands. "Somewhere between 30% and 60% likely" acknowledges genuine uncertainty. If your decision hinges on exactly where in that range the truth lies, the estimate is too coarse to carry the decision—you need a strategy that works across the whole band.
Taleb's critique of risk models: They assign probabilities to events (financial crises, pandemics) that are inherently non-repeating. The probabilities aren't "unknown but estimable"—they're not meaningful in the way probability theory requires.
Waiting for Certainty
Uncertainty is uncomfortable. Natural response: delay deciding until you have more information. Sometimes rational. Often costly.
Information value analysis:
Ask: "What's the value of waiting for more information?"
| Factor | Consideration |
|---|---|
| Cost of delay | What do I lose by not deciding now? (opportunities, time, resources) |
| Information clarity | Will waiting actually reduce uncertainty, or am I just procrastinating? |
| Decision reversibility | If I can adjust later, waiting has low value |
| Option expiration | Some choices disappear if you wait |
Example - Job offer with deadline:
Waiting might give you:
- Competing offers to compare (if they arrive in time)
- More information about company trajectory (unlikely to be decisive)
- Better sense of your own preferences (probably not—preferences form through experience)
Waiting costs you:
- The offer expires
- Opportunity cost of whatever you're doing instead
- Mental burden of unresolved decision
Unless you have specific information arriving soon that would definitively change your decision, waiting is often procrastination disguised as diligence.
Jeff Bezos: "Most decisions should probably be made with somewhere around 70% of the information you wish you had. If you wait for 90%, in most cases, you're probably being slow."
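The wait-or-decide trade-off can be put in rough expected-value terms. Every input below is a subjective guess, which is the point: the calculation structures the judgment rather than replacing it.

```python
def value_of_waiting(p_better_info: float, gain_if_info: float,
                     p_offer_expires: float, loss_if_expires: float) -> float:
    """Crude expected value of delaying a decision: the chance that waiting
    yields decision-changing information, weighed against the chance the
    option disappears while you wait. All inputs are rough estimates."""
    return p_better_info * gain_if_info - p_offer_expires * loss_if_expires

# Invented numbers for the job-offer case: a competing offer is unlikely
# to arrive in time, while the current offer may well lapse.
ev_wait = value_of_waiting(0.10, 20.0, 0.50, 15.0)  # negative -> decide now
```

Even with generous assumptions about the value of new information, a high expiration risk makes waiting a losing proposition.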
Ignoring Model Uncertainty
Most decision frameworks ask "What's the probability of outcome X?" Few ask "What's the probability my model of this situation is completely wrong?"
Model uncertainty means your conceptual framework for understanding the decision might be flawed. You're not just uncertain about parameters—you're uncertain about the right way to think about the problem.
Example - Career decision:
You model career satisfaction as function of: salary, interesting work, work-life balance, status, growth opportunities.
Model uncertainty questions:
- "Am I missing a crucial dimension?" (relationships at work, geographic location, mission alignment)
- "Do these factors combine linearly, or are there thresholds/interactions?" (maybe interesting work only matters if work-life balance is decent)
- "Will my preferences change?" (future-you might weight these completely differently)
Robust approach under model uncertainty:
- Diversify across models (don't bet everything on one way of thinking)
- Look for strategies that work across models (even if your value function is wrong, some choices are clearly better)
- Build in adjustment mechanisms (decide now but reevaluate in 6 months)
- Consult people with different mental models (they'll surface blindspots in your framing)
Narrative Fallacy
Humans are story-making machines. We construct narratives that make sense of the past, then mistake these narratives for understanding that lets us predict the future.
Nassim Taleb's example: Before 9/11, no one predicted it. After 9/11, everyone explained why it was "obvious in retrospect." The post-hoc narrative creates a false sense of predictability.
Under uncertainty, narrative fallacy manifests as (alongside other cognitive biases):
- Overconfidence in explanations of past success (attributing outcomes to your decisions rather than luck)
- Believing you understand complex situations because you have coherent stories
- Mistaking prediction for understanding after the fact
Counter-strategy: Distinguish explanation from prediction. You can often explain outcomes after they occur without being able to predict them beforehand. That's not hypocrisy—it's acknowledgment that many systems are fundamentally unpredictable despite being comprehensible in retrospect.
Practical test: Before the outcome is known, write down your prediction and confidence level. After the outcome, write your explanation. Compare them. If your explanation is vastly more confident than your prediction was, you're engaging in narrative fallacy.
Domain-Specific Applications
Hiring Under Uncertainty
You're hiring someone. Interviews are 2-4 hours. The actual job is 2,000+ hours per year. Extrapolating from interview performance to job performance is deeply uncertain.
Research shows:
- Unstructured interviews predict job performance barely better than random
- Structured interviews do better, but their correlation with job performance remains modest
- Reference checks suffer from selection bias (candidates provide friendly references)
- Past performance is some signal but context-dependent
Better approach under uncertainty:
| Traditional Hiring | Uncertainty-Informed Hiring |
|---|---|
| Optimize interview process to predict performance | Accept you can't predict performance; focus on learning fast |
| Lengthy evaluation before decision | Reasonable bar for hiring, then rapid feedback loops |
| Treat hiring as one-way door | Build in explicit evaluation checkpoints |
| Focus on résumé credentials | Focus on trial work, realistic job previews |
Specific tactics:
- Paid trial projects → Observe actual work, not proxies
- Explicit probation periods → Both parties evaluate fit with clear exit
- Diverse perspectives in evaluation → Multiple people with different models reduce model uncertainty
- Scenario-based questions → "What would you do if..." surfaces thinking process
Most importantly: Acknowledge you're uncertain. Great hiring outcomes depend more on rapid feedback and adjustment than on selecting perfectly upfront.
Product Development Under Uncertainty
"Should we build feature X?" is fundamentally uncertain. You don't know if users want it, if they'll pay for it, if it causes unexpected problems, if it distracts from more valuable work.
Lean Startup methodology is fundamentally an uncertainty management framework:
- Hypothesis formation → Explicit assumption about what creates value
- Minimum viable test → Smallest experiment that tests the hypothesis
- Measured learning → Specific metrics that would confirm/disconfirm
- Pivot or persevere → Based on evidence, not attachment to original plan
Key insight: Under uncertainty, the goal isn't to make the right decision upfront. It's to structure learning so you make better decisions as uncertainty resolves.
Example:
| Certainty-Seeking Approach | Uncertainty-Managing Approach |
|---|---|
| "Let's do extensive market research to know if users want X" | "Let's build a prototype and see if users engage with X" |
| 6 months research → Decision → 6 months development | 2 weeks prototype → Measure → Iterate or kill |
| High confidence before building | Low confidence, high learning rate |
The uncertainty-managing approach doesn't require you to predict the future. It requires you to learn from the future faster than competitors.
Investment Under Uncertainty
Financial markets reflect millions of participants making decisions with incomplete information. Beating markets consistently requires either better information, better models, or exploiting behavioral biases—all highly uncertain.
Approaches that respect uncertainty:
1. Index investing → Admits you can't predict winners; captures market average at low cost
2. Diversification → Spreads uncertainty across imperfectly correlated bets; underperforms a concentrated bet when that bet is right, but protects you when it's wrong
3. Value investing with margin of safety → Buffett and Graham's approach: Don't predict the future; buy things so cheap that many futures produce profit
4. Barbell strategy → Taleb's approach: Extreme safety + extreme risk; avoid medium-risk (fragile to uncertainty)
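Diversification's effect on uncertainty follows from basic variance arithmetic. A sketch for the idealized case of an equal-weight portfolio of identically volatile assets:

```python
def portfolio_std(sigma: float, n: int, corr: float = 0.0) -> float:
    """Standard deviation of an equal-weight portfolio of n assets with
    identical volatility sigma and identical pairwise correlation corr:
    variance = sigma^2 * (1/n + (1 - 1/n) * corr)."""
    variance = sigma ** 2 * (1 / n + (1 - 1 / n) * corr)
    return variance ** 0.5

one_asset = portfolio_std(0.30, 1)          # ~0.30
many_uncorrelated = portfolio_std(0.30, 25) # ~0.06: risk shrinks fast
many_correlated = portfolio_std(0.30, 25, corr=0.30)  # ~0.17: floor remains
```

The correlated case is the honest one for real markets: diversification reduces the uncertainty you can diversify away, but correlated risk sets a floor no amount of spreading removes.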
What doesn't work under uncertainty:
- Precise price targets ("stock will reach $127 in 18 months")
- Leverage (amplifies both correct and incorrect predictions)
- Market timing (requires predicting turning points)
- Concentrated bets without edge (confuses confidence with knowledge)
Ray Dalio's "All Weather Portfolio": Designed to perform reasonably across diverse economic scenarios (growth/recession × inflation/deflation). Explicitly rejects prediction in favor of robustness.
Career Planning Under Uncertainty
Twenty-year career plans make assumptions about:
- Industry trajectories (many industries don't exist in 20 years)
- Your preferences (most people's interests evolve substantially)
- Economic conditions (multiple recessions likely)
- Personal circumstances (relationships, health, geographic constraints)
All highly uncertain.
Better framework:
| Long-term Planning | Adaptive Career Strategy |
|---|---|
| "I will become VP of Engineering at a top tech company" | "I will build valuable, transferable skills and maintain optionality" |
| Optimize for specific goal | Build robust capabilities |
| Brittle to changing conditions | Resilient to surprises |
Concrete tactics under career uncertainty:
- Build general skills over narrow expertise (writing, quantitative reasoning, people management transfer widely)
- Maintain diverse networks (professional community across sectors/geographies)
- Financial reserves (options to take risks or weather downturns)
- Periodic reevaluation (explicit checkpoints to reconsider direction)
- Reversible moves preferred over irreversible commitments
Cal Newport's "career capital" framework: Don't optimize for a specific role. Build skills, connections, and reputation that create options as the future unfolds. You're not predicting where you'll end up—you're ensuring you'll have good choices when decision points arrive.
Philosophical Implications
Acknowledging Limits of Rationality
Herbert Simon, Gerd Gigerenzer, and the bounded rationality school recognize that perfect rationality requires:
- Complete information (you don't have it)
- Infinite computational power (you don't have it)
- Consistent, known preferences (you don't have them)
Under uncertainty, "rational" decisions aren't about maximizing expected utility. They're about using heuristics appropriately, learning from feedback, and avoiding catastrophic mistakes.
Good decision-making under uncertainty looks like:
- Following simple rules that work across many contexts
- Adapting when rules fail
- Recognizing which type of problem you face
- Matching decision strategy to problem structure
It doesn't look like:
- Complex optimization that pretends away uncertainty
- False precision in probability estimates
- Paralysis while seeking certainty that doesn't exist
Embracing Productive Discomfort
Uncertainty is uncomfortable. We're wired to prefer clear answers, even wrong ones, over ambiguous situations. Under uncertainty, comfort is often the signal that you're ignoring reality.
Keynes: "It is better to be roughly right than precisely wrong." Most frameworks produce precise-wrong answers by forcing complex reality into simplified models. Better to acknowledge "I'm uncertain" than convince yourself the model is reality.
Decision-making maturity involves:
- Distinguishing "I don't know" (genuine uncertainty) from "I haven't researched enough" (reducible uncertainty)
- Making decisions despite discomfort when waiting has costs
- Avoiding false confidence through sophisticated analysis
- Building adaptive capacity rather than perfect plans
Annie Duke: "The quality of our lives is the sum of decision quality plus luck." Under uncertainty, you can't control outcomes. You can control process. Good process produces better outcomes probabilistically, not certainly.
Practical Implementation
Daily Practice
Calibration training: Make predictions about near-term uncertain events with explicit confidence levels. Track accuracy. Most people discover they're dramatically overconfident.
Example predictions:
- "I'm 70% confident project X will ship by Friday"
- "60% confident the candidate we hire will still be here in a year"
- "80% confident the meeting will run over scheduled time"
After 50-100 predictions, patterns emerge in where you're overconfident vs. underconfident.
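Tracking this requires little more than a list of (stated confidence, outcome) pairs. A minimal scorer using the Brier score plus a bucket-by-bucket calibration check; the track record below is invented:

```python
from collections import defaultdict

def brier_score(forecasts):
    """Mean squared error of probability forecasts. 0 is perfect;
    always guessing 50% earns 0.25."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

def calibration_table(forecasts, width=0.1):
    """Group predictions into confidence buckets and report the observed
    hit rate per bucket, for comparison against stated confidence."""
    buckets = defaultdict(list)
    for p, outcome in forecasts:
        buckets[round(p / width) * width].append(outcome)
    return {round(k, 1): sum(v) / len(v) for k, v in sorted(buckets.items())}

# (stated probability, outcome 1/0) -- an invented track record
history = [(0.7, 1), (0.7, 0), (0.7, 0), (0.9, 1), (0.9, 0), (0.6, 1)]
score = brier_score(history)
table = calibration_table(history)  # e.g. 70% claims came true ~33% of the time
```

A gap like "stated 70%, observed 33%" is the overconfidence signature the calibration exercise is designed to surface.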
Decision Journal for Uncertainty
Beyond standard decision journals, track:
- What uncertainties you identified upfront (vs. which surprised you)
- How uncertainty resolved (was information valuable when it arrived?)
- Whether you updated beliefs appropriately (or stuck to initial view despite evidence?)
- Which type of uncertainty (risk, quantifiable uncertainty, deep uncertainty, radical uncertainty)
This trains recognition of different uncertainty types and appropriate strategies for each.
Building Organizational Practices
Teams and organizations can systematize uncertainty management:
1. Assumption mapping → Before major decisions, explicitly list critical assumptions and rate confidence in each
2. Scenario planning workshops → Quarterly exercises exploring different futures and robust strategies
3. Decision retrospectives → Review past decisions focusing on process quality, not outcome quality
4. Normalizing "I don't know" → Culture where acknowledging uncertainty is valued over false confidence
5. Pilot programs as default → Test new initiatives on small scale before full commitment
Google's "20% time" and Amazon's "two-pizza teams" create organizational options—ways to explore uncertain opportunities without betting the company.
Conclusion
Decision-making under uncertainty isn't about eliminating uncertainty—that's impossible. It's about operating effectively despite uncertainty.
The key shifts:
From → "Gather enough information to be certain"
To → "Distinguish reducible from irreducible uncertainty; invest appropriately in each"
From → "Make optimal decisions"
To → "Make robust decisions that work across scenarios"
From → "Predict the future"
To → "Adapt quickly as the future unfolds"
From → "Avoid uncertainty"
To → "Embrace uncertainty and build optionality"
From → "Detailed long-term plans"
To → "Rough direction plus adaptive capacity"
High performers in uncertain environments don't have better crystal balls. They have better processes for learning under uncertainty. They make smaller bets, gather feedback faster, update beliefs more readily, and maintain flexibility to adjust as conditions change.
Knight's risk vs. uncertainty distinction suggests that uncertainty, unlike risk, can't be managed through probability calculations. But uncertainty can be managed through:
- Robust strategies that work across scenarios
- Options that preserve flexibility
- Rapid learning cycles that reduce uncertainty over time
- Acceptance of irreducible uncertainty without paralysis
The goal isn't certainty. It's good judgment despite uncertainty—distinguishing when to gather more information, when to decide with incomplete data, when to choose reversible paths, and when to acknowledge that no amount of analysis will resolve fundamental unknowability.
Modern life won't become less uncertain. Information abundance doesn't reduce uncertainty—it often increases it by revealing complexity we previously ignored. The skill isn't eliminating uncertainty. It's deciding and acting effectively while uncertainty remains.
"It is not the strongest of the species that survives, nor the most intelligent, but the one most responsive to change." — Charles Darwin
What Research Shows About Decision Making Under Uncertainty
Formal academic study of how people actually decide under uncertain conditions accelerated dramatically after World War II, driven by both military planning needs and the emerging field of cognitive psychology.
Frank Knight's foundational distinction (1921) between risk and uncertainty remained largely theoretical for half a century before Amos Tversky and Daniel Kahneman began the empirical research program that would transform the field. In a landmark 1974 paper in Science, "Judgment Under Uncertainty: Heuristics and Biases," Tversky and Kahneman documented three systematic cognitive shortcuts — representativeness, availability, and anchoring — that people use when facing uncertain outcomes. These heuristics often produce good-enough results but generate predictable, replicable errors. The research demonstrated that uncertainty does not lead people to guess randomly; it leads them to apply systematic but flawed patterns.
Philip Tetlock's Expert Political Judgment study (2005), spanning 20 years and 82,361 forecasts from 284 experts, produced one of the most robust findings in uncertainty research: most domain experts forecast little better than random chance when predicting outcomes in their specialty. Experts with narrow, singular frameworks — what Tetlock called "hedgehogs," who know one big thing — performed worse than generalists who drew on multiple frameworks — "foxes." The foxes who outperformed significantly used probabilistic thinking, updated beliefs incrementally, and resisted overconfident point predictions.
Gerd Gigerenzer's research program at the Max Planck Institute challenged the Kahneman-Tversky framing. Gigerenzer (1991, 2002) argued that heuristics are not cognitive failures but adaptive tools that work well in the environments they evolved for. His "fast-and-frugal" heuristics research showed that simple rules often outperform complex optimization models under uncertainty precisely because complex models overfit to past data and fail when conditions change. The debate between these two research traditions remains active and productive — both camps agree that uncertainty demands different approaches than risk; they disagree about whether simplicity or sophistication is the appropriate response.
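Gigerenzer's best-known fast-and-frugal rule, take-the-best, fits in a few lines: check cues in order of validity and decide on the first one that discriminates, ignoring everything else. A sketch using the article's opening job-offer example; the cue ordering and values are invented for illustration:

```python
def take_the_best(a, b, cues):
    """Decide between two options by checking cues from most to least
    valid, stopping at the first cue that discriminates between them
    (one-reason decision making)."""
    for name, cue in cues:
        va, vb = cue(a), cue(b)
        if va != vb:
            return (a if va > vb else b), name
    return None, None  # no cue discriminates: the rule says guess

# Hypothetical job offers, scored 1/0 on binary cues.
offer_a = {"stable": 1, "higher_pay": 0, "growth": 1}
offer_b = {"stable": 1, "higher_pay": 1, "growth": 0}
cues = [
    ("stable", lambda o: o["stable"]),          # most valid cue first
    ("higher_pay", lambda o: o["higher_pay"]),
    ("growth", lambda o: o["growth"]),
]
```

Here "stable" ties, so the decision falls to "higher_pay" and the third cue is never consulted. That deliberate ignoring of remaining information is what makes the rule robust when data is scarce.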
The Good Judgment Project (2011-2015), run by Tetlock and Barbara Mellers, identified a class of "superforecasters" whose probabilistic predictions were dramatically more accurate than expert baselines. Key differentiators: they expressed precise probability estimates rather than vague qualitative judgments, updated those estimates frequently in response to new evidence, and showed no domain-specific expertise advantage — accurate forecasting under uncertainty was a generalizable skill, not a subject-matter one.
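Forecast accuracy in tournaments like the Good Judgment Project is typically scored with the Brier score: the mean squared error between probability forecasts and binary outcomes. A minimal sketch with made-up forecasts:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts (0..1) and
    binary outcomes (0 or 1). Lower is better; a forecaster who always
    says 0.5 scores exactly 0.25."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes = [1, 0, 1, 0]
vague = brier_score([0.5, 0.5, 0.5, 0.5], outcomes)        # 0.25
calibrated = brier_score([0.9, 0.1, 0.8, 0.2], outcomes)   # 0.025
```

The scoring rule rewards exactly the superforecaster habits described above: precise probabilities instead of vague hedges, and frequent updating toward whichever number the evidence supports.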
Real-World Case Studies in Uncertain Decision Making
Shell Oil's Scenario Planning (1972-1973): The most cited organizational case study in uncertainty management. Pierre Wack, head of scenario planning at Shell, developed a methodology that explicitly rejected single-point forecasting in favor of multiple internally consistent narratives about possible futures. Shell's planners developed a scenario in which OPEC successfully imposed an oil embargo — not as their base case, but as a plausible alternative. When the 1973 oil crisis materialized, Shell was the only major oil company with strategic plans ready for that contingency. While competitors scrambled, Shell moved quickly to adjust refinery capacity, hedging positions, and customer pricing. The episode transformed corporate scenario planning from an academic exercise into a mainstream strategic tool. Shell formalized the methodology, which was later documented by Peter Schwartz in The Art of the Long View (1991).
NASA's Mars Climate Orbiter failure (1999): A case study in false precision under uncertainty. The $327.6 million spacecraft was lost because one engineering team used metric units and another used imperial units in navigation software — a classic failure mode where certainty about one piece of a calculation created blindness to system-level uncertainty. Post-mortems identified not just the unit conversion error but the organizational culture that prevented engineers from escalating uncertainty signals. The review board found that uncertainty flags had been raised but deprioritized. The case is now taught in systems engineering programs as an example of how reducing uncertainty in components can paradoxically increase system-level fragility.
Amazon's AWS launch decision (2003-2006): Jeff Bezos and the Amazon leadership team faced deep uncertainty about whether other companies would pay to rent computing infrastructure. The business case required multiple uncertain assumptions to all be correct simultaneously: that companies would trust Amazon with their data, that cloud computing would prove reliable enough for production workloads, that pricing could be profitable at volume, and that competitors would not immediately match offerings. Rather than attempting to resolve these uncertainties analytically, Amazon used a strategy of parallel small bets — starting with internal infrastructure, then selectively offering access to external developers, then launching S3 and EC2 publicly. Each step generated real-world data that reduced uncertainty before the next commitment was made. AWS is now Amazon's most profitable division, generating over $90 billion in annual revenue.
U.S. Military's VUCA framework adoption (1990s): Following the end of the Cold War, the U.S. Army War College codified the emerging operating environment as Volatile, Uncertain, Complex, and Ambiguous — VUCA — drawing on the leadership research of Warren Bennis and Burt Nanus. Military strategists recognized that Cold War planning frameworks, designed for a relatively stable bipolar world, were inadequate for post-Cold War asymmetric threats. The framework has since migrated widely into corporate strategy, particularly after the 2008 financial crisis, as organizations confronted operating environments that defied stable-world planning assumptions.
The Science Behind Uncertainty Tolerance and Decision Quality
The psychological mechanism underlying effective decision making under uncertainty is epistemic tolerance — the capacity to hold unresolved questions without either forcing premature closure or becoming paralyzed.
Need for Cognitive Closure research: Arie Kruglanski (University of Maryland) developed the Need for Cognitive Closure (NCC) scale in the 1990s to measure individual differences in tolerance for ambiguity. High-NCC individuals experience uncertainty as aversive and seek definitive answers quickly — a tendency that correlates with poorer decision quality under genuine uncertainty because it produces premature commitment before sufficient information is gathered. Low-NCC individuals can sustain uncertainty longer and update beliefs more readily. Kruglanski's research showed that NCC increases under time pressure and cognitive load, which explains why uncertain decisions made in high-stakes, high-speed environments often deteriorate.
Neuroscience of uncertainty: Research using fMRI by Ming Hsu and colleagues (2005) demonstrated that uncertainty (as distinct from risk) activates the amygdala and orbitofrontal cortex differently than known-probability risk does. The amygdala response to uncertainty is generally stronger than to equivalent-expected-value risk, providing a neurological basis for Knight's theoretical distinction. This finding suggests that aversion to uncertainty is not merely cognitive — it is emotionally encoded in a way that requires deliberate regulatory effort to override.
Calibration training research: Studies by Lichtenstein and Fischhoff (1977, 1980) established that people are systematically overconfident in their probability estimates — when people say they are 90% confident, they are correct only about 70-75% of the time on factual questions. However, calibration can be improved with training. Forecasting tournaments, structured feedback on prediction accuracy, and deliberate practice with probability expression all measurably improve calibration. The superforecasters identified in Tetlock's research had typically engaged in years of structured probabilistic prediction and feedback — suggesting that accurate uncertainty quantification is a skill that can be developed, not a fixed trait.
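The structured feedback those studies describe can be computed from a simple log of (stated confidence, was-correct) pairs. A sketch; the log entries below are illustrative:

```python
from collections import defaultdict

def calibration_table(predictions):
    """Bucket (stated_confidence, was_correct) pairs by confidence and
    report the observed hit rate per bucket. A well-calibrated
    forecaster's hit rates track the stated confidence in every bucket."""
    buckets = defaultdict(list)
    for confidence, correct in predictions:
        buckets[round(confidence, 1)].append(correct)
    return {c: sum(hits) / len(hits) for c, hits in sorted(buckets.items())}

# Illustrative log: "90% confident" claims that were right only 3 times in 5,
# the overconfidence pattern Lichtenstein and Fischhoff documented.
log = [(0.9, 1), (0.9, 1), (0.9, 0), (0.9, 1), (0.9, 0), (0.6, 1), (0.6, 0)]
```

Reviewing a table like this after each batch of predictions is the core of the calibration training the paragraph describes: the gap between a bucket's label and its hit rate is the thing practice shrinks.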
How Organizations Quantify the Cost of Uncertainty Avoidance
A persistent failure in organizational decision-making is treating uncertainty as a reason for inaction rather than a condition to be managed. Research by Roger Martin at the University of Toronto's Rotman School of Management, documented in The Opposable Mind (2007), found that executives who performed best in uncertain environments shared a capacity he called integrative thinking -- the ability to hold two conflicting models simultaneously and generate a creative resolution. Executives who performed worst under uncertainty consistently fell into one of two failure modes: premature closure (deciding quickly to escape the discomfort of ambiguity) or paralysis (refusing to decide until certainty arrived that never came). Martin's case studies covered decisions at companies including Procter & Gamble, Red Hat, and IDEO, finding that integrative thinkers achieved measurably better outcomes not by eliminating uncertainty but by constructing novel options that performed well across multiple uncertain scenarios.
The cost of uncertainty avoidance in capital allocation decisions has been quantified by economists studying investment patterns during volatile periods. Bloom, Bond, and Van Reenen (2007), publishing in Review of Economic Studies, found that firms facing high uncertainty reduce and delay capital investment by significantly more than expected-value calculations would justify. The excess delay -- beyond what uncertainty about project returns could explain -- they attributed to real options value: firms waiting to see uncertainty resolve before committing. The mechanism is rational in theory but generates systematic underinvestment when uncertainty is persistent. Their analysis of UK manufacturing firms found that a one-standard-deviation increase in uncertainty measures reduced investment rates by approximately 7 percent -- an economically substantial effect operating across the entire economy.
McKinsey research published in 2023 documented a related pattern in digital transformation decisions. Firms that delayed cloud migration decisions due to uncertainty about which platforms would dominate found that the cost of delay -- measured in foregone efficiency gains -- substantially exceeded the cost of choosing the ultimately non-dominant platform and migrating later. The analysis suggested that for many technology investment decisions under uncertainty, early commitment to any reasonable option outperforms waiting for the uncertainty to resolve, because the competitive cost of delay accumulates while the option value of waiting is smaller than intuition suggests.
The practical implication is a decision-making heuristic: when facing uncertainty about which of several reasonable options to pursue, explicitly calculate the cost of delay. If the cost of waiting one year for more information exceeds 20 percent of the decision's total value, the information you would gain rarely justifies the cost. This is not a universal rule -- it depends heavily on the reversibility of the decision and the rate at which the relevant uncertainty is likely to resolve -- but it provides a concrete analytical check on the intuitive pull toward waiting.
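The check can be written down directly. The 20 percent threshold comes from the paragraph above; the function is only a sketch of that rough heuristic, not a general model, and it deliberately ignores the reversibility caveat:

```python
def decide_now(total_value, one_year_delay_cost):
    """Rough heuristic from the text: if waiting a year costs more than
    20% of the decision's total value, the information gained by waiting
    rarely justifies the delay. Caveats (reversibility, how fast the
    uncertainty resolves) must be weighed separately."""
    return one_year_delay_cost > 0.20 * total_value

# Hypothetical cloud-migration decision worth $1M in efficiency gains,
# where a year's delay forgoes $250k of them.
decide_now(1_000_000, 250_000)   # True: commit now
```

The value of the exercise is less the threshold itself than being forced to put a number on the cost of delay at all, which the intuitive pull toward waiting never does.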
Uncertainty in Medicine: How Physicians and Patients Navigate Irreducible Ambiguity
The medical context is uniquely instructive for understanding uncertainty because it is one of the few domains where decision-making under uncertainty has been studied systematically for decades, the outcomes are measurable, and the stakes create strong incentives to understand what works.
Jerome Groopman's How Doctors Think (2007) documented, through interviews with hundreds of physicians, the systematic patterns by which medical uncertainty is handled well and badly. The physicians who navigated uncertainty most effectively shared several characteristics: they explicitly named uncertainty to patients rather than projecting false confidence, they used diagnostic tests as uncertainty-reduction tools (not confirmation tools), and they built in explicit decision checkpoints rather than committing to treatment trajectories upfront. The physicians who performed worst under uncertainty -- as measured by diagnostic error rates -- consistently sought to resolve uncertainty by committing to an initial diagnosis rather than maintaining multiple working hypotheses until evidence forced convergence.
A systematic review by Heneghan and colleagues (2017) published in Diagnosis analyzed the diagnostic reasoning of 41 general practitioners in the United Kingdom using think-aloud protocols -- asking physicians to verbalize their reasoning while examining patients. The review found that effective diagnostic reasoning under uncertainty involved maintaining an average of 3.4 working hypotheses simultaneously, while diagnostic errors were associated with converging on a single hypothesis too early (fewer than two working hypotheses). The finding has direct implications for how to teach and evaluate diagnostic reasoning: the capacity to sustain multiple simultaneous hypotheses is a measurable skill that distinguishes effective from ineffective reasoners under uncertainty.
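Formally, maintaining several working hypotheses just means keeping a normalized probability over all of them rather than committing to the current leader. A minimal sketch with hypothetical diagnoses and likelihoods (the numbers are illustrative, not clinical):

```python
def update_hypotheses(priors, likelihoods):
    """One Bayesian step over several working hypotheses: weight each
    prior by the likelihood of the new finding under that hypothesis,
    then renormalize so every hypothesis stays explicitly in play."""
    unnorm = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnorm.values())
    return {h: v / total for h, v in unnorm.items()}

# Three hypothetical working diagnoses, and how likely a new test
# result would be under each.
posterior = update_hypotheses(
    {"dx_A": 0.5, "dx_B": 0.3, "dx_C": 0.2},
    {"dx_A": 0.2, "dx_B": 0.7, "dx_C": 0.5},
)
# dx_B now leads, but dx_A and dx_C remain live hypotheses.
```

The contrast with premature closure is visible in the output: the leading diagnosis changes, yet no alternative is zeroed out, so later evidence can still overturn it.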
Shared decision-making research has examined how uncertainty should be communicated to patients. A meta-analysis by Stacey and colleagues (2017) in the Cochrane Database of Systematic Reviews, covering 105 randomized controlled trials and 31,000 patients, found that decision aids -- tools that present options, outcomes, and probabilities to patients facing preference-sensitive medical decisions -- improved patient knowledge, reduced uncertainty-driven decisional conflict, and increased participation in decision-making without worsening health outcomes. The trials covered decisions from hip replacement to cancer screening to cardiovascular risk management. The finding is consistent with the broader principle that structured uncertainty quantification, even imperfect, produces better decisions than intuitive clinical judgment delivered with false authority. Patients who understood the probability distributions of outcomes made decisions more aligned with their actual values and reported less regret than those who received only qualitative guidance.
References and Further Reading
Knight, F. H. (1921). Risk, Uncertainty, and Profit. Boston: Houghton Mifflin. — The foundational text establishing the distinction between measurable risk and unmeasurable uncertainty (Knightian uncertainty), still the canonical reference for decision theory.
Kay, J., & King, M. (2020). Radical Uncertainty: Decision-Making Beyond the Numbers. New York: W.W. Norton. — Argues that most important decisions involve radical uncertainty that cannot be resolved by probability calculations; critiques expected-utility theory as a model for real-world choice.
Taleb, N. N. (2007). The Black Swan: The Impact of the Highly Improbable. New York: Random House. — Explores how rare, high-impact events dominate outcomes in uncertain domains and why standard probability models systematically underestimate tail risk.
Taleb, N. N. (2012). Antifragile: Things That Gain from Disorder. New York: Random House. — Develops strategies (including the barbell approach) for building systems and decisions that benefit from volatility and uncertainty rather than merely surviving it.
Simon, H. A. (1956). "Rational Choice and the Structure of the Environment." Psychological Review, 63(2), 129-138. https://doi.org/10.1037/h0042769 — Introduces satisficing and bounded rationality as more realistic models of decision-making under cognitive and informational constraints.
Tetlock, P. E., & Gardner, D. (2015). Superforecasting: The Art and Science of Prediction. New York: Crown. — Draws on the Good Judgment Project to show how calibrated probabilistic thinking and disciplined Bayesian updating measurably improve forecasting accuracy.
Kahneman, D., Slovic, P., & Tversky, A. (Eds.). (1982). Judgment Under Uncertainty: Heuristics and Biases. Cambridge: Cambridge University Press. — The foundational research collection demonstrating systematic cognitive biases that distort probability judgments and decisions under uncertainty.
Lempert, R. J., Popper, S. W., & Bankes, S. C. (2003). Shaping the Next One Hundred Years: New Methods for Quantitative, Long-Term Policy Analysis. Santa Monica: RAND Corporation. — Introduces the robust decision-making (RDM) framework: evaluating strategies across many plausible futures rather than optimizing against a single forecast.
Dixit, A. K., & Pindyck, R. S. (1994). Investment Under Uncertainty. Princeton: Princeton University Press. — The standard academic treatment of real options theory, explaining why the option to defer irreversible commitments has substantial economic value under uncertainty.
Schwartz, P. (1991). The Art of the Long View: Planning for the Future in an Uncertain World. New York: Currency Doubleday. — The definitive practical guide to scenario planning as a method for strategic decision-making when the future cannot be predicted, drawing on Shell's experience with the 1973 oil crisis.
Additional Sources Cited in This Article:
- Duke, A. (2018). Thinking in Bets: Making Smarter Decisions When You Don't Have All the Facts. New York: Portfolio. [Probabilistic thinking, separating decision quality from outcomes]
- Gigerenzer, G., Todd, P. M., & ABC Research Group. (1999). Simple Heuristics That Make Us Smart. Oxford: Oxford University Press. [Fast and frugal heuristics under bounded rationality]
- Kahneman, D. (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux. [System 1/System 2 thinking, cognitive biases]
- Klein, G. (1998). Sources of Power: How People Make Decisions. Cambridge, MA: MIT Press. [Recognition-primed decisions; pre-mortem methodology]
- Ries, E. (2011). The Lean Startup. New York: Crown Business. [Build-measure-learn as uncertainty management]
- Silver, N. (2012). The Signal and the Noise: Why So Many Predictions Fail—But Some Don't. New York: Penguin Press. [Bayesian reasoning applied to forecasting]
- Weick, K. E., & Sutcliffe, K. M. (2007). Managing the Unexpected: Resilient Performance in an Age of Uncertainty (2nd ed.). San Francisco: Jossey-Bass. [High-reliability organizations and uncertainty management]
Frequently Asked Questions
What is decision making under uncertainty?
Decision making under uncertainty involves choosing between options when you don't know all the facts or can't predict outcomes with certainty.
How is uncertainty different from risk?
Risk involves known probabilities (like a dice roll), while uncertainty means you can't even assign probabilities to outcomes.
What strategies help with uncertain decisions?
Think probabilistically, plan for multiple scenarios, prefer reversible choices, gather the most critical information first, and accept that knowledge will remain imperfect.
Should you wait for more information before deciding?
Only if the value of information exceeds the cost of delay. Sometimes deciding with incomplete data is better than waiting.
How do experts handle uncertainty?
They use frameworks, make explicit assumptions, plan for multiple scenarios, and update decisions as new information emerges.