Probabilistic Thinking: How Better Decisions Are Made

The Illusion of Certainty

A product manager announces: "This feature will increase conversions." An investor declares: "This startup will succeed." A consultant promises: "This strategy will work."

Each statement sounds confident. Each is almost certainly wrong—not because the speaker is incompetent, but because they're expressing false certainty about inherently uncertain outcomes.

Reality doesn't deal in guarantees. Features sometimes increase conversions and sometimes don't. Startups succeed at a base rate of roughly 10%. Strategies work in some contexts and fail in others. The world is probabilistic, yet most people think and communicate in absolutes.

This mismatch—between probabilistic reality and binary thinking—produces systematic errors:

  • Overconfidence in predictions (because you didn't quantify uncertainty)
  • Inability to learn from outcomes (because you framed decisions as right/wrong rather than good-process/bad-process)
  • Poor risk assessment (because you ignored the full distribution of possible outcomes)
  • Missed opportunities (because something with 40% success odds looks like "failure" in binary framing)

Probabilistic thinking means reasoning in likelihoods and distributions rather than certainties. It's not just for statisticians or gamblers—it's how reality works, and matching your thinking to reality produces better decisions.

Why Binary Thinking Fails

The Certainty Trap

Binary thinking forces every uncertain situation into false categories:

  • "This will work" vs. "This won't work"
  • "This is true" vs. "This is false"
  • "This person is competent" vs. "This person is incompetent"

Reality offers no such clean divisions. That feature has maybe a 60% chance of increasing conversions. That claim is probably true with some important caveats. That person is highly competent in domain X and moderately competent in domain Y.

The damage: When you compress uncertainty into false certainty, you:

  1. Lose calibration → Can't distinguish 60% confidence from 90% confidence
  2. Can't learn properly → Good decisions that produced bad outcomes look like "mistakes"
  3. Misallocate resources → Treat 55% opportunities the same as 95% opportunities
  4. Become overconfident → Every prediction feels equally certain

The Outcome Bias Trap

Annie Duke (professional poker player, decision strategist) calls this "resulting"—judging decision quality by outcomes rather than process.

Binary logic: "I chose X. X failed. Therefore I made a bad decision."

Probabilistic logic: "I chose X because it had 70% success odds and the highest expected value. It failed (the 30% outcome). The decision was still sound."

Poker players understand this deeply. You can make the mathematically optimal bet and lose the hand. That doesn't mean the bet was wrong—it means you encountered the 30% outcome.

In most domains, people lack this clarity. A hiring decision that doesn't work out gets labeled a "bad hire" rather than a "reasonable bet that didn't pan out." This destroys your ability to learn from experience, because you're evaluating your process against noisy data (a single outcome).

Example: The Interview Problem

Binary thinking:
Candidate seems strong → Hire
Candidate seems weak → Don't hire

Result: You're making decisions as if interviews perfectly predict performance. They don't. Even a strong interview might translate into something like a 65-70% chance of on-the-job success, not a guarantee.

Probabilistic thinking:
Strong interview → 70% confidence this person succeeds
Weak interview → 30% confidence this person succeeds
Also consider: base rate of success for this role, reference data, work samples, trial projects

Result: You're explicitly accounting for uncertainty. You might still hire the strong-interview candidate, but you also plan for the 30% chance they don't work out (shorter initial commitment, clearer success metrics, backup plans).

Core Principles of Probabilistic Thinking

1. Express Uncertainty Numerically

Replace vague language with numbers:

| Vague Expression | Probabilistic Expression | Decision Implication |
|---|---|---|
| "Probably will work" | 60-70% likely to work | Moderate confidence—have backup plan |
| "Very likely to work" | 85-95% likely to work | High confidence—commit resources |
| "Might work" | 35-50% likely to work | Low confidence—cheap test first |
| "Almost certain" | 95%+ likely | Near-certainty—plan as if it will happen |

Why numbers matter: "Probably" means different things to different people (anywhere from 50% to 90% in studies). Numerical probability forces precision.

Practice technique: Before important decisions, write down your confidence level as a percentage. Just the act of quantifying forces clarity.

2. Think in Distributions, Not Point Estimates

Most predictions are point estimates: "This will take 3 weeks." "Revenue will be $500K."

Probabilistic approach: Estimate the distribution of outcomes.

| Point Estimate | Probabilistic Distribution |
|---|---|
| "Project takes 3 weeks" | P10: 2 weeks, P50: 3.5 weeks, P90: 6 weeks |
| "Revenue is $500K" | P10: $350K, P50: $500K, P90: $750K |
| "Candidate rating: 8/10" | P10: Works out poorly (5/10), P50: 8/10, P90: Exceptional (9.5/10) |

P10/P50/P90 represents the 10th, 50th, and 90th percentiles—giving you a sense of the range and asymmetry of outcomes.

Key insight: Most distributions are asymmetric. Projects usually overrun more than they underrun. Revenue upside often exceeds downside. Recognizing this asymmetry changes planning.

Application: When someone gives you a confident point estimate, ask: "What's the range? What's the 10th percentile worst case and 90th percentile best case?" This forces acknowledgment of uncertainty.
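Where you have samples, from history or a quick Monte Carlo, these percentiles fall straight out of the data. Below is a minimal sketch assuming a hypothetical right-skewed (lognormal) model of project duration; the parameters are illustrative choices tuned to roughly reproduce the 2 / 3.5 / 6-week row above, not values from any real project.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model: project duration in weeks, right-skewed (lognormal).
# Median ~3.5 weeks, sigma chosen so P10/P90 land near 2 and 6 weeks.
durations = rng.lognormal(mean=np.log(3.5), sigma=0.45, size=100_000)

p10, p50, p90 = np.percentile(durations, [10, 50, 90])
print(f"P10 ~ {p10:.1f} weeks, P50 ~ {p50:.1f} weeks, P90 ~ {p90:.1f} weeks")

# The upside tail (P90 - P50) is wider than the downside (P50 - P10),
# which is the asymmetry the key insight above describes.
print(f"upside spread {p90 - p50:.1f} vs downside spread {p50 - p10:.1f}")
```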

3. Update Beliefs from Evidence (Bayesian Updating)

Thomas Bayes provided the mathematical framework. The intuition: Start with a prior belief. Encounter evidence. Update your belief proportionally to how diagnostic that evidence is.

Formula intuition (not full math):

Posterior probability = Prior probability × (Probability of seeing this evidence if the belief is true / Probability of seeing this evidence overall)

Example - Evaluating a new hire's performance after Month 1:

Prior: Based on interview/references, 70% confident they'll succeed

Evidence observed: Strong Month 1 performance (completes projects, positive feedback)

Question: How much should this update your confidence?

Key factor: How diagnostic is "strong Month 1"?

  • If 90% of eventual high-performers have strong Month 1, but also 40% of eventual low-performers have strong Month 1 (honeymoon effect), then strong Month 1 is only moderately diagnostic
  • Your confidence might update to 80% (not 95%)

Contrast - Wrong update pattern:

  • Binary thinker: "Strong Month 1! I was right, they're definitely great" (jumps to 100%)
  • Bayesian thinker: "Strong Month 1 is evidence, but not overwhelming evidence given base rates" (updates to 80%)

General principle: Update incrementally based on evidence strength. Don't swing from 50% to 95% on weak evidence. Don't stay at 50% when strong evidence arrives.
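As a minimal sketch of that incremental update, here is the Month 1 example in code. The bayes_update helper is my own naming; the inputs (a 70% prior, strong first months from 90% of eventual high performers and 40% of eventual low performers) come from the example above. Exact Bayes gives roughly 84%, consistent with "around 80%, not 95%."

```python
def bayes_update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Posterior probability after observing the evidence (Bayes' rule)."""
    p_evidence = prior * p_evidence_if_true + (1 - prior) * p_evidence_if_false
    return prior * p_evidence_if_true / p_evidence

# Prior from interview/references: 70% chance the hire succeeds.
# Strong Month 1 shows up in 90% of eventual high performers,
# but also in 40% of eventual low performers (honeymoon effect).
posterior = bayes_update(prior=0.70, p_evidence_if_true=0.90, p_evidence_if_false=0.40)
print(f"Updated confidence: {posterior:.0%}")   # ~84%, not 95%+
```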

4. Track Calibration

Calibration means your confidence levels match reality. If you say "70% confident" across 100 predictions, roughly 70 should occur.

Philip Tetlock's superforecasters excel at calibration. They don't have secret information—they're just better at assessing their own uncertainty.

How to build calibration:

  1. Make probability estimates (not just predictions)
  2. Record them (you can't improve without data)
  3. Check outcomes
  4. Calculate calibration error

Example tracking:

| Prediction | Your Confidence | Outcome | Result |
|---|---|---|---|
| Candidate A succeeds | 80% | Success | Correct |
| Feature increases conversions | 60% | No increase | Correct (you said 60%, not certain) |
| Project finishes in 3 weeks | 90% | Took 5 weeks | Overconfident |
| Partnership succeeds | 50% | Success | Correct (coin flip) |

After 50-100 tracked predictions, patterns emerge:

  • Are your 70% predictions actually 70%? Or more like 50%?
  • Are you systematically overconfident? Underconfident?
  • Which domains are you well-calibrated in vs. poorly calibrated?

Result: Your probability estimates become increasingly accurate, which directly improves decision quality.
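A minimal sketch of such a tracker, assuming predictions are stored as (stated confidence, did it happen) pairs; bucketing into 10% bins is one simple convention, not the only option. The sample records mirror the table above.

```python
from collections import defaultdict

# (stated confidence, whether the predicted event actually occurred)
records = [
    (0.8, True),   # "Candidate A succeeds" at 80% -> succeeded
    (0.6, False),  # "Feature increases conversions" at 60% -> no increase
    (0.9, False),  # "Project finishes in 3 weeks" at 90% -> took 5 weeks
    (0.5, True),   # "Partnership succeeds" at 50% -> succeeded
    # ... in practice, collect 50-100+ records before reading much into it
]

buckets = defaultdict(list)
for confidence, happened in records:
    buckets[round(confidence, 1)].append(happened)

# Compare stated confidence with the actual hit rate in each bucket.
for confidence in sorted(buckets):
    outcomes = buckets[confidence]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"stated {confidence:.0%}: actual {hit_rate:.0%} over {len(outcomes)} predictions")
```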

5. Consider Base Rates

Base rate: The general frequency of an event in the reference class.

Most people ignore base rates (called the base rate fallacy). They focus on specific details while ignoring the statistical foundation.

Classic example: Startup success rates

Specific thinking: "This founder is smart, the idea is good, the market is large → high chance of success"

Probabilistic thinking: "Base rate: ~10% of VC-backed startups return capital. Even with positive signals, maybe 20-30% chance of success."

Why base rates matter: They're your prior probability before considering specific evidence. Specific evidence adjusts this prior, but rarely by as much as people think.

| Scenario | Base Rate | Specific Evidence | Reasonable Posterior |
|---|---|---|---|
| Startup success | 10% | Strong team, validated problem | 25-35% |
| New product success | 40% (industry average) | Extensive testing, pilot success | 60-70% |
| Candidate success | 50% (historical rate for role) | Great interview, strong references | 70-80% |
| Medical diagnosis | 1% prevalence | Positive test (90% accurate) | ~10% (not 90%!) |

Notice the medical diagnosis case: Even with a highly accurate test, a positive result only raises probability from 1% to ~10% because the base rate is so low. This is why probabilistic thinking matters—intuition fails here.
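The same arithmetic, as a short sketch. It assumes "90% accurate" means 90% sensitivity with a 10% false positive rate, which is the usual reading of this textbook example.

```python
prevalence = 0.01           # base rate: 1% of the population has the condition
sensitivity = 0.90          # P(positive test | condition)
false_positive_rate = 0.10  # P(positive test | no condition)

# Overall probability of a positive test, then Bayes' rule.
p_positive = prevalence * sensitivity + (1 - prevalence) * false_positive_rate
p_condition_given_positive = prevalence * sensitivity / p_positive

print(f"P(condition | positive test) = {p_condition_given_positive:.1%}")  # ~8.3%, i.e. roughly 10%
```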

Application: Before evaluating specific evidence, ask: "What's the base rate for this reference class?" Use that as your starting point.

Practical Applications

Decision-Making Under Uncertainty

Standard approach: Analyze situation → Pick best option → Hope it works

Probabilistic approach:

  1. Generate options
  2. Estimate outcome distributions for each (P10/P50/P90)
  3. Assign probabilities to different scenarios
  4. Calculate expected value
  5. Consider variance/risk (not just average outcome)
  6. Choose best distribution (not necessarily highest average)

Example - Hiring decision:

Option A: Senior hire ($200K salary)

  • P10: Moderate contributor ($300K value)
  • P50: Strong contributor ($600K value)
  • P90: Exceptional contributor ($1M+ value)
  • Probability distribution: 20% low, 60% middle, 20% high

Option B: Junior hire + training ($100K total)

  • P10: Doesn't work out ($50K value)
  • P50: Solid contributor after 6 months ($300K value)
  • P90: Develops into senior in 18 months ($500K value)
  • Probability distribution: 30% low, 50% middle, 20% high

Expected value calculation:

  • Option A: 0.2($300K) + 0.6($600K) + 0.2($1M) = $620K expected value
  • Option B: 0.3($50K) + 0.5($300K) + 0.2($500K) = $265K expected value
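The same calculation as a small sketch in code; the expected_value helper and the (probability, outcome) pairs are just one convenient way to lay it out.

```python
def expected_value(scenarios):
    """Probability-weighted average outcome over (probability, outcome) pairs."""
    assert abs(sum(p for p, _ in scenarios) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * outcome for p, outcome in scenarios)

option_a = [(0.2, 300_000), (0.6, 600_000), (0.2, 1_000_000)]  # senior hire
option_b = [(0.3, 50_000), (0.5, 300_000), (0.2, 500_000)]     # junior hire + training

print(f"Option A EV: ${expected_value(option_a):,.0f}")  # $620,000
print(f"Option B EV: ${expected_value(option_b):,.0f}")  # $265,000
```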

But also consider:

  • Downside risk: Option B has higher risk of near-zero value
  • Opportunity cost: Option A costs $200K vs. $100K (that $100K difference has alternative uses)
  • Option value: Option B preserves flexibility (can still hire senior later if junior doesn't work)

Result: Not a formulaic answer, but a structured way to think about trade-offs using probability.

Forecasting and Planning

Planning fallacy: People systematically underestimate time/cost and overestimate benefits.

Probabilistic antidote:

Instead of: "This project takes 8 weeks"

Use:

  • Best case (P10): 6 weeks (everything goes right)
  • Likely case (P50): 10 weeks (typical issues)
  • Worst case (P90): 16 weeks (significant problems)

Then plan for the P70-P80 outcome, not the P50. Most organizations plan for the median case, then act surprised when things take longer.

Why P70-P80? It's realistic without being paranoid. You hit your target 70-80% of the time, which builds trust and allows for realistic roadmapping.
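If all you have is the three-point estimate, one rough way to back out a P75-ish commitment is to interpolate between the percentiles, as sketched below. Linear interpolation over three points is a crude approximation of the real distribution, so treat the result as a ballpark.

```python
import numpy as np

# Three-point estimate from above: P10 = 6 weeks, P50 = 10 weeks, P90 = 16 weeks.
percentiles = [0.10, 0.50, 0.90]
weeks = [6, 10, 16]

# Interpolate to roughly the 75th percentile.
p75_estimate = np.interp(0.75, percentiles, weeks)
print(f"Commit to around {p75_estimate:.0f} weeks, not the 10-week median")  # ~14 weeks
```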

Risk Assessment

Binary thinking: "Is this risky?" → Yes/No

Probabilistic thinking: "What's the distribution of outcomes? What's the downside risk? What's the upside potential?"

Risk assessment framework:

| Factor | Assessment Method |
|---|---|
| Downside risk | What's the P10 outcome? Can we absorb it? |
| Upside potential | What's the P90 outcome? Is it transformative? |
| Expected value | Probability-weighted average outcome |
| Variance | How wide is the outcome distribution? |
| Skew | Is distribution symmetric or asymmetric? |
| Tail risk | What are the extreme (<P5) scenarios? |

Example - Asymmetric risk assessment:

Opportunity A:

  • Expected value: $100K
  • Distribution: Narrow (very predictable)
  • P10: $80K, P90: $120K

Opportunity B:

  • Expected value: $100K
  • Distribution: Wide (highly uncertain)
  • P10: -$50K, P90: $400K

Same expected value, completely different risk profiles.

  • Choose A if you need predictability and can't absorb downside
  • Choose B if you have asymmetric upside (small downside, massive upside) and can tolerate variance

Key insight: Expected value is necessary but insufficient. You also need to evaluate the shape of the distribution.

Strategic Decisions

Example - Market entry decision:

Binary thinking: "Should we enter Market X?" → Analyze, decide, commit

Probabilistic thinking:

Scenario modeling:

| Scenario | Probability | Outcome | Expected Value |
|---|---|---|---|
| Market grows rapidly | 25% | Dominate early, $50M value | $12.5M |
| Market grows slowly | 40% | Modest success, $10M value | $4M |
| Market stagnates | 25% | Breakeven, $0 value | $0 |
| Market contracts | 10% | Failure, -$5M value | -$0.5M |
| Total expected value | | | $16M |

But also consider:

  • What's the variance? (25% chance of $50M vs. 10% chance of -$5M)
  • What's the option value of waiting for more information?
  • What's the opportunity cost of capital and attention?
  • What's the irreversibility of the decision? (Can we exit easily if wrong?)

Result: Not just "yes/no" but "yes at what scale, with what staging, and what exit triggers."

Advanced Techniques

Expected Value Thinking

Expected Value (EV) = Σ (Probability × Outcome)

Why it matters: Intuition fails at comparing options with different probability/outcome structures.

Example - Venture investment:

Investment A:

  • 80% chance of 1.5× return → EV = 0.8 × 1.5 = 1.2×
  • 20% chance of 0× return → EV = 0.2 × 0 = 0
  • Total EV: 1.2× return

Investment B:

  • 10% chance of 20× return → EV = 0.1 × 20 = 2.0×
  • 90% chance of 0× return → EV = 0.9 × 0 = 0
  • Total EV: 2.0× return

Counterintuitive result: Investment B is better in expectation despite 90% failure rate, because the upside is asymmetric.

This is how venture capital works: Most investments fail, but the few successes return enough to more than compensate.

When to use EV thinking:

  • ✅ Repeated decisions (law of large numbers applies)
  • ✅ You can afford downside of individual bets
  • ✅ Outcomes are quantifiable
  • ❌ One-shot decisions with catastrophic downside (survival matters more than EV)
  • ❌ Outcomes aren't independent (correlated risks)

Kelly Criterion: Optimal Bet Sizing

Question: You have an edge (positive expected value bet). How much should you risk?

Too little: You don't capitalize on your edge
Too much: Variance kills you even if EV is positive

Kelly Criterion provides the mathematically optimal bet size:

f = (bp - q) / b

Where:

  • f = fraction of capital to bet
  • b = net odds received on a win (profit per $1 staked; b = 1 for an even-money bet)
  • p = probability of winning
  • q = probability of losing (1-p)

Example: 60% chance of doubling money (b=1, p=0.6, q=0.4)

f = (1×0.6 - 0.4) / 1 = 0.2 → Bet 20% of capital

Key insight: Even with a 60% win rate on an even-money bet, the optimal size is only 20% of capital. Betting more increases variance to dangerous levels.

Practical application (not literal betting):

  • Hiring: How much of your team capacity should this role consume?
  • Product bets: How many resources to allocate to uncertain new feature?
  • Career moves: How much career capital to risk on uncertain opportunity?

Modified Kelly: Most practitioners use fractional Kelly (½ Kelly or ¼ Kelly) to reduce variance while still capturing most of the edge.
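A minimal sketch of the formula and the fractional variant; kelly_fraction is my own naming, not a standard library function.

```python
def kelly_fraction(p: float, b: float, fraction: float = 1.0) -> float:
    """Kelly bet size f = (b*p - q) / b, optionally scaled (e.g. 0.5 for half-Kelly).

    p: probability of winning
    b: net odds received on a win (profit per unit staked)
    """
    q = 1.0 - p
    f = (b * p - q) / b
    return max(0.0, f * fraction)   # never bet on a negative-edge proposition

print(kelly_fraction(p=0.60, b=1.0))                 # 0.2 -> bet 20% of capital
print(kelly_fraction(p=0.60, b=1.0, fraction=0.5))   # 0.1 -> half-Kelly: 10%
```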

Scenario Planning

When facing high uncertainty, don't predict the future—model multiple plausible futures and prepare for each.

Process:

  1. Identify key uncertainties (factors that strongly affect outcomes but are highly uncertain)
  2. Generate 3-5 distinct scenarios (not best/worst/likely—use independent drivers)
  3. For each scenario, define implications
  4. Identify robust strategies (work across multiple scenarios)
  5. Define triggers (early signals indicating which scenario is unfolding)

Example - Tech company strategic planning:

Uncertainties: AI development pace, regulation, competition

Scenarios:

| Scenario | Description | Probability | Implication |
|---|---|---|---|
| "AI Boom" | Rapid AI capabilities, light regulation | 30% | Aggressive AI investment, talent competition |
| "Regulated AI" | Moderate AI growth, heavy regulation | 25% | Compliance focus, partnership with regulators |
| "AI Winter" | Hype exceeds reality, stagnation | 20% | Focus on non-AI core, cost efficiency |
| "Winner-Take-All" | One player dominates (not us) | 25% | Niche specialization or pivot |

Robust strategies (work across multiple scenarios):

  • Build flexible architecture (not locked into one AI paradigm)
  • Develop proprietary data assets (valuable in most scenarios)
  • Maintain strong financial position (survive any scenario)

Triggers (indicate which scenario is unfolding):

  • Regulatory proposals (→ Regulated AI)
  • Breakthrough announcements (→ AI Boom)
  • Funding rounds drying up (→ AI Winter)
  • Market consolidation (→ Winner-Take-All)

Result: You're not trying to predict which scenario happens—you're prepared for any of them.

Common Pitfalls

False Precision

Mistake: Treating probability estimates as exact when they're inherently uncertain.

"There's a 67.3% chance this succeeds" (false precision—you can't estimate that accurately)

Better: "Somewhere between 60-75% likely to succeed" (acknowledging estimation uncertainty)

Rule of thumb: For most business/personal decisions, round to 10% increments (30%, 40%, 50%, etc.). Finer precision is usually false confidence.

Ignoring Correlation

Mistake: Treating independent probabilities as if events are unrelated when they're actually correlated.

Example: "Each of our 10 product bets has 60% success rate, so we expect 6 to succeed"

Reality: If they're all in the same market facing the same macro conditions, they're positively correlated. Either most succeed (favorable environment) or most fail (unfavorable environment).

Implication: Diversification provides less protection than simple probability suggests when risks are correlated.
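A small simulation makes the point concrete. It is a sketch under assumed numbers: each bet still succeeds 60% of the time on average, but in the correlated version a shared market environment pushes all ten bets up or down together; the 85%/35% conditional rates are illustrative choices that preserve the 60% average.

```python
import numpy as np

rng = np.random.default_rng(42)
n_bets, n_trials = 10, 100_000

# Independent bets: each succeeds with probability 0.6 on its own.
independent = rng.random((n_trials, n_bets)) < 0.60

# Correlated bets: a shared environment is favorable half the time.
# Conditional success rates (0.85 favorable, 0.35 unfavorable) average to 0.6.
favorable = rng.random((n_trials, 1)) < 0.5
p_success = np.where(favorable, 0.85, 0.35)
correlated = rng.random((n_trials, n_bets)) < p_success

for name, outcomes in [("independent", independent), ("correlated", correlated)]:
    successes = outcomes.sum(axis=1)
    print(f"{name}: mean {successes.mean():.1f}, std {successes.std():.1f}, "
          f"P(2 or fewer succeed) {np.mean(successes <= 2):.1%}")
```

Both versions average six successes, but the correlated one has roughly double the spread and a far higher chance that most bets fail together, which is exactly the protection diversification fails to provide.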

Over-Optimizing for EV

Mistake: Maximizing expected value while ignoring risk of ruin.

Example: Even if a bet has positive EV, if the downside is bankruptcy, you shouldn't take it (no matter how good the odds).

Taleb's point: Ergodicity matters. Expected value assumes you can repeat the bet many times. Some decisions are one-shot with absorbing barriers (once you lose, you can't play again).

Practical rule: Never risk more than you can afford to lose, even on positive-EV bets, if losing means permanent damage.

Probability Without Consequences

Mistake: Focusing on likelihood while ignoring magnitude of outcomes.

"There's only a 5% chance of this disaster scenario" (sounds low, but if disaster = company failure, 5% is enormous)

Better framing: Combine probability × magnitude

  • 5% chance of losing $10 = Expected loss of $0.50 (trivial)
  • 5% chance of losing $10M = Expected loss of $500K (major concern)
  • 5% chance of company failure = Unacceptable risk requiring mitigation

Principle: Low-probability, high-impact events often dominate expected value calculations.

Updating on Noise

Mistake: Adjusting probability estimates based on non-diagnostic information.

Example: You estimate 60% chance a hire succeeds. After one day, they make a good comment in a meeting. You update to 75%.

Problem: One comment is extremely weak evidence (high noise-to-signal ratio). Updating significantly on weak evidence produces volatility without accuracy.

Better approach: Update proportionally to evidence strength. One positive signal might move you from 60% → 62% (not 60% → 75%).

Tetlock's finding: Superforecasters update frequently but incrementally. Amateurs update infrequently but dramatically.

Building Probabilistic Intuition

Daily Micro-Calibration

Most people lack intuition for probability because they never practice.

Solution: Make small predictions daily and track outcomes.

Examples:

  • "Will this meeting start on time?" (estimate probability)
  • "Will this email get a response within 2 hours?" (estimate probability)
  • "Will this PR be approved without changes?" (estimate probability)

After 100-200 micro-predictions, your calibration improves dramatically because you get immediate feedback.

Key: Actually write down your estimates (even in a simple note). Memory is unreliable—you'll convince yourself you "knew it all along" without a record.

Reference Class Forecasting

When estimating probability, don't just evaluate the specific situation—find comparable reference classes and use their base rates.

Example - Estimating project timeline:

Inside view: "This project is unique, we have great people, we're motivated → 3 months"

Reference class view: "Our last 10 projects of similar scope took: 2, 4, 5, 3, 6, 4, 7, 5, 4, 8 months → median is 4.5 months"

Better estimate: Start with reference class (4.5 months), then adjust based on specific differences (but modestly—most "this time is different" thinking is overconfident).
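As a sketch, the reference-class arithmetic is a couple of lines; the P80 figure is an extra output worth checking when you also need a conservative commitment.

```python
import numpy as np

past_projects_months = [2, 4, 5, 3, 6, 4, 7, 5, 4, 8]  # the last 10 similar projects

print(f"median: {np.median(past_projects_months):.1f} months")        # 4.5
print(f"P80: {np.percentile(past_projects_months, 80):.1f} months")   # 6.2
```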

Why reference classes work: They incorporate all the complexity and surprises you can't anticipate. Your inside view is inherently overconfident because it only considers factors you can see.

Probability Calibration Games

Websites for practice:

  • PredictionBook.com: Make predictions, track accuracy
  • Metaculus.com: Forecasting tournaments with community scoring
  • Manifold Markets: Play-money prediction markets

Why games work: Immediate feedback on calibration errors. You quickly learn whether your "70% confident" matches reality.

Alternative - Peer calibration:

With colleagues, make monthly predictions about business outcomes. Track who's best calibrated. This creates accountability and competitive motivation to improve.

The "Bet Test"

Calibration check: Would you actually bet money at the odds your probability implies?

"I'm 80% confident this feature increases conversions"

Translation: At 80% confidence, the break-even bet is risking $80 to win $20 in profit (you collect $100 total if the feature works, and lose your $80 if it doesn't). Would you actually take that bet at those odds?

If not, you're not really 80% confident—maybe 60-65%.

This doesn't mean literally betting on decisions. It means using betting odds as a calibration tool to force intellectual honesty.
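A short sketch of the implied break-even stake; the break_even_stake function is my own naming, purely for illustration.

```python
def break_even_stake(confidence: float, profit_if_right: float) -> float:
    """Largest stake at which a bet at this confidence has non-negative expected value.

    At confidence p, risking up to p / (1 - p) dollars per dollar of profit breaks even.
    """
    return profit_if_right * confidence / (1.0 - confidence)

# "80% confident" implies indifference to risking $80 for $20 of profit
# (collect $100 total if right, lose the $80 stake if wrong).
print(break_even_stake(confidence=0.80, profit_if_right=20.0))   # 80.0
```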

From Probability to Action

Decision Rules Under Uncertainty

When to act despite uncertainty:

| Condition | Action Threshold |
|---|---|
| High reversibility | Act at 50-60% confidence (easy to undo) |
| Low cost of error | Act at 60-70% confidence |
| High opportunity cost of delay | Act at 65-75% confidence |
| Irreversible, high stakes | Wait for 80-90% confidence |
| Existential risk | Require 95%+ confidence or don't act |

Example - Product launch:

  • Beta launch (reversible): 60% confidence sufficient
  • Full marketing push (expensive): 75% confidence required
  • Enterprise commitment (contracts, SLAs): 85%+ confidence required

Key principle: Confidence threshold should match stakes and reversibility, not be uniform across decisions.

Communicating Probabilistically

Challenge: Most people aren't trained in probabilistic thinking. How do you communicate uncertainty without losing credibility?

Bad approach: "Maybe this works, maybe it doesn't" (sounds like you don't know)

Good approach: "Based on reference classes and available evidence, I estimate 70% confidence this succeeds, 20% it produces modest results, 10% it fails. Here's why..."

What this signals:

  • You've thought deeply (not guessing)
  • You've considered alternatives (not anchored)
  • You've acknowledged uncertainty (not overconfident)
  • You've quantified your belief (not vague)

For executive communication, translate probabilities to risk language:

  • 90%+ → "High confidence, proceed"
  • 70-90% → "Moderate confidence, manageable risk"
  • 50-70% → "Uncertain outcome, requires careful monitoring"
  • <50% → "Low confidence, recommend not proceeding OR cheap test first"

Probabilistic Organizations

Companies that build probabilistic culture:

Intel (under Andy Grove): "Only the paranoid survive"—always modeled multiple scenarios, never assumed one future

Amazon: Two-way door vs. one-way door decisions (reversibility determines confidence threshold)

Bridgewater (Ray Dalio): Expected value calculations required for major decisions, systematic tracking of prediction accuracy

Building probabilistic culture:

  1. Require probability estimates (not just yes/no recommendations)
  2. Track prediction accuracy (public calibration scores)
  3. Reward good process, not just good outcomes (don't punish 70% bets that fail)
  4. Model multiple scenarios (not just base case)
  5. Post-mortems analyze process (not just outcome)

Result: Decisions become better calibrated, organizations learn faster, risk assessment becomes more sophisticated.

The Meta-Skill

Probabilistic thinking isn't just about numbers. It's a fundamental epistemological shift: from believing the world is knowable with certainty to accepting it's knowable only in probabilities.

This shift produces:

Better predictions: By acknowledging uncertainty rather than pretending it doesn't exist

Better learning: By evaluating decision quality separately from outcome quality

Better risk management: By modeling distributions rather than single scenarios

Better communication: By quantifying confidence rather than projecting false certainty

Better humility: By recognizing how much you don't know

The world is probabilistic. Binary thinking—certain/not certain, true/false, success/failure—is a comfortable illusion that doesn't match reality.

Embrace the uncertainty. Model it. Quantify it. Update from evidence. Track your calibration.

Your decisions will improve not because you gain certainty, but because you stop pretending uncertainty doesn't exist.


Essential Readings

Foundational Texts:

  • Tetlock, P. E., & Gardner, D. (2015). Superforecasting: The Art and Science of Prediction. New York: Crown. [The definitive work on prediction and calibration]
  • Silver, N. (2012). The Signal and the Noise: Why So Many Predictions Fail—But Some Don't. New York: Penguin. [Applied probabilistic thinking across domains]
  • Duke, A. (2018). Thinking in Bets: Making Smarter Decisions When You Don't Have All the Facts. New York: Portfolio. [Poker player's guide to probabilistic decisions]

Bayesian Thinking:

  • Jaynes, E. T. (2003). Probability Theory: The Logic of Science. Cambridge: Cambridge University Press. [Rigorous Bayesian foundation]
  • Pearl, J., & Mackenzie, D. (2018). The Book of Why: The New Science of Cause and Effect. New York: Basic Books. [Causal reasoning, Bayesian networks]
  • McGrayne, S. B. (2011). The Theory That Would Not Die. New Haven: Yale University Press. [History of Bayesian thinking]

Decision-Making Under Uncertainty:

  • Kahneman, D. (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux. [Heuristics, biases, probability judgment errors]
  • Taleb, N. N. (2007). The Black Swan: The Impact of the Highly Improbable. New York: Random House. [Fat tails, extreme events, limits of prediction]
  • Kahneman, D., Sibony, O., & Sunstein, C. R. (2021). Noise: A Flaw in Human Judgment. New York: Little, Brown. [Variability in probabilistic judgment]

Expected Value and Risk:

  • Thorp, E. O. (2017). A Man for All Markets: From Las Vegas to Wall Street. New York: Random House. [Kelly criterion, advantage play, risk management]
  • Poundstone, W. (2005). Fortune's Formula: The Untold Story of the Scientific Betting System That Beat the Casinos and Wall Street. New York: Hill and Wang. [Kelly criterion history and application]

Forecasting and Calibration:

  • Armstrong, J. S. (2001). Principles of Forecasting: A Handbook for Researchers and Practitioners. New York: Springer. [Academic forecasting methods]
  • Makridakis, S., Wheelwright, S. C., & Hyndman, R. J. (1998). Forecasting: Methods and Applications. New York: Wiley. [Technical forecasting approaches]

Scenario Planning:

  • Schwartz, P. (1996). The Art of the Long View: Planning for the Future in an Uncertain World. New York: Currency. [Scenario planning methodology]
  • Ramirez, R., & Wilkinson, A. (2016). Strategic Reframing: The Oxford Scenario Planning Approach. Oxford: Oxford University Press.

Practical Applications:

  • Hubbard, D. W. (2014). How to Measure Anything: Finding the Value of Intangibles in Business (3rd ed.). New York: Wiley. [Applied probability in business decisions]
  • Mlodinow, L. (2008). The Drunkard's Walk: How Randomness Rules Our Lives. New York: Pantheon. [Intuitive probability failures]

Online Resources:

  • LessWrong (lesswrong.com) [Rationality community, Bayesian thinking]
  • 80,000 Hours (80000hours.org) [Expected value in career decisions]
  • GiveWell (givewell.org) [Expected value in philanthropy]