Key Decision-Making Terms People Misuse

When Precision Matters Most

A CEO announces: "We'll optimize our hiring process." What they mean: make it good enough.

An analyst reports: "There's an 80% probability this succeeds." What they mean: 80% is their credence (personal degree of belief), not an objective frequency.

A consultant advises: "Make the rational choice." What they mean: the reasonable choice given your values and constraints.

Imprecise decision-making language creates imprecise decisions.

When you confuse "risk" with "uncertainty," you apply the wrong analytical tools. When you confuse "optimizing" with "satisficing," you waste resources seeking perfection where "good enough" would work. When you confuse "correlation" with "causation," you make interventions that fail.

Herbert Simon (Nobel laureate, decision science): "The first step in problem-solving is problem representation—and that depends crucially on having the right language."

Decision-making terminology comes from multiple disciplines (economics, statistics, psychology, game theory, operations research), each with precise meanings. Casual usage blurs these distinctions, leading to poor decisions dressed in sophisticated language.

This guide clarifies the most commonly confused decision-making terms—because you can't make better decisions if you can't think precisely about what you're deciding.

Risk vs. Uncertainty

Risk

Definition (Frank Knight, 1921): Situations where the possible outcomes are known and probabilities can be meaningfully assigned.

Characteristics:

  • Known outcome space (you know what can happen)
  • Known or estimable probabilities
  • Calculable expected value
  • Statistical methods apply

Examples:

  • Casino games (known outcomes, known probabilities)
  • Insurance (actuarial tables based on large data)
  • Quality control (defect rates from production history)
  • Clinical trials (response rates in controlled populations)

Management approach: Calculate expected value, diversify, hedge, use statistical risk management.

Uncertainty

Definition (Knight): Situations where outcomes and/or probabilities are fundamentally unknown or unknowable.

Characteristics:

  • Unknown outcomes (unknown unknowns exist)
  • No meaningful probability assignments
  • Expected value calculations meaningless
  • Novel or unique situations

Examples:

  • Startup success (no stable reference class, each is unique)
  • Technological disruption timing
  • Geopolitical events (wars, revolutions)
  • Novel product market reception

Management approach: Build robustness, maintain optionality, adapt quickly, avoid catastrophic downside.

Why Confusion Is Dangerous

Misusing "risk" for uncertainty leads to false precision:

  • You calculate probabilities that don't exist
  • You feel confident based on models that don't apply
  • You miss unknown unknowns (factors you didn't model)

Example: Pre-2008 financial models treated the housing market as calculable risk when it was actually Knightian uncertainty. Result: catastrophic failure when unmodeled factors emerged.

Correct usage:

  • ✓ "This investment carries risk—here are the probability distributions"
  • ✓ "This venture faces uncertainty—we can't meaningfully estimate probabilities"
  • ✗ "This startup has a 73.2% probability of success" (false precision—uncertainty treated as risk)

Rational vs. Reasonable

Rational

Definition: Logically consistent according to formal axioms of rationality (transitivity, completeness, independence).

Core concept: Your preferences and choices obey mathematical consistency rules, independent of what you value.

Rational behavior:

  • If you prefer A to B and B to C, you prefer A to C (transitivity)
  • You can compare any two options (completeness)
  • You maximize expected utility given your preferences
  • Your choices don't depend on irrelevant alternatives

Example:

  • Rational: Preferring $100 to $50 (consistent with value-maximizing)
  • Also rational: Preferring leisure to money (consistent preference, even if others disagree)
  • Irrational: Preferring A to B, B to C, but C to A (intransitive, violates rationality axioms)

Reasonable

Definition: Sensible, appropriate, and justifiable given context, values, constraints, and available information.

Core concept: Good judgment in real-world contexts, incorporating factors formal rationality ignores.

Reasonable considerations:

  • Context and constraints matter
  • Values and ethics matter
  • Practical limitations matter
  • Social and relational factors matter
  • "Good enough" often beats "optimal"

Example:

  • Reasonable: Paying more for local business despite lower prices online (values community)
  • Reasonable: Choosing satisfactory career over optimal career (family considerations)
  • Reasonable: Using simple heuristic instead of complex calculation (speed/effort trade-off)

Why Confusion Is Dangerous

"Be rational" is often unhelpful advice because:

  1. Strictly rational behavior may be unreasonable (perfectly logical but practically foolish)
  2. Rationality is value-neutral (equally "rational" to be selfish or altruistic)
  3. Perfect rationality is impossible for boundedly rational humans

Example - "Rational" choice that's unreasonable:

  • Rational: Spend 40 hours calculating optimal vacation destination (if you value optimization)
  • Unreasonable: 40 hours of analysis for marginal improvement in vacation choice (poor time allocation)

Correct usage:

  • ✓ "Your preferences are irrational—they contradict each other" (logical inconsistency)
  • ✓ "That's a reasonable decision given your constraints" (sensible in context)
  • ✗ "Be rational—take the higher salary" (imposes values; reasonable choice depends on individual preferences)

Optimization vs. Satisficing

Optimization

Definition: Seeking the best possible solution from the available set—maximizing or minimizing an objective function.

Characteristics:

  • Exhaustive search or systematic algorithm
  • Finds global optimum (or proves one doesn't exist)
  • High computational cost
  • Requires well-defined objective function

When appropriate:

  • High stakes decisions where optimum matters significantly
  • Computational resources available
  • Objective function is clear
  • Search space is bounded and searchable

Example: Portfolio optimization (maximize return for given risk level)

Satisficing

Definition (Herbert Simon): Seeking a good enough solution that meets your aspiration level—stopping when criteria are satisfied without exhaustive search.

Characteristics:

  • Sequential search with stopping rule
  • Accepts first option meeting threshold
  • Low computational cost
  • Bounded rationality (recognizes cognitive limits)

When appropriate:

  • Diminishing returns to search (finding "best" has huge cost for small improvement)
  • Time pressure
  • Opportunity cost of search is high
  • "Good enough" is actually good enough

Example: Job search (accept first offer meeting salary, culture, growth criteria rather than interviewing at every company)
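The contrast can be sketched in code. This is a toy model, not a recommendation: offers are random scores (hypothetical values), optimization evaluates every option, and satisficing stops at the first option meeting an aspiration level.

```python
import random

random.seed(0)
# Hypothetical job offers: each has a quality score; evaluating one is costly.
offers = [random.uniform(0, 100) for _ in range(1000)]

# Optimization: exhaustive search examines every option to find the best.
best = max(offers)
optimize_cost = len(offers)  # evaluated all 1000 offers

# Satisficing: accept the first option meeting an aspiration level.
ASPIRATION = 90
satisfice_cost = 0
chosen = None
for offer in offers:
    satisfice_cost += 1
    if offer >= ASPIRATION:
        chosen = offer
        break

print(f"Optimum {best:.1f} found after {optimize_cost} evaluations")
print(f"Satisficed at {chosen:.1f} after {satisfice_cost} evaluations")
```

Typical result: satisficing finds a slightly worse option at a fraction of the search cost, which is exactly the trade-off Simon described.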

Why Confusion Is Dangerous

Calling everything "optimization" creates problems:

  1. Premature effort: Optimizing when satisficing would work wastes resources
  2. Analysis paralysis: Seeking optimum when you can't define it clearly
  3. False precision: "Optimizing" in domains where objective functions are unclear

Example - Misplaced optimization:

  • Claimed: "We're optimizing team productivity"
  • Reality: Productivity isn't a single variable; trade-offs exist; you're satisficing (making improvements until they're good enough)

Correct usage:

  • ✓ "Optimize this route" (clear objective: minimize distance/time)
  • ✓ "Satisfice on this decision" (good enough is sufficient, exhaustive search not worth cost)
  • ✗ "Optimize your life" (no clear objective function, actually satisficing on various dimensions)

Heuristic vs. Bias

Heuristic

Definition: A mental shortcut or rule of thumb that produces adequate (though imperfect) solutions with minimal cognitive effort.

Nature: Cognitive strategy—a tool your brain uses.

Evaluation: Context-dependent—useful in some situations, misleading in others.

Examples:

  • Availability heuristic: Estimate frequency by ease of recall
  • Recognition heuristic: If you recognize one option, choose it
  • Take-the-best heuristic: Decide based on most important cue, ignore others
  • 1/N heuristic: Divide resources equally among N options

Value: Speed, efficiency, often "good enough" accuracy.
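The take-the-best heuristic listed above is simple enough to sketch directly. The cue names and values here are hypothetical; the point is the mechanism of deciding on the single most valid cue that discriminates, ignoring the rest.

```python
# Take-the-best: compare two options on cues ordered by validity,
# deciding on the first cue where the options differ.

def take_the_best(option_a, option_b, cues):
    """Return the option favored by the first discriminating cue, or None."""
    for cue in cues:  # cues ordered from most to least valid
        a, b = option_a[cue], option_b[cue]
        if a != b:
            return option_a if a > b else option_b
    return None  # no cue discriminates

# "Which city is larger?" decided from binary cues (hypothetical data).
city_a = {"name": "A", "has_airport": 1, "is_capital": 0, "has_university": 1}
city_b = {"name": "B", "has_airport": 1, "is_capital": 1, "has_university": 0}

cues_by_validity = ["has_airport", "is_capital", "has_university"]
larger = take_the_best(city_a, city_b, cues_by_validity)
print(larger["name"])  # has_airport ties, is_capital discriminates → "B"
```

Note that the heuristic deliberately ignores the third cue once the second one decides: that is its efficiency, and also where bias can creep in if cue validity is misjudged.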

Bias

Definition: Systematic error in judgment that results when heuristics are applied inappropriately or cognitive processes malfunction.

Nature: Error pattern—what happens when heuristics fail.

Evaluation: Always negative (by definition, it's systematic error).

Examples (biases resulting from heuristic misapplication):

  • Availability bias: Overestimate rare but memorable events (availability heuristic applied where memory ≠ frequency)
  • Confirmation bias: Seek evidence confirming beliefs (motivated reasoning, not helpful heuristic)
  • Anchoring bias: Over-weight first number encountered (adjustment heuristic fails)

Cost: Systematic errors, poor judgments, predictable mistakes.

Why Confusion Is Dangerous

Conflating heuristic and bias leads to:

  1. Dismissing useful shortcuts: "That's just a heuristic" (meant as criticism, but heuristics are often optimal given constraints)
  2. Misunderstanding causation: Biases are often results of heuristics, not the heuristics themselves

Relationship:

  • Heuristic = mental shortcut (tool)
  • Bias = systematic error (failure mode of tool)

Example:

  • Availability heuristic (tool): Judge frequency by ease of recall
  • Works well when: Frequent things are memorable (usually true)
  • Produces availability bias (failure) when: Memorable things aren't actually frequent (terrorism, plane crashes)

Correct usage:

  • ✓ "Use the recognition heuristic—it's fast and often accurate in this context"
  • ✓ "Beware availability bias—memorable events aren't necessarily frequent"
  • ✗ "That's just a bias" (when describing a heuristic that's actually working well)

Probability vs. Likelihood vs. Credence

Probability

Definition (Frequentist): Long-run frequency of an outcome in repeated trials.

Requirements:

  • Repeatable event
  • Well-defined reference class
  • Empirical grounding

Example: "Probability this coin lands heads is 0.5" (if flipped infinitely, 50% would be heads)

Likelihood

Definition (Statistical): How well a hypothesis explains observed data. NOT the probability the hypothesis is true.

Context: Bayesian inference, model comparison.

Example: "Likelihood of data given hypothesis H" (how probable is this data if H were true?)

Not the same as: "Likelihood that H is true" (that would be posterior probability in Bayesian framework)

Credence

Definition (Bayesian/Subjective): Your degree of belief in a proposition, expressed as probability.

Characteristics:

  • Subjective (varies by person based on information, priors)
  • Can apply to unique events (not requiring repetition)
  • Should be updated via Bayes' theorem as evidence arrives

Example: "My credence that this startup succeeds is 0.6" (my belief based on available information, not claim about objective frequency)

Why Confusion Is Dangerous

Misusing these terms creates false precision or confusion:

  • "80% probability this project succeeds" → correct term: credence (projects aren't repeatable trials; this is your belief)
  • "Likelihood this hypothesis is true" → correct term: posterior probability (likelihood is P(data|hypothesis), not P(hypothesis|data))
  • "Probability Trump wins 2024" → correct term: credence or forecast (unique event, not a repeatable trial)

Correct usage:

  • ✓ "Probability of heads is 0.5" (frequentist, repeatable event)
  • ✓ "My credence in this theory is 0.7" (subjective belief)
  • ✓ "Likelihood of observations given hypothesis H is..." (technical Bayesian term)
  • ✗ "There's an 80% probability AI will be transformative" (unique event, not frequency; should be "credence")

Causation vs. Correlation

Correlation

Definition: Statistical relationship where two variables tend to vary together (X increases → Y increases, or X increases → Y decreases).

What it means: Variables are associated.

What it doesn't mean: One causes the other.

Measurement: Correlation coefficient (r) from -1 to +1.

Causation

Definition: Relationship where changes in X directly produce changes in Y (manipulating X changes Y).

Requirements (Hill's criteria, Pearl's causal calculus):

  • Temporal precedence (cause before effect)
  • Covariation (correlation exists)
  • No plausible alternative explanations
  • Mechanism (how X causes Y)
  • Intervention changes outcome

Gold standard: Randomized controlled trials (RCTs).

Why Confusion Is Dangerous

Correlation ≠ Causation fallacy leads to:

  1. Spurious causation: Treating correlations as causal when they're not
  2. Ineffective interventions: Changing X when it doesn't actually cause Y
  3. Missing confounds: Ignoring Z that causes both X and Y

Classic examples of correlation without causation:

  • Ice cream sales correlate with drowning deaths (confound: summer weather)
  • Number of firefighters correlates with fire damage (confound: fire size determines both)
  • Countries with more chocolate consumption have more Nobel laureates (confound: wealth)
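A small simulation makes the confound mechanism concrete, echoing the ice cream/drowning example: a hidden variable Z ("summer heat") drives both X and Y, so they correlate strongly even though neither causes the other. All numbers are illustrative.

```python
import random

random.seed(1)
n = 10_000
z = [random.gauss(0, 1) for _ in range(n)]           # confound: summer heat
x = [zi + random.gauss(0, 0.5) for zi in z]          # "ice cream sales"
y = [zi + random.gauss(0, 0.5) for zi in z]          # "drowning deaths"

def correlation(a, b):
    """Pearson correlation coefficient r, from -1 to +1."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    var_a = sum((ai - ma) ** 2 for ai in a)
    var_b = sum((bi - mb) ** 2 for bi in b)
    return cov / (var_a * var_b) ** 0.5

print(f"r(x, y) = {correlation(x, y):.2f}")  # strongly positive, zero causation
```

Here r(x, y) comes out around 0.8, yet intervening on x (banning ice cream) would leave y untouched, because the shared cause z does all the work.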

Correct usage:

  • ✓ "X and Y are correlated" (statistical association observed)
  • ✓ "X causes Y" (only after establishing causation via experiment, mechanism, or strong inference)
  • ✗ "X correlates with Y, therefore X causes Y" (causation requires more than correlation)

Strategy vs. Tactics

Strategy

Definition: High-level plan for achieving long-term objectives, including what to do and (crucially) what not to do.

Characteristics:

  • Integrated set of choices creating competitive advantage
  • Trade-offs: You can't be everything to everyone
  • Sustainable: Difficult for competitors to copy
  • Guides resource allocation: What to invest in, what to ignore

Questions strategy answers:

  • Where will we compete? (markets, segments)
  • How will we win? (competitive advantage)
  • What capabilities do we need?
  • What will we NOT do?

Example: "We'll serve premium customers with customized solutions, sacrificing scale for margin."

Tactics

Definition: Specific actions and maneuvers to execute strategy.

Characteristics:

  • Concrete actions: What you do day-to-day
  • Flexible: Can change as situations evolve
  • Implementation details: How strategy manifests
  • Shorter time horizon: Weeks to months vs. years

Example: "Run targeted LinkedIn ads, attend premium industry conferences, develop personalized onboarding process."

Why Confusion Is Dangerous

Calling tactics "strategy" creates problems:

  1. No real strategy: Collection of tactics without coherent direction
  2. Reactive behavior: Constantly changing "strategy" (actually tactics)
  3. No trade-offs: Trying to do everything without focus

Example - "Strategy" that's actually tactics:

  • ✗ "Our strategy is to increase social media presence, launch new features, and expand sales team" (those are tactics)
  • ✓ "Our strategy is to dominate premium segment by providing superior customization, accepting lower volume for higher margins" (makes trade-offs, guides tactics)

Correct usage:

  • ✓ "Strategy: Focus on enterprise customers" (high-level choice with trade-offs)
  • ✓ "Tactics: Cold email campaign, attend TechCrunch Disrupt, publish whitepapers" (specific actions executing strategy)
  • ✗ "Strategy: Do more marketing" (that's an action, not strategy)

Outcomes vs. Outputs

Outputs

Definition: The direct products or deliverables of your activities—what you produce.

Characteristics:

  • Under your control
  • Measurable immediately
  • Activity-focused
  • Necessary but insufficient

Examples:

  • Lines of code written
  • Number of sales calls made
  • Blog posts published
  • Features shipped

Outcomes

Definition: The actual results or changes that outputs create—the impact.

Characteristics:

  • Partially beyond your control
  • Take time to materialize
  • Results-focused
  • What actually matters

Examples:

  • Working software (not just code volume)
  • Sales revenue (not just call volume)
  • Audience engagement (not just posts published)
  • User adoption (not just features shipped)

Why Confusion Is Dangerous

Optimizing outputs instead of outcomes leads to:

  1. Goodhart's Law: When a measure (the output) becomes the target, it ceases to be a good measure
  2. Busy-work: High output, low outcome (activity without results)
  3. Missing the point: Measuring what's easy rather than what matters

Classic mistakes:

  • Measure "lines of code" instead of "working software delivered"
  • Measure "hours worked" instead of "problems solved"
  • Measure "features shipped" instead of "user value created"

Correct usage:

  • ✓ "Our output is 100 sales calls/week; our outcome is $50K in new revenue"
  • ✓ "Don't confuse outputs (activities) with outcomes (results)"
  • ✗ "We're succeeding—look at our high output" (output alone doesn't prove success; outcomes matter)

Expected Value vs. Expected Utility

Expected Value

Definition: Probability-weighted average of monetary outcomes. Objective, dollar-denominated.

Formula: EV = Σ (Probability × Outcome)

Example:

  • 50% chance of $100, 50% chance of $0
  • EV = 0.5($100) + 0.5($0) = $50

Use: When outcomes are monetary and you're risk-neutral.

Expected Utility

Definition: Probability-weighted average of subjective utility (satisfaction/value). Incorporates risk preferences.

Formula: EU = Σ (Probability × Utility(Outcome))

Key insight: Utility isn't linear with money:

  • $100 → $200 increase feels bigger than $10,000 → $10,100 (diminishing marginal utility)
  • Losing $100 hurts more than gaining $100 feels good (loss aversion)

Use: When incorporating risk preferences (risk-averse, risk-neutral, risk-seeking).

Why Confusion Is Dangerous

Using EV when you should use EU leads to poor decisions:

Example - St. Petersburg Paradox:

  • Flip coin until tails. Payout = $2^n where n = flips until tails.
  • Expected value = infinite (paradox: no one would pay much to play)
  • Expected utility = finite (explains actual behavior—diminishing marginal utility)
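A truncated version of the game shows the gap numerically. This sketch caps the game at 30 rounds and uses log utility as one common (but not unique) choice of diminishing-marginal-utility function.

```python
import math

# Truncated St. Petersburg game: payout 2^n dollars, where n is the number
# of flips until the first tails, capped at N rounds for illustration.
N = 30

# Each round n contributes probability 0.5^n and payout 2^n.
ev = sum((0.5 ** n) * (2 ** n) for n in range(1, N + 1))
eu = sum((0.5 ** n) * math.log(2 ** n) for n in range(1, N + 1))

print(f"Expected value (capped at {N} rounds): ${ev:.0f}")
print(f"Expected log-utility: {eu:.2f}")
```

The expected value grows without bound as the cap rises (here $30, one dollar per allowed round), while expected log-utility converges to about 1.39, the utility of roughly $4, matching the intuition that no one would pay much to play.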

Real-world application:

  • Startup equity: High EV, but also high variance and loss risk → need to consider utility, not just EV
  • Insurance: Negative EV, but positive EU (you're paying for risk reduction)

Correct usage:

  • ✓ "Expected value is $1M" (average monetary outcome)
  • ✓ "Expected utility incorporates my risk aversion" (subjective value)
  • ✗ "Choose highest expected value" (ignores risk preferences; should be expected utility)

Precision Enables Better Decisions

Imprecise language produces imprecise thinking. When you can't distinguish "risk" from "uncertainty," you apply the wrong tools. When you can't distinguish "optimization" from "satisficing," you waste resources.

Herbert Simon: "One reason for the technical terminology is that it forces precision of thought that ordinary language does not."

Practice using these distinctions correctly:

  1. Catch yourself (and others) using terms loosely
  2. Ask clarifying questions ("By 'optimize,' do you mean find the absolute best or just improve significantly?")
  3. Use precise terms consciously (even when it feels pedantic)
  4. Explain distinctions (teaching clarifies your own understanding)

Over time, precision in language becomes precision in thought—and precision in thought becomes better decisions.


Essential Readings

Decision Theory Foundations:

  • Gilboa, I. (2009). Theory of Decision Under Uncertainty. Cambridge: Cambridge University Press.
  • Peterson, M. (2009). An Introduction to Decision Theory. Cambridge: Cambridge University Press.
  • Luce, R. D., & Raiffa, H. (1957). Games and Decisions. New York: Wiley.

Risk and Uncertainty:

  • Knight, F. H. (1921). Risk, Uncertainty and Profit. Boston: Houghton Mifflin.
  • Kay, J., & King, M. (2020). Radical Uncertainty: Decision-Making Beyond the Numbers. New York: Norton.
  • Taleb, N. N. (2007). The Black Swan. New York: Random House.

Bounded Rationality and Satisficing:

  • Simon, H. A. (1982). Models of Bounded Rationality (Vols. 1-3). Cambridge, MA: MIT Press.
  • Gigerenzer, G., & Selten, R. (Eds.). (2001). Bounded Rationality: The Adaptive Toolbox. Cambridge, MA: MIT Press.
  • Simon, H. A. (1955). "A Behavioral Model of Rational Choice." Quarterly Journal of Economics, 69(1), 99-118.

Heuristics and Biases:

  • Kahneman, D. (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.
  • Gigerenzer, G. (2007). Gut Feelings: The Intelligence of the Unconscious. New York: Viking.
  • Tversky, A., & Kahneman, D. (1974). "Judgment Under Uncertainty: Heuristics and Biases." Science, 185(4157), 1124-1131.

Probability and Statistics:

  • Pearl, J., & Mackenzie, D. (2018). The Book of Why: The New Science of Cause and Effect. New York: Basic Books.
  • Jaynes, E. T. (2003). Probability Theory: The Logic of Science. Cambridge: Cambridge University Press.
  • Tetlock, P. E., & Gardner, D. (2015). Superforecasting. New York: Crown.

Strategy:

  • Rumelt, R. (2011). Good Strategy Bad Strategy. New York: Crown.
  • Porter, M. E. (1996). "What Is Strategy?" Harvard Business Review, 74(6), 61-78.