Intuition is the rapid, automatic judgment that arrives without conscious deliberation -- the gut feeling that a hire is wrong, the flash of recognition that a chess position is winning, the firefighter's sudden certainty that a floor is about to collapse. It is one of the most powerful and most misunderstood tools in human cognition. The science of intuition reveals that gut feelings are reliable guides in some environments and dangerous illusions in others, and the difference depends not on how confident the feeling is, but on the structure of the domain in which it was learned.

Understanding when to trust intuition -- and when to override it with deliberate analysis -- is among the most consequential skills in decision-making. The research on this question spans cognitive psychology, behavioral economics, expertise studies, and artificial intelligence, and it converges on a surprisingly clear framework first articulated by two researchers who spent decades on opposite sides of the debate.

"The confidence that people have in their intuitions is not a reliable guide to their validity. In other words, do not trust anyone -- including yourself -- to tell you how much you should trust their judgment." -- Daniel Kahneman, Thinking, Fast and Slow (2011)

What Is Intuition, Exactly?

Intuition is fast, automatic, pattern-based judgment that arrives without deliberate reasoning. It is not mystical, and it is not random. Intuition is the output of pattern recognition: the brain matching current sensory inputs and contextual cues against patterns stored from past experience. When those stored patterns were learned in environments with genuine regularities and reliable feedback, the resulting intuition can be extraordinarily accurate. When they were learned in noisy, unpredictable environments, the intuition feels equally compelling but produces unreliable judgments.

The neurological basis of intuition has been studied extensively. Research by Antonio Damasio at the University of Southern California, published in his landmark book Descartes' Error (1994), demonstrated that emotional signals from the body -- what Damasio called somatic markers -- play a critical role in rapid decision-making. In his Iowa Gambling Task experiments, participants began making advantageous choices before they could consciously articulate why, guided by physiological stress responses that preceded conscious awareness. Damasio's work showed that intuition is not the opposite of reason but a form of embodied cognition that integrates emotional learning with pattern recognition.

Herbert Simon, the Nobel Prize-winning economist and cognitive scientist, offered perhaps the most precise definition: "The situation has provided a cue; this cue has given the expert access to information stored in memory, and the information provides the answer. Intuition is nothing more and nothing less than recognition" (Simon, 1992). This definition strips away the mystique and reveals the mechanism: intuition is compiled experience, accessed rapidly through pattern matching.

System 1 and System 2: The Two Minds

Daniel Kahneman's framework of System 1 and System 2 thinking, popularized in Thinking, Fast and Slow (2011), provides the cognitive architecture for understanding intuition.

System 1 thinking is fast, automatic, associative, and largely unconscious. It operates continuously, requires no effort, and generates impressions, feelings, and snap judgments. When you recognize a friend's face in a crowd, understand a simple sentence, or flinch at a sudden noise, System 1 is at work. Intuition is a System 1 output.

System 2 thinking is slow, deliberate, effortful, and rule-following. It performs explicit reasoning, calculation, and sequential analysis. When you multiply 17 by 24, compare the terms of two insurance policies, or plan a route through an unfamiliar city, System 2 is engaged.

The critical insight is that System 2 is lazy. It tends to accept System 1's outputs without scrutiny unless given a strong reason to engage. Most of the time, this works well -- System 1's rapid assessments are adequate for the routine decisions that fill most of life. The problems arise when System 1 generates confident judgments in domains where its pattern recognition has not been properly calibrated, and System 2 does not intervene.

A study by Shane Frederick (2005) at MIT illustrated this vividly with what he called the Cognitive Reflection Test. The classic example: "A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?" System 1 immediately generates the answer "10 cents" -- an answer that feels right but is wrong. (The correct answer is 5 cents.) More than 50% of students at elite universities gave the intuitive wrong answer, demonstrating how readily System 2 accepts System 1's outputs even when they are demonstrably incorrect.
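
The System 2 step that most respondents skip is a one-line algebra check; the short Python sketch below (illustrative, not part of Frederick's study) makes it explicit.

```python
# Bat-and-ball: the two items total $1.10 and the bat costs $1.00 more
# than the ball, so ball + (ball + 1.00) = 1.10, giving ball = $0.05.

def totals_correctly(ball: float) -> bool:
    bat = ball + 1.00                       # bat costs $1.00 more than the ball
    return abs((ball + bat) - 1.10) < 1e-9  # do they sum to $1.10?

print(totals_correctly(0.10))  # intuitive System 1 answer -> False (sums to $1.20)
print(totals_correctly(0.05))  # correct answer -> True
```

Running the check takes seconds; the point is that System 2 rarely bothers unless prompted.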

The Kahneman-Klein Framework: Two Conditions for Reliable Intuition

The most important contribution to the science of intuition came from an unlikely collaboration. Daniel Kahneman, who spent his career documenting the failures of human judgment, and Gary Klein, who spent his career documenting the successes of expert intuition, discovered through years of dialogue that they were not actually in disagreement. They were studying different environments.

Their 2009 paper "Conditions for Intuitive Expertise: A Failure to Disagree," published in American Psychologist, identified two necessary conditions for intuitive expertise to develop:

Condition 1: The Environment Must Be High-Validity

A high-validity environment is one that is sufficiently regular that its patterns are genuinely learnable. The cues available to the decision-maker must actually predict outcomes. Chess is a high-validity environment: the same structural features reliably predict future play, and positions recur across games. A burning building is a high-validity environment: structural features, fire behavior, and smoke patterns reliably predict how the fire will develop.

A low-validity environment is one where outcomes are driven primarily by factors that do not produce reliable signals in available information. Daily stock price movements incorporate information from millions of actors worldwide; individual company analysis does not provide a consistent edge. Political outcomes depend on contingencies that no expert can systematically track. Long-term weather forecasting beyond about ten days operates in a low-validity zone because atmospheric systems are chaotic.

The crucial finding: in low-validity environments, experienced practitioners still develop confident intuitions. They see patterns, they feel certain, they can articulate compelling narratives about why their judgment is correct. But those intuitions are not accurate predictors because the patterns being recognized are not genuine regularities. They are noise interpreted as signal.

Condition 2: The Practitioner Must Have Prolonged Practice with Feedback

Even in high-validity environments, intuitions are only reliable if the practitioner has had extensive experience with accurate, timely feedback. The feedback must meet three criteria:

  • Timely: Feedback received long after the decision cannot correct the specific intuition that generated it. A surgeon who sees the outcome of an operation within days receives feedback that can calibrate future intuitions. A pension fund manager who learns the result of an allocation decision after twenty years does not.
  • Accurate: The feedback must correctly attribute outcomes to the decisions that produced them. When outcomes are influenced by many factors beyond the decision in question, the feedback is noisy and calibration is poor.
  • Sufficient in volume: Pattern recognition requires many exposures. A chess master has studied tens of thousands of positions. A radiologist has reviewed hundreds of thousands of images. These numbers matter.

This framework explains apparent contradictions elegantly. Chess players develop extraordinary intuition because they operate in a high-validity environment with immediate feedback across millions of positions. Expert firefighters develop reliable intuition because fire behavior is physically governed and feedback comes within the incident. Stock analysts do not develop reliable intuition despite decades of experience because the environment is low-validity and feedback is contaminated by market noise.

| Domain | Validity | Feedback Quality | Intuition Reliability |
| --- | --- | --- | --- |
| Chess and Go | High -- rule-governed, patterns recur | Immediate -- win/loss per game | Very high after years of study |
| Firefighting | High -- physics-governed fire behavior | Rapid -- outcomes within the incident | High for experienced commanders |
| Emergency medicine | High -- symptom-diagnosis patterns | Moderate -- lab confirmation within hours/days | High for pattern-recognition tasks |
| Surgical decisions | High -- anatomical regularities | Moderate -- outcome visible within days/weeks | High for experienced surgeons |
| Stock picking | Low -- driven by unpredictable factors | Poor -- outcomes confounded by market noise | Poor despite decades of experience |
| Political forecasting | Low -- contingent on unknowable events | Very poor -- delayed and ambiguous | Poor for most experts |
| Long-range weather | Low beyond ~10 days (chaotic systems) | Good short-range; poor long-range | Good for 1-5 days; poor beyond 10 |
| Personnel selection | Low-moderate -- performance has many causes | Delayed and ambiguous | Poor for unstructured interviews |

Gary Klein's Firefighters: When Intuition Saves Lives

Gary Klein spent years following firefighters into burning buildings as part of his research on naturalistic decision making (NDM) -- the study of how experienced practitioners make decisions in real-world conditions of time pressure, uncertainty, and high stakes. His findings, published in Sources of Power: How People Make Decisions (1998), fundamentally challenged the classical model of decision-making.

The classical model assumed decision-makers generate multiple options, compare them against criteria, and select the best. Klein found that experienced fireground commanders did none of this. Instead, they used what he called recognition-primed decision making (RPD): the commander recognized the situation type, which activated a plausible course of action, which the commander then mentally simulated to check for problems. If the mental simulation succeeded, the action was taken. If it revealed a flaw, the next most plausible action was tried.

Klein's most famous case involved a fire commander who ordered his crew out of a burning house moments before the floor collapsed into a basement fire that had been invisible from ground level. When Klein asked the commander how he knew, the commander could not articulate it. He said it "didn't feel right." Klein's analysis revealed that the commander had unconsciously registered two cues: the fire was hotter than it should have been for its visible size, and the house was quieter than expected (fires make noise proportional to their intensity; a quiet, hot fire means the fire is somewhere you cannot see it). The commander's intuition integrated these cues without conscious analysis and produced the correct life-saving judgment.

"When you have to make a decision in a burning building, you don't have time to analyze options. The question is whether your pattern recognition is good -- and that depends on whether you've seen enough fires. If you have, trust it." -- Gary Klein, Sources of Power (1998)

This is intuition at its best: fast, accurate, life-saving, and built on thousands of hours of valid experience with reliable feedback.

The Graveyard of Expert Intuition: When Gut Feelings Fail

The opposite side of the ledger is equally well documented and considerably more troubling.

Financial Forecasting

The evidence that professional stock analysts and fund managers do not, on average, outperform passive index investing is among the most robustly replicated findings in financial economics. Burton Malkiel's A Random Walk Down Wall Street (1973) first presented the case; subsequent decades of data have strengthened it. The S&P Indices Versus Active (SPIVA) scorecard, published annually by S&P Dow Jones Indices, consistently finds that over 15-year periods, approximately 87-92% of actively managed large-cap funds underperform the S&P 500 index (SPIVA U.S. Scorecard, 2023). Professional fund managers have extensive experience, access to sophisticated analysis, and enormous confidence in their judgments. The environment simply does not reward their intuitions because it lacks sufficient validity.
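
The mechanism behind figures like these can be sketched with a toy Monte Carlo: give every manager zero skill, a small annual fee, and random tracking noise, and most still trail the index over 15 years. All parameters below are illustrative assumptions, not SPIVA inputs.

```python
import random

random.seed(0)

# Illustrative assumptions: zero-skill managers earn the index return plus
# zero-mean tracking noise, minus a 1% annual fee (modeled as a relative
# growth factor, a close approximation for small annual returns).
YEARS, FEE, TRACKING_SD, N_FUNDS = 15, 0.01, 0.02, 10_000

def relative_growth() -> float:
    """Fund value divided by index value after YEARS years."""
    ratio = 1.0
    for _ in range(YEARS):
        ratio *= 1.0 + random.gauss(0.0, TRACKING_SD) - FEE
    return ratio

losers = sum(relative_growth() < 1.0 for _ in range(N_FUNDS))
print(f"{losers / N_FUNDS:.0%} of zero-skill funds trail the index over {YEARS} years")
```

The fee drag compounds deterministically while the tracking noise grows only with the square root of time, so the longer the horizon, the larger the fraction of funds that fall behind -- no incompetence required.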

Clinical Judgment vs. Statistical Prediction

In 1954, Paul Meehl published Clinical Versus Statistical Prediction, which compared the accuracy of expert clinical judgment against simple statistical formulas for predicting outcomes ranging from academic performance to psychiatric diagnosis to criminal recidivism. In study after study, the simple formula matched or outperformed the expert. A meta-analysis by William Grove and colleagues (2000), published in Psychological Assessment, reviewed 136 studies and found that mechanical prediction was equal to or superior to clinical prediction in 94% of cases.

The mechanism is instructive. Clinicians interview patients, gather information, form impressions, and integrate everything into a holistic judgment. The statistical model uses the same variables but combines them with fixed weights. The model wins not because it accesses better information but because it applies consistent rules without the noise, fatigue, mood effects, and anchoring biases that affect human judgment. The clinician's intuition adds noise where the formula maintains discipline.
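
A small simulation makes the point concrete. Both "judges" below see identical cues; the formula applies fixed weights every time, while the simulated clinician jitters the weights and adds impression noise from case to case. Weights and noise levels are assumptions chosen for illustration.

```python
import random

random.seed(1)

# Illustrative assumption: three cues predict the outcome with fixed weights.
WEIGHTS = (0.5, 0.3, 0.2)
N_CASES = 5000

cases = [tuple(random.gauss(0, 1) for _ in range(3)) for _ in range(N_CASES)]
outcomes = [sum(w * c for w, c in zip(WEIGHTS, cues)) + random.gauss(0, 0.5)
            for cues in cases]

# The formula: identical weights on every case -- no mood, no fatigue.
formula = [sum(w * c for w, c in zip(WEIGHTS, cues)) for cues in cases]

# The simulated clinician: same cues, but the weights drift from case to
# case and a holistic-impression noise term is added.
def clinician(cues):
    jittered = (w + random.gauss(0, 0.3) for w in WEIGHTS)
    return sum(w * c for w, c in zip(jittered, cues)) + random.gauss(0, 0.3)

clinical = [clinician(cues) for cues in cases]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print("formula   r =", round(pearson(outcomes, formula), 2))
print("clinician r =", round(pearson(outcomes, clinical), 2))
```

The formula's correlation with the outcome is reliably higher: consistency alone is worth more than any extra insight the noisy judge might bring.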

Political Forecasting

Philip Tetlock's two-decade study of political expert predictions, published in Expert Political Judgment: How Good Is It? How Can We Know? (2005), tracked 28,000 predictions by 284 recognized political experts. The results were devastating: experts performed barely better than chance and significantly worse than simple statistical models using base rates. Experts who were most confident in their predictions and who relied on a single grand explanatory theory -- what Tetlock called "hedgehogs" -- performed worst. Those who were more humble, drew on multiple frameworks, and updated their views with new evidence -- "foxes" -- performed better, though still not impressively.

Tetlock's subsequent work with the Good Judgment Project (2011-2015), funded by the Intelligence Advanced Research Projects Activity (IARPA), showed that superforecasters -- ordinary people trained in probabilistic reasoning and base-rate thinking -- outperformed intelligence analysts with access to classified information by 30% on geopolitical prediction tasks. The superforecasters succeeded not through superior intuition but through disciplined analytical processes that corrected for the biases intuition introduces.

"We found that the forecasting accuracy of experts was not significantly better than the forecasting accuracy of non-experts. Both groups performed worse than a simple regression model that used base rates." -- Philip Tetlock, Expert Political Judgment (2005)

Job Interviews

Unstructured interviews -- where an interviewer spends time with a candidate and forms an overall impression -- are among the weaker predictors of job performance. A meta-analysis by Frank Schmidt and John Hunter (1998), published in Psychological Bulletin, found that unstructured interviews have a validity coefficient of approximately 0.38 for predicting job performance, while structured interviews (consistent questions, standardized scoring rubrics) achieve approximately 0.51. Work sample tests score higher still. The interviewer's confident intuition about a candidate is shaped substantially by irrelevant factors: physical appearance, communication style, similarity to the interviewer, and impression management skill.
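
What a validity gap means in practice can be sketched by simulation: draw candidates whose assessment scores correlate with true performance at a given validity, hire the top scorer, and compare the expected quality of the hire. The round validities of 0.2 and 0.5 below are illustrative values, not the meta-analytic coefficients.

```python
import random

random.seed(2)

def top_pick_performance(validity: float, n_candidates: int = 5,
                         trials: int = 20_000) -> float:
    """Average true performance (in SD units) of the highest scorer when
    assessment scores correlate with performance at the given validity."""
    total = 0.0
    noise_sd = (1.0 - validity ** 2) ** 0.5
    for _ in range(trials):
        best_score, best_perf = float("-inf"), 0.0
        for _ in range(n_candidates):
            perf = random.gauss(0, 1)
            score = validity * perf + noise_sd * random.gauss(0, 1)
            if score > best_score:
                best_score, best_perf = score, perf
        total += best_perf
    return total / trials

print("low validity  (r=0.2):", round(top_pick_performance(0.2), 2))
print("high validity (r=0.5):", round(top_pick_performance(0.5), 2))
```

The expected performance of the chosen candidate scales roughly linearly with validity, so even a modest-looking difference in coefficients compounds into a meaningfully better hire on average.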

The Illusion of Validity: Why Bad Intuitions Feel Good

One of the most important findings in the psychology of judgment is that the subjective experience of an intuition -- its felt certainty, its compelling quality, its sense of rightness -- bears no reliable relationship to its accuracy. Kahneman called this the illusion of validity: the persistent subjective confidence in one's predictive abilities even in the face of evidence that those predictions are inaccurate.

The illusion persists because intuitions feel compelling when they are coherent -- when the available information fits together into a plausible story. But coherence is not accuracy. A stock analyst who constructs a compelling narrative about a company's prospects feels confident because the story hangs together, not because the prediction is likely to be correct. A hiring manager who feels a strong connection with a candidate experiences coherence between the candidate's presentation and the manager's template of a "good hire," regardless of whether that template predicts actual performance.

Research by Hillel Einhorn and Robin Hogarth (1978) at the University of Chicago demonstrated that confidence in judgment increases with the amount of information gathered, even when additional information does not improve accuracy. This is the information illusion: more data creates more material for coherent narratives without necessarily producing better predictions. It explains why experienced professionals in low-validity domains become more confident over time without becoming more accurate -- they accumulate more stories, not more valid patterns.

How to Diagnose Whether Your Intuition Is Trustworthy

The Kahneman-Klein framework provides a practical diagnostic. Before trusting a gut feeling on a consequential decision, run through these checks:

Check 1: Is the Environment High-Validity?

Ask: Does this domain have genuine regularities that produce reliable signals? Are there patterns that consistently predict outcomes? Or is the domain heavily influenced by factors that do not show up in available information?

If you are an experienced emergency physician reading symptoms, the environment is high-validity. If you are predicting which startup in your portfolio will achieve a 10x return, the environment is low-validity. The distinction matters more than your years of experience.

Check 2: Have You Received Adequate Feedback?

Ask: Have I received timely, accurate feedback on similar judgments in the past? Can I point to specific cases where my intuition was wrong and I learned from the error? Or has most of my experience been in situations where outcomes were ambiguous, delayed, or attributable to multiple causes?

The physician whose diagnostic hunches are confirmed or corrected by lab results within hours has been learning. The venture capitalist who learns the outcome of an investment decision seven years later, confounded by market conditions, co-investor behavior, and management changes, has not been learning -- not in the pattern-recognition sense that builds reliable intuition.

Check 3: Is the Intuition Emotionally Loaded?

Intuitions that arrive with strong emotional coloring -- excitement, fear, anxiety, desire -- are more likely to reflect System 1's emotional machinery than its pattern recognition. Antonio Damasio's somatic marker hypothesis shows that emotion can be informative, but it can also be misleading when the emotional response is triggered by irrelevant features of the situation. A sudden urgent sense that you must make an investment is more likely to reflect FOMO than genuine insight. A visceral discomfort with a job candidate that feels like professional judgment may reflect unconscious bias triggered by appearance or accent.

Check 4: Does the Intuition Survive a Base-Rate Check?

The most powerful discipline for calibrating intuition is comparison with the statistical base rate for the outcome you are predicting. If your intuition says this startup will succeed, but the base rate for startups at this stage is a 10% survival rate, the burden of proof is on the intuition. Your gut feeling needs to be justified by specific, articulable reasons why this case differs from the reference class -- not by the feeling of certainty alone.

Daniel Kahneman and Amos Tversky demonstrated in their foundational work on base-rate neglect (1973) that people systematically ignore statistical base rates when vivid case-specific information is available. A compelling founder story, an exciting product demo, or a charismatic interview candidate activates System 1's narrative machinery and suppresses attention to the statistical reality. Deliberately consulting base rates before making a judgment is one of the most effective debiasing techniques available.
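
A base-rate check is just Bayes' rule. The sketch below uses hypothetical numbers for the startup example: a 10% base rate, and a positive signal (a great pitch, say) that is common among eventual successes but far from rare among failures.

```python
def posterior(base_rate: float, hit_rate: float, false_alarm_rate: float) -> float:
    """P(success | positive signal), by Bayes' rule."""
    p_signal = base_rate * hit_rate + (1 - base_rate) * false_alarm_rate
    return base_rate * hit_rate / p_signal

# Hypothetical numbers: 10% of startups at this stage succeed; 80% of the
# eventual successes give a great pitch -- but so do 30% of the failures.
p = posterior(base_rate=0.10, hit_rate=0.80, false_alarm_rate=0.30)
print(f"P(success | great pitch) = {p:.0%}")  # still well under 50%
```

Even a signal that feels diagnostic moves the probability only modestly when the base rate is low -- which is exactly the correction that vivid case-specific information suppresses.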

Check 5: Can You Articulate the Cues?

Klein's research suggests a useful heuristic: if an expert can articulate, even roughly, what cues triggered their intuition, the intuition is more likely to be genuine pattern recognition. Klein's fire commander could not initially explain his decision, but under structured questioning, the relevant cues (anomalous heat, unexpected quiet) emerged. When an intuition resists any articulation -- when the person can only say "I just feel it" without identifying any environmental cue -- the judgment may be pattern recognition, but it may also be mood, bias, or noise.
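
The five checks can be packaged as a rough pre-decision checklist. The sketch below is not part of the Kahneman-Klein framework; the triage rules (treating the first two checks as necessary conditions, per their 2009 paper) and all names are assumptions for illustration.

```python
# The five diagnostic checks from this section, as yes/no questions.
CHECKS = [
    ("high_validity",   "Does the domain have learnable regularities?"),
    ("good_feedback",   "Have similar judgments received timely, accurate feedback?"),
    ("low_emotion",     "Is the intuition free of strong emotional loading?"),
    ("beats_base_rate", "Does it survive comparison with the base rate?"),
    ("cues_named",      "Can the triggering cues be articulated, even roughly?"),
]

def diagnose(answers: dict) -> str:
    """Rough triage. The first two checks are the Kahneman-Klein necessary
    conditions; failing either means the intuition was never trained."""
    if not (answers["high_validity"] and answers["good_feedback"]):
        return "distrust: conditions for intuitive expertise are absent"
    failed = [key for key, _ in CHECKS if not answers[key]]
    if failed:
        return "verify: run structured analysis first (failed: " + ", ".join(failed) + ")"
    return "trust: conditions for reliable intuition are met"

# Example: a confident stock pick -- low-validity domain, noisy feedback.
print(diagnose({"high_validity": False, "good_feedback": False,
                "low_emotion": True, "beats_base_rate": False,
                "cues_named": True}))
```

The ordering matters: emotional loading, base rates, and articulable cues refine the judgment, but they cannot rescue an intuition trained in an environment that never had learnable patterns.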

The Integration: Using Both Systems Well

The research does not support either "always trust your gut" or "always follow the analysis." It supports a conditional approach that recognizes the strengths and limitations of each cognitive system:

In high-validity, high-experience domains: Defer to expert intuition, particularly for time-pressured pattern recognition tasks. A chess grandmaster's positional judgment, an experienced surgeon's assessment of tissue viability, a veteran firefighter's sense of structural danger -- these should be trusted. Overriding them with formal analysis typically makes performance worse, not better, because the analysis cannot process the relevant cues as quickly or as integratively.

In low-validity domains or situations outside your experience: Be systematically skeptical of intuitions, regardless of how confident they feel. Seek statistical reference points, use structured decision processes, and weight base rates heavily. This applies to most business forecasting, investment decisions, hiring judgments, and political predictions.

When intuition and analysis conflict: This disagreement is itself the most valuable signal. It means either the intuition is detecting something the analysis misses (possible in high-validity domains) or the intuition is reflecting bias the analysis correctly overrides (likely in low-validity domains). The conflict should trigger deeper investigation, not automatic deference to either system.

When stakes are high and reversibility is low: Err toward formal analysis even in domains where intuition is usually reliable. The asymmetry of consequences justifies the additional time cost.

For a structured approach to stress-testing decisions before committing, see pre-mortem analysis. For more on using probability and base rates in decisions, see probabilistic thinking in decisions. For the cognitive traps that distort judgment, see common decision traps. To understand how framing shapes intuitive reactions, see how framing affects decisions.

Practical Applications Across Domains

Medicine

Experienced diagnosticians should trust their pattern-recognition intuitions for conditions they have seen many times, especially when confirmed by rapid diagnostic feedback loops. But for rare conditions, complex multi-system presentations, or situations where anchoring on an initial diagnosis is likely, structured differential diagnosis processes outperform intuition. Pat Croskerry, an emergency physician and researcher at Dalhousie University, has documented how cognitive forcing strategies -- deliberately slowing down and considering alternatives -- reduce diagnostic error rates in emergency medicine (Croskerry, 2003).

Business Strategy

Most strategic business decisions occur in moderate-to-low validity environments. The experienced CEO's intuition about market direction, competitor response, or customer behavior should be treated as a hypothesis to be tested, not a conclusion to be acted on. Companies that combine executive judgment with structured processes -- scenario planning, reference class forecasting, pre-mortem analysis -- outperform those that rely on either alone.

Hiring

The research is unambiguous: structured interviews outperform unstructured interviews for predicting job performance. Organizations that allow hiring managers to override structured assessment scores with intuitive impressions systematically reduce the quality of their hiring. The most effective approach uses structured assessment as the primary input and reserves intuitive judgment for a narrow, defined role -- such as assessing cultural fit after all competency criteria have been evaluated objectively.

Personal Life

In personal decisions with high emotional content -- choosing a partner, deciding where to live, selecting a career path -- intuition plays a legitimate and important role because the "outcome" is partly defined by subjective experience. How a relationship feels matters independently of any objective measure. The key discipline is distinguishing between intuitions that reflect genuine self-knowledge (accumulated experience with what makes you happy and what does not) and intuitions that reflect anxiety, social pressure, or short-term emotion.

The Expertise Paradox

One of the most counterintuitive findings in this literature is that the people most likely to have unreliable intuitions -- experienced professionals in low-validity domains -- are also the people most confident in their intuitions. The overconfidence effect is strongest precisely where calibration is worst. Stock analysts are more confident in their picks than their track records warrant. Experienced interviewers are more confident in their candidate assessments than the validity data supports. Political commentators express more certainty about future events than their prediction accuracy justifies.

This creates what might be called the expertise paradox: genuine expertise in high-validity domains produces calibrated confidence (experts know what they know and what they don't), while pseudo-expertise in low-validity domains produces miscalibrated confidence (practitioners feel certain about judgments that are not reliably accurate). The feeling of expertise is the same in both cases. Only the track record distinguishes them.

The practical implication is that confidence is not evidence. When evaluating anyone's intuitive judgment -- your own included -- the relevant question is never "How confident are you?" but rather "What is the validity of the environment in which this intuition was trained, and what is the quality of the feedback you have received?"


References and Further Reading

  • Kahneman, D., & Klein, G. (2009). Conditions for Intuitive Expertise: A Failure to Disagree. American Psychologist, 64(6), 515-526. https://doi.org/10.1037/a0016755
  • Klein, G. (1998). Sources of Power: How People Make Decisions. MIT Press.
  • Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.
  • Damasio, A. (1994). Descartes' Error: Emotion, Reason, and the Human Brain. Putnam.
  • Tetlock, P. E. (2005). Expert Political Judgment: How Good Is It? How Can We Know? Princeton University Press.
  • Tetlock, P. E., & Gardner, D. (2015). Superforecasting: The Art and Science of Prediction. Crown.
  • Meehl, P. E. (1954). Clinical Versus Statistical Prediction: A Theoretical Analysis and a Review of the Evidence. University of Minnesota Press.
  • Grove, W. M., Zald, D. H., Lebow, B. S., Snitz, B. E., & Nelson, C. (2000). Clinical Versus Mechanical Prediction: A Meta-Analysis. Psychological Assessment, 12(1), 19-30. https://doi.org/10.1037/1040-3590.12.1.19
  • Schmidt, F. L., & Hunter, J. E. (1998). The Validity and Utility of Selection Methods in Personnel Psychology. Psychological Bulletin, 124(2), 262-274. https://doi.org/10.1037/0033-2909.124.2.262
  • Frederick, S. (2005). Cognitive Reflection and Decision Making. Journal of Economic Perspectives, 19(4), 25-42. https://doi.org/10.1257/089533005775196732
  • Ericsson, K. A., Krampe, R. T., & Tesch-Römer, C. (1993). The Role of Deliberate Practice in the Acquisition of Expert Performance. Psychological Review, 100(3), 363-406. https://doi.org/10.1037/0033-295X.100.3.363
  • Croskerry, P. (2003). The Importance of Cognitive Errors in Diagnosis and Strategies to Minimize Them. Academic Medicine, 78(8), 775-780.
  • Simon, H. A. (1992). What is an "Explanation" of Behavior? Psychological Science, 3(3), 150-161.
  • Dreyfus, H. L., & Dreyfus, S. E. (1986). Mind Over Machine: The Power of Human Intuition and Expertise in the Era of the Computer. Free Press.
  • Malkiel, B. G. (1973). A Random Walk Down Wall Street. W. W. Norton.
  • S&P Dow Jones Indices. (2023). SPIVA U.S. Scorecard. https://www.spglobal.com/spdji/en/research-insights/spiva/

Frequently Asked Questions

Is intuition reliable for making decisions?

It depends entirely on the domain. Intuition is reliable when the decision-maker has extensive experience in a domain with rapid, accurate feedback — chess, firefighting, surgery, certain forms of pattern recognition. Intuition is unreliable when feedback is delayed, ambiguous, or absent — stock market predictions, political forecasting, many business judgments. The key question is not whether you have experience, but whether your experience in this domain has calibrated your intuitions against accurate feedback.

What conditions make expert intuition trustworthy?

Gary Klein and Daniel Kahneman, summarizing their debate in a 2009 paper, identified two conditions required for reliable intuition: (1) The environment must be sufficiently regular that patterns can be learned — high validity. (2) The learner must have had adequate experience with that environment, including timely and accurate feedback — high experience with feedback. Both conditions must hold. An experienced doctor whose diagnoses are never confirmed has experience without valid feedback. A chess computer faces a valid environment but has no intuitive processing.

What is the difference between System 1 and System 2 thinking?

System 1 is fast, automatic, associative, and largely unconscious — it is the system that produces intuitions, gut feelings, and snap judgments. System 2 is slow, deliberate, effortful, and rule-following — it is the system that performs explicit reasoning, calculation, and analysis. Intuition is System 1 output. For intuition to be reliable, System 1 must have been trained by experience in a valid environment. For decisions outside that experience, System 2 analysis is necessary.

Are experts better at intuitive decisions than novices?

In high-validity domains with good feedback, yes — substantially. Expert chess players can evaluate positions in seconds that novices cannot assess in minutes. Expert surgeons notice anomalies that novices miss. Expert firefighters detect dangerous structural conditions before instruments can measure them. But expertise only produces reliable intuition in the domain of expertise and under conditions similar to those in which the expertise was acquired. Experts are not generally better intuitive judges — only in their specific trained domain.

When should I override my intuition with analysis?

Override your intuition when: the domain has low validity (outcomes are driven by noise and chance); your feedback in the domain has been delayed, sparse, or ambiguous; the stakes are high and the decision is hard to reverse; your emotional state may be driving the intuition; or the intuition conflicts with strong statistical or base-rate evidence. Intuition should not be trusted when the conditions for reliable calibration are absent, regardless of how confident the intuition feels.

Why do confident intuitions sometimes feel right but turn out to be wrong?

Confidence in an intuition reflects the fluency and familiarity of the associated cognitive pattern — not the accuracy of the underlying judgment. Intuitions feel compelling when they are coherent: when the available information fits together into a plausible story. But coherence is not accuracy. The planning fallacy, overconfidence bias, and many other systematic errors feel just as compelling as genuine insights because they share the same feeling of cognitive fluency.