Intuition vs Analytical Decision Making: When to Trust Your Gut, When to Use Data, and How Expert Decision-Makers Combine Both
A chess grandmaster looks at a board mid-game and within seconds identifies the best move. She cannot fully explain why it is the best move--the pattern simply feels right. She has played tens of thousands of games, studied thousands of positions, and her brain has encoded this vast experience into pattern-recognition circuits that fire automatically, producing what she experiences as intuition. When researchers test her choice against computer analysis, it matches the optimal move approximately 70 percent of the time--without any conscious calculation.
Meanwhile, in a hospital across town, a doctor reviews a patient's lab results, symptoms, and medical history. The patient presents with fatigue, weight loss, and night sweats. The doctor's intuition suggests lymphoma--he has seen this combination before. But before acting on that intuition, he systematically works through the differential diagnosis: checking each symptom against possible conditions, ordering additional tests to rule out alternatives, and consulting clinical guidelines that encode the collective analytical wisdom of medical research. The systematic process takes longer than the intuitive hunch, but it is also more likely to catch the cases where intuition would have been wrong.
These two scenarios illustrate the central tension in human decision-making: intuition is fast, effortless, and often remarkably accurate in familiar domains, while analysis is slow, effortful, and often more reliable in novel or complex situations. The question of when to trust your gut and when to rely on systematic analysis is not merely an academic curiosity--it is a practical challenge that every professional faces daily, with consequences ranging from trivial to life-altering.
The debate between intuition and analysis has a long intellectual history. Philosophers from Plato to Descartes emphasized the superiority of reason. Romantics from Rousseau to the Transcendentalists celebrated the wisdom of feeling. Modern psychology, particularly the work of Daniel Kahneman, Gerd Gigerenzer, and Gary Klein, has moved beyond this dichotomy to reveal a more nuanced picture: intuition and analysis are not competing approaches but complementary cognitive systems, each with predictable strengths and predictable weaknesses, and the art of good decision-making lies in knowing which system to deploy in which situation.
What Is Intuitive Decision-Making?
What is intuitive decision-making? Intuitive decision-making is the process of making fast, pattern-based judgments using accumulated experience. It feels like a "gut feeling" or a hunch, but it is actually the product of unconscious expertise--the brain's ability to recognize patterns and retrieve relevant responses without conscious deliberation.
How Intuition Actually Works
Intuition is not magic, mysticism, or randomness. It is pattern recognition operating below the threshold of conscious awareness. When an experienced firefighter walks into a burning building and suddenly feels that the floor is about to collapse, that feeling is not supernatural--it is the product of thousands of hours of experience during which the firefighter's brain has learned to associate subtle environmental cues (the sound of the fire, the heat pattern, the structural indicators) with imminent structural failure. The firefighter cannot articulate which specific cues triggered the feeling, but the feeling is based on real information processed by real neural circuits.
Gary Klein, a psychologist who has spent decades studying expert decision-making in high-stakes environments, developed the Recognition-Primed Decision (RPD) model to explain how experts use intuition. According to the RPD model:
- The expert perceives the situation and unconsciously matches it against patterns stored in memory from previous experience.
- The pattern match generates a candidate action--the first response that seems right based on similar situations the expert has encountered.
- The expert mentally simulates the action to check whether it will work in the current situation.
- If the simulation succeeds, the expert implements the action without considering alternatives.
- If the simulation fails, the expert modifies the action or considers the next pattern match.
This process is remarkably fast--often completing in seconds--because it skips the time-consuming step of generating and comparing multiple options. The expert does not create a list of possible actions, evaluate each against criteria, and select the best. Instead, the expert recognizes the situation, retrieves an appropriate response, checks it briefly, and acts.
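Klein presents RPD as a description of expert cognition, not an algorithm, but the loop structure is concrete enough to sketch. The toy Python below is only an illustration of that structure; the pattern store, similarity score, and "simulation" check are invented stand-ins for processes that, in a real expert, run unconsciously.

```python
from dataclasses import dataclass

# A minimal, runnable sketch of the Recognition-Primed Decision loop described
# above. The patterns, similarity measure, and simulation check are hypothetical
# stand-ins for unconscious expert processes.

@dataclass
class Pattern:
    cues: set[str]          # environmental cues associated with this pattern
    typical_action: str     # the response that usually worked in similar situations

def similarity(pattern: Pattern, observed_cues: set[str]) -> float:
    """Fraction of the pattern's cues present in the current situation."""
    return len(pattern.cues & observed_cues) / len(pattern.cues)

def simulation_ok(action: str, constraints: set[str]) -> bool:
    """Crude stand-in for mental simulation: the action must not violate a known constraint."""
    return action not in constraints

def rpd_decide(observed_cues: set[str], constraints: set[str], memory: list[Pattern]) -> str | None:
    # 1. Match the situation against stored patterns, best match first.
    for pattern in sorted(memory, key=lambda p: similarity(p, observed_cues), reverse=True):
        if similarity(pattern, observed_cues) < 0.5:
            break                                   # no adequate pattern match
        action = pattern.typical_action             # 2. Retrieve the typical response.
        if simulation_ok(action, constraints):      # 3. Mentally simulate it.
            return action                           # 4. Act; no comparison of alternatives.
    return None                                     # 5. Recognition failed; deliberate analysis needed.

memory = [
    Pattern({"crackling ceiling", "intense heat", "quiet fire"}, "evacuate the building"),
    Pattern({"smoke at baseboards", "hot door"}, "ventilate before entry"),
]
print(rpd_decide({"intense heat", "quiet fire", "crackling ceiling"}, constraints=set(), memory=memory))
```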
Klein's research found that experienced firefighters, intensive care nurses, military commanders, and chess players all used this recognition-primed approach for the vast majority of their decisions. Formal analysis was reserved for unusual situations where pattern recognition failed.
When Does Intuition Work Well?
When does intuition work well? Intuition works well in familiar domains where the decision-maker has accumulated significant experience, where the environment provides regular feedback, and where the patterns are stable enough that past experience is a reliable guide to current situations.
Conditions that support accurate intuition:
Domain expertise. Intuition is only as good as the experience it is built from. A chess grandmaster's intuition about chess positions is highly reliable because it is built from tens of thousands of games. The same grandmaster's intuition about stock market investments would be no better than a novice's because the expertise does not transfer. Intuition is domain-specific--it works in the domain where the experience was acquired and nowhere else.
Regular, timely feedback. For intuition to develop, the decision-maker must receive feedback about whether their decisions were correct. A doctor who diagnoses patients and then learns the outcomes develops calibrated medical intuition. A stock picker who makes predictions but never tracks whether the predictions were right develops only the feeling of expertise without actual predictive ability. Daniel Kahneman argues that valid intuition requires a "high-validity environment"--one where the patterns are real and detectable--and "adequate opportunity to learn the regularities"--meaning consistent feedback.
Stable, patterned environments. Intuition relies on the assumption that patterns observed in the past will recur in the future. In stable environments (chess, firefighting, surgery), this assumption holds well. In unstable environments (stock markets, political predictions, technological disruption), the assumption is unreliable because the patterns themselves change.
Time pressure. When decisions must be made quickly--in seconds or minutes rather than hours or days--intuition's speed advantage becomes decisive. A firefighter cannot conduct a formal risk analysis while the building is collapsing. An emergency room doctor cannot run a decision tree while the patient is coding. In these situations, the speed of intuitive decision-making is not merely convenient; it is necessary for survival.
Pattern-rich situations. Intuition excels when the situation contains many subtle cues that an experienced person can read but that would be difficult to articulate or formalize. An experienced teacher senses that a student is struggling before any test scores confirm it; a seasoned manager feels that a project is going off the rails before any metrics show problems. These judgments are based on real but hard-to-quantify information.
When Does Intuition Fail?
Intuition fails--sometimes catastrophically--in predictable circumstances:
Unfamiliar domains. When someone trusts their gut in a domain where they lack experience, they are not using intuition--they are using bias. A first-time entrepreneur's "gut feeling" about market demand is not pattern recognition; it is wishful thinking, confirmation bias, and overconfidence combining to produce a feeling of certainty without a foundation of experience.
Low-validity environments. Political pundits, economic forecasters, and stock pickers operate in environments where the relationships between observable cues and outcomes are weak, noisy, or nonexistent. In these environments, the feeling of intuitive certainty is disconnected from actual predictive ability. Philip Tetlock's research found that political experts' predictions were barely better than chance--and that the experts who were most confident in their intuitions were often the least accurate.
Cognitive biases masquerading as intuition. The feeling of intuitive certainty can be produced by cognitive biases as easily as by genuine pattern recognition, and the subjective experience is identical. Confirmation bias (favoring evidence that supports existing beliefs), anchoring (being disproportionately influenced by initial information), availability bias (overweighting recent or vivid examples), and affect heuristic (letting emotional reactions drive judgments) all produce strong feelings of "knowing" that are not based on valid pattern recognition.
Novel situations. By definition, intuition draws on past experience. When a situation is genuinely novel--unlike anything the decision-maker has encountered before--there are no relevant patterns to recognize. An experienced airline pilot's intuition about engine problems is reliable because engines fail in characteristic ways that recur. The same pilot's intuition about a type of emergency that has never occurred before (like the dual engine failure from bird strikes that Captain Sullenberger faced on US Airways Flight 1549) cannot rely on past experience because there is no relevant past experience.
High-stakes decisions with asymmetric consequences. When the cost of being wrong is much higher than the cost of being slow, the speed advantage of intuition is outweighed by the accuracy advantage of analysis. A surgeon's intuition about which procedure to perform should be verified analytically when the wrong choice is irreversible. An investor's intuition about a large bet should be checked against data when the potential loss is catastrophic.
What Is Analytical Decision-Making?
What is analytical decision-making? Analytical decision-making is the process of making deliberate, systematic judgments using explicit reasoning, structured frameworks, and available data. The decision-maker consciously identifies options, defines evaluation criteria, gathers relevant information, evaluates each option against the criteria, and selects the option that best satisfies the criteria.
How Analytical Decision-Making Works
Analytical decision-making follows a structured process:
- Define the decision clearly. What exactly needs to be decided? What are the constraints? What would success look like?
- Identify options. What are the possible choices? Analytical thinking encourages generating multiple options rather than evaluating only the most obvious one.
- Define criteria. What factors matter for this decision? Cost, speed, risk, quality, stakeholder impact? How should these factors be weighted?
- Gather information. What data is available to evaluate each option against each criterion? What data is missing, and how critical is it?
- Evaluate options. How does each option perform against each criterion? Where are the tradeoffs?
- Select and implement. Choose the option that best satisfies the weighted criteria, and implement it with monitoring to verify the outcome.
This process is slow, effortful, and cognitively demanding. It requires sustained attention, working memory capacity, and tolerance for ambiguity and complexity. But it produces decisions that are more defensible, more consistent, and more likely to account for factors that intuition might miss.
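For the evaluation and selection steps, the standard tool is a weighted decision matrix. The sketch below shows the idea in a few lines of Python; the options, criteria, weights, and scores are invented purely for illustration.

```python
# A minimal weighted decision matrix, illustrating the "define criteria,
# evaluate options, select" steps above. All options, weights, and scores
# are invented for the example; in practice they come from the analysis itself.

criteria_weights = {"cost": 0.4, "speed": 0.3, "risk": 0.3}   # weights sum to 1.0

# Each option is scored 1 (poor) to 5 (excellent) on every criterion.
options = {
    "build in-house": {"cost": 2, "speed": 2, "risk": 4},
    "buy vendor tool": {"cost": 3, "speed": 5, "risk": 3},
    "partner":         {"cost": 4, "speed": 3, "risk": 2},
}

def weighted_score(scores: dict[str, int]) -> float:
    return sum(criteria_weights[c] * s for c, s in scores.items())

ranked = sorted(options.items(), key=lambda kv: weighted_score(kv[1]), reverse=True)
for name, scores in ranked:
    print(f"{name}: {weighted_score(scores):.2f}")
# The top-ranked option still deserves a sanity check against tacit knowledge
# and the bias checklist discussed later in this article.
```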
When Does Analysis Work Better?
When does analysis work better? Analysis works better in novel situations and unfamiliar domains where past experience is not a reliable guide, for high-stakes decisions where the cost of error justifies the time investment, and in circumstances where cognitive biases are likely to distort judgment.
Novel situations. When you encounter a situation for the first time--entering a new market, implementing an unfamiliar technology, facing an unprecedented crisis--there are no patterns to recognize. Analysis provides a framework for thinking through the situation systematically despite the absence of relevant experience.
Complex decisions with many variables. The human brain can hold approximately four to seven items in working memory simultaneously. Decisions involving more variables than this--a strategic choice that depends on market conditions, competitive dynamics, internal capabilities, financial constraints, regulatory environment, and technological trends--exceed the capacity of intuitive processing. Analytical frameworks (decision matrices, scenario analysis, financial modeling) externalize the complexity, allowing the decision-maker to consider more variables than unaided cognition can handle.
Decisions involving statistics and probability. Human intuition is notoriously poor at statistical reasoning. We overweight rare but vivid risks (fear of plane crashes), underweight common, familiar ones (carelessness about car accidents), and consistently fail at Bayesian updating--adjusting beliefs in proportion to new evidence. Analytical tools--from basic probability calculations to sophisticated statistical models--correct for these intuitive failures.
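A worked base-rate example (with invented numbers) shows how large the gap between the intuitive answer and the correct one can be:

```python
# Worked base-rate example (invented numbers): a disease affects 1% of patients,
# the test catches 90% of true cases, and falsely flags 9% of healthy patients.
# Intuition often says a positive test means roughly a 90% chance of disease;
# Bayes' rule gives a much smaller number, because healthy patients vastly
# outnumber sick ones.

prior = 0.01            # P(disease) -- the base rate
sensitivity = 0.90      # P(positive | disease)
false_positive = 0.09   # P(positive | no disease)

p_positive = sensitivity * prior + false_positive * (1 - prior)
posterior = sensitivity * prior / p_positive    # P(disease | positive)

print(f"P(disease | positive test) = {posterior:.2%}")   # about 9%, not 90%
```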
Decisions that must be justified. In organizational, legal, and regulatory contexts, decisions often must be explained and defended. "My gut told me" is not an acceptable justification for a board of directors, a regulatory agency, or a court of law. Analytical decision-making produces a documented rationale that can be communicated, reviewed, and critiqued.
Decisions that benefit from multiple perspectives. Analytical frameworks facilitate group decision-making by making criteria explicit, evidence visible, and trade-offs transparent. When a team evaluates options using a shared framework, disagreements become productive--focused on evidence and criteria rather than competing intuitions.
When Does Analysis Fail?
Analysis has its own failure modes:
Analysis paralysis. The search for more data, more analysis, and more certainty can prevent decisions from being made at all. In competitive environments, the cost of delayed decisions can exceed the cost of imperfect decisions. A company that spends six months analyzing a market opportunity while a competitor enters the market in two months has made a decision by not deciding.
False precision. Analytical frameworks can produce outputs that appear more precise than the inputs warrant. A financial model projecting revenue to the dollar based on assumptions that are accurate to plus or minus 30 percent creates the illusion of precision. The analytical output looks rigorous, but the rigor is cosmetic--garbage in, precise garbage out.
Ignoring tacit knowledge. Not all relevant information can be captured in spreadsheets and frameworks. An experienced negotiator's sense of the counterparty's emotional state, a veteran product manager's feel for user frustration, a seasoned executive's read of organizational politics--these forms of knowledge are real, relevant, and resistant to formalization. Purely analytical approaches that ignore tacit knowledge discard valuable information.
Overcomplicating simple decisions. Some decisions are simple enough that analysis adds overhead without adding value. Spending an hour analyzing which restaurant to go to for lunch is a poor use of analytical capacity. Spending a week building a decision matrix for a reversible, low-stakes choice wastes resources that should be reserved for decisions where analysis actually matters.
Cognitive depletion. Analytical decision-making consumes cognitive resources. Decision fatigue is well documented: after making many deliberate decisions, the quality of subsequent decisions degrades as cognitive resources are depleted. A person who analyzes every decision throughout the day will make their worst decisions at the end of the day, when fatigue is highest and the remaining decisions may actually be the most important.
Is Intuition Just Guessing?
Is intuition just guessing? For experts with genuine domain experience, no--intuition is pattern recognition from accumulated experience that operates below conscious awareness. But for novices without relevant experience, yes--novice "intuition" is often just bias, emotional reaction, or wishful thinking dressed up as gut feeling.
The critical distinction is between expert intuition and naive intuition:
Expert intuition is built from thousands of hours of experience in a domain with regular feedback and stable patterns. A master electrician's intuition about wiring safety, a veteran detective's intuition about witness credibility, a seasoned investor's intuition about business model viability (in domains they know well)--these intuitions are based on real, learned patterns and are reliably more accurate than chance.
Naive intuition is the feeling of knowing without the experiential foundation that makes the feeling reliable. A first-year medical student's "gut feeling" about a diagnosis, a novice investor's "sense" that a stock will rise, a new manager's "instinct" about an employee's potential--these intuitions are based on biases, stereotypes, limited samples, and emotional reactions rather than validated pattern recognition.
The problem is that expert intuition and naive intuition feel identical from the inside. The subjective experience of certainty is the same whether the certainty is grounded in decades of experience or in nothing at all. This is why "trust your gut" is dangerous advice without the qualifier "in domains where you have genuine expertise and have received regular feedback."
Can You Develop Better Intuition?
Can you develop better intuition? Yes--through deliberate practice with feedback in a specific domain. The recipe is:
Accumulate diverse experience. The more situations you encounter, the more patterns your brain encodes. Experienced emergency room doctors have better diagnostic intuition than residents not because they are smarter but because they have seen more cases.
Seek rapid, honest feedback. Intuition improves only when the decision-maker learns whether their judgments were correct. A weather forecaster who makes daily predictions and is scored on accuracy develops calibrated intuition. A strategic planner who makes annual predictions and never checks them does not.
Practice in realistic conditions. Intuition developed in simulated environments may not transfer to real environments if the simulations lack important features of the real thing. Military training exercises, flight simulators, and medical simulation labs are designed to be realistic enough to develop transferable intuition.
Study and reflect on mistakes. When intuition is wrong, understanding why it was wrong--which cues were misleading, which biases were operating, what information was missing--improves future intuitive performance. Without reflection, mistakes do not produce learning; they just accumulate.
Learn from experts. Observing how experts in your domain perceive situations, what cues they attend to, and how they generate responses can accelerate your own intuition development. Apprenticeship, mentorship, and case-based learning all leverage the expert's intuition to develop the learner's.
How Do Expert Decision-Makers Actually Decide?
How do expert decision-makers decide? The most effective decision-makers do not choose between intuition and analysis--they combine both, using intuition for pattern recognition and option generation and analysis for verification and novel situations.
The Dual-Process Model
Daniel Kahneman's dual-process theory describes two cognitive systems:
System 1 (Intuitive): Fast, automatic, effortless, emotional, associative. Operates continuously, generating impressions, feelings, and inclinations without conscious effort. When you look at a face and instantly know the person is angry, that is System 1.
System 2 (Analytical): Slow, deliberate, effortful, logical, sequential. Activated when System 1 encounters a problem it cannot handle or when the stakes demand careful attention. When you calculate 37 x 24 in your head, that is System 2.
Expert decision-makers develop a sophisticated interplay between these systems:
- System 1 generates an initial read. The expert encounters a situation and immediately has an intuitive response--a diagnosis, a strategy, a course of action.
- System 2 checks the intuitive read. Before acting, the expert pauses to evaluate: Does this make sense? Am I missing something? Are there alternative explanations? Is this a situation where my intuition is reliable?
- If the check passes, the expert acts on the intuitive response, gaining speed from intuition while maintaining quality through analytical verification.
- If the check fails, the expert shifts to deliberate analysis, sacrificing speed for accuracy in a situation where intuition has proven unreliable.
This interplay is not always conscious. With experience, the checking process itself becomes partially intuitive--experts develop a "feeling" for when their intuition is trustworthy and when it should be questioned.
Should You Always Trust Your Gut?
Should you always trust your gut? No--only in domains where you have genuine expertise and where the environment provides reliable feedback. In unfamiliar domains, intuition is often wrong. The practical rule:
Trust your gut when:
- You have extensive experience in this specific domain (thousands of hours)
- The domain has stable, learnable patterns (not random or chaotic)
- You have received regular, honest feedback on your past judgments
- The situation is time-pressured and delay is costly
- The situation is familiar--similar to situations you have encountered many times
Check your gut with analysis when:
- The domain is unfamiliar or you are relatively inexperienced
- The stakes are high and the decision is irreversible
- You feel very confident (overconfidence is a warning sign in uncertain domains)
- You have time to analyze without significant cost
- The decision involves statistics, probabilities, or base rates
- Multiple cognitive biases could be operating (emotional decision, confirmation bias, anchoring)
- Others with relevant expertise disagree with your intuition
Override your gut with analysis when:
- The data clearly contradicts your intuition
- You are in a known low-validity environment (predictions in chaotic systems)
- You have a documented track record of poor intuition in this domain
- The decision must be justified to others
- The cost of being wrong dramatically exceeds the cost of being slow
Practical Integration: A Decision-Making Framework
The following framework helps professionals integrate intuition and analysis in practice:
Step 1: Categorize the Decision
Familiar or novel? If the situation is one you have encountered many times before, intuition is likely to be accurate. If the situation is new, analysis is more appropriate.
Reversible or irreversible? Reversible decisions (can be easily undone) can rely more on intuition--if the gut is wrong, you can correct course. Irreversible decisions (cannot be undone or are very costly to reverse) warrant analytical verification.
Time-pressured or time-available? Under extreme time pressure, intuition may be the only viable option. When time is available, analytical verification improves decision quality.
High-stakes or low-stakes? The higher the stakes, the more analysis is warranted. Betting $100 on an intuition is one thing; betting $10 million is another.
| Situation | Recommended Approach |
|---|---|
| Familiar + Reversible + Time-pressured + Low-stakes | Pure intuition |
| Familiar + Irreversible + Time-available + High-stakes | Intuition-generated, analytically verified |
| Novel + Reversible + Time-available + Low-stakes | Quick analysis (intuition unreliable, but stakes low) |
| Novel + Irreversible + Time-available + High-stakes | Thorough analysis (intuition unreliable and stakes high) |
| Novel + Irreversible + Time-pressured + High-stakes | Expert consultation + best available intuition (worst case--no good option) |
| Familiar + Reversible + Time-available + Low-stakes | Either (decision barely matters) |
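Because the table is mechanical, it can be applied consistently by encoding it directly. The sketch below simply restates the six rows as a lookup; the cautious default for combinations the table leaves out is an addition of this sketch, not part of the table.

```python
# The decision-triage table above, encoded as a lookup. Keys are
# (familiar, reversible, time_available, high_stakes); only the six
# combinations the table lists are covered, others fall through to a default.

TRIAGE = {
    (True,  True,  False, False): "pure intuition",
    (True,  False, True,  True):  "intuition-generated, analytically verified",
    (False, True,  True,  False): "quick analysis (intuition unreliable, but stakes low)",
    (False, False, True,  True):  "thorough analysis (intuition unreliable and stakes high)",
    (False, False, False, True):  "expert consultation + best available intuition",
    (True,  True,  True,  False): "either (decision barely matters)",
}

def recommend(familiar: bool, reversible: bool, time_available: bool, high_stakes: bool) -> str:
    key = (familiar, reversible, time_available, high_stakes)
    # Combinations the table does not cover default to the cautious option.
    return TRIAGE.get(key, "intuition-generated, analytically verified")

print(recommend(familiar=True, reversible=False, time_available=True, high_stakes=True))
```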
Step 2: Generate Options Intuitively
Even when the decision warrants analysis, use intuition to generate options and hypotheses. System 1 is excellent at generating candidate solutions from pattern recognition. A doctor who immediately thinks "this looks like pneumonia" has generated a hypothesis that analytical testing can confirm or refute. The intuitive hypothesis focuses the analysis rather than requiring a blind search through all possibilities.
Step 3: Evaluate Analytically When Warranted
For decisions that warrant analysis, apply structured evaluation:
- What are the options?
- What criteria matter? How should they be weighted?
- What evidence bears on each option?
- What are the risks and potential unintended consequences of each option?
- What would have to be true for each option to be the right choice?
Step 4: Check for Bias
Before finalizing the decision, check for common biases:
- Confirmation bias: Am I selectively attending to evidence that supports my preferred option?
- Anchoring: Am I disproportionately influenced by the first number or piece of information I encountered?
- Availability bias: Am I overweighting recent or vivid examples?
- Sunk cost fallacy: Am I continuing a course of action because of past investment rather than future value?
- Overconfidence: Am I more certain than the evidence warrants?
- Affect heuristic: Am I letting my emotional reaction to the options (like or dislike) substitute for substantive evaluation?
Step 5: Decide and Learn
Make the decision and then track the outcome. The learning loop--deciding, observing outcomes, reflecting on what worked and what did not--is how intuition is calibrated over time. Without this loop, intuition remains uncalibrated and analysis remains disconnected from reality.
Common Mistakes in Balancing Intuition and Analysis
Mistake 1: Using Intuition in Low-Validity Environments
Stock picking, political forecasting, long-range economic prediction, and startup success prediction are all low-validity environments where intuitive confidence and predictive accuracy are poorly correlated. People who feel certain about their stock picks, political forecasts, or startup bets are not demonstrating superior intuition--they are demonstrating overconfidence.
Mistake 2: Using Analysis for Every Decision
People who analyze every decision--including what to have for lunch, which route to take to work, and which email to respond to first--exhaust their analytical capacity on trivial choices, leaving less for the important decisions where analysis actually matters. Effective decision-makers use intuition as a default for low-stakes, familiar, reversible decisions and reserve analytical capacity for high-stakes, novel, or irreversible decisions.
Mistake 3: Treating Intuition as Infallible Because It "Feels Right"
The feeling of certainty that accompanies intuition is not a reliable indicator of accuracy. Cognitive biases produce strong feelings of certainty that are completely unfounded. The confidence of an intuitive judgment tells you that your brain has matched a pattern--it does not tell you whether the pattern match is correct for the current situation.
Mistake 4: Treating Analysis as Infallible Because It Looks Rigorous
A beautifully structured analysis built on flawed assumptions, incomplete data, or biased framing can produce worse decisions than honest intuition. The formal appearance of analysis--spreadsheets, frameworks, presentations--creates an aura of rigor that may not be warranted. "Garbage in, garbage out" applies to analytical frameworks as much as to computer programs.
Mistake 5: Failing to Integrate
Many people default entirely to one mode or the other. Pure intuitors make snap judgments without checking them. Pure analysts deliberate endlessly without leveraging their accumulated experience. The most effective approach is integration: using intuition to generate options and hypotheses quickly, then using analysis to verify and refine them.
Developing Your Decision-Making Practice
Start with awareness. Notice when you are using intuition and when you are using analysis. Most people default to one mode without conscious choice. Simply becoming aware of which system you are using, and whether it is appropriate for the situation, improves decision quality.
Track your decisions. Keep a decision journal. Record the decision, the reasoning (intuitive or analytical), your confidence level, and the eventual outcome. Over time, this journal reveals where your intuition is calibrated (accurate judgments with accurate confidence) and where it is miscalibrated (confident judgments that are frequently wrong).
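A decision journal needs no special tooling. As a rough sketch with invented entries, even a short script can compare stated confidence with actual accuracy per domain to surface miscalibration.

```python
from collections import defaultdict

# A minimal decision-journal sketch with invented entries. Each record stores
# the domain, the mode used, the stated confidence (0-1), and whether the
# judgment turned out to be correct. Comparing average confidence with the
# actual hit rate per domain is a crude calibration check.

journal = [
    {"domain": "hiring",    "mode": "intuition", "confidence": 0.9, "correct": False},
    {"domain": "hiring",    "mode": "intuition", "confidence": 0.8, "correct": True},
    {"domain": "estimates", "mode": "analysis",  "confidence": 0.6, "correct": True},
    {"domain": "estimates", "mode": "analysis",  "confidence": 0.7, "correct": True},
]

by_domain = defaultdict(list)
for entry in journal:
    by_domain[entry["domain"]].append(entry)

for domain, entries in by_domain.items():
    avg_confidence = sum(e["confidence"] for e in entries) / len(entries)
    hit_rate = sum(e["correct"] for e in entries) / len(entries)
    gap = avg_confidence - hit_rate   # a large positive gap suggests overconfidence
    print(f"{domain}: confidence {avg_confidence:.0%}, accuracy {hit_rate:.0%}, gap {gap:+.0%}")
```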
Build expertise deliberately. In domains where you want reliable intuition, seek diverse experiences, request honest feedback, study your mistakes, and practice in realistic conditions. Intuition is not a talent--it is a skill that develops through deliberate effort in a specific domain.
Challenge your certainty. When you feel most certain about an intuitive judgment, that is precisely the moment to pause and check. Extreme certainty in uncertain domains is a red flag for overconfidence. Ask yourself: "What would I need to see to change my mind?" If the answer is "nothing," you are operating on bias, not evidence.
Match your method to the moment. The goal is not to always use intuition or always use analysis but to develop the judgment to know which approach--or which combination--fits the current situation. This meta-skill--knowing when to think fast and when to think slow--is arguably the most valuable decision-making skill of all, and it develops through the same practice, feedback, and reflection that develop all expertise.
References and Further Reading
Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux. https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow
Klein, G. (1998). Sources of Power: How People Make Decisions. MIT Press. https://mitpress.mit.edu/9780262611466/sources-of-power/
Gigerenzer, G. (2007). Gut Feelings: The Intelligence of the Unconscious. Viking Press. https://en.wikipedia.org/wiki/Gerd_Gigerenzer
Kahneman, D. & Klein, G. (2009). "Conditions for Intuitive Expertise: A Failure to Disagree." American Psychologist, 64(6), 515-526. https://doi.org/10.1037/a0016755
Tetlock, P.E. (2005). Expert Political Judgment: How Good Is It? How Can We Know? Princeton University Press. https://press.princeton.edu/books/paperback/9780691128719/expert-political-judgment
Simon, H.A. (1992). "What Is an Explanation of Behavior?" Psychological Science, 3(3), 150-161. https://doi.org/10.1111/j.1467-9280.1992.tb00017.x
Gladwell, M. (2005). Blink: The Power of Thinking Without Thinking. Little, Brown and Company. https://en.wikipedia.org/wiki/Blink_(book)
Ariely, D. (2008). Predictably Irrational: The Hidden Forces That Shape Our Decisions. Harper. https://en.wikipedia.org/wiki/Predictably_Irrational
Damasio, A. (1994). Descartes' Error: Emotion, Reason, and the Human Brain. Putnam. https://en.wikipedia.org/wiki/Descartes%27_Error
de Groot, A.D. (1965). Thought and Choice in Chess. Mouton. https://doi.org/10.1515/9783110800647
Ericsson, K.A., Prietula, M.J. & Cokely, E.T. (2007). "The Making of an Expert." Harvard Business Review. https://hbr.org/2007/07/the-making-of-an-expert
Hogarth, R.M. (2001). Educating Intuition. University of Chicago Press. https://press.uchicago.edu/ucp/books/book/chicago/E/bo3624908.html
Tversky, A. & Kahneman, D. (1974). "Judgment Under Uncertainty: Heuristics and Biases." Science, 185(4157), 1124-1131. https://doi.org/10.1126/science.185.4157.1124
Myers, D.G. (2002). Intuition: Its Powers and Perils. Yale University Press. https://yalebooks.yale.edu/book/9780300095319/intuition/
Stanovich, K.E. & West, R.F. (2000). "Individual Differences in Reasoning: Implications for the Rationality Debate." Behavioral and Brain Sciences, 23(5), 645-665. https://doi.org/10.1017/S0140525X00003435