Ask any basketball fan whether a player who has just made five shots in a row is more likely to make the next one, and most will say yes. Ask whether you should pass to the hot shooter. Of course you should. You can see it — the player is locked in, confident, in the zone.

In 1985, three psychologists published a paper that seemed to demolish this intuition. Thomas Gilovich, Robert Vallone, and Amos Tversky analyzed shooting data from the Philadelphia 76ers and found no evidence that making several shots in a row increased the probability of making the next one. The hot hand, their paper concluded, was a cognitive illusion — a pattern-seeking mind imposing narrative on random variation.

The paper became one of the most cited in behavioral psychology, a canonical example of how human intuition systematically misleads us about probability. Then, in 2018, two economists found a subtle statistical bias in the original analysis. The debate was suddenly alive again.

The hot hand story is not just about basketball. It is about how we reason about streaks, skill, and luck — and the ways our conclusions in this domain ripple through decisions in finance, business, sports, and daily life.


The Original Study: Gilovich, Vallone, and Tversky (1985)

The 1985 paper "The Hot Hand in Basketball: On the Misperception of Random Sequences" set out to test a specific claim: that basketball players shoot better following made shots than following misses.

The researchers analyzed shooting records from the Philadelphia 76ers' 1980-81 season, free throw data from the Boston Celtics, and a controlled shooting experiment with Cornell University players. Their method was to look at conditional probabilities: given that a player just made a shot, what was their probability of making the next one? How did this compare to their baseline shooting percentage?
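The conditional-probability comparison at the heart of the study is easy to state in code. The sketch below is a minimal illustration using a made-up ten-shot log, not the authors' actual procedure: it computes a player's overall percentage and their percentage on attempts immediately following k consecutive makes.

```python
def conditional_pct(shots, k=1):
    """Shooting percentage on attempts that immediately follow k consecutive makes.

    `shots` is a sequence of 1 (make) and 0 (miss) in game order.
    Returns None if no attempt follows a streak of k makes.
    """
    followers = [shots[i] for i in range(k, len(shots))
                 if all(shots[i - j] == 1 for j in range(1, k + 1))]
    return sum(followers) / len(followers) if followers else None

# Hypothetical shot log: six makes in ten attempts
shots = [1, 1, 0, 1, 0, 1, 1, 1, 0, 0]
overall = sum(shots) / len(shots)         # 0.6
after_make = conditional_pct(shots, k=1)  # 0.5 on shots following a make
```

The question the researchers then asked was whether gaps like this one, aggregated over a season of shot logs, are any larger than a random process with the player's baseline percentage would produce.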

Their findings were consistent across all analyses:

  • After making a shot, players did not shoot at a higher rate than their baseline.
  • After missing, players did not shoot at a lower rate.
  • Runs of makes and misses were consistent with what would be expected from a random process with the player's overall shooting percentage.

The paper also found that players, coaches, and fans held strong prior beliefs in the hot hand, and that these beliefs were not diminished by showing them statistical summaries of the data. The hot hand belief was robust to counterevidence — a classic demonstration of belief perseverance.

This seemed definitive: the hot hand was a cognitive illusion generated by our tendency to see patterns in random sequences, combined with confirmation bias (we remember the made shots that confirmed the hot hand narrative and forget the misses).

Tversky and his collaborator Daniel Kahneman had established throughout the 1970s and 80s that humans are poor intuitive statisticians. We do not naturally grasp that independent events have independent probabilities. We expect coin flips to alternate more than they actually do; a sequence like HTHHT feels more "typical" than HHHHH even though both are equally likely outcomes of five fair flips. The hot hand belief fit neatly into this broader story of human probabilistic error.

The Broader Program: Heuristics and Biases

The Gilovich-Vallone-Tversky paper was part of a wider intellectual program that had been reshaping cognitive psychology since the early 1970s. Kahneman and Tversky's heuristics and biases program documented a catalogue of systematic errors in human probabilistic reasoning: the availability heuristic, the representativeness heuristic, anchoring and adjustment, and others.

The hot hand paper added a new entry to this catalogue: the tendency to see patterns in random sequences, which Kahneman and Tversky had already studied under the concept of the law of small numbers — the erroneous intuition that small samples should resemble the statistical properties of the populations they come from. Because we expect randomness to look balanced, we see random clusters as meaningful patterns and random streaks as genuine signals.

The law of small numbers explains why a basketball fan watching a player make four consecutive shots sees a streak and not what a statistician sees: one of the many possible four-shot sequences that occur regularly by chance even among players whose shot-to-shot probabilities are genuinely independent.


The Miller-Sanjurjo Challenge (2018)

In 2018, Joshua Miller and Adam Sanjurjo published a paper in the journal Econometrica that rocked the field: "Surprised by the Hot Hand Fallacy? A Truth in the Law of Small Numbers."

Their argument was technical but consequential. They identified a subtle but real statistical bias in the original Gilovich-Vallone-Tversky analysis.

The bias arises from a counterintuitive property of finite sequences. When you look at a sequence of hits and misses and calculate "what is the shooting percentage immediately following three consecutive hits," you are not calculating a neutral conditional probability. You are selecting a specific subset of observations in a way that systematically underestimates the shooter's baseline.

Here is the intuition, using a miniature example in the spirit of Miller and Sanjurjo's own. Flip a fair coin three times and, whenever a flip lands heads, record the outcome of the next flip (if there is one). For each sequence that yields at least one recorded flip, compute the proportion of recorded flips that are heads, then average that proportion across many repetitions. Intuition says 50 percent; the true expected value is 5/12, about 42 percent. The reason is that streaky sequences like HHH contribute several recorded flips yet count only once in the average, so the sequences in which streaks continued are systematically underweighted. Gilovich, Vallone, and Tversky's player-by-player comparisons have the same structure, which means a shooter with no hot hand at all would be expected to look worse after streaks of makes — and a shooter who merely matched their baseline after streaks was in fact showing evidence of a hot hand.
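A short Monte Carlo makes the bias concrete. Generate many short sequences of fair coin flips, compute for each one the proportion of heads immediately following a head, and average those per-sequence proportions — the same shape of calculation as a per-player streak analysis. For four-flip sequences the average lands near 0.40, not 0.50. This is a sketch of the standard textbook illustration of the selection bias, not a reproduction of Miller and Sanjurjo's code:

```python
import random

def prop_heads_after_heads(seq):
    """Proportion of flips immediately following a head (1) that are also heads.

    Returns None when no flip in `seq` follows a head.
    """
    followers = [seq[i + 1] for i in range(len(seq) - 1) if seq[i] == 1]
    return sum(followers) / len(followers) if followers else None

random.seed(0)
props = []
for _ in range(200_000):
    seq = [random.randint(0, 1) for _ in range(4)]  # four fair flips
    p = prop_heads_after_heads(seq)
    if p is not None:                               # skip sequences with nothing to measure
        props.append(p)

estimate = sum(props) / len(props)
print(f"{estimate:.3f}")  # close to 0.40 — well below the true per-flip 0.50
```

The exact expectation for length-4 sequences is about 0.405; the per-sequence estimator becomes unbiased only in the infinite-sequence limit, which is why correcting for sequence length matters.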

Miller and Sanjurjo showed that when this bias is corrected, the original 76ers data actually shows a positive hot hand effect — players shoot better after streaks of makes than the original analysis suggested.

They also noted that Gilovich, Vallone, and Tversky's controlled Cornell experiment, when re-analyzed with the correction, shows a statistically significant hot hand effect.

| Study | Original conclusion | Corrected conclusion |
|---|---|---|
| 76ers 1980-81 season | No hot hand | Positive trend after correction |
| Boston Celtics free throws | No hot hand | Largely unchanged (free throws are less context-dependent) |
| Cornell controlled shooting | No hot hand | Statistically significant hot hand after correction |
| 3-point contest data (other studies) | Mixed | Some evidence of positive autocorrelation |
| NBA shot tracking 2012-2018 | Mixed | Small but measurable effect in some analyses |
The response from the field has been mixed. Some researchers accept the Miller-Sanjurjo correction and conclude that the hot hand is real. Others argue that the effect, even if statistically present, is small enough to be practically irrelevant. Still others note that subsequent research using larger datasets from the NBA shows inconsistent results. The debate is genuinely unsettled.

The Scientific Stakes of the Correction

The Miller-Sanjurjo finding matters beyond basketball for several reasons. First, it demonstrated that a canonical study in cognitive psychology had a material methodological error — and that the error had persisted undetected for over thirty years despite the paper being among the most cited in the field. This is a significant statement about the difficulty of detecting subtle statistical biases and the limits of peer review.

Second, it generated a debate about what the appropriate conclusion is when the corrected analysis shows a small positive effect rather than zero. Gilovich, Vallone, and Tversky were testing whether a large, robust hot hand effect existed. The correction suggests a small effect might be real. Whether that small effect is large enough to be practically meaningful — whether it should change how coaches call plays or how analysts evaluate performance — is a separate question that the statistical correction alone does not answer.

"The bias we identify suggests that what we have been calling the hot hand fallacy may be based on a statistical artifact. Whether the hot hand exists in practice remains an empirical question, but the prior evidence against it was weaker than we thought." — Joshua Miller and Adam Sanjurjo, Econometrica (2018)


The Gambler's Fallacy: The Mirror Image

To understand the hot hand fallacy fully, it helps to contrast it with its mirror image: the gambler's fallacy.

The gambler's fallacy is the belief that in a random process, a run of one outcome makes the opposite outcome more likely. After five heads, "tails is due." After a roulette wheel lands on red five times in a row, black seems more likely next time.

Like the hot hand belief, this is an error. Independent random events have independent probabilities. A fair coin has a 50% chance of heads on any given flip regardless of what came before. The roulette wheel has no memory.

The psychological roots of both errors may be the same: our intuition that random processes should look "balanced" and "mixed" over short sequences. We expect alternation more than random sequences actually produce. This makes long runs feel non-random — and we respond by either predicting more of the same (hot hand) or predicting reversal (gambler's fallacy).

Interestingly, there is evidence that which error predominates depends on whether we think we are looking at a skilled process or a purely mechanical one. In skilled domains — basketball, poker, an athlete's performance — we tend toward hot hand thinking (skill creates momentum). In purely mechanical domains — roulette, lottery — we tend toward gambler's fallacy thinking (the machine will balance out).

The Coin Flip Problem

A simple demonstration of why streaks mislead: if you flip a fair coin 100 times, you should expect runs of the same outcome. By probability theory, a run of six consecutive heads will appear in a 100-flip sequence a little more than half the time. A run of seven will appear roughly a third of the time. Long runs are not unusual; they are the expected output of genuinely random processes over long enough sequences.
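A quick simulation, sketched here under the stated assumptions (a fair coin, 100 flips per sequence), checks these numbers:

```python
import random

def longest_head_run(flips):
    """Length of the longest consecutive run of heads (1s) in `flips`."""
    longest = run = 0
    for f in flips:
        run = run + 1 if f == 1 else 0
        longest = max(longest, run)
    return longest

random.seed(0)
trials = 100_000
at_least_6 = at_least_7 = 0
for _ in range(trials):
    flips = [random.randint(0, 1) for _ in range(100)]
    n = longest_head_run(flips)
    at_least_6 += n >= 6
    at_least_7 += n >= 7

p6 = at_least_6 / trials  # a bit over one half
p7 = at_least_7 / trials  # roughly one third
```

Counting runs of either outcome (heads or tails) would push these probabilities higher still, which is why a spectator watching "streaks of anything" sees them even more often.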

But when people observe a run of six consecutive heads in real time, they do not think "this is expected output of a fair random process." They think either "this coin must be biased" (hot hand thinking) or "tails is overdue" (gambler's fallacy). The statistical reality — that nothing has changed — is cognitively inaccessible in the moment of observation.

"The gambler's fallacy and the hot hand are not separate errors. They are two manifestations of the same failure to accept that independent random events are genuinely independent." — behavioral finance literature


The Cognitive Architecture Behind Streak Thinking

Pattern Recognition Gone Overboard

The hot hand belief is not random noise in human cognition. It reflects a feature of the human pattern-detection system that was adaptive in ancestral environments: hyperactive agency detection, or the tendency to infer agency and meaning in events that may be random.

Evolutionary accounts (Barrett, 2004; Boyer, 2001) propose that in environments where genuine patterns mattered for survival — animal tracks indicating prey, weather patterns indicating drought — the cost of missing a real pattern (failing to follow the prey track, not preparing for drought) substantially exceeded the cost of seeing a false one. Natural selection would favor a pattern-detection system calibrated to over-detect rather than under-detect.

Applied to sports: the hot hand belief may be the output of a system that evolved to detect genuine skill-performance correlations in competitive social contexts (correctly identifying who is performing best right now), which is applied to a domain (sequential independent events) where it systematically over-fires.

The Role of Narrative

Human cognition is deeply organized around narrative. We do not experience sequences of events as a list of independent data points; we experience them as stories with arcs, momentum, and meaning. A player who makes five shots in a row is not just a collection of probability trials — they are a character in a narrative who is building toward something.

Narrative thinking makes the hot hand belief feel like more than a numerical claim: to deny that a player is "hot" is to refuse the story the events are obviously telling. The conflict between statistical thinking and narrative thinking is one of the deep tensions in human cognition, and the hot hand belief lives squarely at that boundary.

Kahneman's framework in Thinking, Fast and Slow (2011) describes this as the tension between System 1 (fast, automatic, narrative, pattern-seeking) and System 2 (slow, deliberate, statistical, abstract) thinking. Hot hand belief is quintessential System 1 output. The statistical analysis showing it is an illusion requires System 2 effort. System 1 is faster, more compelling, and more emotionally vivid — which is why the statistical argument rarely persuades intuitively, even when accepted intellectually.


When Hot Hand Thinking Is Accurate

If hot hand effects are an illusion in basketball, does the same apply everywhere?

Not necessarily. The hot hand belief is only a fallacy in contexts where sequential events are genuinely independent — where each trial's probability is unaffected by previous outcomes. In contexts where performance is autocorrelated — where past success genuinely affects the conditions for future success — the hot hand is real, not illusory.

Where hot hand effects may be genuine:

Athletic performance: Athletes can enter states of heightened focus, coordination, and confidence that are associated with improved performance. Flow states, as described by psychologist Mihaly Csikszentmihalyi, are real psychological states that do not appear instantaneously and may persist for a period. A pitcher who has successfully executed the same pitch sequence five times may have genuinely better motor control and confidence on the sixth attempt. Research on athletic momentum (Taylor and Demick, 1994; Vallerand et al., 1988) suggests that psychological momentum — the sense of building confidence and energy — is a real state that influences subsequent performance, even if the magnitude of the effect is often smaller than fans believe.

Financial analysis: Some investment strategies show genuine persistence. Fund managers with access to superior information or proprietary processes can outperform persistently, at least for a period. High-quality momentum factors in quantitative investing exploit exactly this kind of autocorrelation in asset returns. The Fama-French-Carhart four-factor model includes a momentum factor precisely because asset price autocorrelation is empirically documented over medium-term horizons, even if it is not persistent enough to be reliably exploitable by most individual investors.

Sales and negotiation: A salesperson in a successful conversation has access to information and rapport that may genuinely increase their probability of closing a deal. Early wins in a negotiation can shift psychological dynamics in ways that make subsequent concessions more likely.

Poker: In games with genuine skill components, a player who has been playing well — reading opponents accurately, making good probability judgments — may be genuinely more likely to continue playing well in the short run than baseline performance would suggest. The caveat is that poker involves sufficient noise that distinguishing a genuine skill streak from a luck streak in real time is extremely difficult.

Where hot hand thinking is most dangerous:

  • Slot machines, lottery, roulette (genuinely independent)
  • Attributing a company's recent financial success to management genius when the industry environment explains most of it
  • Hiring or promoting someone based on a recent streak in a domain with high noise
  • Investing in actively managed funds based on recent performance (most research shows fund performance does not persist meaningfully)
  • Political forecasting — incumbent advantages and party momentum effects are often substantially overstated

Implications for Decision-Making

The hot hand debate has practical implications beyond sports statistics.

Defensive Adjustment: The Tactical Hot Hand

Even if a player's actual shooting percentage is not affected by recent makes, defenders who believe in the hot hand may guard the hot player more closely. This changes the environment: the hot player gets worse looks, while other players get better ones. The hot hand's effect on team offense may be real even if the hot hand effect on individual probability is not.

Research by Sanjurjo and others on NBA defensive adjustment finds evidence that defenders do respond to recent shooting performance — and that this response is often correct and appropriate, not a fallacy. A player perceived as hot draws more defensive attention regardless of their true probability trajectory, which means the hot hand belief produces real strategic consequences in competitive environments even when the underlying shooting probability is unchanged.

This is an underappreciated wrinkle in the hot hand debate: even if the hot hand is statistically illusory, it may be tactically rational to respond as if it is real, because opponents are also responding as if it is real. The coordination problem has its own logic independent of the underlying probabilities.

Fund Manager Performance Attribution

Investors in actively managed funds face a classic hot hand problem: does recent outperformance predict future outperformance? The general answer from the literature is no — fund performance shows very little persistence after controlling for factor exposures and luck. The Carhart (1997) study of mutual fund performance is the standard reference: funds with above-average performance in one year show little reliable tendency to outperform in the next year once style factors are controlled.

This makes the hot hand fallacy genuinely costly: studies by DALBAR and others consistently show that mutual fund investors underperform the funds they invest in, largely because they buy high (after good performance streaks) and sell low (after poor stretches), the inverse of optimal behavior. The DALBAR Quantitative Analysis of Investor Behavior (2020) found that over twenty years, the average equity fund investor underperformed the S&P 500 by approximately 3.5 percentage points annually — primarily because of hot-hand-motivated performance-chasing.

The scale of this underperformance across millions of individual investors represents one of the most economically significant applications of the hot hand fallacy outside the laboratory.

Streak Management in Sports

Even if the hot hand is partially real, the appropriate response is not obvious. Teams that constantly search for the hot player and funnel the ball to them may be:

  • Correctly exploiting a real autocorrelation
  • Disrupting the offensive system that makes everyone better
  • Creating self-fulfilling streaks (the hot player gets better looks because they are perceived as hot)

Coaches who ignore recent performance entirely may leave real exploitation opportunities on the table. Coaches who over-weight it may create worse overall offense. The optimal response is somewhere between full updating on recent performance and no updating — a calibrated Bayesian adjustment rather than extreme behavior in either direction.
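One hedged way to formalize "calibrated Bayesian adjustment" is a beta-binomial update: treat the season-long percentage as a prior worth some number of pseudo-shots, fold in the recent streak as evidence, and take the posterior mean. The prior strength below (200 pseudo-shots) and the shooter's numbers are illustrative assumptions, not estimates from any dataset:

```python
def posterior_make_prob(season_pct, prior_shots, recent_makes, recent_attempts):
    """Posterior mean make probability under a Beta prior and binomial evidence.

    `prior_shots` controls how strongly the season-long rate anchors the estimate.
    """
    a = season_pct * prior_shots + recent_makes
    b = (1 - season_pct) * prior_shots + (recent_attempts - recent_makes)
    return a / (a + b)

# A 46% shooter who just went 5-for-5, anchored by a prior worth 200 shots:
updated = posterior_make_prob(0.46, 200, 5, 5)
print(round(updated, 3))  # 0.473 — a nudge upward, not a leap to "he can't miss"
```

A weaker prior (fewer pseudo-shots) lets recent results move the estimate more; the debate over the hot hand's magnitude is, in effect, a debate over how strong that prior should be.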

Research by Bocskocsky, Ezekowitz, and Stein (2014) using shot-quality-adjusted data from the NBA found evidence of a small but real hot hand effect even after controlling for shot location, defender proximity, and other quality measures — suggesting that some updating on recent performance is defensible. The magnitude of the effect was smaller than coaches and fans typically believe, but not zero.

Regression to the Mean: The Invisible Corrective

One of the most practically important statistical concepts intertwined with the hot hand is regression to the mean — the tendency for extreme performance, whether above or below average, to be followed by performance closer to the underlying average.

If a fund manager has an outstanding year, regression to the mean predicts a less outstanding subsequent year not because their skill has declined but because extreme performance typically contains a luck component that is not repeatable. If a sports team goes on a ten-game winning streak, regression to the mean predicts a less extreme record ahead — not because some force reverses momentum but because the streak almost certainly included good luck that, on average, will not repeat.

People who hire on the basis of streak performance, invest on the basis of recent returns, or promote on the basis of exceptional recent results are systematically ignoring regression to the mean — and predictably disappointed when performance reverts. This reversion is then often attributed to complacency, environmental change, or bad luck, when it was statistically predictable from the outset.
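The mechanism is easy to demonstrate. In the sketch below, each simulated performer has a fixed skill plus independent yearly luck of equal variance (assumed numbers, chosen only for illustration); the top decile of year-one performers falls roughly halfway back toward average in year two, even though nobody's skill changed:

```python
import random

random.seed(1)
N = 10_000
skills = [random.gauss(0, 1) for _ in range(N)]   # fixed ability
year1 = [s + random.gauss(0, 1) for s in skills]  # ability + luck
year2 = [s + random.gauss(0, 1) for s in skills]  # same ability, fresh luck

# Select the top 10% of year-1 performers and track them into year 2
top = sorted(range(N), key=lambda i: year1[i], reverse=True)[: N // 10]
avg1 = sum(year1[i] for i in top) / len(top)
avg2 = sum(year2[i] for i in top) / len(top)
print(round(avg1, 2), round(avg2, 2))  # year-2 average is roughly half of year 1
```

The halving is no accident: with skill and luck contributing equal variance, the best guess for a selected performer's next result is about half their observed excess. In more luck-dominated domains, reversion is steeper.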


The Broader Lesson About Streaks

The hot hand debate ultimately illuminates a general truth about how humans reason about sequences over time.

We are profoundly uncomfortable with genuine randomness. Random processes produce clusters — runs of the same outcome that feel meaningful — because short sequences are under no obligation to look balanced; the law of large numbers constrains only very large samples, not the handful of events we observe in real time. A fair coin will produce runs of five or six heads sometimes. Basketball players will sometimes make seven shots in a row by chance alone.

Our pattern-recognition systems, which evolved to detect real signals in noisy environments, treat these clusters as meaningful. We construct narratives: "he's got it tonight," "this team has momentum," "the market is in an uptrend." These narratives feel explanatory, but in many cases they are post-hoc accounts imposed on random variation.

The disciplines of statistics and probability theory exist precisely because our intuitions fail systematically in this domain. The hot hand debate is a reminder that even professional researchers, using careful methodology, can get this wrong — and that when they correct their methods, the answer changes.

This should inspire genuine epistemic humility about streak-based reasoning: not the abandonment of pattern recognition, which is essential, but the recognition that human pattern recognition is calibrated to over-detect rather than under-detect, because the cost of missing a real pattern historically exceeded the cost of seeing a false one.

The Statistical Literacy Argument

The hot hand story is one of the most useful teaching cases in statistical literacy precisely because the intuition it challenges is so powerful and so universally held. Unlike many statistical errors — which feel abstract and concern populations most people do not directly observe — the hot hand belief concerns something everyone has watched in real time. The subjective experience of watching a player make consecutive shots is vivid and emotionally compelling in a way that a table of conditional probabilities is not.

This is what makes the original Gilovich-Vallone-Tversky paper so important as a teaching artifact, independent of whether the effect they measured is exactly right. It shows that strong, vivid, confident intuitions about observable phenomena can be wrong in systematic and predictable ways — and that the methods of statistical analysis, applied carefully, can reveal this. Thirty-three years later, Miller and Sanjurjo show that the methods themselves require scrutiny. The lesson compounds: neither intuition nor analysis is infallible, and the appropriate response is ongoing calibration of both.


Conclusion

The hot hand story has had three acts. First, the intuition: players run hot, and passing to the hot shooter is smart. Second, the demolition: sophisticated statistical analysis showed this belief is a cognitive illusion, an example of the human mind's failure to correctly model independent random events.

Third, the complication: the original analysis had a statistical flaw, and when corrected, it suggests the hot hand may actually be real — at least in some domains and some magnitudes.

Where this leaves us is not with a simple lesson but with a genuine appreciation for the difficulty of distinguishing real patterns from random noise in finite sequences. The hot hand may be real in some athletic contexts, an artifact of our pattern-seeking in others, and a dangerous error in genuinely random ones like gambling.

The most useful takeaway is not "always trust streaks" or "never trust streaks" but a deeper appreciation for the conditions under which sequential events are and are not independent. In random domains, treat every event as fresh. In skilled domains with genuine feedback loops, modest updating on recent performance is probably appropriate. In financial markets, be deeply skeptical of any strategy that amounts to chasing recent winners.

The hot hand fallacy, it turns out, was itself a little too certain. The question of when and how strongly to update on streaks is harder than the original 1985 paper suggested — and getting it right matters more than we might think.


Key Takeaways

  • The hot hand fallacy is the belief that recent success in an independent random sequence predicts future success — most famously studied in basketball shooting
  • Gilovich, Vallone, and Tversky (1985) found no statistical evidence for the hot hand in multiple basketball datasets, concluding it was a cognitive illusion driven by pattern-seeking and confirmation bias
  • Miller and Sanjurjo (2018) identified a real statistical bias in the original analysis; when corrected, the data shows a small positive hot hand effect — reigniting the scientific debate
  • The gambler's fallacy is the mirror image error: expecting random sequences to reverse after a streak, rather than continue
  • Both errors stem from the same source: the intuition that random processes should look "balanced" in small samples, which causes us to see long runs as non-random
  • Regression to the mean — the tendency for extreme performance to move back toward average — is the statistical mechanism most people ignore when chasing performance streaks
  • In domains with genuine skill autocorrelation (athletic momentum, poker), modest updating on recent performance may be appropriate; in genuinely random domains (casino games) and high-noise ones (fund performance), streak-chasing is reliably costly
  • DALBAR research estimates individual investors underperform their own funds by roughly 3.5% annually, largely due to hot-hand-driven performance chasing

Frequently Asked Questions

What is the hot hand fallacy?

The hot hand fallacy is the mistaken belief that a person who has experienced recent success — like a basketball player making several shots in a row — has a higher probability of continued success due to being 'hot.' The 1985 paper by Gilovich, Vallone, and Tversky found no statistical evidence for the hot hand in basketball, suggesting the belief is an illusion created by our pattern-seeking minds.

Was the hot hand fallacy itself a fallacy?

In 2018, economists Joshua Miller and Adam Sanjurjo published a paper showing that the original Gilovich, Vallone, and Tversky analysis contained a subtle statistical bias. When corrected, their data actually showed evidence supporting the hot hand. This finding is contested but has significantly revived the debate about whether hot hand effects are real in sports performance.

What is the difference between the hot hand fallacy and the gambler's fallacy?

The gambler's fallacy is the belief that a run of one outcome makes the opposite outcome more likely — that after several heads, tails is 'due.' The hot hand fallacy is the opposite: the belief that a run of successes makes continued success more likely. Both are forms of the same underlying error: treating independent events as though they are connected — by momentum in one case, by a balancing force in the other.

Where is hot hand thinking actually accurate?

In domains where performance is genuinely autocorrelated — where past success affects the conditions for future success — hot hand effects can be real. Investors with superior information may outperform persistently. Athletes may genuinely enter states of elevated focus and coordination. In contexts with genuine skill and feedback loops, streaks can reflect real effects rather than random clustering.

Where does hot hand thinking hurt decisions?

Hot hand thinking is most dangerous in purely random domains. Attributing streaks to skill in roulette, slot machines, or lottery outcomes leads to overconfident betting. In business, attributing a company's recent performance streak to management genius rather than favorable conditions can lead to overpriced acquisitions and poor capital allocation decisions.