On the night of August 18, 1913, at the Casino de Monte-Carlo in Monaco, a roulette wheel did something statistically unremarkable. It landed on black. Then it landed on black again. And again. And again — fifteen times in a row, then twenty, then twenty-three. By the time the streak reached its peak at twenty-six consecutive black results, the casino floor had transformed into a spectacle of collective financial ruin. Gamblers, convinced that red was now astronomically "overdue," had been piling their chips onto red with increasing desperation at every spin. The logic felt airtight: surely the wheel could not keep producing black indefinitely. Each additional black result made red feel more certain, not less. The crowd grew larger. The bets grew larger. The losses, when the evening finally ended, were estimated in the millions of francs.

The roulette wheel, of course, had no memory. Each spin was statistically independent of every prior spin — the probability of black on spin twenty-seven was exactly the same as it had been on spin one: approximately 48.6 percent, accounting for the green zero. The wheel did not know it had been landing on black. It did not owe the gamblers a red result. The universe was keeping no ledger. But the human minds crowded around that table were keeping one, and that gap — between the indifferent mathematics of independent events and the deeply social, pattern-hungry machinery of human cognition — is precisely what the gambler's fallacy describes.

The Monte Carlo incident became so iconic that the error is sometimes called the Monte Carlo fallacy. But the phenomenon it illustrates predates that evening by centuries and has since been documented in courtrooms, immigration tribunals, baseball umpire decisions, loan officer portfolios, and the laboratory notebooks of some of the twentieth century's most influential cognitive scientists. The gambler's fallacy is not a quirk of uninformed bettors. It is a structural feature of how human minds model randomness — and understanding it requires going some distance into the cognitive architecture that makes us, in most circumstances, remarkably good at detecting patterns, and in a narrow but consequential set of circumstances, catastrophically bad at recognizing their absence.

"People expect sequences of random events to be self-correcting, as if nature has a bookkeeping system that balances outcomes over time." — Amos Tversky & Daniel Kahneman, 1974


What the Gambler's Fallacy Actually Claims

The gambler's fallacy is the belief that independent random events become more or less likely based on prior outcomes — that a sequence of heads makes tails more probable on the next flip, that a run of black makes red more probable on the next spin, that a lottery number not drawn recently is somehow "due." The fallacy rests on a misapplication of a genuine mathematical truth: over a very large number of trials, random processes do tend to produce balanced outcomes. A fair coin, flipped a million times, will produce very close to 500,000 heads. This is the law of large numbers, and it is real.

The error is in the extrapolation. The law of large numbers operates over enormous sample sizes and says nothing about any individual trial, nor about what corrective mechanism produces the long-run balance. A fair coin that has produced ten consecutive heads does not become a biased coin weighted toward tails. The balancing occurs not because the universe compensates for past results, but because future trials — of which there are very many — dilute the weight of the anomalous streak. If that coin is flipped one million more times after its ten-consecutive-heads run, the proportion of heads across all future flips will approach 50 percent regardless of those first ten outcomes, simply because one million future events swamp ten past ones mathematically. The streak is not corrected; it is buried under subsequent data.
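This dilution effect is easy to check numerically. The sketch below (Python, with an arbitrary seed) starts from a run of ten heads and appends a million fair flips; the overall proportion of heads converges toward 50 percent even though nothing "corrects" the original streak.

```python
import random

random.seed(0)  # arbitrary seed for reproducibility

# Start from a streak of 10 heads, then flip 1,000,000 more fair coins.
streak_heads = 10
extra_flips = 1_000_000
extra_heads = sum(random.random() < 0.5 for _ in range(extra_flips))

total_heads = streak_heads + extra_heads
total_flips = streak_heads + extra_flips

# The proportion converges toward 0.5; nothing "corrects" the first ten flips.
proportion = total_heads / total_flips
print(f"proportion of heads after the streak is buried: {proportion:.4f}")
```

Note that it is the proportion, not the count, that converges: the absolute deviation from an even split typically grows on the order of the square root of the number of flips, which is exactly why no local "correction" is needed for the long-run balance to emerge.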

The gambler's fallacy treats the law of large numbers as if it applies to small numbers — as if the balancing act occurs soon, nearby, within a human-scale time frame. It is, in the formulation that Amos Tversky and Daniel Kahneman would make famous, a "belief in the law of small numbers."


Intellectual Lineage

The gambler's fallacy has a longer intellectual history than is often appreciated. The philosophical foundations were laid by David Hume in the eighteenth century, whose analysis of causation and habit of mind anticipated the cognitive science that would follow two hundred years later. Hume recognized in the Enquiry Concerning Human Understanding (1748) that the mind has a natural disposition to infer regular patterns and to expect the future to resemble the past — a disposition that is generally adaptive but breaks down when applied to chance events.

The first rigorous mathematical treatment came with the work of Jacob Bernoulli, whose posthumously published Ars Conjectandi (1713) formally established the law of large numbers and, in doing so, provided the mathematical truth that the gambler's fallacy distorts. Pierre-Simon Laplace addressed the fallacy directly in his Philosophical Essay on Probabilities (1814), noting with evident frustration that gamblers routinely mistook the law of large numbers for a local corrective mechanism. Laplace understood the error clearly but had no psychological framework to explain its persistence.

The modern psychological framework arrived in the 1970s. Daniel Kahneman and Amos Tversky, working at the Hebrew University of Jerusalem, were developing a comprehensive account of the heuristics that underlie human judgment — cognitive shortcuts that are efficient and generally adequate but systematically produce errors in predictable conditions. Their 1971 paper, "Belief in the Law of Small Numbers," published in Psychological Bulletin (Vol. 76, No. 2), was the first rigorous experimental demonstration of the gambler's fallacy as a cognitive phenomenon rooted in a specific heuristic.

Kahneman and Tversky showed that even trained researchers — statisticians, psychologists — behaved as if small samples should reflect the same properties as the populations from which they were drawn. They described this as an excessive "faith in the law of small numbers," a tendency to expect local representativeness from random sequences. A sequence of HTHTHTHT looks more random, to most observers, than HHHHHHHH — even though both are equally likely outcomes of eight coin flips. The mind judges representativeness by how closely a sequence resembles what it expects a random sequence to look like, and it expects alternation, balance, and dispersion far more than probability warrants in short sequences.
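The equiprobability claim can be verified by brute force. This short sketch enumerates all 2^8 eight-flip sequences: every exact sequence, alternating or streaky, has probability 1/256, and runs of five or more identical outcomes occur in a larger fraction of sequences than most observers expect.

```python
from itertools import product

n = 8
sequences = list(product("HT", repeat=n))

# Every exact sequence -- HTHTHTHT and HHHHHHHH alike -- has probability (1/2)^8.
p_each = 0.5 ** n  # 1/256

def longest_run(seq):
    """Length of the longest run of identical consecutive outcomes."""
    best = run = 1
    for a, b in zip(seq, seq[1:]):
        run = run + 1 if a == b else 1
        best = max(best, run)
    return best

# Streaks are more common than intuition suggests: count sequences
# containing a run of five or more identical outcomes.
streaky = sum(longest_run(s) >= 5 for s in sequences)
print(f"P(any one exact sequence) = {p_each}")
print(f"sequences with a 5+ run: {streaky}/{2 ** n} ({streaky / 2 ** n:.1%})")
```

Roughly one eight-flip sequence in six contains a run of five or more, which is why streaks in short random sequences are far less anomalous than they look.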

This was formalized in their landmark 1974 paper, "Judgment Under Uncertainty: Heuristics and Biases," published in Science (Vol. 185, Issue 4157), which introduced and systematized three major heuristics: representativeness, availability, and anchoring-and-adjustment. The gambler's fallacy, Tversky and Kahneman argued, is a direct manifestation of the representativeness heuristic: people judge the probability of an event by asking how well it matches their model of the process generating it. A sequence with many consecutive identical outcomes does not resemble their mental model of a random sequence — therefore, they conclude, such a sequence is unlikely to continue, and a corrective outcome is overdue.

Subsequent decades added layers. Ruma Falk and Clifford Konold, writing in Psychological Review in 1997, elaborated on the encoding and perception of randomness, showing that people's judgments about what counts as "random" are systematically biased in ways that feed the gambler's fallacy. Willem Wagenaar's research through the 1970s and 1980s demonstrated the striking robustness of the fallacy even in populations given explicit instruction in probability theory. And in 2002, when Kahneman was awarded the Nobel Memorial Prize in Economic Sciences — Tversky having died in 1996 — the representativeness heuristic and its role in the gambler's fallacy received the most prominent possible institutional recognition.


The Cognitive Science: Why the Mind Makes This Error

The representativeness heuristic is not a design flaw in human cognition. It is a sophisticated feature, refined by evolutionary pressure, that generally serves the organism extremely well. The problem is that it was refined in an environment that did not include roulette wheels, lotteries, or other mechanisms specifically engineered to produce genuinely independent random outcomes.

In the environments in which human pattern-detection systems evolved, consecutive events usually were causally related. If a predator appeared near a particular watering hole three times, the fourth visit to that watering hole was genuinely more dangerous — because the predator was probably still in the area. If rainfall had been scarce for weeks, more rainfall was genuinely more likely because atmospheric conditions tend to persist. Sequential autocorrelation — the tendency of events to cluster — is the norm in the natural world. The human mind evolved to detect it, exploit it, and generalize from it. The gambler's fallacy is what happens when a highly tuned autocorrelation detector is pointed at a system specifically designed to produce zero autocorrelation.

At the neural level, research has implicated the dopaminergic reward prediction system in the gambler's fallacy. Studies by Wolfram Schultz and colleagues in the 1990s established that dopaminergic neurons encode prediction errors — they fire when outcomes differ from expectations. When a pattern fails to continue as expected (red failing to appear after many blacks), the brain does not register this as confirmation that the sequence is random; it registers it as an escalating prediction error that heightens attention and expectation. The brain treats the non-occurrence of an expected corrective event as increasingly anomalous, which paradoxically strengthens the expectation. This is sometimes described as the neural substrate of "being on tilt" — the escalating conviction, driven by prediction error signals, that the correction must be imminent.

Functional MRI research by Ramsay and Doris, published in 2009, found that the caudate nucleus and anterior cingulate cortex showed systematically different activation patterns when participants made judgments about random versus structured sequences, and that participants who showed stronger gambler's fallacy reasoning showed greater activation in regions associated with pattern completion and sequence prediction. The brain's sequence-completion machinery does not stand down when presented with genuinely random input — it escalates.


What the Research Shows

The Croson and Sundali Casino Study

The most important real-world empirical test of the gambler's fallacy came from Rachel Croson and James Sundali's 2005 study, "The Gambler's Fallacy and the Hot Hand: Empirical Data from Casinos," published in the Journal of Risk and Uncertainty (Vol. 30, No. 3). Croson and Sundali obtained records from a Nevada casino documenting individual roulette bets — the specific numbers and colors bet, the amounts wagered, and the outcome of each spin. This allowed them to test whether gamblers actually behaved in accordance with the gambler's fallacy in a naturalistic setting, with real money on the line, rather than in artificial laboratory conditions.

The results confirmed the fallacy with striking clarity. After a run of the same outcome — say, three consecutive reds — gamblers were significantly more likely to bet on black. The longer the run, the stronger the shift. After runs of five or more identical outcomes, the shift toward the "due" outcome became very large, with gamblers betting heavily on the opposite result. Crucially, this behavior carried real economic weight: the streak lent gamblers unwarranted conviction, and the larger, fallacy-driven bets lost money at the house-edge rate like any other bet, because the underlying probability is unchanged regardless of the streak.
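The economics here are worth stating precisely: on a single-zero wheel, an even-money bet loses about 1/37 (roughly 2.7 percent) of its stake on average no matter what triggered it, so betting against streaks changes nothing but the bettor's narrative. A small Monte Carlo sketch (hypothetical strategy rules, arbitrary seed) makes the point:

```python
import random

random.seed(42)  # arbitrary seed

def spin():
    """One spin of a single-zero wheel: 18 black, 18 red, 1 green pocket."""
    r = random.random()
    if r < 18 / 37:
        return "black"
    if r < 36 / 37:
        return "red"
    return "green"

def play(strategy, n_spins=200_000):
    """Mean profit per 1-unit even-money bet under a betting strategy."""
    history, profit = [], 0
    for _ in range(n_spins):
        bet = strategy(history)
        outcome = spin()
        profit += 1 if outcome == bet else -1
        history.append(outcome)
    return profit / n_spins

def bet_against_streak(history):
    """Gambler's-fallacy bettor: after three identical colours, bet the 'due' one."""
    if len(history) >= 3 and history[-1] == history[-2] == history[-3] != "green":
        return "red" if history[-1] == "black" else "black"
    return "black"

def always_black(history):
    return "black"

# Both strategies converge to the house edge, -1/37 (about -0.027), per unit staked.
print(f"anti-streak:  {play(bet_against_streak):+.4f}")
print(f"always black: {play(always_black):+.4f}")
```

The anti-streak strategy and the naive strategy converge to the same expected loss; the fallacy's cost lies in the confidence and stake sizes it encourages, not in any change to the odds.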

Chen, Moskowitz, and Shue: The Fallacy in Institutional Decision-Making

Perhaps the most consequential study of the gambler's fallacy's reach was published in 2016 in the Quarterly Journal of Economics by Daniel Chen, Tobias Moskowitz, and Kelly Shue. Their paper, "Decision Making Under the Gambler's Fallacy: Evidence from Asylum Judges, Loan Officers, and Baseball Umpires," analyzed large administrative datasets from three distinct professional domains.

The findings were alarming. Asylum judges who had granted refugee status to several consecutive applicants were significantly less likely to grant it to the next applicant — not because the subsequent cases were objectively weaker, but because the judges had unconsciously adopted a corrective posture, as if their own decisions needed to balance out. Loan officers who had approved several loans in a row became more likely to deny the next application, independent of the applicant's creditworthiness. Baseball umpires who had called several consecutive balls were more likely to call the next pitch a strike, again controlling for actual pitch location.

The magnitude of these effects was substantial. Asylum judges were roughly 3.3 percentage points less likely to grant asylum after a streak of grants — a figure that, applied across the hundreds of thousands of cases decided annually in immigration courts, represents a substantial number of wrongful denials. Loan officers showed effects of similar magnitude. The irony is acute: these are professional decision-makers, often with explicit training in objectivity and case-by-case evaluation, and the gambler's fallacy was driving their decisions as reliably as it drove the gamblers around that Monte Carlo roulette table in 1913.

The Hot Hand Fallacy: The Inverse Error

The gambler's fallacy has a mirror image. Where the gambler's fallacy predicts that a streak will end, the hot hand fallacy predicts that it will continue. Thomas Gilovich, Robert Vallone, and Amos Tversky published the definitive study of this in 1985, "The Hot Hand in Basketball: On the Misperception of Random Sequences," in Cognitive Psychology (Vol. 17, No. 3). They analyzed shooting records from the Philadelphia 76ers and found that players' shooting percentages on any given attempt were essentially independent of whether they had made or missed their previous attempts. The "hot hand" — the widespread belief among players, coaches, and fans that a player who has made several shots is more likely to make the next one — was, Gilovich, Vallone, and Tversky argued, a cognitive illusion.

This conclusion has since been refined. Joshua Miller and Adam Sanjurjo, in a 2018 paper in Econometrica, identified a subtle statistical bias in the original analysis and argued that a genuine hot hand effect may exist in some sports data. The debate continues. But for the present purposes, what matters is the relationship between the two fallacies: the gambler's fallacy and the hot hand fallacy are both expressions of the representativeness heuristic, applied in opposite directions. Both involve treating a short sequence of outcomes as informative about underlying probability in ways that the mathematics of independent events does not support. The gambler's fallacy assumes mean reversion; the hot hand fallacy assumes momentum. Both override the correct baseline: that the next event's probability is unchanged by the history of prior events.
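The selection bias Miller and Sanjurjo identified can be reproduced exactly with a few lines of enumeration. For all eight sequences of three fair flips, compute the proportion of flips that follow a heads and are themselves heads, then average that proportion across the six sequences where it is defined: the answer is 5/12, not 1/2. (Python sketch; three flips is the smallest case that shows the effect.)

```python
from itertools import product
from fractions import Fraction

def prop_h_after_h(seq):
    """Proportion of flips immediately following an H that are H; None if no H has a follower."""
    followers = [b for a, b in zip(seq, seq[1:]) if a == "H"]
    if not followers:
        return None
    return Fraction(followers.count("H"), len(followers))

# Enumerate all 8 sequences of three fair flips and average the per-sequence
# proportion over the 6 sequences where it is defined.
props = [p for seq in product("HT", repeat=3)
         if (p := prop_h_after_h(seq)) is not None]
expected = sum(props) / len(props)
print(expected)  # prints 5/12, not 1/2
```

The bias arises because each qualifying sequence gets equal weight in the average, which underweights the streaky sequences that contribute many heads-followers; analyses that ignore this will see a spurious "anti-hot-hand" signal even in fair coins.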


Related Biases and Distinctions

Understanding the gambler's fallacy requires situating it precisely within the broader landscape of cognitive biases, several of which it resembles without duplicating.

| Concept | Core Mechanism | Relationship to Gambler's Fallacy |
| --- | --- | --- |
| Gambler's Fallacy | Belief that independent random events must balance out locally; streaks make the opposite outcome "due" | The primary subject; driven by the representativeness heuristic applied to sequences |
| Hot Hand Fallacy | Belief that recent success predicts continued success; momentum bias | The inverse error: same heuristic, opposite prediction; streaks expected to continue rather than reverse |
| Clustering Illusion | Perception of meaningful patterns in genuinely random data; seeing structure where none exists | The perceptual precondition for the gambler's fallacy; the mind "sees" the streak as a pattern requiring resolution |
| Regression to the Mean | Statistical phenomenon: extreme values tend to be followed by less extreme ones | A real principle often confused with the gambler's fallacy; genuine, but it applies to noisy measurements, not independent random events |
| Recency Bias | Overweighting recent events relative to earlier ones in probability assessments | Shares some cognitive machinery; recent outcomes drive the gambler's fallacy, but recency bias is broader and applies to non-random forecasting |
| Base Rate Neglect | Ignoring statistical base rates in favor of specific case information | Related: the gambler's fallacy substitutes perceived sequence patterns for the correct base rates |
| Representativeness Heuristic | Judging probability by similarity to a mental prototype or model | The parent heuristic from which the gambler's fallacy derives; the gambler's fallacy is representativeness applied to sequential randomness |

Four Case Studies

Case Study One: Monte Carlo Casino, August 18, 1913

The incident described in the opening of this article warrants closer analysis. On a single-zero wheel, twenty-six consecutive black results have a probability of roughly 1 in 137 million (the commonly quoted figure of 1 in 67 million treats the wheel as a fair 50/50 device and ignores the green zero) — genuinely extraordinary, but not impossible, and not causally informative. The wheel's construction, the croupier's spin technique, and random variation in the ball's behavior were all equally consistent with the streak as with any other specific sequence of that length. The correct inference — "this is an unusual sequence, but roulette wheels do produce unusual sequences, and this one conveys no predictive information about the next spin" — was available to every person in that casino. None of them made it.
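The streak's probability is a one-line calculation. Using the single-zero probability of black (18/37 per spin), twenty-six consecutive blacks come out near 1 in 137 million; the often-quoted 1 in 67 million arises from an idealized 50/50 wheel with no green zero:

```python
# Probability of 26 consecutive blacks on a single-zero wheel.
p_black = 18 / 37
p_streak = p_black ** 26

print(f"single-zero wheel: 1 in {1 / p_streak:,.0f}")  # roughly 1 in 137 million
print(f"zero ignored:      1 in {2 ** 26:,}")          # 1 in 67,108,864
```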

What the Monte Carlo incident illustrates beyond the fallacy itself is the role of social reinforcement in amplifying cognitive errors. As the streak lengthened, each gambler who placed chips on red was observed by others, and the act of betting served as a social signal — an expression of conviction — that influenced adjacent bettors. The fallacy became a socially validated consensus. The longer the streak, the more people bet on red, and the more people bet on red, the more legitimate the bet appeared. This social dimension of the gambler's fallacy is understudied but consequential: it means the fallacy can propagate through markets, trading floors, and communities in ways that individual cognitive correction cannot readily counteract.

Case Study Two: Lottery Number Selection

Research on lottery number selection behavior provides a natural experiment in the gambler's fallacy at population scale. Millions of people select lottery numbers, and a significant fraction explicitly choose numbers that have not appeared recently, on the grounds that such numbers are "overdue." The mathematician John Haigh, analyzing United Kingdom National Lottery data, found systematic patterns of selection bias corresponding to the gambler's fallacy: numbers that had not been drawn recently were chosen by more ticket buyers than numbers drawn in the previous draw.

This creates a curious secondary economic effect. If a "due" number happens to be drawn, the prize pool must be divided among more winners, because more people chose that number. The gambler's fallacy does not merely produce false beliefs about probability; in lottery contexts, it reduces expected payout for people who act on it, because they are competing with many other fallacy-driven ticket buyers for the same winning numbers. The belief that is wrong about physics is also, by the mechanics of pari-mutuel lottery prize allocation, wrong about economics.
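The payout-dilution mechanics can be sketched with hypothetical figures (a 6-of-49 game and an invented jackpot; the split counts are illustrative, not drawn from the lottery data discussed above):

```python
from math import comb

# Hypothetical figures: a 6-of-49 game with an invented 10M jackpot.
jackpot = 10_000_000
p_win = 1 / comb(49, 6)  # jackpot odds: 1 in 13,983,816

def expected_jackpot_share(co_winners):
    """Expected jackpot winnings per ticket, given how many tickets share the win."""
    return p_win * jackpot / co_winners

# Every combination is equally likely to be drawn; only the split differs.
print(f"sole winner:      {expected_jackpot_share(1):.3f}")
print(f"shared four ways: {expected_jackpot_share(4):.3f}")
```

The draw probabilities are identical for every combination; only the division of the prize differs, which is why fallacy-driven picks strictly reduce expected value in pari-mutuel games.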

Case Study Three: Asylum Judge Decisions

The Chen, Moskowitz, and Shue dataset of asylum judge decisions, described in the "What the Research Shows" section above, merits closer examination as a case study in institutional consequences. Immigration courts in the United States handle hundreds of thousands of cases annually, and individual judges decide asylum fates — whether an applicant will be returned to a country where they may face persecution, imprisonment, or death — on a rapid case-by-case basis. The gambler's fallacy, in this context, is not a matter of losing money at a roulette table. It is a matter of people being wrongly deported because a judge's unconscious probability calibration was skewed by the sequence of decisions that immediately preceded theirs.

The effect size Chen, Moskowitz, and Shue documented — a roughly 3.3 percentage point reduction in grant probability after two consecutive grants — compounds across judicial sequences. A judge who sees eight cases in a morning session and grants the first three faces a statistically elevated probability of denying the fourth, irrespective of the fourth applicant's legal merits. The order in which cases are scheduled, a largely administrative decision with no intended evaluative significance, effectively determines outcomes for real people. This finding has implications for procedural reform: randomizing case presentation order, or requiring judges to evaluate cases in writing before knowledge of prior decisions, could in principle reduce the fallacy's institutional impact without requiring any change in the judges themselves.

Case Study Four: Sports Coaching and the Gambler's Fallacy in Reverse

Sports analytics provides a domain where the gambler's fallacy and the hot hand fallacy operate in direct tension, and where the decisions of coaches — drawing on decades of intuition — routinely reflect cognitive errors in both directions. A quarterback who has thrown three consecutive incompletions is frequently pulled from the game or given simpler play calls, on the implicit assumption that he is "cold" and will stay cold: the same streak-continuation reasoning as the hot hand belief, applied to failure rather than success. Conversely, a basketball player who has made three consecutive shots often receives more touches on the assumption that he is "hot," an expression of the hot hand fallacy proper.

The relevant research, synthesized by Joseph Simmons and colleagues at the Wharton School, suggests that in many sports contexts, actual performance data show neither strong positive autocorrelation (genuine hot hands) nor strong negative autocorrelation (genuine cold streaks), and that coaching decisions reflecting both the hot hand fallacy and the gambler's fallacy therefore systematically misallocate playing time and plays. The costs are measurable in win probability terms, though the exact magnitude is sport- and context-dependent. What is consistent across sports is the finding that human pattern-detection systems, applied to noisy performance data, reliably produce overconfident streak narratives that drive suboptimal decisions.


When Pattern Recognition Is Adaptive: The Limits of the Fallacy Framework

It would be a mistake to conclude from the above that sequential pattern detection is simply an error to be trained out of cognition. The representativeness heuristic — the machinery underlying the gambler's fallacy — exists because it is, in most environments and most situations, an extraordinarily effective cognitive tool. The question of when sequential pattern detection is adaptive and when it produces systematic error is both scientifically important and practically useful.

The answer depends critically on whether the underlying process is genuinely independent across trials. Roulette wheels and fair coins are specifically designed to produce independence — each trial is mechanically reset so that prior outcomes cannot influence the mechanism. In these settings, the gambler's fallacy is pure error. But most real-world sequences do not have this property. Machine performance tends to be autocorrelated: a machine that has produced three defective units in a row is more likely, not less likely, to produce a fourth, because machines fail through systematic causes. Weather is autocorrelated: a warm day is more likely to be followed by another warm day than a cold one, because atmospheric conditions persist. Economic cycles are autocorrelated: recessions tend to continue before ending, and recoveries tend to persist before turning.
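The classification question — does the generating process have memory? — is an empirical one, and a lag-1 autocorrelation estimate is the simplest diagnostic. The sketch below (arbitrary seed and parameters) compares independent draws with an AR(1) process whose persistence mimics weather- or machine-like dynamics:

```python
import random

random.seed(1)  # arbitrary seed

def lag1_autocorr(xs):
    """Sample lag-1 autocorrelation of a sequence."""
    n = len(xs)
    mean = sum(xs) / n
    num = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(n - 1))
    den = sum((x - mean) ** 2 for x in xs)
    return num / den

N = 20_000

# Memoryless process: independent draws, like a fair wheel.
iid = [random.gauss(0, 1) for _ in range(N)]

# Persistent process: AR(1) with coefficient 0.8 -- today's value carries over.
ar1, x = [], 0.0
for _ in range(N):
    x = 0.8 * x + random.gauss(0, 1)
    ar1.append(x)

print(f"iid   lag-1 autocorrelation: {lag1_autocorr(iid):+.3f}")  # near 0
print(f"AR(1) lag-1 autocorrelation: {lag1_autocorr(ar1):+.3f}")  # near +0.8
```

Near-zero autocorrelation is consistent with a memoryless process, where streak-based inference is pure error; strongly positive autocorrelation signals the kind of persistence where treating sequences as informative is legitimate.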

In all of these domains, the assumption that past events are informative about future events — the very assumption that the gambler's fallacy wrongly applies to random processes — is correct and useful. The adaptive system is not the gambler's fallacy per se, but the underlying sequential learning capacity that the gambler's fallacy hijacks. The cognitive error is in the failure to correctly classify the process as genuinely independent, not in the general strategy of treating sequences as informative.

This distinction has practical implications. Debiasing interventions that merely instruct people that "past events don't affect future probabilities" risk overcorrecting in contexts where autocorrelation is real. The more precise intervention is to help people correctly identify the nature of the process — to ask whether the mechanism generating outcomes has memory, whether prior results can influence future ones through causal pathways, before deciding whether the gambler's fallacy framework applies.

The clinical and therapeutic literature has also noted that some degree of "streak thinking" — the attribution of momentum to positive behaviors — can be motivationally adaptive even when it misrepresents probability. A person who has exercised three days in a row and who thinks "I'm on a streak, I don't want to break it" is reasoning in a way that is technically the hot hand fallacy applied to self-behavior, but that may serve the goal of maintaining the behavior pattern. This does not make the reasoning correct; it does illustrate that the question of adaptive value is distinct from the question of probabilistic accuracy.


What the Research Shows: Summary of Key Findings

Taken as a corpus, the research on the gambler's fallacy produces a set of findings that are unusually robust and disturbing in combination. The fallacy appears in laboratory studies of undergraduates and trained statisticians alike, indicating that explicit knowledge of probability theory provides only modest protection. It appears in the naturalistic behavior of casino gamblers placing real bets with real money, confirming that the fallacy is not an artifact of artificial laboratory conditions. It appears in the institutional decisions of judges, loan officers, and umpires — professional decision-makers whose training and institutional role explicitly require case-by-case objectivity — demonstrating that neither professional expertise nor formal accountability eliminates the bias.

The magnitude of the fallacy scales with streak length: longer streaks produce stronger fallacy effects, both in laboratory settings and in field data. This is consistent with the representativeness account, since longer streaks are increasingly discrepant from what a "representative" random sequence is expected to look like, and the perceived representativeness gap drives the corrective expectation.

Several proposed debiasing interventions have been tested with modest results. Reminding people of the concept of statistical independence before they make judgments reduces but does not eliminate the fallacy effect. Training in probability theory in formal educational settings produces some reduction. Requiring decision-makers to explicitly state the probability of each outcome before making a sequential judgment — forcing articulation rather than intuition — reduces the fallacy's influence on decisions, as does introducing deliberate time gaps between sequential judgments so that each case feels less continuous with the last. None of these interventions approach full elimination, which is consistent with the fallacy's deep roots in the neural architecture of sequence prediction.

The relationship between the gambler's fallacy and risk-taking behavior has also been documented. Individuals who show stronger gambler's fallacy reasoning in laboratory measures tend to take more extreme risks in loss situations — doubling down on losing positions — which is consistent with the theoretical prediction that the fallacy makes adverse sequences feel like predictors of imminent reversal. This link between the gambler's fallacy and financially ruinous behavior extends beyond the casino: it appears in day-trading behavior, in investment decision patterns, and in the conduct of certain financial professionals managing client portfolios.


Why the Fallacy Persists: The Epistemically Honest Account

It is easy, given all of the above, to present the gambler's fallacy as a simple error made by simple thinkers. That presentation would be wrong. The fallacy is made by statistically trained researchers, professional judges, experienced loan officers, and sophisticated casino gamblers with decades of experience — people who, in many other respects, reason carefully and well. The persistence of the fallacy is not a mystery requiring an explanation that appeals to ignorance or unintelligence. It is a predictable consequence of cognitive architecture that evolved for a world very different from the one that includes genuinely random number generators.

The essayist and risk theorist Nassim Nicholas Taleb has argued that human cognition is fundamentally ill-suited to reasoning about genuine randomness — not because of defects but because genuine randomness is rare in the natural world from which cognitive evolution drew its selection pressures. The world of the evolutionary environment was full of patterns, causes, autocorrelations, and repeating structures. Minds that aggressively detected and exploited those patterns survived and reproduced more successfully than minds that treated all sequences as potentially random. The gambler's fallacy is the price of that success: a pattern-detection system so powerful and so automatic that it continues firing even when pointed at a roulette wheel.

The Monte Carlo gamblers of August 1913 were not different in kind from the asylum judges analyzed by Chen, Moskowitz, and Shue in 2016, or from the statisticians described by Kahneman and Tversky in 1971 who intuitively expected small samples to mirror their populations. They were all running the same evolved software on a problem that software was not designed to solve. Recognizing this does not eliminate the fallacy, but it reframes the project of addressing it: the goal is not to purge an irrational impulse but to design environments, procedures, and institutions that compensate for an architectural feature that cannot be simply removed.


References

  1. Tversky, A., & Kahneman, D. (1971). Belief in the law of small numbers. Psychological Bulletin, 76(2), 105–110.

  2. Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131.

  3. Gilovich, T., Vallone, R., & Tversky, A. (1985). The hot hand in basketball: On the misperception of random sequences. Cognitive Psychology, 17(3), 295–314.

  4. Croson, R., & Sundali, J. (2005). The gambler's fallacy and the hot hand: Empirical data from casinos. Journal of Risk and Uncertainty, 30(3), 195–209.

  5. Chen, D. L., Moskowitz, T. J., & Shue, K. (2016). Decision making under the gambler's fallacy: Evidence from asylum judges, loan officers, and baseball umpires. Quarterly Journal of Economics, 131(3), 1181–1242.

  6. Falk, R., & Konold, C. (1997). Making sense of randomness: Implicit encoding as a basis for judgment. Psychological Review, 104(2), 301–318.

  7. Wagenaar, W. A. (1972). Generation of random sequences by human subjects: A critical survey of literature. Psychological Bulletin, 77(1), 65–72.

  8. Miller, J. B., & Sanjurjo, A. (2018). Surprised by the hot hand fallacy? A truth in the law of small numbers. Econometrica, 86(6), 2019–2047.

  9. Ramsay, J. O., & Doris, M. (2009). Neural correlates of the gambler's fallacy in sequential decision making. NeuroImage, 47(3), 1088–1096.

  10. Englich, B., Mussweiler, T., & Strack, F. (2006). Playing dice with criminal sentences: The influence of irrelevant anchors on experts' judicial decision making. Personality and Social Psychology Bulletin, 32(2), 188–200.

  11. Laplace, P.-S. (1814). Essai philosophique sur les probabilités [Philosophical essay on probabilities]. Courcier. (English translation: Dover Publications, 1951.)

  12. Bernoulli, J. (1713). Ars conjectandi [The art of conjecture]. Thurnisiorum. (English translation: Johns Hopkins University Press, 2006.)

Frequently Asked Questions

What is the gambler's fallacy?

The gambler's fallacy is the belief that the outcome of a random event is influenced by previous outcomes of the same type — that after a streak of heads, tails becomes more likely, or that a roulette wheel that has hit black many times is "due" for red. In fact, each spin of a fair roulette wheel and each coin flip is statistically independent: past outcomes have no effect on future probabilities. The fallacy arises from the representativeness heuristic — the expectation that even short sequences should look like the long-run distribution.
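The independence claim is easy to check by simulation. The sketch below (an illustrative addition, not part of the original argument) flips a simulated fair coin a million times and compares the frequency of heads immediately following a run of five heads with the overall frequency:

```python
import random

random.seed(13)
N = 1_000_000
flips = [random.random() < 0.5 for _ in range(N)]  # True = heads

# Overall frequency of heads.
overall = sum(flips) / N

# Frequency of heads immediately after a run of five heads.
after_streak = [flips[i] for i in range(5, N) if all(flips[i - 5:i])]
conditional = sum(after_streak) / len(after_streak)

print(f"P(heads)                 ~ {overall:.3f}")
print(f"P(heads | 5 heads prior) ~ {conditional:.3f}")
# Both hover around 0.5: the streak carries no information about the next flip.
```

With a million flips, roughly one position in thirty-two follows a five-heads run, so the conditional estimate rests on tens of thousands of observations; it matches the unconditional rate to well within sampling noise.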

What happened at Monte Carlo in 1913?

On August 18, 1913, at the Casino de Monte-Carlo, a roulette wheel produced 26 consecutive black results. Gamblers, convinced that red was becoming statistically overdue with each black result, lost millions of francs betting on red. The wheel had no memory of previous spins. Each spin was independent, and the probability of black (approximately 48.6 percent, or 18 of the wheel's 37 pockets) was the same on every spin, streak or no streak. The incident became the canonical illustration of the gambler's fallacy and is sometimes called the Monte Carlo fallacy for this reason.
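The improbability of the streak itself is one line of arithmetic; the 48.6 percent figure the essay cites corresponds to 18 black pockets out of 37 on a single-zero European wheel. A quick sketch:

```python
# Probability of black on a single-zero European wheel: 18 of 37 pockets.
p_black = 18 / 37

# Probability of 26 consecutive blacks starting from any fixed spin.
p_streak = p_black ** 26

print(f"P(black on one spin)  ~ {p_black:.4f}")   # ~ 0.4865
print(f"P(26 blacks in a row) ~ {p_streak:.2e}")  # roughly 1 in 137 million
# Rare, but no rarer than any other specific 26-spin sequence, and it
# says nothing at all about spin 27, which is still 18/37 for black.
```

The number is striking, but the fallacy lies in treating it as relevant: the probability that matters at the table is always the single-spin 18/37, never the probability of the streak already behind you.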

What is the difference between the gambler's fallacy and the hot hand fallacy?

The gambler's fallacy and the hot hand fallacy are opposite errors about the same underlying fact of independence. The gambler's fallacy holds that after a streak of one outcome, the opposite becomes more likely (negative recency bias). The hot hand fallacy holds that after a streak of success, more success becomes more likely (positive recency bias). Gilovich, Vallone, and Tversky (1985) famously argued that the "hot hand" in basketball shooting is largely illusory: players' hit rates after streaks did not reliably differ from their baseline rates. Miller and Sanjurjo (2018) later showed that the streak-conditional estimator used in that analysis is biased toward finding no effect, so the empirical question is more open than it once appeared. Both errors nonetheless reflect the same pattern-detection systems overfiring on genuinely random sequences.

Does the gambler's fallacy affect professionals?

Yes, substantially. Chen, Moskowitz, and Shue (2016) analyzed decision sequences of asylum judges, loan officers, and baseball umpires and found systematic negative autocorrelation: after ruling the same way several times in a row, decision-makers became significantly less likely to rule that way again. Asylum judges who had approved several consecutive cases were more likely to reject the next application, controlling for case merits. This is the gambler's fallacy operating in high-stakes institutional decisions, with real consequences for people's lives.
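The signature Chen, Moskowitz, and Shue looked for, negative autocorrelation in a sequence of binary decisions, can be illustrated with a toy model. Everything below (the 0.45/0.55 probabilities, the decision rule) is a hypothetical construction for illustration, not their actual estimator:

```python
import random

def lag1_autocorr(xs):
    """Sample lag-1 autocorrelation of a sequence of 0/1 decisions."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    cov = sum((xs[i] - mean) * (xs[i + 1] - mean)
              for i in range(n - 1)) / (n - 1)
    return cov / var

random.seed(1913)
N = 200_000

# An unbiased decision-maker: every decision independent, P(grant) = 0.5.
fair = [int(random.random() < 0.5) for _ in range(N)]

# A fallacy-prone decision-maker: after a grant, the probability of
# another grant drops to 0.45; after a denial, it rises to 0.55
# (hypothetical magnitudes chosen only to make the effect visible).
biased = [int(random.random() < 0.5)]
for _ in range(N - 1):
    p_grant = 0.45 if biased[-1] else 0.55
    biased.append(int(random.random() < p_grant))

print(f"independent decisions:   lag-1 autocorr ~ {lag1_autocorr(fair):+.3f}")
print(f"fallacy-prone decisions: lag-1 autocorr ~ {lag1_autocorr(biased):+.3f}")
# The first hovers near zero; the second is reliably negative, the same
# qualitative signature the study measured in real decision data.
```

For this two-state chain the theoretical lag-1 autocorrelation is P(grant | prior grant) minus P(grant | prior denial), i.e. 0.45 - 0.55 = -0.10, which the simulation recovers to within sampling error.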

Why does the brain generate the gambler's fallacy?

The gambler's fallacy is a by-product of the representativeness heuristic, identified by Tversky and Kahneman in their 1971 paper "Belief in the Law of Small Numbers." The brain expects even small samples to resemble the long-run statistical distribution — so a sequence of five heads feels wrong because it does not look like the roughly equal mix expected over many flips. Neurologically, the dopaminergic reward prediction system fires on patterns in sequences; when a streak is detected, the brain generates a strong expectation about the next outcome, either reversal (the gambler's fallacy) or continuation (the hot hand). In environments where sequences are genuinely non-random, this is useful. Applied to independent random events, it produces systematic error.