In the early 1980s, Amos Tversky and Daniel Kahneman devised a scenario that has since become one of the most replicated findings in the history of cognitive psychology. They presented participants with the following description:

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.

Then they asked a simple question: which of the following is more probable?

(A) Linda is a bank teller. (B) Linda is a bank teller who is active in the feminist movement.

Approximately 85 to 90 percent of subjects chose option B. This result, published in Tversky and Kahneman's landmark 1983 paper in Psychological Review, was not merely surprising — it was logically impossible. The probability of two events occurring together — Linda being a bank teller and being a feminist activist — can never exceed the probability of either event occurring alone. A conjunction of properties cannot be more probable than one of its components. By choosing B, subjects were committing what Tversky and Kahneman called the conjunction fallacy: preferring the more detailed, "story-like" description because it matched the profile they had already formed of Linda.
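
The conjunction rule can be made concrete with a few lines of code. The sketch below (Python, with made-up probabilities chosen purely for illustration, not taken from the paper) simulates the two events and confirms that the conjunction can never be the more frequent outcome:

```python
import random

random.seed(0)

# Made-up probabilities, for illustration only: suppose 5% of women
# matching Linda's description are bank tellers, and 95% of those
# tellers are also feminist activists -- a generously high figure.
P_TELLER = 0.05
P_FEMINIST_GIVEN_TELLER = 0.95

trials = 1_000_000
teller = conjunction = 0
for _ in range(trials):
    if random.random() < P_TELLER:
        teller += 1
        if random.random() < P_FEMINIST_GIVEN_TELLER:
            conjunction += 1

print(f"P(teller)              ~ {teller / trials:.4f}")
print(f"P(teller AND feminist) ~ {conjunction / trials:.4f}")
# Every feminist bank teller is, first, a bank teller, so the
# conjunction count can never exceed the teller count.
assert conjunction <= teller
```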

That experiment crystallized something that two decades of research had been quietly assembling: the human mind does not, by default, consult probability theory when making judgments under uncertainty. It consults resemblance. When we try to decide how likely something is — whether a person fits a category, whether an event has a particular cause, whether a company will succeed — we do not primarily calculate. We ask ourselves: does this look like the thing I expect? This is the representativeness heuristic, and it is one of the most consequential cognitive shortcuts ever studied.


What the Representativeness Heuristic Actually Is

The term was introduced formally by Tversky and Kahneman in their 1974 paper "Judgment Under Uncertainty: Heuristics and Biases," published in Science (Vol. 185, No. 4157). In that paper, they defined representativeness as the process by which "probabilities are evaluated by the degree to which A is representative of B, that is, by the degree to which A resembles B." When a person is asked whether a stranger is a librarian or a salesperson, they do not calculate base rates for those occupations. They compare the stranger's characteristics — their manner of dress, their speech, their apparent interests — to a mental prototype of each category. The category whose prototype more closely matches the observed description wins the assignment, even if salespeople vastly outnumber librarians in the population.

This process is not arbitrary, and it is important to understand from the outset that Tversky and Kahneman did not regard heuristics as inherently defective. In many environments, resemblance is a reliable guide to category membership. A person who looks, talks, and acts like a surgeon probably is one. A financial product that resembles previous frauds probably deserves scrutiny. The heuristic becomes a liability not when it is used, but when it is used in conditions where base rates, sample sizes, and statistical independence are relevant — conditions in which intuitive resemblance systematically misleads.

The representativeness heuristic is not a single mechanism but a family of related judgment processes, all sharing the same core logic: probability is inferred from similarity. Its documented manifestations include the conjunction fallacy (the Linda problem), base rate neglect, the gambler's fallacy and its reverse (the "hot hand"), and regression blindness — the failure to anticipate that extreme performance will move toward the mean.


Intellectual Lineage

The roots of representativeness research extend back further than the Tversky-Kahneman collaboration. In the 1950s and early 1960s, researchers in the tradition of Egon Brunswik were investigating how humans use "cues" to judge distal properties of the world — how we infer a person's intelligence from their face, or a city's size from its name recognition. Brunswik's "lens model," developed at the University of California, Berkeley, emphasized that organisms must function as probabilistic reasoners operating on imperfect information. This work established the basic empirical framework for studying judgment under uncertainty, though it did not yet have the vocabulary to describe systematic bias.

Herbert Simon's work on "bounded rationality," introduced through a series of papers in the 1950s and later consolidated in his 1957 book Models of Man, provided the theoretical foundation. Simon argued that human cognition is constrained by limited working memory, limited time, and limited computational resources. Faced with these constraints, the mind adopts simplifying strategies — what Simon called "satisficing" — that sacrifice optimality for tractability. The heuristics tradition grew directly from this insight.

Ward Edwards, working at the University of Michigan in the 1960s, conducted some of the first systematic experiments on probabilistic judgment, comparing human performance to Bayesian norms. His findings, particularly the demonstration that people underweight prior probabilities (base rates) relative to likelihoods, directly anticipated the representativeness findings. Edwards called this phenomenon "conservatism" in probability revision, though later work would show that under conditions favoring representativeness, people could be radically non-conservative — ignoring base rates almost entirely.

The decisive turn came when Amos Tversky and Daniel Kahneman began collaborating at the Hebrew University of Jerusalem in the late 1960s. Both were trained in mathematical psychology and deeply versed in the normative Bayesian framework. Their strategy, unusual for its time, was to treat errors not as noise but as signals — to use systematic deviations from rational norms to reverse-engineer the cognitive procedures actually in use. Their first major joint paper on heuristics appeared in Psychological Bulletin in 1971 ("Belief in the Law of Small Numbers"), and the foundational theoretical statement came in the 1974 Science paper, which introduced three heuristics — representativeness, availability, and anchoring-and-adjustment — and launched a research program that would eventually reshape psychology, economics, law, and medicine.

The conjunction fallacy paper ("Extensional Versus Intuitive Reasoning: The Conjunction Fallacy in Probability Judgment," Psychological Review, 1983) represented the climax of the representativeness program, demonstrating not merely that people neglected base rates but that they could be induced to violate the most basic axiom of probability theory — the rule that a subset cannot be more probable than its superset — simply by constructing a description that was representative of the smaller set.


The Cognitive Science: How Representativeness Works

At the mechanistic level, representativeness is now understood to involve the substitution of one question for another. When asked "How probable is it that Linda is a bank teller?" the mind — finding this question computationally demanding — substitutes a simpler question: "How well does Linda match my image of a bank teller?" The answer to the second question (poorly) is then used as an answer to the first, and the process feels seamless. Kahneman later formalized this as "attribute substitution" in a 2002 chapter co-authored with Shane Frederick ("Representativeness Revisited," in Heuristics and Biases: The Psychology of Intuitive Judgment), which argued that the representativeness heuristic, the availability heuristic, and several other judgment shortcuts all share this substitution structure.

The prototype theory of mental categories, developed by Eleanor Rosch and colleagues at UC Berkeley in the 1970s, provides the cognitive architecture within which representativeness operates. Rosch showed, across a series of studies in Cognitive Psychology between 1973 and 1978, that people do not represent categories as lists of necessary and sufficient features. Instead, they maintain graded membership structures organized around prototypical exemplars — a robin is a "better bird" than a penguin, even though both are birds. Representativeness judgments exploit this architecture: we ask whether an instance matches the category prototype, rather than whether it meets formal membership criteria.

This is why the Linda problem works. "Bank teller who is a feminist activist" is a richer, more detailed description — and therefore a closer match to the vivid mental prototype that Linda's description evokes. The match to the feminist activist stereotype is strong; the match to the generic bank teller stereotype is weak. Adding "feminist activist" improves the representativeness score, even though it mathematically reduces the probability.

Daniel Kahneman's dual-process framework, articulated most fully in his 2011 book Thinking, Fast and Slow, situates representativeness in System 1 — the fast, automatic, associative mode of cognition that operates on pattern matching. System 2 — deliberate, effortful, rule-based reasoning — is capable of computing probabilities correctly, but it is lazy. It often endorses System 1's representativeness-driven output without engaging fully. This is why the conjunction fallacy persists even among people who know probability theory: System 2 must actively override System 1's answer, and it frequently fails to do so.


What the Research Shows

Base Rate Neglect: The Cab Problem

Tversky and Kahneman's 1982 chapter in Judgment Under Uncertainty: Heuristics and Biases (Cambridge University Press), the volume they co-edited with Paul Slovic, included a now-famous problem that demonstrated base rate neglect in stark quantitative terms. A city has two cab companies: Green, which operates 85 percent of the cabs, and Blue, which operates 15 percent. A witness to a late-night accident testifies that the cab involved was Blue. Tests show that in the conditions of that night, this witness correctly identifies cab colors 80 percent of the time. What is the probability that the cab was actually Blue?

Most people answer around 80 percent — the witness's reliability figure. The correct Bayesian answer is approximately 41 percent. The prior probability that any given cab is Blue is only 15 percent, and that low prior substantially erodes the weight of the witness's testimony. People ignore the base rate almost entirely because the witness's identification is representative evidence — it directly addresses the question of which cab was present. Base rates feel abstract and impersonal; the witness's testimony feels specific and informative. So the testimony dominates, and the base rate is discarded.
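
The Bayesian arithmetic is short enough to verify directly. A minimal sketch, using the numbers from the problem as stated above:

```python
# Bayes' rule applied to the cab problem as stated above.
p_blue = 0.15                # base rate: 15 percent of cabs are Blue
p_green = 0.85
p_id_blue_if_blue = 0.80     # witness accuracy
p_id_blue_if_green = 0.20    # witness error rate

# P(Blue | witness says Blue)
numerator = p_blue * p_id_blue_if_blue
posterior = numerator / (numerator + p_green * p_id_blue_if_green)
print(f"P(cab was Blue | witness says Blue) = {posterior:.3f}")  # ~0.414
```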

The Engineer-Lawyer Studies

A paradigmatic demonstration of base rate neglect through professional stereotyping appeared in Kahneman and Tversky's 1973 paper "On the Psychology of Prediction" and was summarized in the 1974 Science paper. Participants were told that a panel of psychologists had interviewed 100 people — some engineers, some lawyers — and had written brief personality sketches. In one condition, participants were told the panel consisted of 70 engineers and 30 lawyers. In another, 30 engineers and 70 lawyers. Participants then read descriptions like: "Jack is a 45-year-old man. He is married and has four children. He is generally conservative, careful, and ambitious. He shows no interest in political and social issues and spends most of his free time on his home carpentry projects. His friends like and admire him."

The description is stereotypically engineerlike. Participants in both conditions judged Jack to be far more likely to be an engineer — with nearly identical probability estimates, despite the fact that one group had been explicitly told engineers were the majority and the other had been told lawyers were. The personality description — its representativeness of the "engineer" prototype — swamped the base rate information entirely. When given a deliberately uninformative description ("Dick is a 30-year-old man. He is married with no children. A man of high ability and high motivation, he promises to be quite successful in his field"), participants in both conditions judged the odds to be roughly even, still ignoring the base rates. Only when no description at all was provided did participants fall back on the stated proportions.
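
What the normative answer should have looked like is easy to state. The sketch below assumes a likelihood ratio of 4 for how much more probable Jack's description is under "engineer" than under "lawyer" (a hypothetical figure, not from the study), and shows how the two base-rate conditions should have produced clearly different judgments:

```python
def posterior_engineer(prior_engineer: float, likelihood_ratio: float) -> float:
    """P(engineer | description), given the prior and the ratio
    P(description | engineer) / P(description | lawyer)."""
    prior_odds = prior_engineer / (1 - prior_engineer)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

LR = 4.0  # assumed: the sketch is 4x as likely for an engineer
print(posterior_engineer(0.70, LR))  # ~0.90 in the 70-engineer condition
print(posterior_engineer(0.30, LR))  # ~0.63 in the 30-engineer condition
```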

The Hot Hand Fallacy

In 1985, Thomas Gilovich, Robert Vallone, and Amos Tversky published "The Hot Hand in Basketball: On the Misperception of Random Sequences" in Cognitive Psychology (Vol. 17). They analyzed the shooting records of the Philadelphia 76ers and Boston Celtics, and found no statistically significant evidence for "streak shooting" — the probability that a player would make a shot was not meaningfully higher after a streak of successful shots than after a streak of misses. Yet players, coaches, and spectators overwhelmingly believed in the hot hand. This belief follows directly from representativeness: a skilled player on a streak is representative of what exceptional performance looks like. A sequence of makes looks "designed," purposive, meaningful. The fact that basketball shooting is substantially stochastic — that even elite players' shot outcomes contain enormous random variance — is invisible to intuition trained on pattern matching. Subsequent decades of research have refined this finding, with some later work suggesting small but real autocorrelation in some conditions, but the core finding that people dramatically overestimate the hot hand remains robust and explicable through representativeness.
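
A short simulation makes the point vivid: purely independent shots, with no hot hand by construction, still produce streaks, and the hit rate after three makes is no better than after three misses. (The 50 percent make probability below is an assumption chosen for illustration.)

```python
import random

random.seed(1)

P_MAKE = 0.5       # assumed make probability; every shot independent
N_SHOTS = 10_000

shots = [random.random() < P_MAKE for _ in range(N_SHOTS)]

# Hit rate after a run of three makes vs. after three misses.
after_hits, after_misses = [], []
for i in range(3, N_SHOTS):
    if all(shots[i-3:i]):
        after_hits.append(shots[i])
    elif not any(shots[i-3:i]):
        after_misses.append(shots[i])

print(f"P(make | 3 makes)  ~ {sum(after_hits) / len(after_hits):.3f}")
print(f"P(make | 3 misses) ~ {sum(after_misses) / len(after_misses):.3f}")
# Both hover near 0.5: streaks arise even with no hot hand at all.
```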

Regression to the Mean Blindness

One of the most important and underappreciated consequences of representativeness is systematic failure to anticipate regression to the mean. When performance is extreme — extraordinarily good or extraordinarily bad — the next observation will typically be less extreme, not because of any causal intervention but because extreme outcomes are partly caused by luck, and luck does not typically repeat. But representativeness commits people to narratives: a company that performed brilliantly last quarter represents the prototype of "well-run, strategy-driven success," so the mind expects continued brilliance. A student who performed terribly on one exam represents the prototype of "low ability," so the mind expects continued failure. Both predictions ignore the statistical reality that extreme outcomes are jointly determined by ability and chance, and that chance is uncorrelated across observations.

Kahneman and Tversky addressed regression directly in their 1973 paper "On the Psychology of Prediction," published in Psychological Review (Vol. 80). They showed that when people make predictions from descriptions, they treat the predictions as translations of the input — the better the input looks, the better the predicted outcome — with no regression toward the average. This produces systematic overconfidence in extreme predictions and a blindness to the mean-reverting character of most real-world performance data.
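
The mechanics of regression are easy to simulate. In the sketch below, each observed score is a stable ability plus independent luck (an assumed model for illustration, not data from the paper); the top performers on a first test score markedly less well, on average, on a second:

```python
import random

random.seed(2)

# Assumed model: observed score = stable ability + independent luck.
N = 10_000
ability = [random.gauss(0, 1) for _ in range(N)]
first = [a + random.gauss(0, 1) for a in ability]
second = [a + random.gauss(0, 1) for a in ability]

# Select the top 1 percent on the first test, then check their second test.
top = sorted(zip(first, second), reverse=True)[:N // 100]
mean_first = sum(s for s, _ in top) / len(top)
mean_second = sum(s for _, s in top) / len(top)
print(f"top 1% first-test mean:   {mean_first:.2f}")   # extreme
print(f"same people, second test: {mean_second:.2f}")  # roughly half as extreme
```

With ability and luck contributing equal variance, the expected second score of any group is about half its first score, which is exactly what the simulation shows: the selection caught people who were both able and lucky, and the luck does not repeat.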


Four Named Case Studies

Case Study 1: Linda and the Conjunction Fallacy

The original Linda problem remains the cleanest demonstration of representativeness producing a logically impossible judgment. When Tversky and Kahneman first published the conjunction fallacy in 1983 in Psychological Review (Vol. 90, No. 4, "Extensional Versus Intuitive Reasoning"), they tested numerous versions of the problem to rule out alternative explanations. They presented the options in different orders, used different wording, and even ran experiments with trained statisticians. Even graduate students in decision theory, explicitly aware of the probability rules, frequently chose the conjunction. Tversky and Kahneman regarded the result as among the most striking demonstrations of how stubbornly intuition resists the logic of probability, and it has held up across cultures, languages, and decades of replication. The reason it works so reliably is that Linda's description was deliberately crafted to be highly representative of a feminist activist and not particularly representative of a bank teller. Once the representativeness match is established, the mind finds it nearly impossible to set aside resemblance and apply formal probability rules.

Case Study 2: Medical Diagnosis and Base Rate Neglect

The clinical consequences of representativeness have been studied extensively in medical contexts. David Eddy's work in the 1980s, including his 1982 chapter "Probabilistic Reasoning in Clinical Medicine" in the Judgment Under Uncertainty volume, showed that physicians systematically overestimated the probability that a patient with a positive mammogram result had breast cancer, ignoring the low base rate of the disease in the population. The mammogram problem has a structure identical to the cab problem: even a fairly accurate test produces many false positives when the condition being tested is rare, because the low prior overwhelms the likelihood ratio. But physicians, confronted with a positive result that is representative of disease, found it natural to infer disease — much as people find it natural to infer a Blue cab from a reliable witness. The clinical stakes are severe: overdiagnosis driven by representativeness reasoning contributes to unnecessary treatment, patient anxiety, and healthcare costs. Later work by Giora Keinan (1987, Journal of Personality and Social Psychology) and Eldar Shafir and colleagues extended these findings across clinical domains, showing that physicians under time pressure and cognitive load showed even stronger base rate neglect.

Case Study 3: Glamour Stocks and Investment Narratives

Financial markets offer a particularly consequential arena for representativeness errors. Josef Lakonishok, Andrei Shleifer, and Robert Vishny published a landmark analysis in the Journal of Finance in 1994 ("Contrarian Investment, Extrapolation, and Risk") demonstrating that "glamour stocks" — companies with high price-to-earnings and price-to-book ratios, typically companies with impressive recent growth records — systematically underperformed "value stocks" over the following five years. The reason, they argued, was that investors were extrapolating past performance: a company with a strong recent growth narrative resembles a prototype of "successful, well-run company," and investors bid its price up beyond what fundamentals justify. They were, in Lakonishok's framing, confusing the representativeness of the growth story with the probability of continued growth — ignoring the statistical reality that above-average corporate performance reverts toward the mean. A companion finding from Werner De Bondt and Richard Thaler, published in the Journal of Finance in 1985 ("Does the Stock Market Overreact?"), showed that stocks that had performed worst over a three-to-five-year period subsequently outperformed the market, while previous winners underperformed — precisely the pattern predicted by regression to the mean and inconsistent with the representativeness-driven narrative that winners keep winning and losers keep losing.

Case Study 4: Jury Decisions and Criminal Stereotyping

The criminal justice system is structured to resist representativeness reasoning — the requirement of proof beyond reasonable doubt is, in part, a normative bulwark against judging guilt by resemblance. But psychological research demonstrates that the bulwark is imperfect. Work by Galen Bodenhausen and Robert Wyer (1985, Journal of Personality and Social Psychology, "Effects of Stereotypes on Decision Making and Information-Processing Strategies") showed that mock jurors who were told a defendant's name was "Carlos Ramirez" — a name that activated criminal stereotypes associated with Hispanic males for the American participants — rendered more guilty verdicts on assault charges than those given a generic name, despite identical case files. The mechanism is representativeness: when a defendant matches the prototype of "the kind of person who commits this crime," guilt feels more probable, and the threshold for conviction is effectively lowered. Tversky and Kahneman's 1974 Science paper had already noted that judging probability by resemblance creates systematic vulnerability to stereotyping, a danger with obvious legal implications. Subsequent work by Gary Wells and colleagues on eyewitness identification demonstrated that witnesses themselves use representativeness reasoning — identifying suspects who most closely match a prototype of "criminal appearance" rather than accurately retrieved facial memories.


When Representativeness Is Adaptive

The critique of the representativeness heuristic reached its most sophisticated form in the work of Gerd Gigerenzer and colleagues at the Max Planck Institute for Human Development. In a series of papers and books — including Simple Heuristics That Make Us Smart (1999, Oxford University Press), co-authored with Peter Todd and the ABC Research Group — Gigerenzer argued that Tversky and Kahneman's research systematically undervalued the ecological validity of heuristics. Their experiments, he contended, were designed to make heuristics fail: they presented problems in single-event probability formats, used artificial descriptions, and compared human performance to normative standards derived from a different task than the one organisms actually face.

Gigerenzer's alternative framework, "ecological rationality," holds that a heuristic should be evaluated not by how well it matches formal probability theory but by how well it performs in the environments where it evolved. Representativeness reasoning performs very well in many natural environments. When we judge whether an unfamiliar mushroom is poisonous by comparing it to our prototype of dangerous mushrooms, we are using representativeness in conditions where it is likely reliable — the features that made mushrooms dangerous in ancestral environments are correlated with the features we use to represent "dangerous mushroom." When a physician diagnoses pneumonia by pattern-matching a patient's presentation to a prototype of pneumonia cases, they are often faster and no less accurate than a formal Bayesian calculation would be, because their prototype encodes real statistical regularities accumulated over years of clinical experience.

Gigerenzer's critique also targeted the format of Tversky and Kahneman's experiments. In a 1995 Psychological Review paper co-authored with Ulrich Hoffrage ("How to Improve Bayesian Reasoning Without Instruction: Frequency Formats"), he showed that base rate neglect largely disappeared when problems were presented as natural frequencies rather than single-event probabilities. If the mammogram problem is stated as: "10 out of every 1,000 women have breast cancer; 8 of those 10 will test positive; of the 990 without cancer, about 99 will also test positive," most people compute the correct conditional probability — roughly 7.5 percent. The conjunction fallacy also weakens under some reformulations. Gigerenzer argued this demonstrated that the problem lay not in human cognition but in the mismatch between artificial single-event framing and the frequency-based format in which the mind naturally processes uncertainty.
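
In the frequency format, the computation reduces to counting, as a few lines make explicit:

```python
# Counting the natural frequencies given in the text.
true_positives = 8     # of the 10 women with cancer, 8 test positive
false_positives = 99   # of the 990 without cancer, about 99 test positive

p_cancer_given_positive = true_positives / (true_positives + false_positives)
print(f"P(cancer | positive test) = {p_cancer_given_positive:.3f}")  # ~0.075
```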

This critique has merit, and it has substantially shaped the contemporary understanding of representativeness. The consensus position is that representativeness is not a cognitive disease but a cognitive tool that is well-calibrated for some environments and poorly calibrated for others. The tool is well-calibrated when the features used for prototype comparison are genuinely diagnostic of category membership — when the world is structured such that resemblance tracks reality. It is poorly calibrated when base rates diverge sharply from representativeness-driven intuitions, when conjunctions are involved, when sample sizes are small, or when regression to the mean is at work. The error is not in using the heuristic but in failing to recognize when its conditions of reliable use are not met.


Representativeness is embedded in a network of related cognitive phenomena. The table below clarifies the distinctions.

| Concept | Core Mechanism | Key Difference from Representativeness |
| --- | --- | --- |
| Availability Heuristic | Judging probability by ease of recall or mental imagery | Uses memory accessibility rather than prototype resemblance as the probability signal |
| Anchoring-and-Adjustment | Over-reliance on an initial numerical value; insufficient adjustment away from it | Distortion arises from a specific numeric anchor, not from category prototype matching |
| Confirmation Bias | Selectively seeking and interpreting evidence to confirm existing beliefs | About information search and interpretation, not initial probability estimation from resemblance |
| Base Rate Neglect | Underweighting prior probabilities in favor of specific case information | A consequence of representativeness reasoning; base rates feel less representative than case-specific detail |
| Conjunction Fallacy | Rating the probability of A-and-B as higher than A alone | A consequence of representativeness; the conjunction is rated higher because it is more representative of a prototype |
| Gambler's Fallacy | Believing that after a run of one outcome, the other is "due" | Shares the logic of expecting outcomes to be representative of underlying probabilities, but applied to sequences |
| Stereotyping | Judging individuals by group characteristics rather than individual evidence | Representativeness provides the cognitive mechanism; stereotyping is its social manifestation |

Legacy and Ongoing Debates

The representativeness heuristic research has had an influence that extends far beyond academic psychology. In medicine, work by Jerome Groopman and others on diagnostic error — notably Groopman's 2007 book How Doctors Think — draws explicitly on the heuristics literature to explain why physicians commit premature closure: once a patient's presentation matches a prototype, the diagnosis feels settled, and disconfirming evidence is discounted. The representativeness framework now informs medical education programs at major institutions.

In law, the literature on eyewitness testimony, jury decision-making, and prosecutorial charging decisions has been substantially shaped by the representativeness tradition. The Innocence Project's analyses of wrongful convictions have documented cases where defendants were convicted partly because they matched jury prototypes of guilt — cases whose structure mirrors the experimental findings from Bodenhausen and Wyer, and from Tversky and Kahneman, with real consequences.

In finance, behavioral economics as a discipline emerged partly from the heuristics research, and representativeness reasoning is now a standard explanatory concept in accounts of market anomalies. The value premium, momentum trading, and overreaction to earnings surprises all fit cleanly within a representativeness framework.

The debate between the Kahneman-Tversky tradition and the Gigerenzer school remains productive and unresolved. The most accurate summary is that both are right about different things: Tversky and Kahneman correctly identified real and consequential failures of probabilistic reasoning; Gigerenzer correctly identified that those failures are environmentally contingent and that heuristics can be ecologically rational. The synthesis, which neither camp has fully embraced, is that representativeness is a competent heuristic in the environments that shaped it, applied by a mind that does not always recognize when it has moved to environments where the heuristic fails.

What the Linda problem ultimately demonstrates is not that the human mind is broken. It demonstrates something more interesting: that the mind has a theory of what evidence is, and that theory is not always Bayesian. When evidence is detailed, vivid, causally coherent, and narratively rich — when it represents something — it feels more probable than sparse, statistical, structurally valid evidence. This is a feature of a mind built to navigate a world structured by patterns, categories, and social types. It becomes a bug when the world presents us with problems that require formal probability, statistical base rates, and the logic of conjunctions. The representativeness heuristic, in the end, is neither villain nor hero. It is an extraordinarily competent tool operating, sometimes, on problems for which it was not designed.


References

  1. Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131.

  2. Tversky, A., & Kahneman, D. (1983). Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. Psychological Review, 90(4), 293–315.

  3. Kahneman, D., & Tversky, A. (1973). On the psychology of prediction. Psychological Review, 80(4), 237–251.

  4. Gilovich, T., Vallone, R., & Tversky, A. (1985). The hot hand in basketball: On the misperception of random sequences. Cognitive Psychology, 17(3), 295–314.

  5. Gigerenzer, G., & Hoffrage, U. (1995). How to improve Bayesian reasoning without instruction: Frequency formats. Psychological Review, 102(4), 684–704.

  6. Gigerenzer, G., Todd, P. M., & the ABC Research Group. (1999). Simple Heuristics That Make Us Smart. Oxford University Press.

  7. Lakonishok, J., Shleifer, A., & Vishny, R. W. (1994). Contrarian investment, extrapolation, and risk. Journal of Finance, 49(5), 1541–1578.

  8. De Bondt, W. F. M., & Thaler, R. (1985). Does the stock market overreact? Journal of Finance, 40(3), 793–805.

  9. Kahneman, D., & Frederick, S. (2002). Representativeness revisited: Attribute substitution in intuitive judgment. In T. Gilovich, D. Griffin, & D. Kahneman (Eds.), Heuristics and Biases: The Psychology of Intuitive Judgment (pp. 49–81). Cambridge University Press.

  10. Bodenhausen, G. V., & Wyer, R. S. (1985). Effects of stereotypes on decision making and information-processing strategies. Journal of Personality and Social Psychology, 48(2), 267–282.

  11. Eddy, D. M. (1982). Probabilistic Reasoning in Clinical Medicine: Problems and Opportunities. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment Under Uncertainty: Heuristics and Biases (pp. 249–267). Cambridge University Press.

  12. Rosch, E. (1975). Cognitive representations of semantic categories. Journal of Experimental Psychology: General, 104(3), 192–233.

Frequently Asked Questions

What is the representativeness heuristic?

The representativeness heuristic is the cognitive tendency to judge the probability of an event by how closely it resembles a prototype or stereotype, rather than by its actual statistical frequency. Identified by Tversky and Kahneman in their 1974 Science paper, it explains why people ignore base rates, commit conjunction fallacies, and systematically misjudge probabilities when a scenario matches a vivid narrative or familiar pattern.

What is the Linda Problem and what does it demonstrate?

The Linda Problem, presented in Tversky and Kahneman's 1983 Psychological Review paper, describes Linda as a 31-year-old philosophy graduate, outspoken, and concerned with discrimination and social justice. Subjects are asked whether she is more likely to be (A) a bank teller, or (B) a bank teller active in the feminist movement. Roughly 85 to 90 percent choose B — the conjunction. This is the conjunction fallacy: P(A and B) can never exceed P(A). The feminist bank teller cannot be more probable than the bank teller. But the description matches the feminist stereotype so well that similarity overrides probability logic.

What is base rate neglect?

Base rate neglect is the tendency to ignore prior probability information when evaluating a specific case. In Tversky and Kahneman's cab problem, a witness identifies the cab involved in an accident as Blue, and subjects must estimate the probability it was actually Blue. When told cabs are 85 percent Green and 15 percent Blue, and the witness is 80 percent accurate, the correct Bayesian answer is about 41 percent, meaning the cab was still more likely Green. But subjects rely almost entirely on the witness's stated accuracy and give estimates near 80 percent, ignoring the crucial base rate that most cabs are Green.

How does the representativeness heuristic affect investing?

Investors assess stock quality by how much a company resembles a successful growth story — recent earnings momentum, exciting narrative, charismatic leadership. Lakonishok, Shleifer, and Vishny's 1994 research showed that 'glamour stocks' with compelling growth stories were systematically overpriced relative to 'value stocks' with less appealing narratives, producing predictably worse long-run returns. The representativeness heuristic causes investors to price the narrative rather than the fundamentals.

When does the representativeness heuristic work well?

Gerd Gigerenzer's ecological rationality framework argues that heuristics, including representativeness, are well-matched to many natural environments where category membership genuinely predicts outcomes. A physician who recognizes a symptom cluster as representative of a specific diagnosis is using representativeness productively. The heuristic fails specifically when the environment violates the assumption that resemblance tracks frequency — as in low base-rate events, statistical problems, and domains where compelling narratives systematically diverge from statistical reality.