On March 10, 2000, the NASDAQ Composite Index closed at 5,048.62 — a figure that represented the apex of the greatest speculative bubble in American financial history to that point. In the preceding five years, the index had risen approximately 400 percent. Individual investors, many of them newly minted participants in online brokerage accounts that had only become widely accessible in the late 1990s, had watched technology stocks produce annual gains of roughly 40 percent in 1998 and 86 percent in 1999. The pattern felt like a fact about the world rather than a statistical accident. Households that had never owned equity before began allocating retirement savings to technology funds. A 1999 survey by the Investment Company Institute found that the share of U.S. household financial assets held in equity mutual funds had risen from 6 percent in 1990 to 23 percent. By the peak, inflows to technology and growth funds were running at record levels — investors were piling in precisely as prices reached their maximum elevation.

What those investors were trusting was not analysis. They were trusting the recent past. The most recent returns they had observed were spectacular, and the most recent trajectory was sharply upward. From that evidence, many concluded that technology investing was simply a superior strategy — that the new era of the internet economy had permanently altered the risk-return relationship in equities. Their logic was understandable. It was also catastrophically wrong.

Between March 2000 and October 2002, the NASDAQ fell 78 percent. Roughly $5 trillion in market capitalization was destroyed. And in a pattern that eerily mirrored the buying behavior at the peak, investors redeemed equity mutual funds in record volumes through 2002 — selling at or near the trough, crystallizing their losses, and exiting equities just before the slow recovery began. The buying at the peak and the selling at the bottom were not caused by new fundamental information. They were caused by the same cognitive mechanism applied twice: both the exuberant purchase in 1999 and the panicked sale in 2002 were driven by an excessive weighting of recent experience at the expense of longer-run evidence.

This is recency bias. It is among the most consequential, most frequently observed, and most thoroughly documented distortions in human judgment.


What Recency Bias Actually Is

Recency bias — also described in the literature as the recency effect, recency heuristic, or recency weighting — refers to the systematic tendency to assign disproportionate weight to recent information when forming beliefs, making predictions, or reaching decisions. The bias operates across memory, perception, and judgment: recent events feel more representative of ongoing reality than they statistically are, recent trends feel more likely to continue than base-rate evidence would justify, and recent performance — of people, markets, teams, or strategies — is treated as more predictive of future performance than it typically proves to be.

The phenomenon must be carefully distinguished from several related but distinct cognitive tendencies that are often conflated with it.

Recency Bias. Definition: disproportionate weighting of recent events when forming beliefs or predictions. Key feature: the distortion arises from temporal proximity — recent events are systematically overweighted relative to their evidential value.

Availability Heuristic. Definition: judging probability by how easily examples come to mind. Difference: availability is driven by salience, vividness, and ease of recall — which often correlates with recency but is a distinct mechanism.

Anchoring Bias. Definition: excessive reliance on the first piece of numerical information encountered. Difference: anchoring privileges the first datum, not the most recent; recency bias privileges the latest datum.

Trend-Following. Definition: a deliberate strategy of buying recent winners and selling recent losers. Difference: trend-following is an intentional, rule-based approach; recency bias is an unintentional cognitive distortion that contaminates supposedly independent analysis.

Primacy Effect. Definition: disproportionate weighting of information encountered first in a sequence. Difference: the mirror image of recency bias; together they constitute the serial position effect.

Hot Hand Fallacy. Definition: belief that a person or entity on a winning streak has an elevated probability of continued success. Difference: overlaps with recency bias but specifically concerns belief in momentum in random sequences.

Narrative Fallacy. Definition: constructing coherent causal stories from random or loosely related events. Difference: recency bias often supplies the raw material — the recent events — that the narrative fallacy then weaves into a false causal story.

The distinction between recency bias and genuine trend-following deserves particular attention, because it is both conceptually important and practically consequential. A trend-following trader who buys an upward-moving asset does so within a defined rule set, with explicit stop-loss criteria, a specific time horizon, and a framework for when the trend is considered broken. The recency-biased investor, by contrast, has no such structure: he buys because prices have been rising and it therefore feels like they should continue to rise, without a coherent theory of momentum or a defined exit condition. One is a disciplined strategy that acknowledges it is exploiting a market phenomenon; the other is an unexamined emotional response to recent price data. The line between them is blurred in practice — many trend-followers are partly rationalizing recency bias — but the conceptual distinction clarifies what recency bias actually is: not wrong-headed pattern recognition per se, but the undisciplined application of recent pattern recognition without adjustment for the statistical properties of the underlying process.


The Cognitive Science of Recency

The scientific study of recency effects begins not with financial markets but with memory. In 1885, Hermann Ebbinghaus published Über das Gedächtnis (On Memory), a systematic experimental investigation of his own memory conducted over several years, which remains one of the foundational texts of experimental psychology. Among his discoveries was the serial position effect: when asked to recall a list of items, people remember items at the beginning of the list (the primacy effect) and items at the end (the recency effect) better than items in the middle. The recency advantage arises because the final items in a sequence are still active in working memory — or, in later terminology, have not yet been subject to the interference and decay that diminish the middle items. The primacy advantage arises because early items receive more rehearsal time and are more thoroughly encoded into long-term memory.

This basic architecture of memory — where recency creates a privileged channel into awareness — has been elaborated by decades of subsequent neurological research. The hippocampus, a seahorse-shaped structure in the medial temporal lobe that plays a central role in the encoding of new episodic memories, shows elevated activity for recently encoded events. More recent memories have stronger synaptic traces and are more easily cued into conscious recall. This is not a flaw in the system; it is adaptive. For most of evolutionary history, recent events were more relevant to survival than distant ones. The predator that attacked the water source yesterday is more pressing intelligence than the one that attacked three seasons ago.

The problem arises when this architectural preference for recency is applied uncritically to domains where the temporal distance of information is not a reliable guide to its evidential value. In financial markets, whether a company earned strong profits in the last quarter is less informative about its long-run value than its average earnings over a decade. In evaluating an employee's annual performance, the project that finished last month is no more representative of the year's work than the project that finished in February. In assessing a quarterback draft prospect, the performance in the final pre-draft combine workout is no more representative of NFL potential than the average across three years of college play. But in each of these cases, recency bias causes recent information to crowd out the longer record.

Daniel Kahneman's peak-end rule, developed with Barbara Fredrickson and colleagues in the 1990s and described most fully in Kahneman, Fredrickson, Schreiber, and Redelmeier's 1993 paper in Psychological Science, identifies a specific variant of recency weighting in the domain of experiential evaluation. The peak-end rule describes the finding that people evaluate experiences not by integrating across all moments of the experience — not by averaging, in any meaningful sense — but by weighting primarily two moments: the moment of greatest intensity (the peak, positive or negative) and the final moment (the end). The total duration of the experience has remarkably little influence on retrospective evaluation, a phenomenon Kahneman called "duration neglect."

The peak-end rule was demonstrated most vividly in experiments involving cold pressor pain. Participants who held their hand in painfully cold water for sixty seconds had worse retrospective evaluations of the experience than participants who held their hand in the same cold water for sixty seconds and then continued for thirty more seconds at a slightly less painful temperature. The longer experience involved more total pain by any objective measure, yet was remembered as less aversive — because it ended at a less intense point. The ending colonizes the memory of the whole. This is recency bias operating at the level of embodied experience, not merely abstract judgment.
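The arithmetic of the peak-end rule can be sketched in a few lines. The pain ratings below are invented for illustration (they are not the experiment's data), and the scoring function assumes the common formalization of the rule as the average of the peak and final moments:

```python
# Toy illustration of the peak-end rule. Pain is rated per 10-second
# interval on a 0-10 scale; all values are invented.

def peak_end_score(pain):
    """Retrospective evaluation: mean of the worst moment and the final moment."""
    return (max(pain) + pain[-1]) / 2

def total_pain(pain):
    """Objective total discomfort: sum over all intervals."""
    return sum(pain)

short_trial = [7, 8, 8, 8, 8, 8]             # 60 s at full intensity
long_trial  = [7, 8, 8, 8, 8, 8, 5, 4, 3]    # same 60 s, plus 30 milder s

print(total_pain(long_trial) > total_pain(short_trial))          # True
print(peak_end_score(long_trial) < peak_end_score(short_trial))  # True
```

The longer trial accumulates strictly more total pain, yet its peak-end score is lower because it ends at a milder moment: duration neglect in miniature.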


Intellectual Lineage

The formal theoretical treatment of recency effects in judgment and decision-making emerges most clearly from the heuristics-and-biases research program inaugurated by Kahneman and Tversky in the early 1970s. Their 1974 paper in Science, "Judgment Under Uncertainty: Heuristics and Biases," established the framework in which simple cognitive shortcuts — heuristics — produce systematic and predictable errors. Of the three heuristics described in that paper (representativeness, availability, and anchoring-and-adjustment), availability is the one most closely connected to recency: availability is partly a function of recency, because recent events are more readily brought to mind.

The application of recency concepts to financial markets was formalized by Werner De Bondt and Richard Thaler in their 1985 paper "Does the Stock Market Overreact?", published in the Journal of Finance. De Bondt and Thaler constructed portfolios of stocks based on their performance over the preceding three to five years, designating the top performers "winners" and the bottom performers "losers." They then tracked subsequent performance. The result was striking and theoretically significant: the loser portfolios dramatically outperformed the winner portfolios over the following three to five years. Thirty-six months after formation, the loser portfolios had beaten the market by about 19.6 percent on average, while the winner portfolios had trailed it by roughly 5 percent, a cumulative gap of about 25 percent. The market, De Bondt and Thaler argued, was systematically overreacting to recent trends — extrapolating past performance forward as if it were a reliable guide to future performance, then correcting when that extrapolation proved wrong. This was recency bias operating at the aggregate level of market prices.
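The mechanics of the winner-loser test can be sketched on synthetic data. Everything below is invented: the returns come from a toy mean-reverting process, not from any market dataset, and the portfolio sizes are arbitrary.

```python
import random

# Hypothetical sketch of De Bondt-Thaler-style portfolio formation on
# synthetic data. Formation-period returns are random; subsequent returns
# mean-revert toward a 10 percent average. All parameters are invented.
random.seed(0)

N = 500                                                  # number of "stocks"
past = [random.gauss(0.10, 0.30) for _ in range(N)]      # formation-period return
# Mean reversion: an unusually good (bad) past pulls the future return down (up).
future = [0.10 - 0.5 * (p - 0.10) + random.gauss(0, 0.05) for p in past]

ranked = sorted(range(N), key=lambda i: past[i])         # sort by past return
losers, winners = ranked[:N // 10], ranked[-(N // 10):]  # bottom and top deciles

loser_ret = sum(future[i] for i in losers) / len(losers)
winner_ret = sum(future[i] for i in winners) / len(winners)
print(loser_ret > winner_ret)   # True: past losers outperform past winners
```

Because the generating process mean-reverts, ranking on the recent past and extrapolating it forward is exactly backwards; the same logic underlies the reversal documented in real prices.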

De Bondt and Thaler's paper was controversial precisely because it challenged the efficient market hypothesis, which at that time dominated financial economics. Eugene Fama and Kenneth French subsequently argued that much of the winner-loser effect could be explained by differences in systematic risk exposure. The debate was productive: it forced both sides to be more precise about what they meant by "overreaction" and whether the pattern could be fully explained by rational risk compensation. The preponderance of subsequent research supported some version of De Bondt and Thaler's conclusion — that markets exhibit mean reversion driven at least partly by investor overreaction to recent trends.

William Goetzmann and Nadav Peles contributed a different angle in their 1997 paper "Cognitive Dissonance and Mutual Fund Investors," published in The Journal of Financial Research. They examined the relationship between investors' beliefs about past fund performance and the actual historical record. Their central finding was that investors systematically misremembered past returns in self-serving ways, inflating their recollection of the performance of funds they had chosen to hold. But embedded in their data was a recency effect as well: investors' beliefs about long-run fund performance were disproportionately shaped by the most recent year's returns, even when they were directly asked to evaluate longer periods. Recent returns dominated not just judgment but memory reconstruction.

The neuroscientific grounding of these phenomena was substantially advanced by Joseph LeDoux's work on the amygdala and emotional memory. LeDoux's research, summarized in his 1996 book The Emotional Brain, demonstrated that emotionally charged events — particularly threatening or frightening ones — are preferentially encoded and retrieved, and that the amygdala's involvement in memory consolidation means that emotional memories bypass some of the normal decay processes. This is directly relevant to recency bias in financial panics: the visceral terror of watching a portfolio lose 40 percent of its value in six months is not merely an intellectual record of a price decline. It is a high-arousal emotional memory that the amygdala has stamped with priority encoding. When investors sell at market bottoms, they are partly responding to recent market data and partly responding to the emotionally saturated memory of recent losses — a memory that feels more urgent and compelling than the dry historical record of prior market recoveries.


What the Research Shows

The empirical literature on recency bias is rich and consistent across domains. Several key studies establish the phenomenon's reach and magnitude.

The Return-Chasing Investor

Brad Barber and Terrance Odean, in a series of papers using discount brokerage data covering approximately 66,000 households from 1991 to 1996, established that individual investors demonstrably chase returns. Their 2000 paper in the Journal of Finance, "Trading Is Hazardous to Your Wealth," found that the most active traders — those whose behavior was most likely to reflect reactive responses to recent performance — earned annualized returns of 11.4 percent, compared to a market return of 17.9 percent over the same period. The underperformance was driven primarily by the pattern of buying recent winners at elevated prices and selling recent losers at depressed prices. In a complementary analysis, Barber and Odean tracked what investors sold versus what they bought and found that recently strong performers dominated the buy side regardless of underlying valuation. The investors were not analyzing companies; they were extrapolating recent price trajectories.

De Bondt and Thaler's Overreaction Evidence

The winner-loser study described above remains one of the most-cited papers in behavioral finance, with over 4,000 academic citations. Its central finding — that recent losers outperform recent winners over subsequent three-to-five-year horizons — has been replicated in multiple national markets. A follow-up by Chopra, Lakonishok, and Ritter (1992) in the Journal of Financial Economics confirmed the reversal pattern in U.S. data and found that the effect was more pronounced for smaller stocks, consistent with the hypothesis that information asymmetry exacerbates recency-driven mispricing. International replications, notably Baytas and Cakici (1999) in the Journal of Banking and Finance, found similar reversal patterns in seven of eight developed markets examined.

The Sports Draft and Recency in Evaluation

The NFL combine — the annual evaluation event at which prospective professional football players perform standardized athletic tests in front of scouts and coaches — has been studied as a laboratory for recency effects in professional evaluation. Cade Massey and Richard Thaler, in their 2013 paper "The Loser's Curse: Decision Making and Market Design in the NFL Draft" published in Management Science, examined the relationship between draft position and subsequent NFL performance. They found systematic overvaluation of early draft picks, which they attributed partly to the salience and recency of observable recent performance. A more targeted analysis of combine evaluation shows that performance on the final days of the combine — the sessions most temporally proximate to the draft selection — receives outsized weight relative to full college career records, even when full career data are more statistically predictive of professional success.

Annual Performance Reviews and the Recency Hump

Organizational psychology has documented recency bias in performance evaluations with consistent clarity. Kevin Murphy and Joseph Cleveland's comprehensive 1995 review of performance appraisal research, summarized in their book Understanding Performance Appraisal: Social, Organizational, and Goal-Based Perspectives, identified recency as one of the most persistent rating errors: supervisors weight the final months of a performance period — typically the two to three months immediately preceding the formal review — disproportionately relative to earlier months. In controlled experiments, identical annual performance profiles that differ only in the ordering of good and bad months produce significantly different ratings when the good months are placed at the end versus the beginning. The same behavior occurring in November receives a higher contribution to the annual rating than the same behavior occurring in February, not because of any difference in the behavior's value, but because November is more recent to the reviewer's active memory when the December review occurs.
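The ordering experiments described above can be mimicked with a toy rater whose monthly weights decay geometrically with distance from the review date. The scores and the decay rate are invented; the point is only that identical inputs in a different order produce different ratings.

```python
# A minimal sketch (invented weights) of a recency-biased annual rating:
# monthly scores are averaged with weights that decay geometrically the
# further a month lies from the December review.

def recency_weighted_rating(monthly_scores, decay=0.8):
    # The last month gets weight 1.0, the month before 0.8, then 0.64, ...
    n = len(monthly_scores)
    weights = [decay ** (n - 1 - m) for m in range(n)]
    return sum(w * s for w, s in zip(weights, monthly_scores)) / sum(weights)

good_finish = [3, 3, 3, 3, 3, 3, 5, 5, 5, 5, 5, 5]  # strong July-December
good_start  = [5, 5, 5, 5, 5, 5, 3, 3, 3, 3, 3, 3]  # identical work, reordered

# Identical annual totals, different ratings:
print(sum(good_finish) == sum(good_start))                                 # True
print(recency_weighted_rating(good_finish) > recency_weighted_rating(good_start))  # True
```

An equal-weight average would rate both profiles 4.0; only the recency weighting separates them, which is exactly the rating error the appraisal literature documents.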


Four Case Studies

Case Study One: The Dot-Com Peak, 1999-2000

The behavioral dynamics of the dot-com bubble represent recency bias operating at a civilizational scale. By late 1999, the five-year return on the NASDAQ was so extraordinary that it had entered popular culture: taxi drivers discussed technology stocks, personal finance magazines published cover stories on the first-time investor millionaires produced by Cisco, Qualcomm, and Amazon. The American Association of Individual Investors' sentiment survey registered bullish readings of 75 percent in January 2000 — among the highest in its history. Investment in technology mutual funds during 1999 and early 2000 ran at approximately $40 billion per quarter according to Investment Company Institute data — record inflows concentrated in the sector that had produced the most spectacular recent returns.

The cognitive mechanism was straightforward: investors had observed three, four, five years of extraordinary gains. The most available evidence in their memory was recent evidence. The most emotionally salient evidence was recent evidence. And recent evidence said, with apparent clarity, that technology investing was an enrichment machine. The longer historical record — which included technology booms and busts, sector-specific bubbles, the general tendency of extraordinary returns to mean-revert — was available but cognitively inaccessible, buried under the weight of vivid recent experience. By the time the correction began in March 2000, the pattern of overreaction was baked in.

Case Study Two: The 2008 Crisis and the Selling at the Bottom

The 2008-2009 financial crisis produced a mirror image of the dot-com pattern. Between September 2008 and March 2009, the S&P 500 fell approximately 45 percent. Household wealth destruction was on an order not seen since the 1930s. In this environment, the recent evidence was devastating and inescapable: Lehman Brothers had collapsed, storied financial institutions had required government rescue, unemployment was rising sharply, and media coverage was saturated with financial catastrophe.

Investment Company Institute data shows that equity mutual fund redemptions ran at approximately $150 billion in the fourth quarter of 2008 and continued through the first quarter of 2009 — the period coinciding almost precisely with the market trough. The trough occurred on March 9, 2009, when the S&P 500 reached 676.53. From that point, the index more than doubled over the following eighteen months. The investors who redeemed their equity holdings near the bottom, locking in losses of 40 or 45 percent, did so not because new fundamental information had emerged suggesting permanently impaired corporate earnings. They did so because recent experience — viscerally recent, emotionally charged, amygdala-encoded recent experience — made further decline feel more probable than it was. They were weighting the last six months at the expense of the prior sixty years of market recovery history.

Case Study Three: NFL Draft Combine Recency

The NFL draft evaluation process is one of the most data-intensive talent assessment systems in professional sports, and it illustrates how recency bias penetrates even environments where systematic measurement is readily available. The combine itself is a multi-day event held in Indianapolis each February, at which roughly 330 college players perform standardized tests: the 40-yard dash, vertical jump, bench press repetitions, and position-specific drills. The event is the final major data point before draft selections are made in late April.

Multiple analyses of draft outcomes have found that combine performance in the final days of the event — the on-field drills and position-specific workouts — influences draft position beyond what a rational weighting of that data relative to full college career statistics would justify. A quarterback who runs a slightly faster 40-yard dash at the combine than scouts expected, relative to his combine peers, receives a measurable draft position premium that exceeds what the same marginal speed advantage would merit relative to three years of college performance data. The temporal proximity of the combine to the draft decision amplifies its weight. This has documented consequences: players who overperform at the combine relative to their college record are systematically overdrafted, while players who underperform at the combine relative to a strong college record are systematically underdrafted.

Case Study Four: Recency in Monetary Policy Expectations

Central bank watchers and professional forecasters are among the more sophisticated consumers of economic data available, yet even they demonstrate systematic recency bias. A study by Campbell and Sharpe (2009), published in the Journal of Financial and Quantitative Analysis, examined professional forecasters' predictions of Federal Reserve policy actions. They found that forecasters systematically over-extrapolated recent trends in interest rates: when rates had been rising, forecasters predicted rates would rise further than they subsequently did; when rates had been falling, they predicted further declines. This pattern, consistent with recency bias, was most pronounced at turning points in the rate cycle — precisely when the ability to recognize mean reversion mattered most. The forecasters were not unsophisticated; they were professional economists with access to the same historical data as anyone else. But recent rate movements consistently dominated their expectations in ways the historical base rates did not support.


When Recency Weighting Is Adaptive

It would be a mistake to treat recency bias as simply an error to be eliminated. The cognitive architecture that produces recency weighting exists for good reasons, and in many domains it remains genuinely useful.

For biological organisms operating in environments where conditions change over time, recent information is frequently more accurate than older information. The food sources, predators, social hierarchies, and weather patterns of recent months are more relevant to survival decisions than the average conditions of the past decade. A bias toward recency in these contexts is not a bias in any pejorative sense — it is an appropriate adaptive weighting given the structure of the environment. The problem arises when humans carry this adaptive weighting into environments where conditions are more stable, more mean-reverting, or where recent observations are high-noise samples from a stable underlying distribution.

In rapidly changing technological domains, recency weighting is often appropriate. The performance of an artificial intelligence model last week is a more accurate guide to its current capabilities than its performance two years ago, because the technology has changed fundamentally. The recent track record of a new drug that has only been in clinical use for eight months is genuinely the only relevant evidence — there is no longer-run track record to weight. In these contexts, a person who insists on weighting historical precedents equally with recent evidence may be making systematic errors of the opposite sign from recency bias — anchoring too heavily to outdated reference points.

The key analytical question is not whether to weight recent information, but whether the process generating the outcomes of interest is stationary or changing. In a stationary process — one where the underlying distribution of outcomes is stable over time — recent observations are high-noise samples that deserve less weight than the average over the full available history. In a non-stationary process — one where the underlying distribution is genuinely shifting — recent observations deserve more weight because they reflect the new regime more accurately than historical data from a different regime.
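The stationarity question can be made concrete with a small simulation. All parameters below are invented: a stationary series with a true mean of 5 percent, and a shifted series whose mean jumps to 20 percent for the final 30 observations.

```python
import random

# Sketch under invented parameters: which estimator of the current mean is
# better depends on whether the process is stationary or has shifted regime.

def full_history_mean(xs):
    return sum(xs) / len(xs)

def recency_weighted_mean(xs, decay=0.9):
    # The latest observation gets weight 1.0, the one before 0.9, then 0.81, ...
    n = len(xs)
    weights = [decay ** (n - 1 - t) for t in range(n)]
    return sum(w * x for w, x in zip(weights, xs)) / sum(weights)

def full_history_win_rate(make_series, true_mean, trials=300):
    """Fraction of trials where the full-history estimate is closer to the truth."""
    wins = 0
    for t in range(trials):
        rng = random.Random(t)
        xs = make_series(rng)
        if abs(full_history_mean(xs) - true_mean) < abs(recency_weighted_mean(xs) - true_mean):
            wins += 1
    return wins / trials

stationary = lambda rng: [rng.gauss(0.05, 0.15) for _ in range(200)]
shifted    = lambda rng: ([rng.gauss(0.05, 0.15) for _ in range(170)]
                          + [rng.gauss(0.20, 0.15) for _ in range(30)])

print(full_history_win_rate(stationary, 0.05) > 0.5)  # True: use the full history
print(full_history_win_rate(shifted, 0.20) < 0.5)     # True: weight recency
```

On the stationary series the recency-weighted estimate is just a noisier sample of the same distribution; on the shifted series the full history is contaminated by the old regime. The weighting scheme is only as good as the stationarity judgment behind it.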

The tragedy of recency bias is that it is applied indiscriminately, without this assessment. Financial markets are broadly stationary in their long-run return distributions — the expected equity risk premium has not permanently changed because stocks rose 80 percent in the last three years, and has not permanently changed because they fell 40 percent in the last six months. Human cognitive architecture does not naturally perform this stationarity assessment. It defaults to treating recent evidence as the most reliable evidence regardless of whether the underlying process is changing or stable.


Mechanisms: Memory, Emotion, and Retrieval

The cognitive mechanisms underlying recency bias involve at least three partially distinct systems. Understanding their interaction clarifies why the bias is so difficult to counter.

Working memory provides the first mechanism. At the moment of judgment, information that is currently active in working memory — which by definition includes recently encountered information — receives priority in the computational process. This is the Ebbinghaus recency effect transposed from list-learning into real-time decision-making: what is most recently experienced is most readily available in the cognitive workspace where decisions are constructed.

Long-term memory retrieval provides the second. Even when information has moved out of working memory into long-term storage, recent events produce stronger memory traces. The hippocampus encodes recent experiences with greater synaptic strength than distant ones, all else equal, meaning that attempts to deliberately recall past events produce a sample that overrepresents the recent past. An investor trying to mentally inventory past market cycles will find that the 2008 crisis looms larger in recall than the 1987 crash or the 2001 recession simply because it is more recent, regardless of its objective relevance.

Emotional encoding provides the third and perhaps most powerful mechanism. Events that carry emotional charge — financial losses, public embarrassments, physical dangers, professional failures — are processed by the amygdala in a way that enhances their encoding and retrieval. Recent losses in a financial portfolio are not merely recent; they are often emotionally saturated. They feel urgent, viscerally present, more real than the statistical abstractions of long-run historical returns. When an investor's portfolio is down 40 percent, she is not deciding with reference to abstract probability distributions. She is deciding with a nervous system that has been running elevated stress hormones for six months, and that has stamped recent losses with priority encoding. The combination of recency and emotional salience produces a cognitive environment in which the recent distribution of outcomes feels far more predictive than any rational assessment of the evidence would support.


Debiasing: What Actually Works

The research on reducing recency bias suggests that awareness alone is insufficient. Knowing that recency bias exists does not reliably cause people to correct for it in their own judgments. More effective interventions share a common structure: they force exposure to the longer historical record and require active engagement with that record rather than passive exposure.

Pre-mortems, a technique popularized by Gary Klein and endorsed by Kahneman in Thinking, Fast and Slow (2011), ask decision-makers to imagine that a future decision has failed and to generate plausible accounts of why. This exercise forces retrieval of historical precedents for failure that recency bias would otherwise suppress, and populates working memory with non-recent information that competes with the recent trend.

Base-rate education — providing decision-makers with explicit statistical summaries of historical outcomes before presenting recent case information — has been shown by Kahneman and Tversky and many subsequent researchers to partially counter the dominance of recent case-specific data. The key word is "partially": recent cases typically continue to receive more weight than their statistical value warrants even after base rates are presented. But the effect is reduced when base rates are made vivid and concrete rather than abstract.
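A worked Bayesian update shows why one recent observation should move a well-grounded base rate only slightly. The probabilities below are hypothetical round numbers chosen for illustration, not estimates from any study.

```python
# Hypothetical numbers: how much should one recent winning year move the
# belief that a fund manager is genuinely skilled?

p_skilled = 0.10            # assumed base rate of truly skilled managers
p_win_if_skilled = 0.60     # assumed chance a skilled manager beats the market in a year
p_win_if_unskilled = 0.45   # an unskilled manager can still win on noise

# Bayes' rule: P(skilled | won last year)
p_win = p_skilled * p_win_if_skilled + (1 - p_skilled) * p_win_if_unskilled
posterior = p_skilled * p_win_if_skilled / p_win

print(round(posterior, 3))  # 0.129: one recent win barely moves the 10 percent base rate
```

The recency-biased reaction treats the single recent win as nearly decisive; the arithmetic says it shifts the probability of skill by about three percentage points.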

Structured process disciplines — such as investment policy statements that define asset allocation rules in advance, hiring rubrics that define evaluation criteria before candidates are interviewed, or performance review systems that require supervisors to log quarterly performance assessments rather than relying on end-of-year recall — reduce recency bias by building non-recent information into the formal evaluation structure. These interventions work not by correcting cognitive processing but by redesigning the information environment so that non-recent evidence is as accessible and concrete as recent evidence.


The Deeper Problem

What makes recency bias genuinely dangerous is not merely that it produces errors but that it produces systematic, predictable, directional errors — the kind that compound over time rather than canceling out. A random error process eventually self-corrects. But the investor who buys at peaks and sells at troughs because she is weighting recent returns, then does the same thing in the next cycle and the cycle after that, is not making random errors around the correct value. She is making the same error repeatedly, in the same direction, with the same timing, with compounding consequences for wealth accumulation.

The dot-com investors who loaded up on technology stocks in late 1999 were drawing on the same cognitive mechanism that caused the same people — or their children — to sell equities in March 2009. Recency bias is not a one-time error of judgment. It is a recurring structural feature of how human memory and judgment interact with information that unfolds over time. The recent past always feels more representative of the present than it is. The most recent data point always feels more predictive than the historical distribution. And the most recent emotional experience — of gain, of loss, of triumph, of humiliation — always feels more certain, more real, more urgent than the abstract statistical record of what usually happens.

That is why recency bias is not merely an interesting quirk in the psychology literature. It is the cognitive architecture that drove $5 trillion of wealth destruction in the dot-com crash, $8 trillion in the 2008 crisis, and countless smaller-scale disasters in career decisions, medical diagnoses, and interpersonal judgments. The mind is not designed to see the recent past clearly. It is designed to see the recent past vividly, urgently, and with inflated certainty — and then to mistake that vividness for accuracy.


References

  1. Ebbinghaus, H. (1885). Über das Gedächtnis: Untersuchungen zur experimentellen Psychologie. Duncker & Humblot. (Translated as Memory: A Contribution to Experimental Psychology, 1913.)

  2. Kahneman, D., Fredrickson, B. L., Schreiber, C. A., & Redelmeier, D. A. (1993). When more pain is preferred to less: Adding a better end. Psychological Science, 4(6), 401-405.

  3. De Bondt, W. F. M., & Thaler, R. H. (1985). Does the stock market overreact? Journal of Finance, 40(3), 793-805.

  4. Barber, B. M., & Odean, T. (2000). Trading is hazardous to your wealth: The common stock investment performance of individual investors. Journal of Finance, 55(2), 773-806.

  5. Goetzmann, W. N., & Peles, N. (1997). Cognitive dissonance and mutual fund investors. Journal of Financial Research, 20(2), 145-158.

  6. Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124-1131.

  7. Massey, C., & Thaler, R. H. (2013). The loser's curse: Decision making and market design in the NFL draft. Management Science, 59(7), 1479-1495.

  8. LeDoux, J. (1996). The Emotional Brain: The Mysterious Underpinnings of Emotional Life. Simon & Schuster.

  9. Campbell, S. D., & Sharpe, S. A. (2009). Anchoring bias in consensus forecasts and its effect on market prices. Journal of Financial and Quantitative Analysis, 44(2), 369-390.

  10. Murphy, K. R., & Cleveland, J. N. (1995). Understanding Performance Appraisal: Social, Organizational, and Goal-Based Perspectives. Sage Publications.

  11. Chopra, N., Lakonishok, J., & Ritter, J. R. (1992). Measuring abnormal performance: Do stocks overreact? Journal of Financial Economics, 31(2), 235-268.

  12. Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.

Frequently Asked Questions

What is recency bias?

Recency bias is the tendency to overweight recent events relative to earlier events when forming judgments about probability, performance, or future outcomes. Rooted in the Ebbinghaus serial position effect (1885) — the finding that items encountered most recently are remembered most clearly — it causes people to treat the recent past as more representative of the future than a longer historical record would warrant.

How does recency bias affect investing?

Recency bias in investing causes investors to chase returns — buying after strong recent performance and selling after recent losses. Barber and Odean's analysis of 66,000 household brokerage accounts found that investors systematically moved into assets that had performed well recently and out of assets that had fallen, regardless of forward-looking fundamentals. This produces the classic pattern of buying near market peaks and selling near troughs — the inverse of rational behavior.

What did De Bondt and Thaler find about recency bias?

Werner De Bondt and Richard Thaler's 1985 paper 'Does the Stock Market Overreact?' found that stocks which had been extreme losers over the past three to five years significantly outperformed extreme winners over the subsequent three to five years. This 'winner-loser reversal' showed that the market systematically overreacted to recent trends — pricing recent losers too low and recent winners too high — consistent with investors extrapolating recent performance too strongly into the future.

What is the peak-end rule and how does it relate to recency bias?

The peak-end rule, described by Kahneman, Fredrickson, Schreiber, and Redelmeier (1993), is the finding that people evaluate experiences not by their average intensity but by two points: the peak (most intense moment) and the end. This is a specific form of recency bias — the ending of an experience disproportionately determines how the entire experience is remembered and evaluated. A painful medical procedure with a mild ending is remembered as less painful than a shorter procedure that ended more sharply.

When is recency weighting rational?

Recency weighting is rational when the underlying process is non-stationary — when recent data genuinely reflects a changed state of the world rather than noise around a stable mean. In genuinely changing environments (a company undergoing management change, a market entering a structural shift, a person whose skills are actively improving), recent information deserves more weight than distant history. The bias occurs when recency weighting is applied to stationary processes where long-run base rates are the better predictor.
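This distinction can be illustrated with a small sketch. The smoothing parameter and the two artificial series below are illustrative choices, not estimates from any market data: a heavily recency-weighted forecast loses to the long-run mean on a stationary series, but wins when the underlying mean genuinely shifts.

```python
def ewma_forecasts(xs, alpha):
    """One-step-ahead forecasts from an exponentially weighted moving
    average; a high alpha puts heavy weight on recent observations."""
    f, out = xs[0], []
    for x in xs:
        out.append(f)                  # forecast is made before seeing x
        f = alpha * x + (1 - alpha) * f
    return out

def mean_forecasts(xs):
    """One-step-ahead forecasts from the expanding long-run mean."""
    out, total = [], 0.0
    for i, x in enumerate(xs):
        out.append(xs[0] if i == 0 else total / i)
        total += x
    return out

def mse(forecasts, xs):
    return sum((f - x) ** 2 for f, x in zip(forecasts, xs)) / len(xs)

# Stationary case: noise oscillating around a stable mean of 5.
stationary = [5 + (1 if i % 2 == 0 else -1) for i in range(200)]
# Non-stationary case: the underlying mean jumps from 5 to 10 halfway.
shifted = [5.0] * 100 + [10.0] * 100

for name, xs in [("stationary", stationary), ("regime shift", shifted)]:
    print(name,
          "| recency-weighted MSE:", round(mse(ewma_forecasts(xs, 0.8), xs), 3),
          "| long-run-mean MSE:", round(mse(mean_forecasts(xs), xs), 3))
```

On the stationary series the recency-weighted forecaster chases every wiggle of noise and pays for it in forecast error, while the expanding mean settles on the true value. After the regime shift the roles reverse: the recency-weighted forecast adapts within a few observations, while the long-run mean keeps averaging in a world that no longer exists.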