In the summer of 1974, Daniel Kahneman and Amos Tversky published a paper in Science that did not make especially dramatic claims. "Judgment Under Uncertainty: Heuristics and Biases" reported experiments showing that people use mental shortcuts when estimating probabilities, and that these shortcuts produce systematic errors. The paper was written clearly, the experiments were simple enough to be replicated with a class of students and a questionnaire, and the findings seemed, on reflection, almost obvious. Of course people judge probability by how easily examples come to mind. Of course anchors affect numerical estimates. Of course we mistake representativeness for probability.
What was not obvious -- what took another decade to become apparent -- was that this program of research would fundamentally alter the theoretical foundations of economics, produce two Nobel Prizes, reshape public policy in dozens of countries, and generate a field that would become one of the most influential in social science by the turn of the century. The key was not the individual findings, which could each be dismissed as laboratory curiosities unlikely to matter in consequential real-world decisions, but the cumulative picture they assembled: that human judgment and choice depart from rational-actor models not randomly but systematically, in ways that are predictable, consequential, and theoretically significant.
"The focus on cases where the machinery of intuition fails us does not denigrate human intelligence, any more than the study of illusions impugns the workings of the visual system." -- Daniel Kahneman, Thinking, Fast and Slow (2011)
Key Definitions
Rational actor model: The standard economic assumption that individuals have stable, well-defined preferences; process information correctly; and make choices that maximize their expected utility.
Heuristic: A cognitive shortcut or rule of thumb that simplifies complex judgment and decision problems, typically producing good-enough answers quickly but sometimes generating systematic errors.
Bias: A systematic, predictable deviation from the optimal judgment or choice that the rational actor model would produce.
Loss aversion: The tendency to weight losses more heavily than equivalent gains; the centerpiece of prospect theory, typically estimated as a ratio of approximately 2:1.
Choice architecture: The design of the environment in which people make decisions, including the arrangement of options, default settings, and framing of information.
Nudge: A choice architecture intervention that alters behavior in a predictable direction without restricting options or significantly changing economic incentives.
Before Behavioral Economics: The Rational Actor
Herbert Simon and Bounded Rationality
The critique of the rational actor model predates the behavioral economics research program. Herbert Simon, a polymath who made foundational contributions to economics, computer science, cognitive psychology, and organization theory, published "A Behavioral Model of Rational Choice" in 1955, arguing that human rationality is bounded by cognitive limitations, incomplete information, and time constraints. Simon was awarded the Nobel Memorial Prize in Economic Sciences in 1978, in large part for this contribution.
Simon's alternative to global rationality was satisficing: the tendency of decision-makers to set an aspiration level and search through alternatives until one is found that meets the threshold, rather than exhaustively evaluating all alternatives to find the optimum. Satisficing is not irrational -- given the costs of computation and information gathering, stopping search at an acceptable alternative is often the most efficient strategy. But it departs systematically from the behavior of the utility-maximizing agent assumed in neoclassical theory.
Simon's framework was influential in organization theory and management science, where it informed work on how organizations search for solutions to problems, how they form standard operating procedures, and how they adapt to changing environments. Its influence on economics was more limited during Simon's lifetime: mainstream economists found the optimization framework too analytically convenient to abandon even in the face of Simon's critique. Kahneman and Tversky's contribution was to produce specific, replicable experimental demonstrations of rational-actor violations that were harder for economists to ignore.
The Standard Model and Its Assumptions
The rational actor model, as formalized in expected utility theory by John von Neumann and Oskar Morgenstern (1944) and in the theory of consumer choice by Paul Samuelson and others, makes several assumptions that behavioral economics has systematically challenged. First, preferences are assumed to be complete (agents can rank all alternatives), transitive (if A is preferred to B and B to C, then A is preferred to C), and stable over time. Second, agents are assumed to process probabilities in accordance with Bayes' theorem, updating their beliefs correctly as new evidence arrives. Third, agents are assumed to evaluate final outcomes rather than changes from reference points: what matters is the level of wealth, not whether it represents a gain or loss from some prior state.
These assumptions generate clean, tractable models and sharp predictions. They also face systematic challenges from experimental evidence. Context affects choices that should be context-independent: the same choice problem framed differently produces different choices. Reference points matter: the same objective outcome produces different utilities depending on whether it feels like a gain or a loss. Time inconsistency is widespread: people make plans that their future selves deviate from in predictable ways, violating the standard model's assumption of dynamically consistent preferences. These patterns are not random; they are systematic enough to model, and behavioral economics has developed an alternative theoretical framework built on more psychologically realistic foundations.
Heuristics and Biases: The Kahneman-Tversky Program
The 1974 Paper and Its Three Heuristics
The availability heuristic, representativeness heuristic, and anchoring-and-adjustment heuristic that Kahneman and Tversky identified in 1974 have each generated large research literatures and have been documented in consequential real-world settings.
Availability operates in medical diagnosis (physicians overestimate the probability of diseases that are salient from recent experience), legal judgment (jurors overweight vivid testimony relative to base-rate statistical evidence), financial markets (investors overweight recent price movements), and risk perception (the public dramatically overestimates the risk of dramatic, heavily reported causes of death). The mechanism is that retrieval ease is a proxy for frequency, and retrieval ease is affected by factors unrelated to actual frequency: vividness, recency, emotional salience, and media coverage.
Representativeness operates in probability assessment through base-rate neglect and the conjunction fallacy. The Linda problem, designed by Kahneman and Tversky, presented subjects with a description of Linda as a bright, socially concerned student who participated in anti-nuclear demonstrations, then asked whether it was more probable that Linda was a bank teller or that she was a bank teller active in the feminist movement. The majority of subjects chose the conjunction, demonstrating that the representativeness of the description to the stereotype overrides the basic probability principle that a conjunction cannot be more probable than either of its components.
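The probability principle at stake can be checked directly: for any events A and B, P(A and B) = P(A) × P(B given A), which can never exceed P(A). A minimal sketch, using illustrative numbers that are not estimates from the original study:

```python
# Conjunction rule: P(A and B) can never exceed P(A).
# The probabilities below are illustrative, not from the original study.
p_teller = 0.05                  # P(Linda is a bank teller)
p_feminist_given_teller = 0.30   # P(active feminist | bank teller)

# P(teller AND feminist) = P(teller) * P(feminist | teller)
p_conjunction = p_teller * p_feminist_given_teller

# Holds for any probabilities in [0, 1]:
assert p_conjunction <= p_teller
print(f"P(teller) = {p_teller:.3f}, P(teller and feminist) = {p_conjunction:.3f}")
```

Because the inequality holds for any probabilities, the modal answer to the Linda problem is a logical error, not a defensible judgment call.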
Anchoring produces effects in numerical estimation, salary negotiations, legal damages, and real estate pricing. In one famous demonstration, subjects who spun a wheel that stopped at 10 or 65, then estimated the percentage of African countries in the United Nations, gave median estimates of 25 and 45 respectively -- dramatically different estimates influenced by the completely arbitrary wheel outcome. The implication for negotiations is that whoever sets the first number in a negotiation -- the anchor -- gains a significant advantage, because counteroffers tend to be insufficiently adjusted from the anchor.
Framing Effects and the Asian Disease Problem
The framing effect -- the finding that logically equivalent descriptions of the same choice produce systematically different decisions -- is one of behavioral economics' most important and counterintuitive contributions. Kahneman and Tversky's Asian disease problem presented the same public health decision in two frames. In the positive frame, subjects chose between a program that would save 200 people for certain and a program with a one-third probability of saving 600 people and a two-thirds probability of saving no one. In the negative frame, subjects chose between a program that would result in 400 deaths for certain and a program with a two-thirds probability of 600 deaths and a one-third probability of no deaths. The two frames describe identical outcomes -- 200 saved is equivalent to 400 dead when 600 total are at risk -- but subjects showed a strong preference for the certain option in the positive frame and the risky option in the negative frame.
The framing effect reflects a general principle from prospect theory: people are risk-averse in the gains domain (preferring the certain small gain to the risky large gain) and risk-seeking in the losses domain (preferring the risky chance to avoid all losses to the certain smaller loss). Framing influences which domain is activated, determining risk preference for the same objective choice.
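The equivalence of the two frames is simple arithmetic, which the following sketch makes explicit (a minimal check, not part of the original study):

```python
# The two frames of the Asian disease problem describe identical outcomes:
# 600 lives are at risk in both cases.
at_risk = 600

# Positive frame: "save 200 for certain" vs "1/3 chance of saving 600".
saved_certain = 200
saved_risky = (1 / 3) * 600      # expected lives saved = 200

# Negative frame: "400 die for certain" vs "2/3 chance that 600 die".
died_certain = 400
died_risky = (2 / 3) * 600       # expected deaths = 400

# Both certain options, and both risky options, describe the same outcome:
assert at_risk - died_certain == saved_certain
assert abs((at_risk - died_risky) - saved_risky) < 1e-9
```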
This has profound implications for how information is communicated in medical, legal, and policy contexts. The same medical treatment presented as having a 90 percent survival rate versus a 10 percent mortality rate produces different acceptance rates among patients and physicians, despite describing identical outcomes. The same policy presented as preventing losses versus achieving gains produces different support.
Prospect Theory
Kahneman and Tversky's "Prospect Theory: An Analysis of Decision Under Risk" (Econometrica, 1979) is often described as the most cited paper in the journal's history and as the foundational document of behavioral economics. It is a mathematical theory -- a formal model of choice under risk -- built on psychologically realistic foundations, intended to replace expected utility theory as a descriptive account of actual decision-making.
The theory's value function is defined on gains and losses relative to a reference point rather than on final wealth levels. It is concave in the gains domain (reflecting diminishing marginal sensitivity) and convex in the losses domain, and it is steeper in the losses domain than in the gains domain: the loss of one hundred dollars causes approximately twice the disutility of the gain of one hundred dollars. This loss aversion -- the asymmetry between the pain of losses and the pleasure of gains -- is prospect theory's most influential single prediction.
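These properties can be illustrated with the power-form value function from Tversky and Kahneman's later cumulative version of the theory (1992). The parameter values alpha = 0.88 and lambda = 2.25 are their published estimates, used here purely as an illustration rather than as the only parameterization in the literature:

```python
# Power-form prospect-theory value function.
# Parameters are Tversky & Kahneman's 1992 estimates (illustrative only).
ALPHA = 0.88    # curvature: diminishing sensitivity in both domains
LAMBDA = 2.25   # loss aversion: losses loom roughly twice as large as gains

def value(x):
    """Subjective value of a gain or loss x relative to the reference point."""
    return x ** ALPHA if x >= 0 else -LAMBDA * (-x) ** ALPHA

# Loss aversion: |v(-100)| exceeds twice v(+100).
assert abs(value(-100)) > 2 * value(100)

# Diminishing sensitivity: the first $100 of gain matters more than the next $100.
assert value(100) - value(0) > value(200) - value(100)
```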
The theory's probability weighting function captures non-linear probability processing. People do not weight outcomes by their objective probabilities; instead, they overweight small probabilities and underweight moderate and large probabilities. This produces the simultaneous preference for insurance (overweighting the small probability of catastrophic loss) and lottery tickets (overweighting the small probability of a large gain) that expected utility theory cannot accommodate without assuming extreme risk aversion for small stakes.
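The overweighting of small probabilities can be sketched with the one-parameter weighting function from the 1992 cumulative version of the theory; gamma = 0.61 is Tversky and Kahneman's estimate for gains, used here as an illustrative assumption:

```python
# One-parameter probability weighting function (Tversky & Kahneman 1992).
# GAMMA = 0.61 is their estimate for gains; illustrative assumption here.
GAMMA = 0.61

def weight(p):
    """Decision weight: overweights small p, underweights moderate/large p."""
    num = p ** GAMMA
    return num / (num + (1 - p) ** GAMMA) ** (1 / GAMMA)

# A 1-in-100 chance is weighted as if it were roughly a 1-in-18 chance,
# while a coin flip is weighted as slightly worse than 50-50:
assert weight(0.01) > 0.01   # w(0.01) is approximately 0.055
assert weight(0.50) < 0.50   # w(0.50) is approximately 0.421
```

This shape is what lets the same person rationally-by-their-own-lights buy both insurance and lottery tickets: the small probability of the catastrophic loss and of the jackpot are both inflated.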
Loss aversion has been invoked to explain many behavioral phenomena: the endowment effect (people demand more to give up an object they own than they would pay to acquire it), status quo bias (people prefer existing arrangements to changes of equivalent expected value), reluctance to realize financial losses (investors hold losing stocks too long, hoping to break even), and the disposition effect in asset markets (selling winners too early and holding losers too long).
Thaler, Mental Accounting, and the Endowment Effect
Richard Thaler extended prospect theory into consumer psychology and market behavior through the concept of mental accounting, developed in a series of papers in the 1980s. Mental accounting describes the implicit cognitive system by which people organize, evaluate, and keep track of their financial activities, maintaining separate mental accounts for different expenditure categories and making decisions that violate the fungibility of money that standard economic theory assumes.
Examples of mental accounting violations are familiar from everyday experience. People treat a windfall differently from regular income, spending a tax refund on luxuries they would not purchase from their salary even though the money is objectively equivalent. People maintain separate mental accounts for "vacation funds" and "emergency savings" even when this means borrowing at high cost to replenish one account while the other earns low returns. People pay for gym memberships they do not use because the sunk cost continues to generate psychological commitment. And the pain of a loss is greater when it closes out a mental account in the red than when it merely reduces the account's running balance.
The endowment effect -- the finding that people value objects they own more highly than identical objects they do not own -- was documented by Thaler, Kahneman, and their colleagues in experiments where subjects who were given coffee mugs demanded approximately twice as much to sell them as other subjects were willing to pay to acquire them. The effect has been replicated in many settings and has implications for how ownership affects market behavior, negotiation, and policy.
Thaler was awarded the Nobel Memorial Prize in Economic Sciences in 2017, with the prize committee citing his contributions to behavioral economics including mental accounting, the endowment effect, and nudge theory. His Nobel lecture provided a lucid account of the field's development and its relationship to standard economics.
Nudge Theory and Policy Applications
Libertarian Paternalism and Choice Architecture
Thaler and Sunstein's "Nudge" (2008) argued that nudges could achieve paternalistic goals -- helping people make better choices for their own welfare -- without restricting freedom of choice, satisfying both conservative concerns about government interference with market choices and liberal concerns about individual welfare. The philosophical foundation was libertarian paternalism: preserving freedom while using behavioral insights to steer choices in better directions.
The most influential nudge application is automatic enrollment in retirement savings plans. Traditional opt-in plans required employees to affirmatively elect to contribute; participation rates were typically 40 to 60 percent. When plans were redesigned with automatic enrollment at a default contribution rate (typically 3 percent of salary), with employees free to change or opt out, participation rates rose to 85 to 90 percent. The change in behavior was produced entirely by a shift in the default option -- by exploiting the status quo bias and inertia that lead people to accept whatever they are initially enrolled in.
The UK's Behavioural Insights Team, established in 2010, conducted hundreds of randomized controlled trials testing behavioral interventions in tax compliance, welfare uptake, energy conservation, organ donation, and health behavior. Their EAST framework (Easy, Attractive, Social, Timely) summarized principles for effective behavioral intervention. The team demonstrated measurable effects in several areas: a letter to late tax filers that mentioned that most of their neighbors had already paid increased payment rates; simplifying the job search requirements for unemployment benefits increased compliance; and personalizing reminder letters improved attendance at medical appointments.
The Replication Crisis and Behavioral Economics
The behavioral economics field was significantly affected by the broader replication crisis in psychology that became visible after 2011. A 2016 study by Camerer and colleagues attempted to replicate 18 laboratory experiments in economics published in the American Economic Review and the Quarterly Journal of Economics, finding that 61 percent replicated successfully -- a higher rate than comparable psychology studies but with substantially smaller effect sizes on average.
The most publicized failures were in social priming, where subtle environmental cues were claimed to influence complex behavior: money priming (exposure to money-related cues activates individualistic behavior), flag priming (exposure to the American flag shifts political attitudes), and related effects failed to replicate under better-powered pre-registered conditions. The ego depletion effect -- the claim that self-control is a depletable resource -- which had generated extensive applications in willpower research and health behavior, produced near-zero average effects in a large multilab replication.
Core prospect theory findings have generally survived pre-registered replications better than the field's social priming extensions, but the crisis has prompted more careful attention to external validity -- whether effects found in laboratory experiments with university students generalize to consequential real-world decisions -- and to effect size, since small laboratory effects may not justify policy interventions.
The crisis has also stimulated reflection about the science. Behavioral economics, like psychology more broadly, had incentive structures that rewarded novel, surprising findings published in high-prestige journals, creating systematic pressure toward false positives. Moving toward pre-registration of hypotheses and analysis plans, open data and materials, and registered replication reports has begun to change the incentive structure, though the transition is incomplete.
System 1, System 2, and the Popularization of Behavioral Insights
Kahneman's 2011 book "Thinking, Fast and Slow" synthesized the research program for a general audience through the organizing framework of System 1 (fast, automatic, associative, emotional) and System 2 (slow, deliberate, logical, effortful) thinking. The framework, adapted from psychologists Keith Stanovich and Richard West, provided an accessible account of why heuristics and biases arise: System 1 is doing most of the cognitive work most of the time, and its speed and efficiency come at the cost of the systematic errors documented in the research program.
The book became an international bestseller and introduced behavioral economics concepts -- loss aversion, anchoring, framing, the availability heuristic, cognitive ease -- to millions of readers. It also contributed to a genre of popular social science books that applied behavioral findings to everyday decisions, negotiations, management, and policy, producing a literature of varying quality and reliability that considerably outpaced the research base.
The popularization has had mixed effects on the field. It has increased demand for behavioral insights in policy and business contexts, funding research programs and creating careers. It has also created pressure to produce simple, generalizable findings suitable for popular communication, which may not align well with the careful, conditional findings that rigorous science produces. The gap between the confident popular claims made on behalf of behavioral economics and the more modest, context-dependent findings of the actual research literature has become a recurring source of criticism.
See Also
- What Is Prospect Theory?
- What Is Cognitive Bias?
- What Is International Trade Theory?
- What Is Philosophy of Science?
References
- Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124-1131.
- Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263-291.
- Simon, H. A. (1955). A behavioral model of rational choice. Quarterly Journal of Economics, 69(1), 99-118.
- Thaler, R. H. (1980). Toward a positive theory of consumer choice. Journal of Economic Behavior and Organization, 1(1), 39-60.
- Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving Decisions About Health, Wealth, and Happiness. Yale University Press.
- Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.
- Camerer, C. F., Dreber, A., Forsell, E., et al. (2016). Evaluating replicability of laboratory experiments in economics. Science, 351(6280), 1433-1436.
- Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716.
- Thaler, R. H., & Benartzi, S. (2004). Save more tomorrow: Using behavioral economics to increase employee saving. Journal of Political Economy, 112(S1), S164-S187.
- Halpern, D. (2015). Inside the Nudge Unit: How Small Changes Can Make a Big Difference. W. H. Allen.
- Gigerenzer, G. (2014). Risk Savvy: How to Make Good Decisions. Viking.
Frequently Asked Questions
What is behavioral economics and how does it differ from standard economics?
Behavioral economics is the field that studies how actual human decision-making departs from the rational actor model assumed in standard neoclassical economics. The rational actor model -- sometimes called Homo economicus -- assumes that individuals have well-defined, stable preferences; that they process all available information correctly; that they maximize their expected utility; and that their choices are internally consistent. These assumptions produce clean, tractable models that have enormous analytical power, but they abstract away from a great deal of how people actually think and choose.

Behavioral economics draws on experimental psychology to document systematic, predictable ways in which human judgment and choice depart from these rational benchmarks. The departures are not random noise but structured patterns: people consistently overweight small probabilities, underweight moderate and large probabilities, evaluate outcomes relative to reference points rather than in absolute terms, feel losses more acutely than equivalent gains, prefer the status quo, give undue weight to vivid and recent examples when estimating probabilities, and make different choices depending on how options are framed, even when the underlying values are identical.

The field differs from standard economics not by abandoning the goal of systematic modeling but by building models that incorporate psychologically realistic assumptions about cognitive processes and preferences. This sometimes means accepting models that are more complex and less parsimonious than their rational-actor counterparts, though behavioral economists have also argued that simpler models calibrated on how people actually behave make better empirical predictions than more complex models built on the fiction of perfect rationality.
The applied arm of the field concerns how institutions -- markets, firms, governments -- can be designed to help people make better choices given their actual cognitive limitations.
What is bounded rationality and who developed the concept?
Bounded rationality is a concept introduced by Herbert Simon in a 1955 paper titled 'A Behavioral Model of Rational Choice,' published in the Quarterly Journal of Economics, and developed further across his career. Simon, who won the Nobel Prize in Economics in 1978, argued that human rationality is bounded by three constraints: the cognitive limitations of the human mind, the incomplete information available to the decision-maker, and the finite amount of time available to make decisions.

Simon's key insight was that the assumption of global rationality -- optimizing over all possible alternatives with perfect information -- was not merely an abstraction but a psychologically impossible standard. Real decision-makers do not enumerate all alternatives and select the one with highest expected utility. Instead, they use simplified decision procedures that are computationally feasible given cognitive limitations. Simon called this satisficing: setting an aspiration level and searching through alternatives until one is found that meets the threshold, rather than continuing to search for the global optimum.

Satisficing is not irrational. Given the costs of information gathering and computation, stopping search at a satisfactory alternative is often more efficient than continuing to seek the optimum, which may be unknowable in any case. Simon's work was thus a critique of the standard model not on the grounds that people are irrational but on the grounds that the standard model's conception of rationality was based on a false model of human cognitive capacities.

Simon's framework influenced both behavioral economics and artificial intelligence research, where his ideas about heuristic search informed early AI systems. Within economics, his work opened the door to asking how actual cognitive processes shape economic behavior, a question that Kahneman, Tversky, Thaler, and subsequent behavioral economists pursued with extensive experimental evidence.
What are heuristics and biases and what did Kahneman and Tversky discover?
Daniel Kahneman and Amos Tversky published their landmark paper 'Judgment Under Uncertainty: Heuristics and Biases' in the journal Science in 1974, launching a research program that would transform both psychology and economics. The paper documented that people rely on a limited number of cognitive shortcuts -- heuristics -- when making judgments under uncertainty, and that these heuristics produce systematic, predictable errors -- biases.

The availability heuristic involves judging the frequency or probability of an event by how easily examples come to mind. Events that are vivid, recent, or emotionally salient are more easily retrieved from memory, so they are judged more probable than their actual frequency warrants. This explains why people overestimate the frequency of dramatic causes of death (plane crashes, shark attacks) and underestimate the frequency of mundane ones (heart disease, car accidents on familiar routes).

The representativeness heuristic involves judging probability by similarity to a prototype. When asked whether a person with a given description is more likely to be a librarian or a farmer, people judge by how well the description matches their mental image of each occupation, ignoring base rates -- the fact that there are far more farmers than librarians. This produces the conjunction fallacy: people judge a specific scenario (Linda is a bank teller and active in the feminist movement) as more probable than a general one (Linda is a bank teller), because the specific scenario better matches the prototype, even though basic probability theory requires the conjunction to be at most as probable as either constituent.

Anchoring describes the tendency to make judgments that are insufficiently adjusted from an initial value, even when that value is arbitrary or irrelevant.
In a famous experiment, subjects who were asked to spin a wheel that randomly stopped at 10 or 65, then estimate the percentage of African countries in the United Nations, gave estimates that were significantly influenced by the arbitrary number the wheel had produced. These findings established that judgment under uncertainty is systematically biased in ways that matter for economic, medical, legal, and policy decisions.
What is prospect theory and why is it considered the most important paper in behavioral economics?
Kahneman and Tversky's 'Prospect Theory: An Analysis of Decision Under Risk,' published in Econometrica in 1979, is routinely described as the most cited paper in the journal's history and the foundational document of behavioral economics. The paper had the unusual distinction of being published in economics' most prestigious technical journal by authors who were psychologists at Hebrew University, using the methodology of controlled experiments on small samples of subjects rather than econometric analysis of market data.

Prospect theory is a descriptive theory of choice under risk -- a model of how people actually make decisions involving uncertain outcomes, as opposed to expected utility theory, which describes how rational agents should decide. Three features of the theory are central.

First, people evaluate outcomes relative to a reference point rather than in terms of final wealth. The same objective wealth level feels like a gain or a loss depending on the reference point -- typically the current situation or an expectation about the outcome. This reference dependence means that the same absolute outcome can produce very different utilities depending on where you started.

Second, the value function -- the relationship between outcomes and subjective value -- is concave in the gains domain (diminishing marginal value: the difference between gaining nothing and gaining one hundred dollars feels larger than the difference between gaining nine hundred and gaining one thousand dollars) and convex in the losses domain (diminishing sensitivity: the first dollar of a loss hurts more than the hundred-and-first). Most importantly, the function is steeper in the losses domain than the gains domain: the pain of losing one hundred dollars is approximately twice the pleasure of gaining one hundred dollars. This loss aversion is prospect theory's most influential prediction.

Third, people weight probabilities non-linearly: they overweight small probabilities (which explains the simultaneous appeal of insurance and lottery tickets) and underweight moderate and large probabilities. The 1979 paper demonstrated these patterns through the famous Allais paradox and related choice problems that violated expected utility theory's predictions.
What is the nudge framework and how has it been applied to public policy?
Richard Thaler and Cass Sunstein's 2008 book 'Nudge: Improving Decisions About Health, Wealth, and Happiness' synthesized behavioral economics research into a policy framework they called libertarian paternalism and popularized the concept of choice architecture. The core idea is that the way choices are structured -- the default options, the order in which alternatives are presented, the framing of outcomes, the salience of information -- powerfully influences what people choose, and that designers of choice environments can use this influence to steer people toward better outcomes without removing their freedom to choose otherwise.

A nudge is a change in choice architecture that predictably alters behavior without forbidding any options or significantly changing economic incentives. Classic examples include automatic enrollment in retirement savings plans with opt-out provisions (exploiting inertia and status quo bias to increase savings rates), placing healthier foods at eye level in school cafeterias (exploiting attention and availability), sending tax compliance letters that mention the high percentage of neighbors who have already paid (exploiting social norms), and simplifying financial aid forms to reduce the dropout rate among college applicants (exploiting complexity aversion).

The framework was institutionalized in government. The Obama administration's Office of Information and Regulatory Affairs explicitly incorporated behavioral insights, and the UK's Behavioural Insights Team -- sometimes called the 'Nudge Unit' -- was established in 2010 as the first government institution dedicated to applying behavioral science to policy. It has generated hundreds of randomized controlled trials testing behavioral interventions in areas from tax compliance to energy conservation to organ donation.

Critics raise several concerns. Nudges can be paternalistic in ways that are difficult to contest because they operate below the level of conscious deliberation.
The line between helpful nudge and manipulative shove is not always clear, and the same techniques used to encourage retirement saving can be used to exploit consumers. The field's replication problems (see below) raise questions about how reliably nudge effects generalize across populations and contexts. And some scholars argue that nudges address symptoms of social problems rather than structural causes.
What is the replication crisis in behavioral economics?
The replication crisis refers to the finding, documented extensively from roughly 2011 onward, that a substantial fraction of published findings in psychology and behavioral economics fail to replicate when other researchers attempt to reproduce them using pre-registered protocols with independent samples. The crisis has been particularly acute in social psychology and, to a somewhat lesser degree, in behavioral economics, and it has prompted deep reflection about research methodology, publication incentives, and the evidentiary basis of the field's policy applications.

The most systematic evidence comes from the Reproducibility Project: Psychology, published in Science in 2015 by Brian Nosek and colleagues, which attempted to replicate 100 published psychological findings and found that only 39 percent produced results of similar magnitude and significance to the original. A 2016 study by Camerer and colleagues targeting laboratory experiments in economics found a somewhat higher replication rate (61 percent), but with substantially smaller effect sizes on average than the originals.

Several factors contributed to the problem. Publication bias -- the tendency of journals to publish significant positive results and reject null results -- creates a literature that systematically overstates effect sizes. Researcher degrees of freedom -- the many choices available to researchers in data collection, processing, and analysis -- allow motivated reasoning to shape results without explicit fraud. Small sample sizes common in laboratory experiments mean that published effects may reflect noise rather than genuine phenomena. And the incentive structure of academic careers rewards novel, surprising findings rather than careful replication.

Several important behavioral findings have failed to replicate under pre-registered conditions.
The ego depletion effect -- the finding that willpower is a depletable resource -- produced a near-zero average effect in a large multilab replication. The money priming effect -- that exposure to money-related cues activates self-sufficient, individualistic behavior -- has not replicated reliably. Some nudge effects have proven highly sensitive to context and population in ways that limit their generalizability. Core prospect theory findings, tested in more controlled and adequately powered studies, have generally replicated, providing more confidence in the foundational work than in some of the field's extensions.