Origins of Behavioral Economics

For most of the twentieth century, economics rested on a foundational assumption that most economists knew was false but treated as close enough to true to be useful: that human beings are rational agents who consistently maximize their utility. This assumption, embedded in the mathematical models that defined modern economics, held that people have stable, well-ordered preferences; that they process all available information accurately; that they discount future rewards at consistent rates; and that they make choices that reliably advance their own interests as they themselves define them. The assumption was not naive. Economists understood that real people sometimes make mistakes. The argument was that these mistakes were random, unsystematic, and would cancel out in aggregate, leaving the rational actor model as a useful approximation of collective behavior.

Behavioral economics demolished this defense. Beginning in the 1950s with Herbert Simon's concept of bounded rationality and accelerating through the 1970s with Daniel Kahneman and Amos Tversky's revolutionary work on cognitive biases and heuristics, a growing body of research demonstrated that human departures from rationality are not random at all. They are systematic, predictable, and persistent. People consistently overweight losses relative to equivalent gains. They anchor judgments on irrelevant numbers. They evaluate probabilities using mental shortcuts that produce reliable errors. They make different choices depending on how identical options are framed. These are not occasional lapses by otherwise rational agents; they are fundamental features of human cognition that any realistic model of economic behavior must incorporate.

The story of how this insight emerged, met fierce resistance from the economics establishment, and eventually transformed both economic theory and public policy is one of the most important intellectual developments of the past century. It touches questions that matter to everyone who makes decisions, which is to say everyone: Why do we consistently make choices that work against our own interests? Can the environments in which we make choices be designed to help us choose better? And what does it mean for democratic governance when citizens and policymakers alike are subject to predictable cognitive biases?


The Rational Actor and the World It Built

To understand what behavioral economics challenged, you must first understand the edifice it challenged. The rational actor model (also called Homo economicus) was not merely a simplifying assumption in economic textbooks. It was the foundation upon which an enormous intellectual and institutional structure was built: welfare economics, cost-benefit analysis, efficient market theory, rational expectations macroeconomics, law and economics, public choice theory, and much of game theory. Each of these intellectual frameworks assumed, in one form or another, that economic agents are rational utility maximizers.

The Mathematical Formalization of Rationality

The rational actor model achieved its mature mathematical form in the 1940s and 1950s through the work of John von Neumann and Oskar Morgenstern, who published Theory of Games and Economic Behavior in 1944, and Leonard Savage, who published The Foundations of Statistics in 1954. Von Neumann and Morgenstern showed that if an agent's preferences satisfy certain axioms (completeness, transitivity, continuity, and independence), then the agent behaves as if maximizing a utility function. Savage extended this framework to decision-making under uncertainty, showing that rational agents should behave as if they assign subjective probabilities to uncertain events and maximize expected utility.

These axioms seemed almost self-evidently reasonable. Transitivity, for example, merely requires that if you prefer A to B and B to C, you should prefer A to C. Independence requires that if you prefer A to B, you should continue to prefer A to B even when both options are combined with the same third option. Completeness requires that for any two options, you either prefer one, prefer the other, or are indifferent. Violations of these axioms seemed like obvious errors that rational people would correct upon reflection.
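
Stated compactly (this is a standard textbook rendering of the result, not a quotation from von Neumann and Morgenstern or Savage): if a preference relation over lotteries satisfies the four axioms, then there exists a utility function u, unique up to positive affine transformation, such that

    L \succsim L' \iff \sum_i p_i \, u(x_i) \ \geq\ \sum_i p'_i \, u(x_i),

where the lotteries L and L' assign probabilities p_i and p'_i to the outcomes x_i. Savage's subjective version replaces the objective probabilities with personal probabilities inferred from the agent's own betting behavior, so that the rational agent acts as if maximizing subjective expected utility.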

The power of this framework was immense. If people behave as utility maximizers, then market outcomes can be analyzed using optimization mathematics. Consumer behavior can be predicted from utility functions and budget constraints. Firms can be modeled as profit maximizers. Equilibrium prices reflect all available information. Government intervention in markets can be evaluated by comparing the costs of market failure against the costs of regulatory distortion. An entire architecture of economic reasoning followed from the rational actor assumption, and this architecture produced genuine insights about markets, prices, trade, and policy.

The Chicago School and Market Efficiency

The rational actor model found its strongest institutional home at the University of Chicago, where economists like Milton Friedman, George Stigler, Gary Becker, and Eugene Fama built influential research programs on the foundation of rational choice. Friedman's famous 1953 essay "The Methodology of Positive Economics" argued that the realism of a model's assumptions is irrelevant; what matters is whether the model generates accurate predictions. Even if individual people are not perfectly rational, Friedman argued, markets behave as if they are, because competition eliminates irrational actors (who lose money and exit the market) and amplifies rational ones (who make money and gain influence).

Fama's efficient market hypothesis extended this reasoning to financial markets, arguing that market prices fully reflect all available information. If a stock is underpriced, rational investors will buy it, driving the price up. If it is overpriced, they will sell it, driving the price down. The result is that market prices are always "correct" in the sense that they reflect the best available estimate of an asset's true value. This hypothesis had enormous practical consequences: it implied that active money management cannot consistently outperform index funds, that financial regulation is largely unnecessary because markets are self-correcting, and that asset price bubbles are theoretically impossible (or at least unidentifiable in real time).

The Chicago School's influence extended far beyond economics. The law and economics movement, pioneered by Richard Posner and Ronald Coase, applied rational actor analysis to legal questions, arguing that legal rules should be evaluated based on their efficiency in facilitating voluntary exchange among rational agents. Public choice theory, developed by James Buchanan and Gordon Tullock, applied rational actor analysis to politics, modeling politicians, bureaucrats, and voters as self-interested utility maximizers. These intellectual movements reshaped law, regulation, and public policy across the Western world, particularly in the United States and United Kingdom from the 1970s onward.


Herbert Simon: The First Crack in the Edifice

The first serious challenge to the rational actor model came not from a psychologist but from a political scientist and organizational theorist. Herbert Simon, working at what was then the Carnegie Institute of Technology (later Carnegie Mellon University), proposed in the 1950s that human decision-makers do not optimize; they "satisfice."

Bounded Rationality

Simon's concept of bounded rationality recognized three fundamental constraints on human decision-making that the rational actor model ignored.

First, cognitive limitations: the human brain has finite processing capacity, limited working memory, and imperfect computational abilities. Calculating the expected utility of every possible option in a complex decision problem may be mathematically possible but is cognitively impossible for real human beings operating in real time.

Second, information limitations: real decision-makers never have access to all relevant information. Information must be sought, and searching for information is costly in time, effort, and money. Rational agents in the textbook sense have access to complete information or at least know the probability distributions of uncertain variables. Real human beings operate with fragmentary, ambiguous, and often contradictory information.

Third, time limitations: real decisions must be made within time constraints that prevent exhaustive analysis. A manager deciding how to respond to a competitor's move cannot spend months calculating optimal responses. A consumer choosing between products at a grocery store cannot research every option in the category. A surgeon facing an unexpected complication during an operation cannot pause to run a decision analysis.

Given these constraints, Simon argued, people do not maximize utility. Instead, they use simplified decision rules (which Simon called heuristics) and settle for options that are "good enough" rather than optimal. Simon coined the term "satisficing" (a combination of "satisfy" and "suffice") to describe this behavior: decision-makers set aspiration levels for each criterion, search through options until they find one that meets all criteria at acceptable levels, and select that option without further search. This behavior is not irrational; it is an efficient adaptation to the cognitive, informational, and temporal constraints that real decision-makers face.
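
To make the idea concrete, here is a minimal sketch of a satisficing search in Python. The decision rule follows Simon's verbal description above, but the options, criteria, and aspiration levels are invented for illustration.

    # Minimal sketch of satisficing: illustrative only, not Simon's formal model.
    def satisfice(options, aspirations):
        """Return the first option meeting every aspiration level, or None."""
        for option in options:
            if all(option[criterion] >= level for criterion, level in aspirations.items()):
                return option  # "good enough": stop searching here
        return None  # nothing cleared the thresholds; in practice, relax them and retry

    # Hypothetical apartment search, each option scored 1-10 on each criterion.
    apartments = [
        {"name": "A", "size": 4, "light": 2, "quiet": 7},
        {"name": "B", "size": 7, "light": 6, "quiet": 5},
        {"name": "C", "size": 9, "light": 9, "quiet": 9},  # "optimal", but never examined
    ]
    aspirations = {"size": 5, "light": 5, "quiet": 5}
    print(satisfice(apartments, aspirations)["name"])  # prints "B"

The search stops at B even though C dominates it on every criterion, which is exactly the point: finding C would require evaluating the whole choice set, and that is the cost satisficing avoids.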

Simon's Challenge to Economics

Simon's work posed a fundamental challenge to economic theory because it suggested that the rational actor model was not merely a simplification but a distortion. If people satisfice rather than optimize, then many of the conclusions derived from optimization models may be wrong. Market equilibria derived from utility maximization may not accurately describe market behavior. Policy recommendations based on rational actor assumptions may be misguided.

The economics profession largely deflected Simon's challenge. Friedman's methodological defense (that unrealistic assumptions are acceptable if they generate good predictions) provided a convenient shield. Many economists acknowledged that Simon had a point about individual decision-making but argued that market competition would still produce aggregate outcomes close to the rational actor prediction. Simon received the Nobel Prize in Economics in 1978, but his ideas remained marginal to mainstream economics for decades. It would take a different line of research, coming from psychology rather than organizational theory, to crack the rational actor model decisively.


Kahneman and Tversky: The Heuristics and Biases Revolution

The intellectual partnership between Daniel Kahneman and Amos Tversky, which began at the Hebrew University of Jerusalem in 1969 and continued until Tversky's death in 1996, produced what is arguably the most important body of work in the social sciences of the past fifty years. Their research did not merely identify specific ways in which people deviate from rationality; it identified the systematic cognitive mechanisms that produce those deviations and demonstrated that these mechanisms operate universally across diverse populations and contexts.

The Heuristics Program

Kahneman and Tversky's initial research program focused on heuristics, the mental shortcuts that people use to make judgments under uncertainty. Three heuristics proved particularly important.

The availability heuristic leads people to estimate the frequency or probability of events based on how easily examples come to mind. Events that are vivid, recent, or emotionally striking are easily recalled and therefore judged as more common than they actually are. This explains why people consistently overestimate the probability of dramatic causes of death (airplane crashes, terrorist attacks, shark attacks) and underestimate the probability of mundane causes (heart disease, diabetes, car accidents). The availability heuristic is not stupid; in many environments, events that are easily recalled are in fact more common. But when media coverage, personal experience, or emotional salience systematically distort what is easily recalled, the heuristic produces systematic errors.
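
A small calculation shows how the mechanism distorts judgment. The figures below are invented purely for illustration; what matters is only that coverage (and hence ease of recall) is wildly disproportionate to actual frequency.

    # Toy illustration of the availability heuristic. The death counts and
    # coverage rates below are invented, not real statistics.
    true_deaths = {"heart disease": 1000, "car accidents": 60, "shark attacks": 1}
    coverage_rate = {"heart disease": 0.001, "car accidents": 0.05, "shark attacks": 0.9}

    # What is easy to recall is roughly what gets covered, not what happens.
    recalled = {cause: n * coverage_rate[cause] for cause, n in true_deaths.items()}

    total_true = sum(true_deaths.values())
    total_recalled = sum(recalled.values())
    for cause in true_deaths:
        print(f"{cause}: true share {true_deaths[cause] / total_true:.1%}, "
              f"'available' share {recalled[cause] / total_recalled:.1%}")
    # Shark attacks are a rounding error in reality but loom large in memory.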

The representativeness heuristic leads people to judge probabilities based on similarity to a prototype rather than on base rates. In one of Kahneman and Tversky's most famous experiments, participants were told that a person named "Steve" was described by a neighbor as "very shy and withdrawn, invariably helpful but with little interest in people or in the world of reality. A meek and tidy soul, he has a need for order and structure." Participants overwhelmingly judged Steve as more likely to be a librarian than a farmer, even though farmers outnumber librarians by a factor of more than twenty in the United States. The description was "representative" of the librarian stereotype, and participants substituted representativeness for probability, ignoring base rate information entirely.
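
The same example can be made quantitative with Bayes' rule. The roughly 20:1 base rate comes from the passage above; the likelihood ratio (how much more probable the description is for a librarian than for a farmer) is a made-up figure chosen to be generous to the stereotype.

    # Base rates versus representativeness, in odds form.
    # Only the ~20:1 ratio comes from the text; the likelihood ratio is assumed.
    prior_odds = 1 / 20        # librarian : farmer, per the base rates above
    likelihood_ratio = 4       # assume the description is 4x likelier for a librarian
    posterior_odds = prior_odds * likelihood_ratio        # = 0.2, i.e. 1:5 against
    p_librarian = posterior_odds / (1 + posterior_odds)   # about 0.17
    print(f"P(librarian | description) = {p_librarian:.2f}")
    # Even a strongly "librarian-like" description leaves Steve probably a farmer.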

The anchoring heuristic leads people to make estimates by starting from an initial value (the anchor) and adjusting from it, typically insufficiently. In a striking demonstration, Kahneman and Tversky had participants spin a rigged wheel of fortune that always landed on either 10 or 65, then asked them to estimate the percentage of African nations in the United Nations. Participants who saw the wheel land on 10 gave average estimates around 25%, while those who saw it land on 65 gave average estimates around 45%. An obviously arbitrary and irrelevant number influenced their estimates by a factor of nearly two. The anchoring effect has been demonstrated in contexts ranging from judicial sentencing (judges are influenced by the prosecutor's sentence recommendation even when it is obviously extreme) to real estate pricing (buyers are influenced by the listing price even when it is obviously inflated) to salary negotiations (the first number mentioned anchors the negotiation range).

Prospect Theory: A New Model of Decision Under Risk

Kahneman and Tversky's most influential theoretical contribution was prospect theory, published in 1979 in Econometrica, one of the leading economics journals. Prospect theory was not merely a critique of expected utility theory; it was a comprehensive alternative model of how people actually make decisions involving risk and uncertainty.

Prospect theory identified three fundamental features of actual human decision-making that expected utility theory could not accommodate.

Reference dependence: People evaluate outcomes as gains or losses relative to a reference point (usually their current situation) rather than as absolute levels of wealth. This means that a person who gains $1,000 and a person who loses $1,000 are not simply $2,000 apart in wealth; they are experiencing psychologically different situations. The first is experiencing a gain; the second is experiencing a loss. And these experiences are not symmetric.

Loss aversion: Losses loom larger than equivalent gains. Kahneman and Tversky estimated that, on average, the pain of losing $100 is roughly twice as intense as the pleasure of gaining $100. This asymmetry has enormous consequences for behavior. It explains why people are reluctant to sell investments at a loss (even when holding the investment is irrational), why negotiations often stall (because each side perceives concessions as losses that loom larger than the gains from agreement), and why people are willing to pay more to avoid a loss than to acquire an equivalent gain.

Probability weighting: People do not weight probabilities linearly. They overweight small probabilities and underweight large probabilities. This explains both insurance purchasing (overweighting the small probability of a catastrophic loss) and lottery ticket purchasing (overweighting the small probability of a windfall gain). Under expected utility theory, the same person should not both buy insurance (risk aversion) and buy lottery tickets (risk seeking). Under prospect theory, both behaviors follow naturally from the shape of the probability weighting function.
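
These three features can be made concrete with the parametric forms Tversky and Kahneman later estimated in their 1992 cumulative prospect theory paper. The sketch below uses their published parameter estimates (alpha of about 0.88, lambda of about 2.25, gamma of about 0.61) and applies the weighting function in a simplified, non-cumulative way, purely to illustrate the 1979 ideas rather than to reproduce either paper exactly.

    # Illustrative prospect theory value and probability-weighting functions.
    # Functional forms and parameters follow Tversky & Kahneman (1992); the
    # weighting is applied in a simplified, non-cumulative way for clarity.
    ALPHA, LAMBDA, GAMMA = 0.88, 2.25, 0.61

    def value(x):
        """Reference-dependent value: concave for gains, convex and steeper for losses."""
        return x ** ALPHA if x >= 0 else -LAMBDA * ((-x) ** ALPHA)

    def weight(p):
        """Inverse-S weighting: overweights small probabilities, underweights large ones."""
        return p ** GAMMA / ((p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA))

    # Loss aversion: a 50/50 gamble to win or lose $100 has negative prospect value.
    print(round(weight(0.5) * value(100) + weight(0.5) * value(-100), 1))  # about -30

    # Probability weighting: a 1-in-1000 chance gets far more than 0.001 decision weight.
    print(round(weight(0.001) / 0.001, 1))  # roughly 14x overweighted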

Prospect theory was published in Econometrica partly because Kahneman and Tversky deliberately framed their work in the language of economics, using axioms, mathematical notation, and direct comparison with expected utility theory. This strategic decision, aimed at the economics audience rather than the psychology audience, proved crucial for the eventual impact of their work on economic theory.


Richard Thaler and the Institutionalization of Behavioral Economics

While Kahneman and Tversky provided the theoretical and empirical foundations, it was Richard Thaler, an economist at the University of Chicago (ironically, the institutional home of rational actor economics), who did more than anyone else to transform behavioral economics from a psychological research program into a recognized branch of economics with institutional standing, policy influence, and a named Nobel Prize.

The Anomalies Program

Thaler's most influential early contribution was his "Anomalies" column in the Journal of Economic Perspectives, which ran from 1987 to 2006. Each column documented a well-established empirical finding that violated the predictions of standard economic theory. Topics included the endowment effect (people value things more highly simply because they own them), mental accounting (people treat money differently depending on arbitrary categorization, such as spending a tax refund frivolously while carefully budgeting earned income), the equity premium puzzle (stocks have historically outperformed bonds by far more than rational risk models can explain), and dozens of other anomalies.

The cumulative effect of the Anomalies columns was devastating to the rational actor model's claim to descriptive accuracy. Each individual anomaly could potentially be explained away as a special case or a measurement artifact. But the sheer volume of anomalies, each documented through rigorous experiments and field data, made it increasingly difficult to maintain that human behavior is approximately rational. Thaler's strategy was not to make a single dramatic argument against the rational actor model but to erode its credibility through an accumulation of evidence so vast that maintaining the model required ignoring more and more of the available data.

The Endowment Effect and Mental Accounting

Two of Thaler's contributions deserve special attention for their practical importance.

The endowment effect, demonstrated experimentally by Thaler together with Kahneman and Jack Knetsch in a series of studies in the late 1980s, showed that people demand significantly more to give up something they own than they would pay to acquire the same item. In one classic experiment, students given a coffee mug demanded an average of $7.12 to sell it, while students without a mug were willing to pay an average of only $2.87 to buy one. The difference cannot be explained by transaction costs, income effects, or strategic behavior; it reflects a genuine psychological asymmetry between the pain of losing something you have and the pleasure of gaining something you don't have. The endowment effect has profound implications for markets, negotiations, and policy design, because it means that the status quo has a built-in psychological advantage that is not captured by standard economic models.

Mental accounting describes the cognitive processes by which people categorize, evaluate, and keep track of financial activities. Standard economic theory assumes that money is fungible, meaning a dollar is a dollar regardless of its source or intended use. Mental accounting violates fungibility systematically. People create mental accounts for different categories of spending (groceries, entertainment, savings, vacation) and treat money differently depending on which account it belongs to. They are more willing to spend a tax refund ($1,000 "found money") than to spend $1,000 from their savings account, even though the money is economically identical. They are more willing to drive across town to save $10 on a $20 item than to save $10 on a $500 item, even though $10 is $10 regardless of the base price.

Mental accounting matters because it means that the framing of financial decisions (how gains and losses are categorized, how options are bundled or unbundled, how payments are timed) affects behavior in ways that are not predicted by standard models. This insight has been widely applied in marketing (bundling, pricing just below round numbers, "free shipping" with higher item prices), compensation design (the psychological impact of bonuses versus salary increases), and policy (the effect of tax refunds versus equivalent reductions in withholding).


Nudge Theory and the Policy Revolution

The practical culmination of behavioral economics came with the publication of Nudge: Improving Decisions About Health, Wealth, and Happiness by Thaler and legal scholar Cass Sunstein in 2008. The book proposed that insights from behavioral economics could be used to design "choice architectures" (the environments in which people make decisions) in ways that guide people toward better outcomes without restricting their freedom of choice.

The Logic of Nudging

A nudge is any aspect of the choice architecture that predictably alters people's behavior without forbidding any options or significantly changing their economic incentives. Nudges work by leveraging the same cognitive tendencies that cause people to make mistakes, but redirecting those tendencies toward better outcomes.

Default options are the most powerful nudge. Research has consistently shown that people are far more likely to accept the default option than to actively opt out, even when opting out is easy and costless. This tendency reflects a combination of status quo bias, procrastination, and implicit endorsement (people assume the default is recommended). The practical implications are enormous. When retirement savings plans use opt-in enrollment (employees must actively sign up), participation rates are typically 40-60%. When the same plans use opt-out enrollment (employees are automatically enrolled unless they actively choose to opt out), participation rates rise to 90% or higher. The only difference is the default; the options, costs, and benefits are identical.

Framing effects can be used to present information in ways that promote better decisions. Telling patients that a surgical procedure has a "90% survival rate" produces different decisions than telling them it has a "10% mortality rate," even though the information is identical. Labeling food with calorie counts, presenting energy use relative to neighbors, and displaying retirement savings as monthly income rather than lump sums all leverage framing to guide decisions.

Social norms can be leveraged by informing people about what others do. In a famous field experiment conducted by Robert Cialdini's team, hotel towel reuse increased significantly when signs said "Most guests in this room reuse their towels" compared to standard environmental appeals. Telling taxpayers that "most people in your area have already paid their taxes" increases timely payment rates. These interventions work because people are strongly influenced by perceived social norms, a tendency that can be redirected toward socially beneficial behavior.

Behavioral Insights in Government

Thaler and Sunstein's framework was adopted by governments worldwide. The United Kingdom established the Behavioural Insights Team (BIT, popularly known as the "Nudge Unit") in 2010, originally housed within the Cabinet Office. BIT conducted randomized controlled trials testing behavioral interventions across a range of policy domains. Its early successes included increasing tax collection by sending letters that informed late payers what percentage of their neighbors had already paid (increasing payment rates by 5 percentage points), increasing organ donor registration by changing the registration website's design, and increasing employment among job seekers by restructuring the job center experience around commitment and planning.

The Obama administration created the Social and Behavioral Sciences Team (SBST) in 2014, applying behavioral insights to federal programs. Australia, Canada, Singapore, the Netherlands, and many other countries established similar units. The World Bank created a "Mind, Behavior, and Development Unit" to apply behavioral insights to development policy. The proliferation of these units represented a remarkable transformation: insights from laboratory psychology experiments were being implemented at the scale of national governments, affecting millions of people's decisions about savings, health, education, and compliance with the law.

Policy Applications Beyond the Nudge Units

The policy influence extends far beyond nudge units. Behavioral economics has reshaped thinking about retirement policy (automatic enrollment in savings plans, escalation programs that automatically increase contribution rates), healthcare (simplifying insurance enrollment, using default prescriptions for generic drugs, redesigning medical consent forms), consumer protection (mandatory cooling-off periods for major purchases, standardized disclosure formats for financial products), energy policy (home energy reports comparing usage to neighbors, default enrollment in green energy programs), and education (simplifying financial aid applications, sending reminder texts about enrollment deadlines).

The scale of these interventions is worth emphasizing. Automatic enrollment in 401(k) retirement plans, arguably the single most consequential nudge, has been estimated to increase total retirement savings in the United States by hundreds of billions of dollars over the next few decades. The U.K. Behavioural Insights Team's tax letter interventions have accelerated billions of pounds in tax payments. These are not marginal effects; they represent significant shifts in economic behavior achieved through relatively inexpensive changes to choice architecture.


The Resistance: How Mainstream Economics Fought Back

Behavioral economics did not triumph without fierce opposition. The economics establishment, particularly the Chicago School, mounted a sustained defense of the rational actor model that lasted decades and, in some quarters, continues today.

Friedman's Methodological Defense

Milton Friedman's argument that unrealistic assumptions are acceptable if they generate good predictions remained the primary defense of the rational actor model throughout the debates. Chicago economists argued that even if laboratory experiments revealed cognitive biases in individual decision-making, market behavior could still be approximately rational because markets provide feedback that corrects errors, competition eliminates irrational actors, and arbitrage opportunities eliminate mispricings.

This defense had some merit. Markets do provide feedback, and competition does punish some forms of irrationality. But the defense also had serious weaknesses that behavioral economists systematically exploited. Many important economic decisions (choosing a career, buying a house, saving for retirement, selecting a health insurance plan) are made too infrequently for market feedback to correct errors. Arbitrage has limits: correcting mispricings requires capital, and in extreme cases (like the 2008 financial crisis), rational arbitrageurs can be overwhelmed by irrational trends before prices revert. And the claim that aggregate behavior is rational even when individuals are not requires that individual errors cancel out, which the heuristics and biases research showed is precisely not the case, because cognitive biases push errors in predictable directions rather than random ones.

The Replication Crisis

Behavioral economics also faced challenges from within. The broader replication crisis in psychology, which erupted around 2010 when attempts to reproduce classic psychology experiments produced alarmingly high failure rates, raised questions about the reliability of some behavioral economics findings. Several prominent findings, including ego depletion (the idea that self-control is a depletable resource) and social priming (the idea that subtle environmental cues unconsciously influence behavior), failed to replicate in larger, more rigorous studies.

These replication failures were not catastrophic for behavioral economics as a whole. The core findings, including loss aversion, anchoring, the availability heuristic, the endowment effect, framing effects, and the power of defaults, have replicated consistently across dozens of studies in multiple countries. But the replication crisis did lead to greater methodological rigor, including pre-registration of studies, larger sample sizes, and more emphasis on replication. It also tempered some of the more extravagant claims made by behavioral economics popularizers and encouraged more careful distinction between well-established findings and preliminary results.

Paternalism Concerns

A separate line of criticism attacked nudge theory on philosophical rather than empirical grounds. Critics argued that nudging is paternalistic, because choice architects are making judgments about what constitutes a "better" decision for other people. Libertarian critics argued that even "libertarian paternalism" (Thaler and Sunstein's term for nudging that preserves choice) involves a presumption that experts know better than individuals what is good for them. Progressive critics argued that nudging can be used to serve corporate or government interests rather than citizen welfare, and that focusing on individual choice architecture distracts from structural reforms that would address the root causes of poor outcomes.

These criticisms have merit. The design of defaults, frames, and choice architectures inevitably involves value judgments about what outcomes are desirable, and these judgments are made by technocrats rather than by the people affected. The line between nudging (which preserves choice) and manipulation (which undermines it) is not always clear. And the focus on individual decision-making can distract from structural factors like poverty, inequality, and institutional design that may matter more than cognitive biases for many policy outcomes.


The Current State and Future Directions

Behavioral economics in the 2020s is no longer a rebel challenging the establishment; it is part of the establishment. Kahneman received the Nobel Prize in Economics in 2002 (Tversky, who died in 1996, could not share it). Thaler received the Nobel in 2017. Behavioral economics courses are taught at virtually every major university. Behavioral insights teams operate in governments across the world. The concepts of cognitive biases, nudges, and behavioral interventions have entered the popular vocabulary.

Yet significant questions remain unresolved. The field continues to debate the boundary between useful heuristics (quick-and-dirty decision rules that work well in many environments) and harmful biases (systematic errors that damage decision quality). Gerd Gigerenzer and his colleagues at the Max Planck Institute for Human Development have argued that many of the heuristics identified by Kahneman and Tversky are actually effective decision strategies when evaluated in realistic environments rather than in contrived laboratory settings. The "fast and frugal heuristics" research program shows that simple decision rules can outperform complex optimization in uncertain environments with limited information, suggesting that boundedly rational behavior may be more rational than it appears.

The integration of behavioral economics with neuroscience through the emerging field of neuroeconomics promises to ground behavioral findings in neural mechanisms, potentially resolving debates about the nature and universality of cognitive biases. The application of behavioral insights to increasingly complex domains, including climate change, artificial intelligence governance, and pandemic response, tests the scalability and robustness of the field's insights. And the growing sophistication of digital choice architectures (recommendation algorithms, personalized pricing, targeted advertising) raises urgent questions about who controls the nudging and in whose interest it operates.

Era       | Key Development                    | Key Figures              | Core Insight
1738      | Expected utility theory            | Daniel Bernoulli         | Diminishing marginal utility explains risk aversion
1944      | Game theory and axiomatic utility  | Von Neumann, Morgenstern | Rational choice can be formalized mathematically
1950s     | Bounded rationality                | Herbert Simon            | Cognitive limits force satisficing over optimizing
1970s     | Heuristics and biases              | Kahneman, Tversky        | Systematic cognitive shortcuts produce predictable errors
1979      | Prospect theory                    | Kahneman, Tversky        | Loss aversion, reference dependence, probability weighting
1980s-90s | Anomalies program                  | Richard Thaler           | Accumulated evidence against rational actor assumptions
2008      | Nudge theory                       | Thaler, Sunstein         | Choice architecture can guide better decisions
2010s     | Government behavioral units        | BIT, SBST, etc.          | Behavioral insights applied at policy scale

The journey from the confident rationalism of mid-twentieth-century economics to the nuanced understanding of human decision-making that behavioral economics provides represents one of the great intellectual achievements of our era. It has not only changed how scholars understand economic behavior; it has changed how governments design policies, how companies design products, how doctors present treatment options, and how individuals understand their own minds. The recognition that human beings are not the rational calculators of economic textbooks, but are instead complex, predictable, and deeply human in their reasoning, is an insight whose consequences are still unfolding.


References and Further Reading

  1. Kahneman, D. & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263-291. https://doi.org/10.2307/1914185

  2. Thaler, R. H. & Sunstein, C. R. (2008). Nudge: Improving Decisions About Health, Wealth, and Happiness. Yale University Press. https://yalebooks.yale.edu/book/9780300122237/nudge/

  3. Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux. https://us.macmillan.com/books/9780374533557/thinkingfastandslow

  4. Simon, H. A. (1955). A behavioral model of rational choice. Quarterly Journal of Economics, 69(1), 99-118. https://doi.org/10.2307/1884852

  5. Thaler, R. H. (2015). Misbehaving: The Making of Behavioral Economics. W.W. Norton. https://wwnorton.com/books/Misbehaving/

  6. Tversky, A. & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124-1131. https://doi.org/10.1126/science.185.4157.1124

  7. Friedman, M. (1953). The methodology of positive economics. In Essays in Positive Economics. University of Chicago Press. https://press.uchicago.edu/ucp/books/book/chicago/E/bo25773835.html

  8. Gigerenzer, G. (2008). Rationality for Mortals: How People Cope with Uncertainty. Oxford University Press. https://global.oup.com/academic/product/rationality-for-mortals-9780195329490

  9. Behavioural Insights Team. (2015). The Behavioural Insights Team Update Report 2013-2015. https://www.bi.team/publications/

  10. Thaler, R. H. (1999). Mental accounting matters. Journal of Behavioral Decision Making, 12(3), 183-206. https://doi.org/10.1002/(SICI)1099-0771(199909)12:3<183::AID-BDM318>3.0.CO;2-F

  11. Kahneman, D., Knetsch, J. L., & Thaler, R. H. (1990). Experimental tests of the endowment effect and the Coase theorem. Journal of Political Economy, 98(6), 1325-1348. https://doi.org/10.1086/261737

  12. Sunstein, C. R. (2014). Why Nudge? The Politics of Libertarian Paternalism. Yale University Press. https://yalebooks.yale.edu/book/9780300197860/why-nudge/

  13. Lewis, M. (2017). The Undoing Project: A Friendship That Changed Our Minds. W.W. Norton. https://wwnorton.com/books/The-Undoing-Project/

  14. Fama, E. F. (1970). Efficient capital markets: A review of theory and empirical work. Journal of Finance, 25(2), 383-417. https://doi.org/10.2307/2325486

  15. Von Neumann, J. & Morgenstern, O. (1944). Theory of Games and Economic Behavior. Princeton University Press. https://press.princeton.edu/books/paperback/9780691130613/theory-of-games-and-economic-behavior