In 1968, biologist Garrett Hardin published a short paper in the journal Science that would reshape the way social scientists thought about shared resources for decades. "The Tragedy of the Commons" argued, with the elegance of a mathematical proof, that shared resources are inevitably destroyed by the accumulation of individually rational choices. Every individual user gains when they take more from a common resource, while the costs of overuse are distributed across all users. Therefore, rational individuals will always extract more than is collectively sustainable, and the commons will collapse. The logic was airtight. The conclusion was bleak. The only solutions were privatization or coercion.

In 1968, Elinor Ostrom was a political scientist at Indiana University who had already begun studying how actual communities manage actual shared resources. She would spend the next two decades doing something that was then unusual in economics and political science: going to look at what was really happening in fishing communities in coastal Maine and Turkey, in irrigation systems that had functioned for centuries in Spain and the Philippines, in Swiss alpine meadows shared by mountain villages for generations, in groundwater basins in California. What she found was that Hardin's tragedy was not inevitable. Communities had developed sophisticated, self-designed institutions for managing shared resources — institutions that were neither privatized property nor top-down government regulation, but something else entirely: collective self-governance, built from local knowledge and enforced through social norms and mutual monitoring.

Ostrom's findings, published as "Governing the Commons" in 1990, earned her the Nobel Prize in Economic Sciences in 2009 — the first woman to receive it. They did not merely add nuance to Hardin; they challenged the entire theoretical framework that made the tragedy seem logically necessary. The tragedy of the commons, Ostrom showed, is a failure of governance, not a law of nature.

"What we have ignored is what citizens can do and the importance of real involvement of the people involved, versus just having outsiders coming in and changing things." — Elinor Ostrom, Nobel Prize Interview (2009)


| Collective Action Problem | Description | Real-World Example |
| --- | --- | --- |
| Public goods problem | Rational actors free ride on others' contributions | Funding public broadcasting; open-source software |
| Commons tragedy | Shared resource degraded by individual overuse | Overfishing; groundwater depletion |
| Prisoner's dilemma | Mutual defection is rational but leaves all worse off | Arms races; price wars |
| Coordination problem | Multiple equilibria; actors need to coordinate on one | Driving on same side of road; language standards |
| Assurance game | Would contribute if others do; uncertainty prevents action | Vaccination; joining a strike |

Key Definitions

Collective action problem — A situation in which a group of individuals would benefit from cooperation, but each individual has an incentive to free ride on the contributions of others, such that the individually rational outcome is collectively suboptimal or catastrophic.

Free rider problem — The tendency for individuals to benefit from a collective good without contributing to its provision. Free riding is individually rational (you get the benefit without the cost) but collectively destructive (if everyone free rides, the collective good is not produced).

Public good — A good that is non-excludable (you cannot prevent people from using it once it is provided) and non-rival (one person's use does not reduce what is available to others). National defense, clean air, and basic scientific knowledge are examples. Public goods are vulnerable to free riding because providers cannot charge for use.

Common pool resource — A good that is non-excludable (difficult to prevent anyone from using) but rival (one person's use diminishes availability for others). Fisheries, aquifers, and forests are examples. Common pool resources are vulnerable to overuse because users cannot be excluded but each unit consumed depletes the stock.

Club good — A good that is excludable (users can be charged or denied access) but non-rival up to a congestion point. Toll roads, private clubs, and streaming services are examples.

Private good — A good that is both excludable and rival. Most ordinary market goods are private goods.

Prisoner's dilemma — A game-theoretic model in which two players each choose to cooperate or defect. Each player does better by defecting regardless of what the other does, but both players do worse when both defect than when both cooperate. The prisoner's dilemma structure underlies many collective action problems.

Iterated game — A game that is played repeatedly between the same players. In iterated games, cooperation can emerge because players have incentives to maintain reputations and avoid triggering retaliation that will harm them in future rounds.

Social capital — The networks of trust, norms, and reciprocity that enable collective action and social cooperation. Robert Putnam distinguished between bonding social capital (within groups) and bridging social capital (between groups).


Mancur Olson and the Logic of Collective Action

Before Hardin's commons, the foundational text in the social science of collective action was Mancur Olson's 'The Logic of Collective Action,' published in 1965. Olson's argument was more general than Hardin's and in some ways more disturbing: groups do not automatically act in their collective interest, even when members share clear common interests and are fully aware of them.

The logic is the free rider problem applied to interest groups and collective goods. Suppose you are a member of a large group that would all benefit from some collective good — lower tariffs, cleaner air, a public park. The good, once provided, is available to all members whether or not they contributed. From any individual's perspective, the rational strategy is to let others bear the costs of organizing and lobbying while still enjoying the benefit. If everyone reasons this way, the collective good is never provided, even though everyone would benefit.
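This free-rider logic can be sketched as a standard public goods game. The numbers below are made up for illustration (100 players, an endowment of 10, a multiplier of 1.6): contributing is always individually dominated, yet everyone contributing beats everyone free riding.

```python
# Sketch of an N-player public goods game (illustrative numbers, not from the text).
# Each player holds an endowment of 10 and chooses whether to contribute it.
# Contributions are multiplied by 1.6 and shared equally among all n players.

def payoff(contributes: bool, others_contributing: int, n: int = 100,
           endowment: float = 10.0, multiplier: float = 1.6) -> float:
    """One player's payoff, given how many of the other n-1 players contribute."""
    pot = (others_contributing + (1 if contributes else 0)) * endowment
    share = multiplier * pot / n          # everyone gets an equal share of the pot
    kept = 0.0 if contributes else endowment
    return kept + share

n = 100
# Whatever the others do, you are better off free riding: your own contribution
# of 10 returns only 1.6 * 10 / 100 = 1.6 to you personally.
for k in (0, 50, 99):
    assert payoff(False, k, n) > payoff(True, k, n)

# Yet universal contribution beats universal free riding:
all_contribute = payoff(True, n - 1, n)   # everyone contributes: 1.6 * 1000 / 100 = 16 each
none_contribute = payoff(False, 0, n)     # nobody contributes: everyone keeps 10
assert all_contribute > none_contribute
```

The gap between the individual's marginal return (1.6) and the group's marginal return (16) is exactly the incentive to let others bear the cost.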

Olson observed that small groups face this problem less severely than large ones. In a small group, each member's contribution makes a visible difference to the outcome, monitoring defectors is easy, and social pressure is effective. In a large group — a million consumers, all taxpayers, the global population — any individual's contribution is negligible, defection is anonymous, and social pressure cannot reach everyone. This explains a puzzle in democratic politics: why concentrated, small interests (like an industry association representing a few hundred firms) consistently win political battles against diffuse, large interests (like consumers or taxpayers), even when the diffuse interest is much larger in aggregate terms. The industry can solve its collective action problem; the consumers cannot.

Olson identified selective incentives as the main way large groups overcome free riding: providing private benefits to members that non-members cannot access. Labor unions provide legal representation and job protection. Professional associations provide certifications, networking, and information. The environmental organization provides a magazine and a tote bag. The selective benefit is not the group's main purpose, but it is often what makes membership individually rational.


The Tragedy of the Commons Revisited

Garrett Hardin's 1968 article was published in Science — the flagship journal of American science — and reached an enormous audience. Its core argument is worth examining precisely, because its influence was based as much on its rhetorical power as on its empirical accuracy.

Hardin asked readers to imagine a pasture open to all, used by a community of herders. Each herder benefits from adding another animal to the pasture: the individual gains a full animal's profit, while the cost — incremental degradation of the shared pasture — is shared among all. Every herder, reasoning this way, adds animals without restraint. The pasture is destroyed. "Therein is the tragedy," Hardin wrote: "each man is locked into a system that compels him to increase his herd without limit — in a world that is limited."
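Hardin's arithmetic can be made explicit with hypothetical numbers — say, a profit of 1 per added animal against total grazing damage of 3, shared evenly among all herders:

```python
# Hardin's herder arithmetic with hypothetical numbers: each added animal earns
# its owner a profit of 1.0 while imposing total grazing damage of 3.0 on the
# shared pasture, split evenly among all n herders.

def net_gain_to_herder(n_herders: int, profit: float = 1.0, damage: float = 3.0) -> float:
    """Private net gain from adding one animal: full profit minus a 1/n share of damage."""
    return profit - damage / n_herders

def net_gain_to_community(profit: float = 1.0, damage: float = 3.0) -> float:
    """Collective net gain from the same animal: full profit minus full damage."""
    return profit - damage

# Once the group is large enough, adding the animal is individually rational...
for n in (10, 50, 100):
    assert net_gain_to_herder(n) > 0
# ...even though the community as a whole loses on every animal added.
assert net_gain_to_community() < 0
```

Note that the individual's calculus only flips when the group is small enough that the damage share exceeds the profit — which is one reason small groups escape the tragedy more easily.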

The model is analytically clean, and it does capture a real dynamic. Open-access fisheries, where anyone can fish without limit, have historically collapsed. The Atlantic cod fishery, once the most productive in the world, was destroyed by overexploitation in the late twentieth century precisely because access was too open and governance too weak. The atmosphere, as a dumping ground for greenhouse gases, is a global commons subject to Hardin's tragedy.

But Hardin's model made a crucial conflation that Ostrom spent her career documenting: the commons in his story is an open-access resource, with no governance whatsoever — no rules about who can use it, how much, or when. Actual historical commons were not open access. The English village commons, the Swiss Allmend, the Japanese iriai — these were not open to everyone; they were governed by detailed customary rules specifying who had rights, what could be taken, when, and how violations were sanctioned. The commons in Hardin's story was missing the institutions that defined actual commons.

The title of Ostrom's book, "Governing the Commons," made the point in three words: the commons does not have to be tragic if it is governed. The question is not commons versus private property or government regulation; the question is what governance arrangements make sustainable management possible.


Ostrom's Design Principles

Ostrom's research across dozens of case studies produced a set of design principles — conditions that characterize successful self-governing institutions for common pool resources. These are empirical generalizations, not a blueprint, but they have proven robust across many contexts.

Clearly Defined Boundaries

Successful commons institutions specify clearly who is entitled to use the resource and who is not. This boundary is not necessarily geographic; it is social. The fishing community that has sustainably managed the local fishery for generations has typically established, through formal or informal means, which families or households have harvesting rights. Outsiders — whether other fishing communities, commercial operators, or state authorities — are excluded or regulated. Without clear boundaries, the institution cannot function because enforcement requires knowing who is accountable to whom.

Rules Fit to Local Conditions

Rules are proportional to local ecological and social conditions rather than imported from general blueprints. Irrigation rules that work in the semi-arid Valencia huerta of Spain would be inappropriate for a wet Philippine rice system. Rules about how much can be harvested depend on local stock levels and seasonal patterns. One-size-fits-all rules imposed from outside fail because they cannot incorporate the local knowledge that makes rules practically workable.

Collective Choice Arrangements

Those who are governed by the rules participate meaningfully in making and modifying them. This is not merely democratic principle but practical necessity: the people managing the resource have the most relevant information about its condition and the effects of different rules. When rules are imposed from outside without user input, they tend not to fit local conditions and tend not to be seen as legitimate, reducing voluntary compliance. When users help design the rules, they are more likely to consider them fair and to enforce them on each other.

Monitoring and Graduated Sanctions

Effective governance requires monitoring — someone has to be watching whether rules are being followed. In Ostrom's case studies, this monitoring was often done by the users themselves, not by external authorities: fishers monitoring other fishers, irrigators watching other irrigators. This mutual monitoring is efficient (local users have information outsiders lack) and self-reinforcing (it builds the norms of mutual observation that deter violation).

Sanctions should be graduated — light for first offenses, escalating for repeat violations. Heavy punishment for a minor first infraction feels unjust and generates resistance; graduated sanctions signal that the community is watching and serious, without treating minor violations as existential threats.


Game Theory and the Evolution of Cooperation

The prisoner's dilemma is the canonical model of collective action failure. Two players each choose to cooperate or defect. If both cooperate, both receive a moderate reward. If both defect, both receive a low payoff. If one defects while the other cooperates, the defector receives a high reward and the cooperator receives the worst outcome. In a single game between rational players who will never meet again, defection is the dominant strategy — you do better by defecting regardless of what the other player does.
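The payoff structure just described can be written down concretely. The values below are the conventional illustrative payoffs (temptation 5, reward 3, punishment 1, sucker's payoff 0):

```python
# Standard one-shot prisoner's dilemma payoffs for the row player, indexed by
# (my_move, their_move); 'C' = cooperate, 'D' = defect.
PAYOFF = {
    ('C', 'C'): 3,   # reward for mutual cooperation (R)
    ('C', 'D'): 0,   # sucker's payoff (S)
    ('D', 'C'): 5,   # temptation to defect (T)
    ('D', 'D'): 1,   # punishment for mutual defection (P)
}

# Defection strictly dominates: whatever the other player does, 'D' pays more.
for their_move in ('C', 'D'):
    assert PAYOFF[('D', their_move)] > PAYOFF[('C', their_move)]

# Yet mutual defection (1 each) is worse for both than mutual cooperation (3 each).
assert PAYOFF[('D', 'D')] < PAYOFF[('C', 'C')]
```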

Robert Axelrod, a political scientist at the University of Michigan, ran a landmark computer tournament in the early 1980s in which researchers were invited to submit strategies for playing iterated prisoner's dilemma — the same game repeated many times between the same players. The winning strategy, across multiple tournament rounds, was submitted by psychologist Anatol Rapoport: tit-for-tat. Cooperate on the first move. After that, do whatever the other player did on the previous move: cooperate if they cooperated, defect if they defected. Return to cooperation as soon as the other player does.

Axelrod's analysis, published in 'The Evolution of Cooperation' (1984), showed why tit-for-tat performs well. It is cooperative (it never defects first). It is retaliatory (defection is punished immediately). It is forgiving (past defection does not permanently poison the relationship). It is clear and interpretable (other strategies can predict its behavior). In populations of diverse strategies, tit-for-tat can invade and spread even from a small initial presence, because clusters of cooperators score highly against each other while losing only narrowly to exploiters.
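The dynamics Axelrod described can be sketched in a few lines of Python, using his standard tournament payoffs (3 for mutual cooperation, 1 for mutual defection, 5/0 for exploitation):

```python
# Iterated prisoner's dilemma sketch. PAYOFF maps (a_move, b_move) to (a_score, b_score).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5), ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then copy the opponent's previous move."""
    return 'C' if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return 'D'

def play(strategy_a, strategy_b, rounds=200):
    """Run a match and return (score_a, score_b)."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa; score_b += pb
        hist_a.append(a); hist_b.append(b)
    return score_a, score_b

# Two tit-for-tat players sustain cooperation for the whole match...
assert play(tit_for_tat, tit_for_tat) == (600, 600)
# ...mutual defection earns far less...
assert play(always_defect, always_defect) == (200, 200)
# ...and tit-for-tat loses to a pure exploiter by only a single round's margin.
assert play(tit_for_tat, always_defect) == (199, 204)
```

The last comparison is the key to tit-for-tat's tournament success: it gains enormously when it meets cooperators and gives up almost nothing when it meets defectors.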

The general lesson is that cooperation can evolve and sustain itself when players interact repeatedly, recognize each other, and can observe and remember each other's history. These conditions are exactly what Ostrom's self-governing communities create: ongoing interaction among identified participants with shared history and social accountability.


Collective Action in Practice: Climate, Vaccines, and Labor

Climate Change as Collective Action Problem

Climate change is perhaps the largest and most intractable collective action problem in human history. Each nation — and within nations, each firm and individual — benefits from emitting greenhouse gases (through the economic activity they enable) while the costs of climate change are distributed across all nations and all future generations. The free rider problem operates at every scale: internationally, nations have incentives to free ride on other nations' emissions reductions; domestically, individuals have incentives to free ride on public decarbonization efforts.

International climate agreements like the Paris Agreement are attempts to coordinate action on this collective problem. They face the fundamental challenges Olson identified: the group is large, enforcement is weak, and defection (not meeting emissions targets) is largely unpunished. The most hopeful recent developments — renewable energy cost declines that make clean energy economically attractive independent of climate goals — reduce the collective action problem by making cooperation individually rational rather than requiring sustained self-sacrifice.

Vaccination and Herd Immunity

Vaccination is a collective action problem with a specific structure: once a sufficient share of a population is vaccinated (achieving herd immunity), even the unvaccinated are protected, because the pathogen cannot find enough susceptible hosts to spread. This makes herd immunity a public good. The individually rational strategy — absent social pressure, requirements, or altruism — is to remain unvaccinated and free ride on the herd immunity created by others' vaccinations, avoiding any risk from the vaccine itself. When too many people free ride, herd immunity collapses and the disease spreads.
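The "sufficient share" has a textbook formula under the simple SIR epidemic approximation: transmission stops once the immune fraction exceeds 1 − 1/R0, where R0 is the pathogen's basic reproduction number. The R0 values below are rough illustrative figures, not precise estimates:

```python
# Textbook herd-immunity threshold under the simple SIR approximation:
# an epidemic cannot sustain itself once the immune fraction exceeds 1 - 1/R0.
def herd_immunity_threshold(r0: float) -> float:
    if r0 <= 1:
        return 0.0            # with R0 <= 1 an outbreak dies out on its own
    return 1 - 1 / r0

# Illustrative R0 values (rough textbook ranges):
for disease, r0 in [("seasonal influenza", 1.3), ("polio", 6.0), ("measles", 15.0)]:
    print(f"{disease}: R0 = {r0}, threshold = {herd_immunity_threshold(r0):.0%}")

# The more contagious the disease, the less room there is for free riders:
assert herd_immunity_threshold(15.0) > herd_immunity_threshold(1.3)
```

A highly contagious disease like measles leaves only a few percent of slack before herd immunity fails, which is why even modest free riding can reopen the door to outbreaks.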

Vaccine mandates, school enrollment requirements, and social norms around vaccination are mechanisms for solving this collective action problem by changing the individual payoff calculation. Their effectiveness depends on enforcement and on the degree of social norm internalization — the extent to which people vaccinate because they consider it their social obligation rather than merely because they are required to.

Labor Organizing

Labor unions are textbook collective action organizations. Workers would collectively benefit from higher wages and better working conditions. But each individual worker faces an incentive to free ride: if the union wins better terms, all workers benefit regardless of whether they joined or paid dues. This free rider problem is why union membership is often made compulsory as a condition of employment (the union shop) and why the legal framework for organizing is a crucial determinant of union density. The National Labor Relations Act of 1935 (the Wagner Act) in the United States created legal protections for organizing that helped solve the collective action problem; subsequent legislation limiting union security arrangements (such as Taft-Hartley in 1947) made it harder.

For more on the underlying mechanisms of human social cooperation, see /explainers/how-it-works/why-humans-cooperate. For the philosophical dimensions of justice in collective settings, see /culture/ethics-values-society-culture/what-is-justice. For collective action and climate justice specifically, see /culture/ethics-values-society-culture/what-is-climate-justice.


Digital Coordination and Its Limits

The internet and social media have dramatically reduced the transaction costs of organizing — the costs of finding others with shared interests, communicating, and coordinating. This should, in principle, make collective action easier. The evidence is mixed.

Digital tools have enabled genuinely impressive collective action: rapid mobilization for disaster relief, coordination of large protests, whistleblower networks, and Wikipedia (one of the largest voluntary collective efforts in history). The Arab Spring protests of 2010-2011, in which social media played a significant organizational role, seemed to show that digital tools could overcome the collective action problems that authoritarian governments used to suppress dissent.

Zeynep Tufekci, in 'Twitter and Tear Gas' (2017), offered a more nuanced analysis. Social media lowers the barrier to initial mobilization — it becomes possible to assemble large crowds quickly. But this ease of assembly can be a weakness as well as a strength: movements that mobilize quickly without building durable organizational capacity can also dissipate quickly when repression or division arises. The civil rights movement of the 1950s and 1960s built organizational strength over years of unglamorous work; social media movements can go from zero to massive and back to zero without building the institutional infrastructure that sustains long-term change.

The collective action of disinformation is a reminder that digital coordination can be used against collective interests as well as for them. Coordinated networks of accounts can manipulate platform algorithms, shift public perception, and undermine the epistemic commons of shared facts — a collective action problem in which the "bad actors" solve their own coordination problem at the expense of everyone else.


References

  • Olson, Mancur. The Logic of Collective Action: Public Goods and the Theory of Groups. Harvard University Press, 1965.
  • Hardin, Garrett. "The Tragedy of the Commons." Science 162, no. 3859 (1968): 1243-1248. https://doi.org/10.1126/science.162.3859.1243
  • Ostrom, Elinor. Governing the Commons: The Evolution of Institutions for Collective Action. Cambridge University Press, 1990.
  • Axelrod, Robert. The Evolution of Cooperation. Basic Books, 1984.
  • Putnam, Robert D. Bowling Alone: The Collapse and Revival of American Community. Simon and Schuster, 2000.
  • Granovetter, Mark. "Threshold Models of Collective Behavior." American Journal of Sociology 83, no. 6 (1978): 1420-1443. https://doi.org/10.1086/226707
  • Tufekci, Zeynep. Twitter and Tear Gas: The Power and Fragility of Networked Protest. Yale University Press, 2017.
  • Ostrom, Elinor. "Beyond Markets and States: Polycentric Governance of Complex Economic Systems." Nobel Prize Lecture, December 8, 2009. https://www.nobelprize.org/prizes/economic-sciences/2009/ostrom/lecture/
  • Kaul, Inge, Isabelle Grunberg, and Marc Stern, eds. Global Public Goods: International Cooperation in the 21st Century. Oxford University Press, 1999.
  • Dietz, Thomas, Elinor Ostrom, and Paul C. Stern. "The Struggle to Govern the Commons." Science 302, no. 5652 (2003): 1907-1912. https://doi.org/10.1126/science.1091015

Frequently Asked Questions

What is the collective action problem?

The collective action problem arises whenever a group of people would benefit from cooperating to produce or protect something, but each individual has an incentive to let others bear the costs while still enjoying the benefits. The problem is not that people are purely selfish — it is that the structure of incentives systematically undermines cooperation even when everyone would prefer that cooperation succeed.

The logic is precise. Consider a group of fishers sharing a lake. If all fish moderately, the lake remains productive indefinitely and all benefit. But from any individual fisher's perspective, restraint is costly (fewer fish today) while the benefit — a healthier lake — is shared with everyone else. If one person restrains and others do not, the restrained person bears a private cost while everyone captures the gain. The individually rational strategy is to fish without restraint and hope others restrain themselves. But when everyone reasons this way, everyone overfishes and the lake collapses — an outcome that is collectively worse than cooperation.

The economist Mancur Olson formalized this structure in 'The Logic of Collective Action' (1965). He showed that groups do not automatically act in their collective interest even when members share common interests and are fully aware of them. Larger groups face worse collective action problems than smaller ones, because the individual's contribution is a smaller share of the total, and because monitoring and enforcement are harder. Olson's analysis explained why small, concentrated interests (like an industry lobby) often defeat large, diffuse interests (like consumers) in political processes: the industry can more easily solve its collective action problem.

Solutions to collective action problems generally involve changing the incentive structure through regulation, privatization, social norms, or the self-governing institutions that Elinor Ostrom spent her career documenting.

What is the tragedy of the commons?

The tragedy of the commons is a theoretical model, introduced by biologist Garrett Hardin in a 1968 article in Science, describing how shared resources can be destroyed by individually rational but collectively self-defeating behavior. Hardin asked readers to imagine a pasture open to all. Each herder benefits from adding another animal to the pasture: the benefit of an additional animal accrues entirely to the individual herder, while the cost — the additional grazing pressure on the shared pasture — is distributed across all herders. Each herder, reasoning this way, adds animals without limit, and the pasture is destroyed.

Hardin's model was enormously influential, shaping policy debates about fisheries, grazing rights, pollution, and eventually global commons like the atmosphere. His conclusion — that the commons is inevitably tragic — pointed toward two solutions: privatization (assign property rights so that individuals internalize costs) or government regulation (coerce compliance with sustainable use limits).

Hardin's framing had significant problems, however. His 'commons' was actually an open-access resource — available to anyone without restriction. Actual historical commons were typically governed by community rules about who had access and how much they could take. English medieval commons, Swiss alpine pastures, and Japanese fishing communities all developed elaborate institutions to manage shared resources sustainably over centuries. The 'tragedy' described conditions of no governance, not the commons as a form of social organization.

Elinor Ostrom's empirical research, which won the Nobel Prize in Economic Sciences in 2009, directly challenged Hardin's conclusion. Ostrom found case after case of communities successfully governing common pool resources without either privatization or top-down regulation — through self-designed institutions, mutual monitoring, and graduated sanctions. The tragedy of the commons, she showed, is not inevitable: it is a failure of governance, not a law of nature.

How do communities solve collective action problems?

Communities solve collective action problems through a diverse repertoire of mechanisms that change the incentive structure, increase monitoring, build trust, and reduce the costs of coordination.

Elinor Ostrom's research across dozens of case studies — Swiss alpine meadows, Japanese fishing villages, irrigation systems in Spain and the Philippines — identified eight design principles characteristic of successful self-governing institutions for common pool resources. Well-functioning institutions define clearly who is entitled to use the resource. They have rules proportional to local conditions — one-size-fits-all rules imposed from outside tend to fail. Critically, the people most affected by the rules participate in making and modifying them. Effective monitoring ensures that rule violations can be detected — often by the users themselves rather than external officials. Graduated sanctions start small and escalate with repeat violations, allowing communities to respond proportionally rather than applying heavy punishment that may seem unjust and generate resistance. Accessible conflict-resolution mechanisms allow disputes to be settled cheaply. External government authorities recognize the community's right to organize its own affairs. And for larger systems, nested governance structures coordinate across scales.

Beyond Ostrom's institutional framework, communities use social norms and reputation mechanisms to solve collective action problems. When individuals know they will have repeated interactions with the same people — when the shadow of the future is long — cooperation becomes individually rational because defection damages one's reputation and triggers retaliation. Robert Axelrod's computer tournament experiments ('The Evolution of Cooperation,' 1984) showed that in repeated prisoner's dilemma games, simple reciprocity strategies ('tit-for-tat') outperform more aggressive strategies and sustain cooperation over time.

Social capital — the networks of trust, norms, and reciprocity documented by Robert Putnam — is both a product and a precondition of collective action. Communities with dense social networks and high trust are better able to solve collective action problems, and successfully solving collective action problems builds the trust and norms that make future cooperation easier.

What did Elinor Ostrom discover?

Elinor Ostrom was an American political scientist whose empirical research on how communities manage shared natural resources earned her the Nobel Prize in Economic Sciences in 2009 — the first woman to receive this award. Her work fundamentally challenged the conventional wisdom, dominant since Hardin's 1968 paper and Olson's 1965 book, that common pool resources inevitably collapse without either private property rights or government control.

Ostrom did what economists rarely did: she went to look at actual cases. She studied irrigation systems in Spain and the Philippines that had been functioning sustainably for centuries, fishing communities in Maine and Turkey, Swiss alpine meadows shared by mountain villages, Japanese fisheries, and groundwater basins in California. What she found was that communities had developed sophisticated institutions for managing shared resources — institutions that were neither private property nor government regulation but something more complex: self-organized, community-designed governance systems with their own rules, monitoring arrangements, and sanctioning mechanisms.

Her major contribution, 'Governing the Commons' (1990), synthesized these findings into a theory of polycentric governance — the idea that governance problems are best solved through multiple overlapping institutions at multiple scales, rather than by top-down central regulation or market mechanisms alone. She showed that communities could design effective governance institutions when they had the autonomy to do so, when rules fit local conditions, and when users had genuine participation in rule-making.

Ostrom also contributed the conceptual distinction between different types of goods — private goods, public goods, common pool resources, and club goods — and showed why different types require different governance approaches. Her framework moved the field beyond the false dichotomy of market versus state and toward a richer understanding of how communities actually solve collective problems.

Why do some collective action attempts succeed and others fail?

The success or failure of collective action depends on a combination of group characteristics, institutional design, and the structural features of the problem itself.

Group size and homogeneity matter significantly. Smaller groups can monitor members more easily and maintain higher levels of social pressure. Larger groups face more severe free rider problems because any individual's contribution is a smaller fraction of the total and because enforcement is harder. Heterogeneous groups — with members who have different stakes, values, or resources — find it harder to agree on rules and distribution of costs and benefits, making collective action more difficult.

The structure of the underlying problem shapes the difficulty of coordination. Some collective action problems have the structure of a chicken game or an assurance game, in which coordination is easier once some people commit. Others have the structure of a pure prisoner's dilemma, where defection is always individually better regardless of what others do, making cooperation harder to sustain. Threshold effects — documented by sociologist Mark Granovetter in his 1978 threshold model — mean that collective action can rapidly cascade once a critical mass commits, or rapidly collapse if that threshold is not reached. This explains why protests, bank runs, and political rebellions can appear suddenly and be difficult to predict.

Institutional design features identified by Ostrom are empirically associated with success: clear boundaries, proportional rules, collective choice, monitoring, graduated sanctions, and conflict resolution mechanisms. Absence of these features — particularly when rules are imposed without community input or when violations are not monitored — predicts failure.

External conditions matter too. Government recognition of community rights and autonomy enables self-governance; governments that undermine local institutions without providing effective alternatives often produce resource collapse. The degree of uncertainty about resource dynamics also matters — high uncertainty makes it harder to design proportional rules and monitor compliance.
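Granovetter's threshold model can be sketched directly: each person joins once current participation reaches their personal threshold, and the process iterates to a fixed point. The second example below reproduces his well-known observation that shifting a single person's threshold can abort an entire cascade.

```python
# Granovetter-style threshold cascade: person i joins once the number of current
# participants reaches their personal threshold. Iterate until nothing changes.
def cascade(thresholds):
    participants = 0
    while True:
        joined = sum(1 for t in thresholds if t <= participants)
        if joined == participants:
            return participants      # fixed point: no one else will join
        participants = joined

# Granovetter's example: thresholds 0, 1, 2, ..., 99 produce a full cascade —
# each joiner tips the next person over their threshold.
assert cascade(list(range(100))) == 100

# Nudge the one person with threshold 1 up to 2, and the cascade stalls
# after the single threshold-0 instigator acts.
assert cascade([0, 2] + list(range(2, 100))) == 1
```

The two runs differ by one person's threshold, yet the outcomes are a mass movement versus a lone actor — which is why cascades are so hard to predict from average preferences alone.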

What does game theory say about cooperation?

Game theory is the mathematical study of strategic interaction — situations where one person's best choice depends on what others do. Its most famous model for analyzing cooperation is the prisoner's dilemma: two players each choose to cooperate or defect, with each doing better by defecting regardless of what the other does, but both doing worse when both defect than when both cooperate. The Nash equilibrium is mutual defection, even though mutual cooperation would be better for both.

The prisoner's dilemma is often presented as explaining why cooperation is impossible, but this conclusion depends critically on the assumption that the game is played only once. In a one-shot game between strangers, defection is rational. In a repeated game between players who expect to interact again, cooperation can emerge and sustain itself.

Robert Axelrod's influential computer tournament in the early 1980s asked researchers to submit strategies for playing the repeated prisoner's dilemma against each other. The winning strategy, across multiple tournaments, was 'tit-for-tat': cooperate on the first move, then do whatever the other player did on the previous move. Tit-for-tat is cooperative (starts by cooperating), retaliatory (punishes defection immediately), forgiving (returns to cooperation once the other player cooperates again), and clear (easy to understand). Axelrod's experiments demonstrated that cooperation can evolve from selfish agents when interactions are repeated and strategies are transparent.

Game theory also explains how social norms enforce cooperation. When defection from norms triggers punishment — including costly punishment by third parties who are not directly affected — cooperation can be sustained even in large groups where direct reciprocity cannot reach. Experimental economics has documented 'altruistic punishment' in ultimatum games: people will pay a cost to punish others for unfair behavior even when they will never interact again. This suggests that human psychology is adapted for sustaining cooperation in social groups, not just for maximizing immediate individual payoffs.