In 2009, Toby Ord, a philosophy lecturer at Oxford University, sat down to do some arithmetic. He calculated that if he donated 10% of his career earnings to highly effective charities — not a crushing sacrifice for an academic on a reasonable salary — he could fund treatment for approximately 80,000 people suffering from trachoma, the bacterial infection that causes blindness and is curable for a few dollars per person. Eighty thousand people. Not eighty thousand people helped in some vague and unmeasurable way, but eighty thousand people prevented from going blind. He was twenty-nine years old, had never given much away, and the calculation stopped him.
The question he asked himself was simple: if you could prevent 80,000 people from going blind, shouldn't you? He founded a pledge organization called Giving What We Can, committed to donating at least 10% of his income for the rest of his life, and began recruiting others to make the same promise. Within a few years, he had connected with the philosopher Peter Singer, the Oxford researcher William MacAskill, and a growing community of people who believed that charitable giving, like everything else, should be subject to evidence, reasoning, and the attempt to do the most good possible. The movement that grew from this circle would eventually call itself effective altruism.
The movement's story is not simply a story of moral seriousness and philosophical ingenuity. In November 2022, the most prominent and celebrated figure in effective altruism, a young cryptocurrency billionaire named Sam Bankman-Fried, was revealed to have committed what prosecutors described as one of the largest financial frauds in American history. The questions that followed — about the ethics of consequentialism, the dangers of certainty in one's own calculations of the good, and what happens when an ethics of doing the most good meets the corrupting effects of power and money — have not yet been fully answered.
"The question is not whether I am doing enough. The question is whether I am doing as much as I morally should." — Peter Singer, Famine, Affluence, and Morality (1972)
| EA Cause Area | Core Argument | Leading Organizations |
|---|---|---|
| Global health and poverty | Huge gains possible at low cost in poor countries | GiveWell; Against Malaria Foundation |
| Animal welfare | Factory farming causes enormous suffering | Animal Charity Evaluators |
| Existential risk | Catastrophic risks could end all future value | Future of Humanity Institute; MIRI |
| Biosecurity | Engineered pandemics as catastrophic risk | Johns Hopkins CHS; Nucleic Acid Observatory |
| AI safety | Misaligned AI as potential existential threat | Center for Human-Compatible AI |
Key Definitions
Effective altruism (EA): A philosophical and practical project that uses evidence and careful reasoning to identify and act on the most effective ways to benefit others; associated with a community of researchers, philanthropists, and practitioners organized around this goal.
Cause prioritization: The practice of systematically comparing different cause areas (global health, animal welfare, existential risk, etc.) using criteria such as scale (how many are affected), neglectedness (how little attention it receives), and tractability (how addressable it is), to identify where additional effort will do the most good.
GiveWell: A charity evaluator that assesses charities using evidence-based methods and recommends only those meeting high standards of cost-effectiveness (typically only a few percent of the organizations it evaluates).
Earning to give: The EA strategy of pursuing high-income careers (in finance, tech, etc.) for the purpose of donating a large share of income to highly effective charities, rather than working directly in the social sector.
Longtermism: The view that improving the long-term future is a dominant moral priority, given the potentially vast number of people who will exist in the future and the disproportionate importance of reducing existential risks.
Existential risk: Risks that could cause human extinction or permanent civilizational collapse — including engineered pandemics, advanced artificial intelligence misalignment, and nuclear war — which longtermists argue deserve priority because of the enormous number of future lives at stake.
Giving What We Can: The pledge organization founded by Toby Ord in 2009, whose members commit to giving at least 10% of their income to highly effective charities.
Against Malaria Foundation (AMF): A consistently top-rated GiveWell charity that distributes insecticide-treated bed nets in sub-Saharan Africa; regularly cited as among the most cost-effective life-saving interventions available.
GiveDirectly: An EA-endorsed charity that provides unconditional cash transfers directly to very poor households in sub-Saharan Africa, allowing recipients to spend according to their own priorities.
Peter Singer's Argument
The intellectual foundation of effective altruism is Peter Singer's 1972 essay "Famine, Affluence, and Morality," written in response to the humanitarian catastrophe in East Bengal that accompanied the Bangladesh Liberation War, when millions of refugees faced death from starvation and disease. Singer's argument begins with a moral premise that almost everyone accepts: suffering and death from lack of food, shelter, and medical care are bad. From this he constructs a principle: "If it is in our power to prevent something bad from happening, without thereby sacrificing anything of comparable moral importance, we ought, morally, to do it."
To make this principle vivid, Singer invents a thought experiment that has become one of the most discussed in contemporary ethics. Suppose you are walking to work and you pass a shallow pond in which a small child is drowning. You can wade in and save the child at the cost of ruining your expensive new shoes and being late to work. Virtually everyone agrees: you are morally required to save the child. Your inconvenience is not of comparable moral importance to the child's death.
Now, Singer asks: what morally relevant difference is there between that child and a child dying of malaria in Malawi? Both deaths are preventable. Both are tragedies. The primary difference is distance: the drowning child is in front of you; the Malawi child is thousands of miles away. But why should distance matter morally? Your action does not cost more, and the child does not matter less, because of the physical distance between you. If distance is not morally relevant, Singer concludes, then you are as obligated to prevent the Malawi child's death as the drowning child's, and you can do so at the cost of a few thousand dollars donated to an effective malaria prevention charity: many times the cost of the shoes, but still nothing of comparable moral importance to a child's life.
Singer's conclusion is demanding: people who have money to spare above their basic needs are morally required to give substantial amounts to effective charities until giving more would sacrifice something of comparable moral importance to what they are preventing. This implies giving far more than typical charitable giving norms suggest. Singer himself gives approximately 25% of his income to effective charities.
The argument's power lies in its logical structure. You cannot accept the premises and reject the conclusion without identifying a relevant moral difference between the cases — and Singer argues persuasively that proximity, identifiability, and nationality are not morally relevant differences. Critics have argued that the argument proves too much (it demands a level of self-sacrifice that is psychologically unsustainable), proves too little (it addresses individual giving but not structural change), or rests on an oversimplified consequentialism. But its influence on the effective altruism movement has been foundational.
William MacAskill and the Founding of EA
William MacAskill, a Scottish philosopher, arrived at Oxford as a graduate student in 2007 and encountered Peter Singer's argument while also reading about careers and impact. He became convinced that the most important question a person could ask was not "what am I good at?" or "what do I enjoy?" but "what career path will allow me to have the most positive impact on the world?"
MacAskill co-founded 80,000 Hours with Benjamin Todd in 2011 (a nonprofit that provides career advice aimed at directing talented people toward the most impactful career paths) and, with Toby Ord, the Centre for Effective Altruism, becoming the most prominent intellectual architect of the effective altruism movement. His 2015 book "Doing Good Better" made the EA framework accessible to a general audience and became the movement's canonical introduction.
The EA framework MacAskill developed centers on three criteria for cause prioritization: scale (how many people are affected, and how seriously?), neglectedness (how much attention and funding does the cause already receive?), and tractability (how much progress can be made with additional effort?). These three criteria, applied systematically, are intended to identify cause areas that are large in impact, undersupported relative to their importance, and responsive to additional resources.
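The framework lends itself to a toy calculation. The sketch below scores hypothetical cause areas on the three criteria and combines them multiplicatively; every number is invented for illustration and reflects no actual EA, GiveWell, or 80,000 Hours assessment:

```python
# Toy sketch of the scale / neglectedness / tractability framework.
# All scores are invented for illustration only.

causes = {
    # cause: (scale, neglectedness, tractability), each scored 0-10
    "global health":    (8, 6, 9),
    "animal welfare":   (9, 8, 5),
    "existential risk": (10, 9, 3),
}

def priority_score(scale, neglectedness, tractability):
    """Multiplicative combination: a cause must do reasonably well on all
    three criteria to score highly; a zero on any one zeroes it out."""
    return scale * neglectedness * tractability

ranked = sorted(causes.items(),
                key=lambda item: priority_score(*item[1]),
                reverse=True)
for name, scores in ranked:
    print(f"{name:18s} score = {priority_score(*scores)}")
```

The multiplicative form is one design choice among several; it encodes the intuition that a cause failing badly on any single criterion is a poor target for marginal effort, however well it does on the others.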
Applied in the early EA movement, this framework pointed strongly toward global health interventions: the scale of preventable death and suffering from malaria, diarrheal disease, and malnutrition in low-income countries is enormous; these causes are relatively neglected compared to health in wealthy countries; and the evidence base for cost-effective interventions is strong. GiveWell's evaluations operationalized this reasoning, identifying specific charities with strong evidence and high cost-effectiveness.
GiveWell and the Evidence Infrastructure
GiveWell was founded in 2006 by Holden Karnofsky and Elie Hassenfeld, two former hedge fund analysts who were frustrated by the absence of rigorous evidence in the charitable sector and decided to create the kind of analytical infrastructure that would allow evidence-based giving. Their methodology involves deep due diligence: reviewing a charity's programs, evidence base, financial management, and room for additional funding, and synthesizing this into a cost-effectiveness estimate — typically in terms of dollars per life saved or per unit of health improvement.
The findings from GiveWell's research have been consistently striking. The organization has found that only a small fraction of charities — typically 2-3% of those investigated — meet their standards for recommendation. Most charities either have insufficient evidence that their programs work, or implement programs for which the evidence is weak or negative, or have organizational issues that undermine their impact. GiveWell's discovery that most charitable giving is not evidence-based was itself one of the most important contributions of the EA movement.
GiveWell's consistent top charities have included the Against Malaria Foundation, which distributes long-lasting insecticide-treated bed nets for malaria prevention; GiveDirectly, which provides unconditional cash transfers; the Malaria Consortium's seasonal malaria chemoprevention program; and Helen Keller International's vitamin A supplementation programs. The cost-effectiveness estimates for AMF have typically been in the range of $3,000-$5,000 per life saved — a figure that is both extraordinarily favorable compared to health spending in wealthy countries (where the cost per quality-adjusted life year often exceeds $100,000) and the subject of methodological debate among development economists.
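The size of that gap can be checked with back-of-the-envelope arithmetic. The sketch below uses the midpoint of the range above and an assumed 35 healthy life-years per averted child death; both numbers are illustrative inputs, not GiveWell outputs:

```python
# Back-of-the-envelope comparison using illustrative figures:
# ~$4,000 per life saved (midpoint of the $3,000-$5,000 range for bed nets)
# vs. ~$100,000 per quality-adjusted life year (QALY) in wealthy countries.

donation = 1_000_000             # a hypothetical $1M grant

cost_per_life_amf = 4_000        # assumed midpoint, dollars per life saved
cost_per_qaly_rich = 100_000     # typical wealthy-country figure, dollars per QALY
qalys_per_life = 35              # rough assumption: healthy years per young life saved

lives_via_nets = donation / cost_per_life_amf
qalys_via_nets = lives_via_nets * qalys_per_life
qalys_via_rich_system = donation / cost_per_qaly_rich

print(f"Bed nets: ~{lives_via_nets:.0f} lives, ~{qalys_via_nets:.0f} QALYs")
print(f"Wealthy-country marginal health spending: ~{qalys_via_rich_system:.0f} QALYs")
```

Under these stylized inputs the same grant buys several hundred times more health benefit abroad, which is the intuition GiveWell's far more careful models attempt to quantify.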
The cash transfer model represented a particular philosophical commitment within EA: GiveDirectly's approach embodies the belief that poor people are better judges of their own needs than external organizations, and that giving them money directly respects their agency. Randomized controlled trials of GiveDirectly's programs have found significant positive effects on consumption, food security, and psychological wellbeing, with no evidence of the negative effects (increased alcohol consumption, reduced work incentives) that critics of cash transfer programs sometimes assert.
Earning to Give
The "earning to give" strategy — choosing a high-income career specifically in order to donate a large share of earnings to highly effective charities — was one of the more controversial EA ideas to gain mainstream attention. The underlying logic is straightforward: if the goal is to do the most good, and if the limiting factor in global health interventions is funding, then a person who earns $500,000 per year in finance and gives $200,000 of it to AMF may save more lives than a person who works directly for a development organization at $50,000 per year. The indirect path through financial success may produce more impact than the direct path through service.
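The comparison in that paragraph is easy to make explicit. A minimal sketch, assuming a round $4,000 cost per life saved (an assumption for illustration, not a GiveWell figure):

```python
# Stylized version of the earning-to-give arithmetic above.

cost_per_life = 4_000        # assumed dollars per life saved by an effective charity
finance_donation = 200_000   # the article's hypothetical donation from a $500k salary

lives_funded_per_year = finance_donation / cost_per_life
print(f"${finance_donation:,} donated per year funds ~{lives_funded_per_year:.0f} lives saved per year")

# For the comparison to favor direct work, the $50k development worker's annual
# counterfactual impact would have to exceed this figure -- an estimate that is
# hard to make and much debated within EA.
```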
The strategy attracted criticism on several grounds. It seemed to grant ethical legitimacy to participation in financial industries whose social value is contested, to treat human impact as fungible in a way that ignored the importance of personal engagement and relationships, and to provide cover for people who wanted to pursue conventional high-status careers without guilt. EA proponents responded that the criticism often rested on the intuition that direct engagement is more virtuous than indirect financial support — an intuition that, on reflection, seems to reflect feelings about the agent rather than consequences for the recipient.
In practice, earning to give has been de-emphasized somewhat in recent EA discourse, partly because the strategy works only if the career path genuinely generates excess money to give (not all lucrative careers meet this test) and partly because EA has become convinced that talent in high-impact careers — research, policy, organizational leadership — is often more limiting than funding in the most important cause areas.
Longtermism and Existential Risk
As the effective altruism community matured and its analytical tools were applied to cause areas beyond global health, it increasingly turned toward what its members came to see as the most important question of all: the long-term trajectory of civilization.
The argument for longtermism begins with a thought experiment about scale. The number of people alive today is approximately 8 billion. If civilization persists for another million years — a modest assumption given the multi-billion-year lifespan of Earth's habitable period — and if population remains at anything like current levels, the number of people who will eventually live could be on the order of a hundred trillion, and vastly more if civilization spreads beyond Earth. Even if we heavily discount future welfare relative to present welfare, the expected value of improving the long-run trajectory of civilization is potentially far larger than that of any present intervention.
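The scale claim is simple multiplication. A sketch under stylized assumptions (population held at current levels, an 80-year lifespan, one million years of persistence; longtermists cite far larger totals if civilization outlasts Earth or settles space):

```python
# Rough expected-value arithmetic behind the longtermist argument.
# All inputs are stylized assumptions, not anyone's published estimates.

current_population = 8e9   # people alive today
lifespan_years = 80        # assumed average lifespan
persistence_years = 1e6    # assumed future duration of civilization

# People alive at any time, times the number of non-overlapping "generations":
future_people = current_population * (persistence_years / lifespan_years)
print(f"Future people under these assumptions: {future_people:.1e}")

# Expected lives saved by shaving a tiny amount off extinction probability:
risk_reduction = 1e-6  # a one-in-a-million reduction in extinction probability
expected_lives = future_people * risk_reduction
print(f"Expected future lives saved: {expected_lives:.1e}")
```

The controversial step is not the multiplication but the premises: that such probabilities are meaningful, that future lives count fully, and that expected value is the right decision rule at these extremes.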
This reasoning, combined with the cause-prioritization criteria (scale, neglectedness, tractability), directed EA attention toward "existential risks" — risks that could cause human extinction or permanent civilizational collapse. These include misaligned artificial general intelligence (AI that pursues goals catastrophically misaligned with human values), engineered biological weapons capable of pandemic-scale harm, nuclear war, and other potentially civilization-ending scenarios. Against the vast potential future that would be lost if any of these risks materialized, longtermists argue that even a small reduction in their probability is extremely valuable — more valuable, in expectation, than saving many lives today.
MacAskill's 2022 book "What We Owe the Future" brought longtermism to a mainstream audience and reached major bestseller lists. It argued that future generations deserve moral consideration equal to present ones, that the current moment is "especially influential" in determining long-run trajectory (because we are making early choices about transformative technologies), and that this implies major investment in AI safety, biosecurity, and other existential risk reduction.
The Sam Bankman-Fried Collapse
The collapse of FTX in November 2022 was a crisis not only for cryptocurrency markets but for the effective altruism movement. Sam Bankman-Fried (SBF), the founder of FTX and its affiliated trading firm Alameda Research, had been celebrated in EA circles as a living proof of the earning-to-give concept: a young man who had read Peter Singer as a student, committed to giving away most of his fortune, and was actually doing it, having donated hundreds of millions of dollars to EA-aligned causes, pandemic preparedness, and political campaigns for biosecurity-focused candidates.
When FTX's liquidity crisis became public in November 2022, it emerged rapidly that Bankman-Fried had used customer deposits — money belonging to FTX users — to fund trading positions through Alameda Research, without disclosing this to customers. Approximately $8 billion of customer funds were missing. FTX filed for bankruptcy on November 11, 2022. SBF was arrested in the Bahamas in December and extradited to the United States. In November 2023, he was convicted on seven counts of fraud and conspiracy. In March 2024, he was sentenced to 25 years in prison.
The scandal raised several specific questions about EA. In interviews before and during his trial, Bankman-Fried had described a reasoning framework that critics recognized as a distorted form of EA consequentialism: a willingness to violate ordinary moral rules (do not use other people's money without their consent) if the expected positive consequences were large enough. This "ends justify means" logic is not endorsed by mainstream EA — MacAskill and other movement leaders explicitly repudiated it — but critics argued that EA's utilitarian framework, with its emphasis on expected value calculations, creates structural vulnerability to exactly this kind of rationalization.
The movement's dependence on Bankman-Fried's money was also exposed: EA organizations had received tens of millions of dollars from FTX-affiliated sources, and several were suddenly facing budget crises. The image of an ethics movement funded by what turned out to be stolen funds was deeply damaging.
Critiques and Responses
The substantive criticisms of effective altruism go beyond the Bankman-Fried case and deserve serious engagement.
Amia Srinivasan's 2015 essay "Stop the Robot Apocalypse," a review of MacAskill's Doing Good Better, argued that EA's framework systematically underweights political and structural change. By focusing on individually quantifiable interventions — bed nets, cash transfers — EA accepts the existing global economic order as a given and works within it, directing private charity to address symptoms while ignoring the structural causes of global inequality. This critique has real force: global poverty is not simply a problem of insufficient resources but of unjust trade rules, colonial legacies, tax policy, debt obligations, and political power. The EA response — that political change is less tractable and harder to measure than direct interventions — is defensible but also convenient for a movement whose major funders are themselves major beneficiaries of the existing economic order.
Timnit Gebru and other critics of AI safety-focused EA have argued that longtermism, by directing enormous resources toward speculative future risks, systematically deprioritizes concrete present harms — including the documented harms of current AI systems to marginalized communities, workers, and political processes. The AI safety movement, they argue, is dominated by a particular demographic (wealthy, white, male, Silicon Valley) whose concerns about speculative future AGI risks happen to align with the interests of the AI industry. EA proponents respond that existential risks deserve priority by definition, and that present harms and future risks are not necessarily in competition.
The demandingness objection — that Singer's argument implies giving far more than is psychologically realistic — has been addressed by EA advocates through emphasis on sustainability: the movement encourages people to give at levels they can maintain for life (the Giving What We Can pledge is 10%, not everything above subsistence). But the gap between EA's theoretical commitments and the actual giving levels of most self-identified EA members is substantial, raising questions about the relationship between philosophical commitment and behavioral change.
See also: What Is the Meaning of Life, Consequentialism: Outcomes Justify Actions, What Is Justice
References
- Singer, P. (1972). Famine, affluence, and morality. Philosophy and Public Affairs, 1(3), 229–243.
- MacAskill, W. (2015). Doing Good Better: Effective Altruism and a Radical New Way to Make a Difference. Guardian Faber.
- MacAskill, W. (2022). What We Owe the Future. Basic Books.
- Ord, T. (2020). The Precipice: Existential Risk and the Future of Humanity. Hachette Books.
- Singer, P. (2009). The Life You Can Save. Random House.
- Srinivasan, A. (2015, September 24). Stop the robot apocalypse. London Review of Books, 37(18).
- Bostrom, N. (2013). Existential risk prevention as global priority. Global Policy, 4(1), 15–31. https://doi.org/10.1111/1758-5899.12002
- GiveWell. (2023). Top Charities. https://www.givewell.org/charities/top-charities
- Haushofer, J., & Shapiro, J. (2016). The short-term impact of unconditional cash transfers to the poor: Experimental evidence from Kenya. Quarterly Journal of Economics, 131(4), 1973–2042. https://doi.org/10.1093/qje/qjw025
- Pummer, T. (2023). The Rules of Rescue: Cost, Distance, and Effective Altruism. Oxford University Press. https://doi.org/10.1093/oso/9780190884147.001.0001
- Berkey, B. (2018). The institutional critique of effective altruism. Utilitas, 30(2), 143–171. https://doi.org/10.1017/S0953820817000176