In 2009, Toby Ord, a philosophy lecturer at Oxford University, sat down to do some arithmetic. He calculated that if he donated 10% of his career earnings to highly effective charities — not a crushing sacrifice for an academic on a reasonable salary — he could fund treatment for approximately 80,000 people suffering from trachoma, the bacterial infection that causes blindness and is curable for a few dollars per person. Eighty thousand people. Not eighty thousand people helped in some vague and unmeasurable way, but eighty thousand people prevented from going blind. He was twenty-nine years old, had never given much away, and the calculation stopped him.
The question he asked himself was simple: if you could prevent 80,000 people from going blind, shouldn't you? He founded a pledge organization called Giving What We Can, committed to donating at least 10% of his income for the rest of his life, and began recruiting others to make the same promise. Within a few years, he had connected with the philosopher Peter Singer, the Oxford researcher William MacAskill, and a growing community of people who believed that charitable giving, like everything else, should be subject to evidence, reasoning, and the attempt to do the most good possible. The movement that grew from this circle would eventually call itself effective altruism.
The movement's story is not simply a story of moral seriousness and philosophical ingenuity. In November 2022, the most prominent and celebrated figure in effective altruism, a young cryptocurrency billionaire named Sam Bankman-Fried, was revealed to have committed what prosecutors described as one of the largest financial frauds in American history. The questions that followed — about the ethics of consequentialism, the dangers of certainty in one's own calculations of the good, and what happens when an ethics of doing the most good meets the corrupting effects of power and money — have not yet been fully answered.
> "The question is not whether I am doing enough. The question is whether I am doing as much as I morally should." — Peter Singer, "Famine, Affluence, and Morality" (1972)
| EA Cause Area | Core Argument | Leading Organizations |
|---|---|---|
| Global health and poverty | Huge gains possible at low cost in poor countries | GiveWell; Against Malaria Foundation |
| Animal welfare | Factory farming causes enormous suffering | Animal Charity Evaluators |
| Existential risk | Catastrophic risks could end all future value | Future of Humanity Institute; MIRI |
| Biosecurity | Engineered pandemics as catastrophic risk | Johns Hopkins CHS; Nucleic Acid Observatory |
| AI safety | Misaligned AI as potential existential threat | Center for Human-Compatible AI |
Key Definitions
Effective altruism (EA): A philosophical and practical project that uses evidence and careful reasoning to identify and act on the most effective ways to benefit others; associated with a community of researchers, philanthropists, and practitioners organized around this goal.
Cause prioritization: The practice of systematically comparing different cause areas (global health, animal welfare, existential risk, etc.) using criteria such as scale (how many are affected), neglectedness (how little attention it receives), and tractability (how addressable it is), to identify where additional effort will do the most good.
GiveWell: A charity evaluator that assesses charities using evidence-based methods and recommends only those meeting high standards of cost-effectiveness, typically a few percent of the organizations it evaluates.
Earning to give: The EA strategy of pursuing high-income careers (in finance, tech, etc.) for the purpose of donating a large share of income to highly effective charities, rather than working directly in the social sector.
Longtermism: The view that improving the long-term future is a dominant moral priority, given the potentially vast number of people who will exist in the future and the disproportionate importance of reducing existential risks.
Existential risk: Risks that could cause human extinction or permanent civilizational collapse — including engineered pandemics, advanced artificial intelligence misalignment, and nuclear war — which longtermists argue deserve priority because of the enormous number of future lives at stake.
Giving What We Can: The pledge organization founded by Toby Ord in 2009, whose members commit to giving at least 10% of their income to highly effective charities.
Against Malaria Foundation (AMF): A consistently top-rated GiveWell charity that distributes insecticide-treated bed nets in sub-Saharan Africa; regularly cited as among the most cost-effective life-saving interventions available.
GiveDirectly: An EA-endorsed charity that provides unconditional cash transfers directly to very poor households in sub-Saharan Africa, allowing recipients to spend according to their own priorities.
Peter Singer's Argument
The intellectual foundation of effective altruism is Peter Singer's 1972 essay "Famine, Affluence, and Morality," written in response to the catastrophic famine in East Bengal that accompanied the Bangladesh Liberation War and killed millions. Singer's argument begins with a moral premise that almost everyone accepts: suffering and death from lack of food, shelter, and medical care are bad. From this he constructs a principle: "If it is in our power to prevent something bad from happening, without thereby sacrificing anything of comparable moral importance, we ought, morally, to do it."
To make this principle vivid, Singer invents a thought experiment that has become one of the most discussed in contemporary ethics. Suppose you are walking to work and you pass a shallow pond in which a small child is drowning. You can wade in and save the child at the cost of ruining your expensive new shoes and being late to work. Virtually everyone agrees: you are morally required to save the child. Your inconvenience is not of comparable moral importance to the child's death.
Now, Singer asks: what morally relevant difference is there between that child and a child dying of malaria in Malawi? Both deaths are preventable. Both are tragedies. The primary difference is distance — the drowning child is in front of you; the Malawi child is thousands of miles away. But why should distance matter morally? It is not as if your action actually costs more, or as if the child matters less, because of the physical distance between you. If distance is not morally relevant, Singer concludes, then you are as obligated to prevent the Malawi child's death as the drowning child's — and you can do so for a few thousand dollars donated to an effective malaria prevention charity, many times the cost of the shoes but still nothing of comparable moral importance to a child's life.
Singer's conclusion is demanding: people who have money to spare above their basic needs are morally required to give substantial amounts to effective charities until giving more would sacrifice something of comparable moral importance to what they are preventing. This implies giving far more than typical charitable giving norms suggest. Singer himself gives approximately 25% of his income to effective charities.
The argument's power lies in its logical structure. You cannot accept the premises and reject the conclusion without identifying a relevant moral difference between the cases — and Singer argues persuasively that proximity, identifiability, and nationality are not morally relevant differences. Critics have argued that the argument proves too much (it demands a level of self-sacrifice that is psychologically unsustainable), proves too little (it addresses individual giving but not structural change), or rests on an oversimplified consequentialism. But its influence on the effective altruism movement has been foundational.
William MacAskill and the Founding of EA
William MacAskill, a Scottish philosopher, arrived at Oxford as a graduate student in 2007 and encountered Peter Singer's argument while simultaneously reading about careers and impact. He became convinced that the most important question a person could ask was not "what am I good at?" or "what do I enjoy?" but "what career path will allow me to have the most positive impact on the world?"
MacAskill collaborated with Toby Ord to found 80,000 Hours — a nonprofit that provides career advice aimed at directing talented people toward the most impactful career paths — and became the most prominent intellectual architect of the effective altruism movement. His 2015 book "Doing Good Better" made the EA framework accessible to a general audience and became the movement's canonical introduction.
The EA framework MacAskill developed centers on three criteria for cause prioritization: scale (how many people are affected, and how seriously?), neglectedness (how much attention and funding does the cause already receive?), and tractability (how much progress can be made with additional effort?). These three criteria, applied systematically, are intended to identify cause areas that are large in impact, undersupported relative to their importance, and responsive to additional resources.
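As a rough illustration of how the three criteria combine, the sketch below scores hypothetical cause areas. All numbers are invented for demonstration, and the multiplicative combination is one common way practitioners aggregate the criteria, not a canonical formula.

```python
# Illustrative sketch of the scale / neglectedness / tractability framework.
# Scores below are hypothetical, chosen only to show how the comparison works;
# real cause prioritization rests on far more careful estimates.

def priority_score(scale: int, neglectedness: int, tractability: int) -> int:
    """Combine the three criteria multiplicatively: a cause scores highly
    only if it is large, undersupported, AND responsive to extra effort."""
    return scale * neglectedness * tractability

causes = {
    # cause: (scale 1-10, neglectedness 1-10, tractability 1-10)
    "global health": (8, 6, 9),
    "animal welfare": (7, 8, 5),
    "existential risk": (10, 9, 3),
}

ranked = sorted(causes.items(),
                key=lambda kv: priority_score(*kv[1]),
                reverse=True)

for cause, scores in ranked:
    print(f"{cause}: {priority_score(*scores)}")
```

The multiplicative form encodes a design choice: a near-zero score on any single criterion (for example, very low tractability) drags the whole cause down, no matter how large or neglected it is.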
Applied in the early EA movement, this framework pointed strongly toward global health interventions: the scale of preventable death and suffering from malaria, diarrheal disease, and malnutrition in low-income countries is enormous; these causes are relatively neglected compared to health in wealthy countries; and the evidence base for cost-effective interventions is strong. GiveWell's evaluations operationalized this reasoning, identifying specific charities with strong evidence and high cost-effectiveness.
GiveWell and the Evidence Infrastructure
GiveWell was founded in 2006 by Holden Karnofsky and Elie Hassenfeld, two former hedge fund analysts who were frustrated by the absence of rigorous evidence in the charitable sector and decided to create the kind of analytical infrastructure that would allow evidence-based giving. Their methodology involves deep due diligence: reviewing a charity's programs, evidence base, financial management, and room for additional funding, and synthesizing this into a cost-effectiveness estimate — typically in terms of dollars per life saved or per unit of health improvement.
The findings from GiveWell's research have been consistently striking. The organization has found that only a small fraction of charities — typically 2-3% of those investigated — meet their standards for recommendation. Most charities either have insufficient evidence that their programs work, or implement programs for which the evidence is weak or negative, or have organizational issues that undermine their impact. GiveWell's discovery that most charitable giving is not evidence-based was itself one of the most important contributions of the EA movement.
GiveWell's consistent top charities have included the Against Malaria Foundation, which distributes long-lasting insecticide-treated bed nets for malaria prevention; GiveDirectly, which provides unconditional cash transfers; the Malaria Consortium; and Helen Keller International's vitamin A supplementation programs. The cost-effectiveness estimates for AMF have typically been in the range of $3,000-$5,000 per life saved — a figure that is both extraordinarily favorable compared to health spending in wealthy countries (where the cost per quality-adjusted life year often exceeds $100,000) and the subject of methodological debate among development economists.
The cash transfer model represented a particular philosophical commitment within EA: GiveDirectly's approach embodies the belief that poor people are better judges of their own needs than external organizations, and that giving them money directly respects their agency. Randomized controlled trials of GiveDirectly's programs have found significant positive effects on consumption, food security, and psychological wellbeing, with no evidence of the negative effects (increased alcohol consumption, reduced work incentives) that critics of cash transfer programs sometimes assert.
Earning to Give
The "earning to give" strategy — choosing a high-income career specifically in order to donate a large share of earnings to highly effective charities — was one of the more controversial EA ideas to gain mainstream attention. The underlying logic is straightforward: if the goal is to do the most good, and if the limiting factor in global health interventions is funding, then a person who earns $500,000 per year in finance and gives $200,000 of it to AMF may save more lives than a person who works directly for a development organization at $50,000 per year. The indirect path through financial success may produce more impact than the direct path through service.
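The arithmetic behind this comparison can be made explicit. The sketch below uses the $200,000 donation figure from the paragraph above and the $3,000-$5,000 cost-per-life range cited elsewhere in this article; both are illustrative round numbers, and the comparison with direct work would additionally require an estimate of a direct worker's impact, which the argument leaves unspecified.

```python
# Back-of-the-envelope earning-to-give arithmetic, using the figures
# quoted in the surrounding text (illustrative, not precise estimates).

donation_per_year = 200_000              # dollars donated annually
cost_per_life_low, cost_per_life_high = 3_000, 5_000   # GiveWell-style range

lives_high = donation_per_year / cost_per_life_low     # optimistic estimate
lives_low = donation_per_year / cost_per_life_high     # conservative estimate

print(f"Lives saved per year via donations: {lives_low:.0f}-{lives_high:.0f}")
```

Even at the conservative end of the range, the donation path implies dozens of lives saved per year, which is the intuition driving the strategy.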
The strategy attracted criticism on several grounds. It seemed to grant ethical legitimacy to participation in financial industries whose social value is contested, to treat human impact as fungible in a way that ignored the importance of personal engagement and relationships, and to provide cover for people who wanted to pursue conventional high-status careers without guilt. EA proponents responded that the criticism often rested on the intuition that direct engagement is more virtuous than indirect financial support — an intuition that, on reflection, seems to reflect feelings about the agent rather than consequences for the recipient.
In practice, earning to give has been de-emphasized somewhat in recent EA discourse, partly because the strategy works only if the career path genuinely generates excess money to give (not all lucrative careers meet this test) and partly because the EA community has become convinced that talent in high-impact careers — research, policy, organizational leadership — is often more limiting than funding in the most important cause areas.
Longtermism and Existential Risk
As the effective altruism community matured and its analytical tools were applied to cause areas beyond global health, it increasingly turned toward what its members came to see as the most important question of all: the long-term trajectory of civilization.
The argument for longtermism begins with a thought experiment about scale. The number of people alive today is approximately 8 billion. If civilization persists for another million years — a modest assumption given that Earth is expected to remain habitable for hundreds of millions of years more — and if population remains at anything like current levels, the number of people who will eventually live runs to roughly a hundred trillion, and into the quadrillions on still longer timescales. Even if we heavily discount future welfare relative to present welfare, the expected value of improving the long-run trajectory of civilization is potentially far larger than any present intervention.
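A rough version of this scale calculation can be written out directly. The assumptions are hedged round numbers: a steady population of 8 billion and an average lifespan of about 80 years per "generation."

```python
# Rough reconstruction of the longtermist scale argument.
# Assumptions (illustrative round numbers, not forecasts): population
# stays near today's 8 billion; one lifetime spans about 80 years.

POPULATION = 8e9
GENERATION_YEARS = 80

def future_people(years_remaining: float) -> float:
    """People who would ever live if civilization persists that long."""
    return (years_remaining / GENERATION_YEARS) * POPULATION

print(f"{future_people(1e6):.0e}")   # another million years -> ~1e14 (100 trillion)
print(f"{future_people(5e8):.0e}")   # ~500 million years -> ~5e16 (50 quadrillion)
```

Under these assumptions, a million more years yields on the order of 10^14 future people, and the quadrillions appear on the longer timescales that Earth's remaining habitable period allows.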
This reasoning, combined with the standard cause-prioritization criteria (scale, neglectedness, tractability), directed EA attention toward "existential risks" — risks that could cause human extinction or permanent civilizational collapse. These include misaligned artificial general intelligence (AI that pursues goals catastrophically misaligned with human values), engineered biological weapons capable of pandemic-scale harm, nuclear war, and other potentially civilization-ending scenarios. Against the vast potential future that would be lost if any of these risks materialized, longtermists argue that even a small reduction in their probability is extremely valuable — more valuable, in expectation, than saving many lives today.
MacAskill's 2022 book "What We Owe the Future" brought longtermism to a mainstream audience and reached major bestseller lists. It argued that future generations deserve moral consideration equal to present ones, that the current moment is "especially influential" in determining long-run trajectory (because we are making early choices about transformative technologies), and that this implies major investment in AI safety, biosecurity, and other existential risk reduction.
The Sam Bankman-Fried Collapse
The collapse of FTX in November 2022 was a crisis not only for cryptocurrency markets but for the effective altruism movement. Sam Bankman-Fried (SBF), the founder of FTX and its affiliated trading firm Alameda Research, had been celebrated in EA circles as living proof of the earning-to-give concept: a young man who had read Peter Singer as a student, committed to giving away most of his fortune, and was actually doing it, having donated hundreds of millions of dollars to EA-aligned causes, pandemic preparedness, and political campaigns for biosecurity-focused candidates.
When FTX's liquidity crisis became public in November 2022, it emerged rapidly that Bankman-Fried had used customer deposits — money belonging to FTX users — to fund trading positions through Alameda Research, without disclosing this to customers. Approximately $8 billion of customer funds were missing. FTX filed for bankruptcy on November 11, 2022. SBF was arrested in the Bahamas in December and extradited to the United States. In November 2023, he was convicted on seven counts of fraud and conspiracy. In March 2024, he was sentenced to 25 years in prison.
The scandal raised several specific questions about EA. In interviews before and after FTX's collapse, Bankman-Fried described a reasoning framework that critics recognized as a distorted form of EA consequentialism: a willingness to violate ordinary moral rules (do not use other people's money without their consent) if the expected positive consequences were large enough. This "ends justify means" logic is not endorsed by mainstream EA — MacAskill and other movement leaders explicitly repudiated it — but critics argued that EA's utilitarian framework, with its emphasis on expected value calculations, creates structural vulnerability to exactly this kind of rationalization.
The movement's dependence on Bankman-Fried's money was also exposed: EA organizations had received tens of millions of dollars from FTX-affiliated sources, and several were suddenly facing budget crises. The image of an ethics movement funded by what turned out to be stolen funds was deeply damaging.
Critiques and Responses
The substantive criticisms of effective altruism go beyond the Bankman-Fried case and deserve serious engagement.
Amia Srinivasan's 2015 essay "Stop the Robot Apocalypse," a review of MacAskill's Doing Good Better, argued that EA's framework systematically underweights political and structural change. By focusing on individually quantifiable interventions — bed nets, cash transfers — EA accepts the existing global economic order as a given and works within it, directing private charity to address symptoms while ignoring the structural causes of global inequality. This critique has real force: global poverty is not simply a problem of insufficient resources but of unjust trade rules, colonial legacies, tax policy, debt obligations, and political power. The EA response — that political change is less tractable and harder to measure than direct interventions — is defensible but also convenient for a movement whose major funders are themselves major beneficiaries of the existing economic order.
Timnit Gebru and other critics of AI safety-focused EA have argued that longtermism, by directing enormous resources toward speculative future risks, systematically deprioritizes concrete present harms — including the documented harms of current AI systems to marginalized communities, workers, and political processes. The AI safety movement, they argue, is dominated by a particular demographic (wealthy, white, male, Silicon Valley) whose concerns about speculative future AGI risks happen to align with the interests of the AI industry. EA proponents respond that existential risks deserve priority by definition, and that present harms and future risks are not necessarily in competition.
The demandingness objection — that Singer's argument implies giving far more than is psychologically realistic — has been addressed by EA advocates through emphasis on sustainability: the movement encourages people to give at levels they can maintain for life (the Giving What We Can pledge is 10%, not everything above subsistence). But the gap between EA's theoretical commitments and the actual giving levels of most self-identified EA members is substantial, raising questions about the relationship between philosophical commitment and behavioral change.
See also: What Is the Meaning of Life, Consequentialism: Outcomes Justify Actions, What Is Justice
References
- Singer, P. (1972). Famine, affluence, and morality. Philosophy and Public Affairs, 1(3), 229–243.
- MacAskill, W. (2015). Doing Good Better: Effective Altruism and a Radical New Way to Make a Difference. Guardian Faber.
- MacAskill, W. (2022). What We Owe the Future. Basic Books.
- Ord, T. (2020). The Precipice: Existential Risk and the Future of Humanity. Hachette Books.
- Singer, P. (2009). The Life You Can Save. Random House.
- Srinivasan, A. (2015, September 24). Stop the robot apocalypse. London Review of Books, 37(18).
- Bostrom, N. (2013). Existential risk prevention as global priority. Global Policy, 4(1), 15–31. https://doi.org/10.1111/1758-5899.12002
- GiveWell. (2023). Top Charities. https://www.givewell.org/charities/top-charities
- Haushofer, J., & Shapiro, J. (2016). The short-term impact of unconditional cash transfers to the poor: Experimental evidence from Kenya. Quarterly Journal of Economics, 131(4), 1973–2042. https://doi.org/10.1093/qje/qjw025
- Pummer, T. (2023). The Rules of Rescue: Cost, Distance, and Effective Altruism. Oxford University Press. https://doi.org/10.1093/oso/9780190884147.001.0001
- Berkey, B. (2018). The institutional critique of effective altruism. Utilitas, 30(2), 143–171. https://doi.org/10.1017/S0953820817000176
Frequently Asked Questions
What is effective altruism?
Effective altruism (EA) is a philosophical and practical movement built on the proposition that if you care about doing good in the world, you should try to do as much good as possible — and that this requires using evidence and careful reasoning to identify the most effective ways to help. The movement's basic intellectual framework has three components. First, a moral claim: you have a significant obligation to help others who are suffering, even strangers far away, when you have the capacity to do so without disproportionate sacrifice to yourself. Second, an empirical claim: not all ways of helping are equally effective — some charities save many lives per dollar, while others accomplish little or nothing, and evidence can distinguish them. Third, a consequentialist implication: if these two claims are right, you should allocate your charitable giving and career choices toward the most effective interventions you can identify. The practical implications of this framework are more radical than they might initially appear. EA reasoning led its proponents to question the conventional wisdom that people should give to charities they have personal connections to, that good intentions matter more than measured results, and that passion for a cause is a sufficient guide to good action. Instead, EA emphasizes systematic comparison across cause areas, quantified impact assessment, and a willingness to follow the evidence wherever it leads — even if it produces counterintuitive conclusions such as giving to unfamiliar charities in distant countries rather than local ones, or working in finance to give away a large salary rather than in direct service. EA has attracted significant controversy both for its intellectual claims and for the behavior of some of its most prominent figures.
What is the core argument for effective altruism?
The intellectual foundation of effective altruism is Peter Singer's 1972 essay 'Famine, Affluence, and Morality,' published at the time of the 1971 famine in East Bengal (Bangladesh) that killed millions. Singer's argument is deceptively simple and remarkably powerful. His central premise: 'If it is in our power to prevent something bad from happening, without thereby sacrificing anything of comparable moral importance, we ought, morally, to do it.' He applies this to a thought experiment: if you walked past a shallow pond and saw a small child drowning, you would be obligated to wade in and save the child, even if this ruined your expensive shoes and soaked your clothes. The slight inconvenience to you is not of comparable moral importance to the child's death. Singer then asks: why should it make a moral difference that the children dying of preventable diseases in Bengal (or wherever) are not in front of you but thousands of miles away? Distance and proximity cannot be morally relevant in themselves — a death is a death and suffering is suffering, regardless of whether it occurs nearby or far away. If you can, by giving a few thousand dollars to effective charities, prevent several children from dying of malaria, the failure to do so is morally comparable — on Singer's argument — to walking past the drowning child. The argument is elegant and uncomfortable. It implies that most affluent people in rich countries have profound obligations to give far more than they do, and that the conventional view that charity is supererogatory (praiseworthy but not required) is mistaken. Singer himself famously gives approximately 25% of his income to effective charities. The argument does not require utilitarian ethics — it can be reconstructed from most ethical frameworks — but it has a particularly natural home in utilitarian and consequentialist thinking.
What is GiveWell and evidence-based giving?
GiveWell, founded in 2006 by Holden Karnofsky and Elie Hassenfeld, is among the most rigorous charity evaluators in the world and arguably the operational heart of the effective altruism movement's approach to global poverty. GiveWell's methodology involves investigating charities in depth — examining their programs, their evidence base, their financial management, their room for additional funding, and their cost-effectiveness — to identify the small number of charities that meet a standard of evidence rigorous enough to justify confident recommendation. GiveWell has consistently found that only a tiny fraction of well-intentioned charities meet their effectiveness standards: typically 2-3% of organizations they investigate receive top recommendations. Most charities, even those with excellent reputations and high public profiles, either have insufficient evidence that their programs work, implement programs that evidence shows are ineffective, or cannot effectively absorb additional funding. GiveWell's top-rated charities have included the Against Malaria Foundation (AMF), which distributes insecticide-treated bed nets to prevent malaria deaths in sub-Saharan Africa; GiveDirectly, which provides unconditional cash transfers directly to very poor households in Kenya and Uganda; the Malaria Consortium, which implements seasonal malaria chemoprevention; and Helen Keller International's vitamin A supplementation programs. The cost-effectiveness estimates for these charities — typically in the range of a few thousand dollars per life saved, or equivalent — are striking by any comparison with how charities are usually evaluated. A widely cited GiveWell estimate suggested that AMF could save a life for approximately $3,000-$5,000.
Whether these cost-effectiveness estimates are methodologically reliable is debated among development economists, but the underlying programs (bed nets, cash transfers, vitamin A) are supported by multiple randomized controlled trials and large-scale evidence.
What is longtermism?
Longtermism is the philosophical view, associated most prominently with William MacAskill's 2022 book 'What We Owe the Future,' that the long-term future is among the most important considerations in moral decision-making, because the potential number of people who will exist in the future vastly outnumbers those alive today. The argument begins with a straightforward observation: human civilization is very young. Homo sapiens has existed for approximately 300,000 years; if the species persists for another million years at current and projected population sizes, the number of people who will ever live could be on the order of a hundred trillion, and far more on longer timescales or if civilization expands beyond Earth. Even if we assign no special weight to future people relative to present people — treating them as moral equals — the sheer number of potential future people means that the expected value of improving the long-run trajectory is enormous. Conversely, risks that could cause human extinction or permanent civilizational collapse — what MacAskill and colleagues call 'existential risks' — would be catastrophic not just because of the people currently alive but because of all the potential future people who would never exist. This framing, developed originally by philosopher Nick Bostrom, implies that a small reduction in the probability of human extinction could be more valuable than a massive improvement in current welfare. Applied through the EA framework, longtermism points toward prioritizing AI safety research (to reduce the risk that advanced artificial intelligence poses existential threats), biosecurity (to reduce risks from engineered pandemics), and reducing the risks from other potentially civilization-ending technologies.
Longtermism has attracted both influence (MacAskill's book reached major bestseller status; prominent longtermists have influenced billions of dollars of philanthropic funding) and sharp criticism (the reasoning is speculative; the framing can justify ignoring present suffering in favor of uncertain future benefits; it privileges the concerns of wealthy technologists).
What are the main critiques of effective altruism?
Effective altruism has attracted substantive criticism from multiple directions, not all of which are compatible with each other. The most fundamental philosophical critique is that EA's consequentialist framework cannot capture moral considerations, such as rights, justice, and special obligations, that resist reduction to utility calculations. Amia Srinivasan's influential essay 'Stop the Robot Apocalypse' (published in the London Review of Books in 2015) argued that EA systematically underweights the importance of political and structural change: by focusing on measurable, quantifiable interventions (bed nets, cash transfers), EA directs attention and resources away from the political-economic structures that produce global poverty in the first place. Srinivasan argues that EA is in this sense politically conservative — it accepts the existing distribution of power and wealth and asks how individuals within it can do the most good, rather than asking how the system should be changed. A second critique concerns moral demandingness. Singer's argument, taken seriously, implies that affluent people in rich countries are obligated to give until the marginal utility of giving equals the marginal utility of keeping — essentially, until they have given away everything above subsistence level. Most people find this demand psychologically and practically unreasonable, and some philosophers argue that ethics cannot be this demanding without becoming self-defeating. EA's response — that people should give significantly more than they do, even if not everything — is reasonable, but the gap between the argument's demanding logic and the movement's moderated practical norms has itself drawn criticism.
A third critique concerns the political economy of EA's major philanthropic funding: EA has attracted extraordinary wealth from technology entrepreneurs, and critics argue that the movement's priorities reflect the concerns of wealthy Silicon Valley technologists more than any neutral assessment of global welfare.
What did the Sam Bankman-Fried scandal reveal about EA?
Sam Bankman-Fried (SBF) was, until November 2022, the most prominent and influential figure in the effective altruism movement. He had built FTX, a cryptocurrency exchange, into a multibillion-dollar business while publicly positioning himself as a practitioner of 'earning to give' — the EA strategy of pursuing high-income careers specifically to donate the proceeds to effective charities. He had pledged to give away most of his fortune, donated hundreds of millions of dollars to EA-aligned causes and political campaigns, and was celebrated by EA leaders including William MacAskill as a model of the movement's principles. When FTX collapsed in November 2022, it emerged that Bankman-Fried had used customer funds — money deposited by FTX customers and therefore not his to use — to fund trading positions through his affiliated trading firm Alameda Research, resulting in an approximately $8 billion hole in customer accounts. He was arrested in December 2022, tried in late 2023, convicted on seven counts of fraud and conspiracy, and sentenced to 25 years in prison. The scandal raised several questions about effective altruism specifically. First, critics argued that EA's utilitarian framework — with its willingness to weigh competing outcomes and its 'ends justify means' logic — may have provided Bankman-Fried with an intellectual justification for fraud: if enough charitable good could come from the money, perhaps ordinary moral constraints could be suspended. SBF gave public interviews suggesting he had accepted a reasoning framework of this kind. MacAskill and other EA leaders strongly rejected this characterization, emphasizing that EA does not condone law-breaking or fraud. Second, the scandal exposed the dependence of EA philanthropic infrastructure on Bankman-Fried's money and the reputational damage that dependence created. 
Third, it raised questions about whether a movement built on confidence in the ability of intelligent individuals to calculate optimal outcomes is systematically vulnerable to the rationalization of self-serving behavior as altruistically motivated.