Effective altruism, which started as a small movement at Oxford University over a decade ago, is an influential and controversial philosophy focused on maximizing the impact of charitable giving. It is backed by billionaires—including Facebook co-founder Dustin Moskovitz, his wife Cari Tuna and Elon Musk—as well as notorious ex-billionaire Sam Bankman-Fried.
Effective altruism is a philosophy that, as described by the New Yorker, seeks “to do good in the most clear-sighted, ambitious, and unsentimental way possible,” and is implicitly critical of big charities that don’t have a measurable impact on problems.
The movement generally encourages donors to work on problems few people have tried to solve, as opposed to a huge issue like climate change that has the attention of many groups, because the impact of individuals and teams on a problem is greater when fewer groups are involved.
The term effective altruism was coined in 2011 when a group of Oxford philosophers, including Toby Ord and William MacAskill, started The Centre For Effective Altruism—an umbrella company encompassing Giving What We Can, which helps people compare charities’ effectiveness, and 80,000 Hours, an organization co-founded by MacAskill to help people find impactful careers.
The Centre for Effective Altruism and nine other companies are now federated under the Effective Ventures Group, but organizations don’t have to be part of the group to be effective altruists—Moskovitz and Tuna, the biggest estimated donors to effective altruism, according to 80,000 Hours, have their own charity called Open Philanthropy.
Effective altruists claim a measurable impact on global problems, collectively giving $1 billion to support charities fighting malaria, helping prevent roughly 150,000 deaths, and campaigning with The Open Wing Alliance to get more than 2,000 companies to agree to purchase eggs from cage-free chickens.
As effective altruism grew, it attracted uber-rich converts. MacAskill recruited disgraced crypto tycoon Bankman-Fried to effective altruism in 2012; less than 10 years later, his crypto exchange FTX was valued at $18 billion, and he said he planned to give most of his share (roughly $16.2 billion at one point) away to fund effective altruism projects, according to MacAskill’s 80,000 Hours. The integration of effective altruism and wealthy Silicon Valley techies established AI safety as one of effective altruism’s most talked-about projects. Brainstorming regulations to prevent AI from threatening humans and other projects dedicated to protecting humanity from long-term threats—and potential extinction—are associated with longtermism, a growing view within effective altruism that focuses more on solving threats to humanity’s future than its present problems.
Longtermists, to varying degrees, believe effective altruists should consider the well-being of future generations when they determine which causes to pursue. Weak longtermists believe the well-being of future generations should be considered in problem-solving, but usually don’t prioritize it above all other moral considerations like alleviating suffering for current generations. Strong longtermists prioritize the well-being of future generations over the well-being of people alive today, reasoning effective altruists will make the biggest impact by solving long-term problems because exponentially more people will live in the future than live now. Strong longtermists consider the well-being of people living thousands and millions of years from now, not just in the next couple of generations, and tend to be concerned with potential extinction threats that would prevent future people from coming into existence. Longtermists, including Musk and Bankman-Fried, shifted some of effective altruism’s resources to longtermist projects like space colonization and AI regulation—Musk famously described colonizing Mars as a “civilian life insurance” policy for when the sun dies out—instead of pouring money into current problems like malaria or factory farming, reportedly one of Bankman-Fried’s original interests.
Before Bankman-Fried’s fall from grace and the vaporization of billions of dollars set to fund effective altruism projects, critics of the movement primarily took aim at longtermism, arguing it’s unethical to divert funds from people suffering in the present to solve hypothetical problems occurring in an uncertain future. Further, critics argue, a small group of predominantly white men should not get to take action to implement a utopian vision without consulting less represented groups. After FTX’s collapse, critics blasted effective altruism itself for lending Bankman-Fried legitimacy, despite red flags indicating his business practices were less than savory.
Carla Zoe Cremer, a doctoral student at the University of Oxford and a researcher at the Centre for the Study of Existential Risk at the University of Cambridge, used to be an effective altruist—she was even interviewed to work for Alameda Research in 2018. Lately, she’s emerged as one of the biggest critics and would-be reformers of the effective altruism movement and its leaders. In a January article written for Vox, Cremer alleges that MacAskill and other influential leaders, like Bankman-Fried, wield too much power over people within the community; some altruists are afraid to question the system for fear their funding will be cut, Cremer claims, which allows leaders and influential donors to move projects in risky directions with almost no oversight. Cremer claims she fruitlessly urged MacAskill in February 2022—almost a year before the FTX disaster decimated one of effective altruism’s most lucrative sources of funding—to implement stronger institutional measures imposing checks on big donors and insulating the movement from undue risk. Now, Cremer advocates that effective altruists should contribute to society by using their vast network of organizations to experiment with new and more effective forms of institutional decision making.
By Emily Washburn, Forbes Staff