A few days ago, I was interviewed for an episode of NPR’s On Point exploring the ideas behind effective altruism (in the wake of the FTX debacle). It made me realize there are a lot of misconceptions out there! But there wasn’t time in the episode to cover very much. So here’s an expanded version of what I’d like more people to understand about effective altruism.
What is Effective Altruism?
According to CEA’s official definition, “Effective altruism is the project of trying to find the best ways of helping others, and putting them into practice.”
The way I put it: effective altruism is about trying to help others as effectively as you can with whatever (non-trivial) resources of time or money you’re willing to put towards that altruistic project.
There’s then an EA movement/community of people who self-identify as engaged with this project, and who would like to encourage others to join them. I support this, as I think a community can often achieve things that isolated individuals cannot. But if you don’t like the EA community for any reason, there’s no inherent barrier to pursuing your own effective altruism independently. (Many people may implicitly be doing this already, without necessarily thinking of themselves as “effective altruists” at all.)
Is Effective Altruism extremely demanding?
Nope. Some individual EAs may (following Peter Singer) believe in extreme demands of beneficence. And some may choose to perform extremely altruistic acts, like donating a kidney to a stranger, or donating everything they earn above a basic living wage. But neither of these is essential to EA as such. Many EAs prefer to conceive of beneficence as an opportunity rather than an obligation, and there’s certainly no expectation of extreme self-sacrifice. A common norm is that one either donates 10% to effective charities, as per the Giving What We Can pledge, or works directly in a high-impact cause area, as recommended by 80,000 Hours. (Some people “earn to give”, working in a very lucrative job and donating 50% or more of what they earn; but my sense is that this remains relatively rare. It gets outsized media attention because it’s so distinctive.)
Isn’t donating 10% still a pretty big ask?
I guess it depends on your circumstances. I’ve personally never found it to be a burden, but it probably helps that I started as a graduate student, and so never experienced any drop in material comfort. I like to think of it as follows:
Suppose you’re graduating from college, and choosing between two very similar job prospects. The only difference is that, whereas one job pays 10% extra, the other would involve saving someone’s life, or even several lives, every single year (in addition to whatever value your work produces on a day-to-day basis). If you don’t take the second job, there’s no replacement waiting to take your place; rather, those lives will simply not be saved. In light of all this, which job seems overall most rewarding—most meaningful—for you?
If you’re anything like me, you’ll probably agree that the second seems an overall more appealing career path. Getting to make a real difference is a significant reward in its own right, and can easily outweigh a mere 10% salary boost (given that the base salary is already more than enough for you & your family to comfortably live off). The thing I love about the Giving What We Can pledge is that it opens up this choice to anyone, even if their career (like that of a philosophy professor) wouldn’t ordinarily involve saving lives. You don’t have to work directly for a charity in order for your work to support that charity.
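To make the arithmetic behind this thought experiment explicit, here’s a rough sketch using illustrative numbers (a hypothetical $50,000 base salary, together with GiveWell’s rough $3,000–$5,000 cost-per-life-saved figure mentioned below):

$$\frac{10\% \times \$50{,}000/\text{year}}{\$5{,}000/\text{life saved}} = 1 \text{ life saved per year}$$

At the lower $3,000 estimate, or on a higher salary, the same 10% pledge covers several lives per year.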
N.B. Not all EAs take the 10% pledge. In recent years, there’s been an increasing focus on high-impact careers — the pursuit of which may not involve any personal sacrifice at all. If you find that you love doing high-impact work, this “win-win” situation seems pretty ideal to me. While people often associate “altruism” with self-sacrifice, that isn’t the EA conception. We simply care about doing good (effectively). “Effective Beneficence” might have been a better label.
Why does “effectiveness” matter?
Some charities do hundreds of times more good than others. So it’s wasteful—a lost opportunity—to do only a tiny bit of good when you could have done so much more, at no greater cost.
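To see the structure of the comparison, suppose (with made-up but not unrealistic figures) that charity A averts a death for every $5,000 it receives, while charity B requires $500,000 to produce an equivalent benefit. Then each dollar given to A does

$$\frac{\$500{,}000}{\$5{,}000} = 100$$

times as much good as a dollar given to B; giving to B instead of A forfeits 99% of your potential impact.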
Imagine watching your neighbor heroically rush into a burning building, only to rescue a goldfish rather than the two children screaming in the other room. This would seem messed up! Personally, I think we really ought to be efficient in our core moral efforts. (This is also an implication of my willpower satisficing consequentialism.) But even if you think it’s often permissible to be inefficient, it’s surely always better to do more good rather than less (all else equal). So it’s worth taking effectiveness into account—it’s better to do so—even if it isn’t strictly “required”.
Does EA assume a narrow conception of “effectiveness”?
No, EA as a philosophy does not specify how to measure “effectiveness”. It’s open to you to argue that donating to children’s sports somehow does more good than anti-malarial bednets (though I doubt such an argument would be very plausible on its merits). GiveWell’s top charities provide a baseline against which more speculative proposals may be compared: can you really do, in expectation, more good with $3,000–$5,000 than saving someone’s life? Maybe! People have offered strong arguments for thinking that effective animal charities and longtermist projects may be even more cost-effective in expectation, though it’s obviously impossible to prove this with any kind of certainty. Proposals should be backed with suitable evidence to justify them, but what exactly counts as good evidence is always contestable.
Ultimately, I would want to emphasize two points:
(1) EA is open-minded about how we can most effectively do good; those who claim that the underlying philosophy rules out their favoured interventions are either (a) implicitly conceding that there’s no reasonable basis for believing their favoured interventions to be better, or else (b) confused. If there’s a good case to be made for a different methodological approach to cause selection, we’re keen to hear it. And even if you can’t convince other EAs of your preferred approach, you’re welcome to be a slightly heterodox EA, as long as you sincerely share the core project of doing good effectively.
(2) But that’s not to say “anything goes”. One does need to make a sincere, good faith effort to discern what’s best (in expectation)—which might involve either deferring to credible authorities like GiveWell, or challenging them if you think you have a good basis for doing so. Merely insisting (e.g.) that “donating to children’s sports is optimal” doesn’t make it so, and donating in this way isn’t compatible with the spirit of EA if there’s no reasonable basis for that verdict. There’s room for dispute about exactly what counts as good evidence, but wishful thinking certainly doesn’t count.
Does that mean I’m not allowed to donate to local community organizations?
Donating to children’s sports (or whatever) isn’t part of the EA project. But there’s nothing in EA which says you aren’t allowed to have other projects! So, while this wouldn’t count towards your 10% pledge, for example, you’re always free to direct the other 90% of your resources however you would if you weren’t an EA. Donating to good-but-suboptimal organizations is presumably still better than spending on personal luxuries, after all.
What causes do EAs tend to support in practice?
While EA is officially cause-neutral, and individuals are always free to use their own judgment, there are currently four broad cause areas that are widely regarded as our “best bets” for doing good (taking into account considerations such as importance, neglectedness, and tractability):
(1) Global health and development, e.g. the Against Malaria Foundation and GiveDirectly.
(2) Animal welfare, especially lobbying for factory-farmed animals, e.g. corporate cage-free campaigns, veg*n advocacy, etc.
(3) Longtermism and global catastrophic risk reduction, e.g. AI safety, pandemic preparedness, moral progress, etc.
(4) EA infrastructure and movement-building, to further build support for all the other cause areas.
Some people like some of these areas more than others. The first category is obviously the most “legible” to a general audience, though I think there are compelling reasons to expect that the others do even more good (in expectation). Personally, I support all four cause areas, via the associated Effective Altruism Funds. Beyond these standard causes, I’d be excited to see more investigation of funding political lobbying as an intervention; my sense is that this is currently underexplored (but I may just be unaware of the relevant investigations).
This all sounds very reasonable, so why is EA so controversial?
Good question! No doubt some of the (more unhinged) opposition reflects motivated reasoning from those for whom EA ideas are detrimental to their personal interests or moral self-conception. (Cf. popular hostility towards vegans.)
[FWIW, I think it’s probably a mistake to think that group membership of this kind reveals all that much about our moral character, as it probably instead reveals more about our backgrounds and social networks. Regardless, I’d always encourage folks to think less about their own virtue and more about how to do good, including by joining more morally ambitious groups.]
But bad-faith hostility aside, there are some genuinely controversial features of EA thought, reflecting (what I take to be) the movement’s unusual degree of intellectual honesty and clarity (or what critics might regard as a pathologically analytical approach to ethics):
(1) Willingness to explicitly consider trade-offs and engage in cause prioritization. Alas, there seems to be something taboo about “ranking the sacred”, even though the alternative of acting upon (possibly indefensible) implicit moral priorities is obviously worse.
This explicit prioritization naturally makes enemies of ineffective altruists. E.g., Crary, Gruen, and Adams: “To grasp how disastrously an apparently altruistic movement has run off course, consider that the value of organizations that provide healthy vegan food within their underserved communities are ignored as an area of funding… Or how covering the costs of caring for survivors of industrial animal farming in sanctuaries is seen as a bad use of funds.”
These critics don’t offer any argument that providing vegan food to “underserved” Americans is actually more important than preventing deaths from malaria. Nor do they argue that providing sanctuaries for individual animals should take priority over efforts to systematically reform factory farming. Instead, they hide the indefensibility of their ineffective altruism by refusing to face up to the reality of trade-offs at all. Effective altruists are too intellectually honest to take this route, and so appear more “controversial” as a result.
(See also my previous critique of an Oxfam CEO’s fuzzy thinking in rejecting the logic of prioritization.)
(2) Willingness to consider weird edge cases in search of systematically justified general principles. As I argued in ‘Puzzles for Everyone’, there are areas of ethics that admit of no fully intuitive answers. If you have no answer to a tricky question, you’re obviously not in a place to deride those who try to work out the least-bad-seeming of the available options. Alas, too few critics of effective altruism (esp. in its “longtermist” branch) realize that they don’t actually have any better answers, which results in a lot of very shallow bad takes.
(3) Challenging the status hierarchies of conventional ethics. Moral revisionism is naturally controversial, as folks aren’t always open to changing their minds. For example, many people are currently invested in the morality tale on which “rich people are bad” (or the more specific—and traditionally antisemitic—trope that “working in finance is evil”). EA complicates this message, by suggesting instead that wealth is (potentially) good — what’s bad is wasting (too much of) it on personal consumption.
Conversely, conventional morality upholds the self-sacrificing ascetic as the paradigm of virtue, regardless of what they actually achieve. Again, the moral perspective implicit in EA challenges this picture. Donating 90% of your income to an average local charity is actually not that great—likely worse than donating even just 1% to the most effective charities. But many people find this way of thinking counterintuitive.
(4) Giving full weight to indirect benefits. The idea of “earning to give” as something morally admirable seems to really upset some people. Some of this may be due to its challenging the above morality tale. But another factor may be that giving full weight to indirect benefits is genuinely controversial.
Spending on EA infrastructure and movement-building may be controversial for the same reason. People tend to evaluate charities based on superficial procedural metrics like CEO pay, overhead, etc., where they want to see signals of frugality. But EAs have always argued that these are terrible metrics (it can be well worth paying extra for a better CEO, fundraising often pays for itself many times over, etc.), and that we should instead evaluate charities by what they ultimately achieve per $ spent. (Most recently, people have been up in arms about an EA organization purchasing an expensive “estate” / conference center near Oxford. I have no idea whether this was a worthwhile use of funds or not, and I don’t know how the critics imagine they’re in a position to assess this, either. Given the importance of global priorities research and dissemination, it’s not hard to imagine how the enabled events could prove of immense value, especially if they increase the likelihood of participation from key policymakers. But again, willingness to consider such downstream benefits seems controversial.)
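A stylized illustration of why outcome-based evaluation beats procedural metrics (all figures invented for the example): suppose charity A spends 25% of donations on overhead but, thanks to its better-paid staff and fundraising, saves a life per $4,000 of total spending, while frugal charity B has only 5% overhead yet needs $20,000 per life saved. Judged by overhead, B looks far more virtuous; judged by results per dollar,

$$\frac{\$20{,}000/\text{life (B)}}{\$4{,}000/\text{life (A)}} = 5,$$

so A does five times as much good with each dollar it receives.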
I have a different question…
Great! Ask away in the comments… :-)
(You can also check out CEA’s official FAQ, here.)