Some people are extremely hostile to Effective Altruism. I find this puzzling. At least, I find it puzzling how to interpret this most charitably. (There are obvious explanations that would reflect poorly on the hostile critics, such as “do-gooder derogation”, akin to how some omnivores hate vegans for making them look/feel bad in comparison. No doubt such motivated reasoning is part of the story.1 But what does it feel like, from the inside, to hate on EA? What story does the critic tell about themselves, when they discourage others from doing good effectively, that doesn’t make them seem the villain?)
Others don’t seem to find it so puzzling. So I wrote this paper to explain my puzzlement. I hope it will serve as a useful introduction to the philosophical debates surrounding Effective Altruism, for any undergraduate classes that touch on this topic. (Suggestions welcome, in the comments below, for the best philosophical critique of EA to pair this with as an assigned reading.)
My paper argues that (i) EA principles are clearly good; (ii) core EA claims on controversial topics (from “earning to give” to “longtermism”) are clearly correct, even if there’s room for dispute on the margins; and (iii) we should generally affirm important moral truths, such as (i) and (ii) above, even if they’re politically inconvenient.
Moreover, the “political” critique of EA as “actually harmful, even if well-meaning” plausibly applies much more strongly to the critics themselves: by discouraging others from giving effectively, they are very likely causally responsible for immense harms (e.g. children dying from avoidable malaria). Their primary real-world effect is—very obviously!—to provide “moral cover” to the morally complacent. This should be more widely recognized as disreputable.
As the paper abstract summarizes:
Effective altruism sounds so innocuous—who could possibly be opposed to doing good, more effectively? Yet it has inspired significant backlash in recent years. This paper addresses some common misconceptions, and argues that the core "beneficentric" ideas of effective altruism are both excellent and widely neglected. Reasonable people may disagree on details of implementation, but all should share the basic goals or values underlying effective altruism.
A Dialectical Oddity
It was a weird paper to write, since it all feels incredibly obvious: I’m not really sure how anyone could honestly disagree. (I’ll share some highlights below.) Hopefully some who do disagree will explain their thinking. Obviously it’s fine to disagree over details of implementation: how to actually go about doing good effectively. What I can’t comprehend is all the hostility to the very idea of EA. As noted in fn 7:
The degree of hostility many critics express towards EA doesn’t make sense if they agree with EA principles and simply disagree about how best to apply them. One doesn’t see these critics say, “EA is a great idea, and here’s how we could do it better.” Their disagreement seems deeper than that…
On the other hand, if it turns out that critics actually love the idea of effective altruism, and are merely suspicious of actually-existing Effective Altruists (for whatever reason), that would certainly be interesting to hear! But then why do they sound so much like they want to sink the whole project, rather than improve upon it?
On Prioritization
Perhaps the thing that most sets Effective Altruism apart is its commitment to explicit cause prioritization: considering trade-offs, and seriously trying to work out what should be our top moral priority (on current margins). I think this is a really big deal. It’s obviously a very fallible process. But between the options of (i) trying to do more good rather than less, all else equal, or (ii) not even trying, it seems pretty obvious that the former is the way to go!
The objections to this are completely daft. Consider Srinivasan:
What’s the expected marginal value of becoming an anti-capitalist revolutionary? To answer that you’d need to put a value and probability measure on achieving an unrecognizably different world—even, perhaps, on our becoming unrecognizably different sorts of people. It’s hard enough to quantify the value of a philanthropic intervention: how would we go about quantifying the consequences of radically reorganizing society?
But as I explain in fn 13, at least a rough ballpark estimate in answer to these questions would seem necessary in order to have a justified belief that becoming an anti-capitalist revolutionary is actually a good idea. If you’re truly clueless about the expected consequences of an action, it’s hard to see much reason to do it. It would seem especially indefensible to pass up saving someone’s life because you prefer to take a gamble that you don’t even think is positive in expectation.
This doesn’t necessarily require “quantification” in any strict sense: we can be guided by expected value without necessarily making explicit calculations. Relatedly, critics sometimes misattribute an unduly narrow conception of “evidence” to EAs, but obviously any real epistemic reason should count. As I summarize the dilemma faced by those who think that “systemic change” somehow constitutes a challenge to EA:
Either their total evidence supports the idea that attempting to promote systemic change would be a better bet (in expectation) than safer alternatives, or it does not. If it does, then EA principles straightforwardly endorse attempting to promote systemic change. If it does not, then by their own lights they have no basis for thinking it a better option. In neither case does it constitute a coherent objection to EA principles.
As far as I can tell, the only reason to reject the core idea of effective altruism is that you’re antecedently committed to some other project that you suspect is less good, but you don’t want to have to admit that it is less good. Better, then, for the question of effectiveness/prioritization to not even be asked. (If you think I’m being unduly cynical here, I’d love to hear how you think to reconcile these criticisms with both rationality and intellectual integrity.)
On Earning to Give
Many people seem to find the very idea of “earning to give” somehow disreputable. This is, again, completely daft:
Moral theorists may argue about precisely which directly harmful careers could, or could not, be justified by indirectly saving more lives. But these edge cases are a distraction from the core idea, much as an excessive focus on the ethics of Robin Hoodery would be a distraction when evaluating the basic case for giving more to the poor. In both cases, we can simply limit our attention to increasing one’s donations via permissible means.
Rare exceptions aside, most careers are presumably permissible. The basic idea of earning to give is just that we have good moral reasons to prefer better-paying careers, from among our permissible options, if we would donate the excess earnings. There can thus be excellent altruistic reasons to pursue higher pay. This claim is both true and widely neglected.
On Billionaire Philanthropy
[I]t makes sense that if billionaires exist, we should prefer that they spend their money in ways that effectively help others. And billionaires, notoriously, do exist…
There is nothing inconsistent about both (i) trying to change the system to make it more egalitarian, and (ii) until such a time as those efforts succeed, encouraging those with excessive wealth to dispose of it in better rather than worse ways.
Even if your top moral priority were to institute egalitarian reforms so that no billionaires exist in future,2 you would probably be better served on the margin by having an extra billionaire supporting your project than an extra activist. Criticizing social reformers for seeking funding support for their reforms is absurd: it’s effectively to criticize them for being instrumentally rational. (“How dare you try to actually achieve your moral goals?”)
On Longtermism
I trust that most readers of this paper are sufficiently cosmopolitan to agree that we should not ignore the greater plight of children dying of malaria overseas, merely because they are geographically distant from us. We can—and should—intellectually appreciate that “statistical lives” are every bit as real as the ones we see before our eyes. But distance in time seems no more intrinsically significant than distance in space. So we should not be moved by appeals to strictly prioritize the more easily identifiable individuals of the “here and now”. We should want to help people, and bring about a better world, without (geographic or temporal) restriction.
The paper goes on to explain the basics of population ethics, including why we should unconditionally value good lives. I conclude:
[I]t remains an open question how to implement a concern for protecting future generations. You could accept life-affirming longtermism in principle while remaining highly uncertain about what should be done in practice. Longtermists can disagree about whether to prioritize (i) specific risk-mitigating interventions, or (ii) more general investigation into possible risks and responses, or (iii) more general societal (ethical, scientific, and economic) progress and capacity-building so that future generations can do a better job than we at tackling future problems. Maybe there are other options too. I leave open such questions of implementation. I’m merely arguing that we should all agree on the in-principle importance of the long-term future.
Conclusion
The answer to our title question, ‘Why not effective altruism?’, is that there’s no principled reason why not. We should all want to do more good rather than less, and use the best available evidence to guide our efforts. There’s plenty of room for reasonable disagreement about how best to pursue this humanitarian goal. But its in-principle desirability cannot reasonably be disputed…
Some may nonetheless argue that we can have good political reasons to bury inconvenient (or “harmful”) truths. I grant that this is possible, but I think we should have a high bar for endorsing such dishonesty. I also worry that it’s far more likely that denunciations of effective altruism function to provide “moral cover” for the morally complacent. Doing more good may not be in our self-interest, after all. But it is worth doing, nonetheless.
1. For example, philosopher Mary Townsend seems pretty openly vicious when she writes, "It's almost too easy to feel a certain schadenfreude at the possibility that effective altruism—and its parent philosophy, classical utilitarianism—will really, finally get the pie in the face they deserve."
2. Not something I personally recommend: seems rather too likely to have negative unintended consequences.
Why does Srinivasan use the expected value of being an anticapitalist revolutionary as an example of something that is hard to quantify? There have been anticapitalist revolutionaries around for more than a century now, and they have enough of a track record to establish that their expected marginal value is massively negative. Becoming an anticapitalist revolutionary is a rational thing to do if you want to maximize death and suffering. If EA philosophy stops people from becoming anticapitalist revolutionaries, then it's already made the world a better place, even if they don't go on to do any good at all.
Others have said similar things, but to add my two cents:
I am sympathetic to, and probably count as, an EA, so I am not really the kind of person you are addressing, but I can think of a few things:
First, you really might disagree with some of the core ideas: you may be a deontologist, so that some proposed EA interventions, though positive in expectation, are still impermissible (e.g. a "charity" that harvests organs from unwilling homeless people and donates them to orphans is bad, no matter how compelling your EV calculation). Or, as Michael St. Jules points out, on longtermism you might reject any number of the supporting propositions.
Second: agreement with the core ideas doesn't imply all that much. You say to Michael that you are only interested in defending longtermism as meaning "the far future merits being an important priority", but this is hardly distinctive to EA! If EA just means, "we should try to think carefully about what it means to do good", then almost any program for improving the world will endorse some version of that! What makes EA distinctive isn't the versions of its claims that are most broadly acceptable!
You can agree in principle with "core" EA ideas but think there is some methodological flaw, or a particular set of analytical blinders in the EA community, such that the EA version of those ideas is hopelessly flawed. This is entangled with my third point:
Third: if you agree with the EA basics, and you think EA is making a big mistake in how it interprets/uses/understands those basics, why not try to get on board and improve the program? Either because those misunderstandings/methodologies/viewpoints are so central to EA that it makes more sense to just start afresh, or because EA as an actual social movement is too resistant to hearing such critiques.
Like, take the revolutionary communist example from the other end: lots of people (even many EAs) would agree with core communist principles like "Material abundance should be shared broadly", and revolutionary ideas like "We shouldn't stick to a broken status quo just because it would take violence to reach a better world". There is a sense in which you can start as a revolutionary communist and ultimately talk yourself into a completely different viewpoint that still takes those ideas as fundamental but otherwise looks nothing like revolutionary communism (indeed, I think this is a journey many left-leaning teenagers go through, and it wouldn't even surprise me if some of them end up at something like EA).
But I don't think people who don't start from the point of view of communism should feel obliged to present their critiques as ways of improving the doctrine of revolutionary communism. This is for both philosophical reasons (there is too much bad philosophy in there that takes a long time to clear out, better to present your ideas as a separate system on their own merits) and social ones (the actual people who spend all their time thinking about revolutionary communism aren't the kind of people you can have productive discussions with about this sort of thing).
Obviously that's an unfair comparison to EA, but people below have pointed out that EA-the-movement is at least a little bit cult-y, and has had a few high-profile misfires from people applying its ideas. I personally think its successes more than outweigh the failures, but I think it's fair for someone to disagree.
Finally, I'd like to try to steelman the "become an anticapitalist revolutionary" point of view. Basically, the point here is that "thinking on the margin" often blinds one to coordination problems: perhaps we could get the most expected value if a sufficiently large number of people became anticapitalist revolutionaries, but below some large threshold there is no value at all. In that case the marginal benefit of becoming a revolutionary is negligible, yet it may still be that we would wish to coordinate on that action if we could. This is (I think) what Srinivasan is getting at: the value of being a revolutionary is conditional on lots of other people being revolutionaries as well. It's not impossible to fit this sort of thinking into an EA-type framework, but I think it's a lot more convoluted and complicated. But I don't think we should rule it out as a theory of doing good, or of prioritizing how to do good, even if I don't find that particular example very compelling.
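To make the threshold structure concrete, here is a toy model (entirely my own illustration; the symbols $V$, $N$, $k$, and $c$ are assumptions on my part, not anything from Srinivasan or the paper). Suppose the movement succeeds and yields total value $V$ only if at least $N$ people participate, $k$ is the number of others who will join regardless, and $c$ is the personal cost of joining. Then the marginal expected value of one more person joining is

$$\mathbb{E}[\Delta \mid \text{join}] = \Pr(k = N - 1)\cdot V - c.$$

When the threshold is far out of reach, $\Pr(k = N - 1) \approx 0$, so joining has an expected value of roughly $-c$; yet if $V > N \cdot c$, a coordinated bloc of $N$ participants would be collectively worthwhile. Marginal reasoning and coordinated reasoning genuinely come apart in this kind of case.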