As I wrote in ‘Why Not Effective Altruism?’, I find the extreme hostility towards effective altruism from some quarters to be rather baffling. Group evaluations can be vexing: perhaps what the critics have in mind when they hate on EA has little or no overlap with what I have in mind when I support it? It’s hard to know without getting into details, which the critics rarely do. So here are some concrete claims that I think are true and important. If you disagree with any of them, I’d be curious to hear which ones, and why!
What I think:
It’s good and virtuous to be beneficent and want to help others, for example by taking the Giving What We Can 10% pledge.
It’s good and virtuous to want to help others effectively: to help more rather than less with one’s efforts.
We have the potential to do a lot of good in the face of severe global problems (including global poverty, factory-farmed animal suffering, and global catastrophic risks such as future pandemics).
In all these areas, it is worth making deliberate, informed efforts to act effectively. Better targeting our efforts may make even more of a difference than the initial decision to help at all.
In all these areas, we can find interventions that we can reasonably be confident are very positive in expectation. (One can never be so confident of actual outcomes in any given instance, but being robustly positive in prospect is what’s decision-relevant.)
Beneficent efforts can be expected to prove (much) more effective if guided by careful, in-depth empirical research. Quantitative tools and evidence, used wisely, can help us to do more good.
So it’s good and virtuous to use quantitative tools and evidence wisely.
GiveWell does incredibly careful, in-depth empirical research evaluating promising-seeming global charities, using quantitative tools and evidence wisely.
So it’s good and virtuous to be guided by GiveWell (or comparably high-quality evaluators) rather than less-effective alternatives like choosing charities based on locality, personal passion, or gut feelings.
There’s no good reason to think that GiveWell’s top charities are net harmful.1
But even if you’re the world’s most extreme aid skeptic, it’s clearly good and virtuous to voluntarily redistribute your own wealth to some of the world’s poorest people via GiveDirectly. (And again: more good and virtuous than typical alternatives.)
Many are repelled by how “hands-off” effective philanthropy is compared to (e.g.) local volunteering. But it’s good and virtuous to care more about saving and improving lives than about being hands-on. To prioritize the latter over the former would be morally self-indulgent.
Hits-based giving is a good idea. A portfolio of long shots can collectively be likely to do more good than putting all your resources into lower-expected-value “sure things”. When that’s the case, the long-shot portfolio is worth funding.
Even in one-off cases, it is often better and more virtuous to accept some risk of inefficacy in exchange for a reasonable shot at proportionately greater positive impact. (But reasonable people can disagree about which trade-offs of this sort are worth it.)
The above point encompasses much relating to politics and “systemic change”, in addition to longtermist long-shots. It’s very possible for well-targeted efforts in these areas to be even better in expectation than traditional philanthropy—just note that this potential impact comes at the cost of both (i) far greater uncertainty, contestability, and potential for bias; and often (ii) potential for immense harm if you get it wrong.
Anti-capitalist critics of effective altruism are absurdly overconfident about the value of their preferred political interventions. Many objections to speculative longtermism apply at least as strongly to speculative politics.
In general, I don’t think that doing good through one’s advocacy should be treated as a substitute for “putting one’s money where one’s mouth is”. It strikes me as overly convenient, and potentially morally corrupt, when I hear people (whether political advocates or longtermists) excusing not making any personal financial sacrifices to improve the world, when we know we can do so much. But I’m completely open to judging political donations (when epistemically justified) as constituting “effective philanthropy”—I don’t think we should put narrow constraints on the latter concept, or limit it to traditional charities.
Decision theory provides useful tools (in particular, the concept of expected value) for thinking about these trade-offs between certainty and potential impact. (See the worked sketch at the end of this list.)
It would be very bad for humanity to go extinct. We should take reasonable precautions to try to reduce the risk of this.
Ethical cosmopolitanism is correct: It’s better and more virtuous for one’s sympathy to extend to a broader moral circle (including distant strangers) than to be narrowly limited. Entering your field of sight does not make someone matter more.
Insofar as one’s natural sympathy falls short, it’s better and more virtuous to at least be “continent” (as Aristotle would say) and allow one’s reason to set one on the path that the fully virtuous agent would follow from apt feelings.
Since we can do so much good via effective donations, we have—in principle—excellent moral reason to want to make more money (via permissible means) in order to give it away to these good causes.
Many individuals have in fact successfully pursued this path. (So Rousseauian predictions of inevitable corruption seem misguided.)
Someone who shares all my above beliefs is likely to do more good as a result. (For example, they are likely to donate more to effective charities, which is indeed a good thing to do.)
When the stakes are high, there are no “safe” options. For example, discouraging someone from earning to give, when they would have otherwise given $50k per year to GiveWell’s top charities, would make you causally responsible for approximately ten deaths every year. That’s really bad! You should only cause this clear harm if you have good grounds for believing that the alternative would be even worse. (If you do have good grounds for thinking this, then of course EA principles support your criticism.)
Most public critics of effective altruism display reckless disregard for these predictable costs of discouraging acts of effective altruism. (They don’t, for example, provide evidence to think that alternative acts would do more good for the world.) They are either deliberately or negligently making the world worse.
Deliberately or negligently making the world worse is vicious, bad, and wrong.
Most (all?) of us are not as effectively beneficent as would be morally ideal.
Our moral motivations are deeply shaped by social norms and expectations—by community and culture.
This means it is good and virtuous to be public about one’s efforts to do good effectively.
If there’s a risk that others will perceive you negatively (e.g. as boastful), accepting this reputational cost for the sake of better promoting norms of beneficence is even more virtuous. Staying quiet for fear of seeming arrogant or boastful would be selfish in comparison.
In principle, we should expect it to be good for the world to have a community of do-gooders who are explicitly aiming to be more effectively beneficent, together.
For most individuals: it would be good (and improve their moral character) to be part of a community whose culture, social norms, and expectations promoted greater effective beneficence.
That’s what the “Effective Altruism” community constitutively aims to do.
It clearly failed in the case of SBF: he seems to have been influenced by EA ideas, but his fraud was not remotely effectively beneficent or good for the world (even in prospect).
Community leaders (e.g. the Centre for Effective Altruism) should carefully investigate / reflect on how they can reduce the risk of the EA community generating more bad actors in future.
Such reflection has indeed happened. (I don’t know exactly how much.) For example, EA messaging now includes much greater attention to downside risks, and the value of moral constraints. This seems like a good development. (It’s not entirely new, of course: SBF’s fraud flagrantly violated extant EA norms;2 everyone I know was genuinely shocked by it. But greater emphasis on the practical wisdom of commonsense moral constraints seems like a good idea. As does changing the culture to be more “professional” in various ways.)
No community is foolproof against bad actors. It would not be fair or reasonable to tar others with “guilt by association”, merely for sharing a community with someone who turned out to be very bad. The existence of SBF (n=1) is extremely weak evidence that EA is generally a force for ill in the world.
The actually-existing EA community has (very) positive expected value for the world. We should expect that having more people exposed to EA ideas would result in more acts of (successful) effective beneficence, and hence we should view the prospect favorably.
The truth of the above claims does not much depend upon how likeable or annoying EAs in general turn out to be.
If you find the EA community annoying, it’s fine to say so (and reject the “EA” label), but it would still be good and virtuous to practice, and publicly promote, the underlying principles of effective beneficence. It would be very vicious to let children die of malaria because you find EAs annoying or don’t want to be associated with them.
None of the above assumes utilitarianism. (Rossian pluralists and cosmopolitan virtue ethicists could plausibly agree with all the relevant normative claims.)
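To make the expected-value reasoning above concrete, here is a minimal worked sketch. (The ~$5,000 cost-per-life figure is an assumption for illustration; it is what the “approximately ten deaths” claim above implies, and it matches the order of magnitude of GiveWell’s published estimates for its top charities.) Expected value is just probability-weighted value:

$$\mathbb{E}[\text{value}] \;=\; \sum_i p_i \, v_i$$

So a long shot with a 1% chance of saving 10,000 lives has an expected value of $0.01 \times 10{,}000 = 100$ lives, beating a guaranteed 50—that’s the logic behind hits-based giving. And the earning-to-give arithmetic is simple division:

$$\frac{\$50{,}000 \text{ per year}}{\$5{,}000 \text{ per life}} \;=\; 10 \text{ lives per year}$$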
Things I don’t think
I don’t think:
that people should blindly follow crude calculations, or otherwise attempt to directly implement ideal theory in practice without properly taking into account our cognitive limitations
that you’re obligated to dedicate your entire life to maximizing the good, neglecting your loved ones and personal projects. (The suggestion is just that it would be good and virtuous for advancing impartially effective beneficence to be among one’s life projects.)
that we should care about numbers rather than people (rather, as suggested above, I think we should use numbers as a tool to enable us to help more people)
that we should completely ignore present-day needs in pursuit of tiling the universe with digital experience machines
that double-or-nothing existence gambles are worth taking
that inexperienced, self-styled “rationalist” EAs are thereby competent to run important organizations (just based on a priori first principles)
that you should trust someone with great power (e.g. unregulated control of AI) just because they identify as an “EA” (let alone a “rationalist”).
Conclusion: Beware of Stereotypes
A couple of months ago, Dustin Moskovitz (the billionaire funder behind Open Philanthropy) wrote some very thoughtful reflections on “the long journey to doing good better”. I highly recommend the piece. I was especially taken by his comments on why outside perceptions of a movement can seem so alien to those within it:
When a group has a shared sense of identity, the people within it are still not all one thing, a homogenous group with one big set of shared beliefs — and yet they often are perceived that way. Necessarily, the way that you engage in characterizing a group is by giving it broad, sweeping attributes that describe how the people in the group are similar, or distinctive relative to the broader world. As an individual within a group trying to understand yourself, however, this gets flipped, and you can more easily see how you differ. Any one of those sweeping attributes do apply to some of the group, and it’s hard to identify with the group when you clearly don’t identify with many of the individuals, in particular the ones with the strongest beliefs. I often observe that the people with the most fringe opinions inside a group paradoxically get the most visibility outside the group, precisely because they are saying something unfamiliar and controversial.
(Though I also think that critics often just straw man their targets.)
Anyway, I hope my above listing proves illuminating to some. I would be especially curious to hear from the haters of EA about which numbered points they actually disagree with (and why).3 Perhaps there will turn out to be such fundamental disagreements that reasoned conversation is pointless? But you never know until you try.
For example, what empirical evidence we have on the question suggests that Deaton’s speculative worries about political accountability are easily addressed: “Political accountability is not necessarily undermined by foreign aid: even illiterate and semi-literate folks in rural Bangladesh appear to be quite sophisticated about how they evaluate their leaders, given the information they possess. Further, any unintended negative accountability consequences were effectively countered by a simple, scalable information campaign.”
Not to mention the standard practical advice of the utilitarian tradition, as I’ve known ever since I was an undergrad (sadly many senior philosophers persist in misrepresenting it).
To explain my curiosity: most anti-EA criticism I’ve come across to date, especially by philosophers, has struck me as painfully stupid, entirely missing the point. It doesn’t help that it’s all so unrelentingly hostile—which makes me question whether it’s in good faith, as it prima facie seems a rather inexplicably vicious attitude to take towards people who are trying to do good, often at significant personal cost! If any critics reading this are capable of explaining their precise disagreements with me (not an imagined straw-EA) in a civil tone, I’d be delighted to hear it.