Effective Altruism has long emphasized the value of movement-building,1 since the “return on investment” looks to be incredibly high. For example, if an additional Giving What We Can pledge yields an average of $22,000 to effective charities, it could easily be worth spending a million dollars on outreach activities just to secure 100 new pledges. And many (esp. longtermist) EAs think that influencing career choice does even more good than increasing donations. As a result, EA funders provided over $100 million in movement-building grants in 2022. That much money could save over 20,000 lives if donated to GiveWell’s top charities. So the funders presumably think their grants have even higher expected value than one life saved per $5000 granted. That’s pretty impressive, if true!2
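For readers who like to see the arithmetic spelled out, here is a quick back-of-the-envelope sketch (in Python, simply rehearsing the rough figures quoted above; none of the numbers are new):

```python
# Back-of-the-envelope check of the figures above. All numbers are the
# rough estimates quoted in the text, not precise data.

value_per_pledge = 22_000       # average donations to effective charities per new GWWC pledge ($)
outreach_spend = 1_000_000      # hypothetical outreach budget ($)
new_pledges = 100               # pledges secured by that outreach

donations_generated = new_pledges * value_per_pledge
print(donations_generated)      # 2,200,000 -- more than double the outreach spend

cost_per_life = 5_000                    # rough GiveWell-style cost to save a life ($)
movement_building_grants = 100_000_000   # 2022 movement-building grants ($)
print(movement_building_grants // cost_per_life)  # 20,000 lives as the opportunity cost
```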
People often seem skeptical of the idea that there could be a better use of $5000 than directly saving someone’s life. Certainly, our personal consumption spending is not nearly so valuable (as emphasized in Singer’s ‘Famine, Affluence, and Morality’). But, obviously, if you could spend the $5000 in a way that indirectly saves even more lives in expectation, that would be even better! And EA movement-building may well have this feature.
Something I find curious is that people often seem to assume an asymmetry in the moral tradeoffs here. Many are horrified by the opportunity cost of EA investments in longtermist projects, for example. And it is indeed awful that we live in a world where people are dying for want of a $5000 donation to GiveWell’s top charities. Obviously people should give more. (Have you taken the pledge? Please give it some serious thought, if you haven’t already.) But it may be even more awful that we live in a world where projects with immense long-term expected value (e.g., reducing global catastrophic risks, or improving long-run social, moral and scientific progress) also remain underfunded. It’s by no means obvious that short-term life-saving is the best we can possibly do, so I find it bizarre when people just assume, without argument, that short-termism has the moral high ground.3 It’s a very complex, highly uncertain empirical question which approach is actually best.4
The combination of high stakes and great uncertainty is naturally angst-inducing. But we should be at least as worried about underfunding vital longtermist projects as we are about underfunding immediate life-saving interventions. (If anything, we should probably be more concerned about the risk of neglecting longtermist projects, since the potential stakes are so much higher.)
Even so, I’m not about to criticize anyone for supporting short-termist interventions.5 I’m not that confident about the right answer, and I generally appreciate people doing good things for the world even when they fall short of optimality (for decent reasons). If you’re trying your best to make the world a better place, and doing a broadly reasonable job of it,6 I don’t (unlike some) think you have anything to “atone” for.
I also doubt that it’s productive to wallow in angst or other negative emotions. High stakes mean that it’s worth investing significant resources into research, to try to make the best decisions we can. They may also motivate hedging our bets, to try to avoid the worst outcomes. So: we should do the best we can. And then, I think, we should feel positively about that. Doing the best you can is great! Maybe it won’t work out in the end, but you can’t control that. Of the things you can control, you can’t really do better than your overall best effort, using your best judgment to the best of your ability. If you’ve done so (again, in a broadly reasonable way), then I think that merits appreciation.
Though with care to avoid the “meta trap” of putting all one’s efforts into growth and then never actually achieving anything with it!
Note that this doesn’t require that each $5000 spent saves at least one life. It’s more plausible that several of the grants do no good at all, while others (not identifiable in advance) end up having a vastly outsized impact, which makes it all worthwhile. Compare Open Philanthropy on hits-based giving.
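To make the hits-based point concrete, here is a toy portfolio with entirely made-up numbers (it doesn’t reflect any actual grants or outcomes):

```python
# Toy illustration of hits-based giving: most grants achieve nothing,
# but one outsized success carries the portfolio. Numbers are invented.

grants = [10_000_000] * 10          # ten hypothetical $10M grants
lives_saved = [0] * 9 + [30_000]    # nine duds, one big hit

cost_per_life = sum(grants) / sum(lives_saved)
print(round(cost_per_life))         # ~3,333 -- the portfolio still beats $5,000 per life
```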
It’s especially bizarre when some of the same people complain that not enough EAs support anticapitalist revolution, as though the latter were an obviously good idea.
That’s actually where most of my personal donations go! I think it’s pretty common for EAs to combine more long-term-focused work with present-focused donations.
So not, like, committing fraud or anything.
As far as I can tell, pretty much the same objections you discuss could be raised against animal causes. Yet I hardly ever see them raised. I don’t recall, for instance, David Thorstad pointing to the millions Open Philanthropy spends on corporate animal welfare campaigns as “the price of EAA”. I think such an objection would be pretty weak, and that’s also why I don’t think much of the analogous objections to longtermism.