The Strange Shortage of Moral Optimizers
What would an "Alt-EA" beneficentrist movement look like?
Three Kinds of Critics
Some people just aren’t very altruistic, and so may quietly dislike Effective Altruism for promoting values that conflict with their interests. (It’s easy to see how wealthy academics might be better off with a moral ideology that prioritizes verbiage over material outcomes, for example.) One doesn’t often hear this perspective explicitly voiced, but—human nature being what it is—I expect it must be out there.
Others may be broadly enthusiastic about the idea of Effective Altruism, but have some concerns about the movement as it actually stands. From here one might offer friendly/internal critiques of EA: “Here’s how you might do better by your own lights!” And my sense is that good-faith critiques of this sort tend to get a very positive reception on the EA Forum. (Indeed, there’s now a $100k incentive for criticism of EA and its current priorities.)
Finally, a third class of critics claims to agree with the beneficent values of effective altruism, but regards the actual EA movement as hopelessly misguided, ineffective, cultish, a mere smokescreen for political complacency, or what have you. (These sorts often use sneer quotes to speak of the “Effective” “Altruism” movement.) I find this final group more puzzling. As Jeff McMahan has noted, “the philosophical critics of effective altruism tend to express their objections in a mocking and disdainful manner… suggestive of bad faith.”
One major concern I have with the actually-existing wholesale criticisms of EA is that they tend to reinforce a kind of moral complacency. No need to really do anything beneficent so long as you give it lip-service, and insist that the rest is a “collective responsibility” best left to the state to take care of (just don’t hold your breath…). I feel like these critics are discouraging real-life beneficence, and thereby doing real harm.
Viewed in this light, the absence of any competing explicitly beneficentrist movements is striking. EA seems to be the only game in town for those who are practically concerned to promote the general good in a serious, scope-sensitive, goal-directed kind of way. If a large number of genuinely beneficent people believed that actually-existing-EA was going about this all wrong, I’m surprised that they haven’t set up an alternative movement that better pursues these goals while avoiding the shortcomings they associate with traditional EA. (Perhaps they’d prefer different branding. That’s fine; I’m not concerned here with the label, but with the underlying values and ideas.)
What might an Alt-EA movement look like?
I’d genuinely love to hear from critics what they think a better alternative might look like. (I think it’s now widely acknowledged that early EA was too narrowly focused on doing good with high certainty—as evidenced through RCTs or the like—perhaps in reaction to the aid skepticism that seemed like the major barrier to uptake at the time. But EA is now much more open to diverse approaches and uncertain prospects so long as a decent case can be made for their expected value being high.)
Maybe the alternative would involve a greater political focus, with local community organizing being a major cause priority (as an implicit form of community-building)? Maybe it would avoid utilitarian/cosmopolitan rhetoric, and focus more on meeting the median voter where they are—with appeals to more local and emotive values such as solidarity—with an eye to encouraging many small nudges towards a better world? Maybe it would be more optimistic about the likely outcomes of a political “revolution”, and less optimistic about technocratic interventions? I’m not too sure what the epistemic basis for any of this would be, but perhaps one could lean hard into “self-effacingness” and insist that globally better results can be achieved by not aiming too directly at this goal, along with being guided more by hope than by evidence?
Might it then turn out that already-existing popular political movements can be viewed as alternative (albeit highly indirect) implementations of beneficentrism after all? I’m dubious—it seems awfully fishy to just insist that one’s favoured form of not carefully aiming at the general good should somehow be expected to actually have the effect of best promoting the general good. While it clearly wouldn’t be optimal to make every single decision by appeal to explicit cost-benefit analysis, it seems crazily implausible that (in realistic circumstances) it somehow maximizes expected utility to never employ direct utilitarian reasoning. It’s notable that the utilitarian philosophers who have thought most about this issue end up advocating for a multi-level approach (using explicit utilitarian reasoning in unusual or unexpected high-stakes situations—e.g. pandemic policy—and during “calm, reflective moments” to help guide our choice of everyday heuristics, strategies, and virtues, for example).
But I’d be curious if others—especially those who sneer at actually-existing EA—are more inclined to defend the optimality of existing political movements. Or if they have an entirely different conception of what Alt-EA should look like?
Moral Sincerity
One obvious possibility is that those hostile to EA aren’t truly sympathetic to beneficentrism at all, and really just have worse values. I’d be happy to see that hypothesis refuted. I think it’d be especially exciting to see an entirely new Alt-EA ecosystem spring up around those other beneficentrists who sincerely pursue the general good in a different way, or with a different rhetorical/ideological framing, that maybe appeals better to a different audience than traditional EA does. (So long as this alternative movement has good epistemics and doesn’t seem likely to be positively counterproductive and bad for the world, that is!)
Given the risk of paying empty lip-service to good values, I think it’s worth making the challenge explicit: if not EA, how do you move beyond cheap talk and take your values seriously—promoting them in a scope-sensitive, goal-directed, outcome-oriented way?
I find it so frustrating that the hostile critics don’t even seem to be interested in this question! Whatever your values are, there are so many ways that you could more effectively promote them through donations, direct work, and advocacy (that is explicitly directed towards encouraging more donations and direct work for the best causes). So even if EA is somehow misguided, I think it could still do the world a great service by encouraging more people to actually (and effectively) do more good: to achieve the EA aim, even if they think that the existing EA movement is (for whatever reason) failing in its ambitions.
I really think the great enemy here is not competing values or approaches so much as failing to act (sufficiently) on values at all. Of course, we’re all driven by a variety of motivations, many no doubt less lofty than we would normally like to think. The extent to which our professed values are “sincere” is probably best understood as a matter of degree, rather than a sharp binary distinction between sincere akratics (who don’t always manage to live up to their ambitious values) and outright hypocrites (who don’t genuinely hold the professed values at all). No one with ambitious values always manages to live up to them, but I wouldn’t want fear of being labelled a “hypocrite” to disincentivize having ambitious values at all. (There are worse things in the world than hypocrisy!)
So I’m trying to find a way to frame my point without using the H-word—I grant that we’re all a messy mix of motivations, heavily influenced by the contingent circumstances in which we find ourselves. And, let’s face it, life can be hard—even those in privileged material circumstances aren’t always in a mental space to be able to do more than just get through the day. I want to explicitly grant all that.
But if some social movements or moral ideologies do more to bring our actions into line with our (ambitious) expressed values, then that seems good, important, and worth encouraging. Good social norms can make it much easier for us to do good things. And it seems to me that EA is nearly unique in this regard. It just seems remarkably rare for people to take their values seriously in the way that EA invites us to.
And so, while I guess non-EAs wouldn’t be thrilled to be charged with failing to take their values seriously, and I certainly don’t mean to be gratuitously offensive, I hope that pointing out this disturbingly common disconnect might help to make it less common. It would be great if, in order to avoid this objection, more non-EAs worked to make their own groups and practices more morally ambitious and goal-directed. It would be great to see more of them embrace an ethos oriented more towards promoting good outcomes and less towards expressive symbolism. It would be great, in short, for others to achieve what EA at least tries to achieve.
Here's one way to make a consequentialist critique of EA as it currently exists.
Consider the US-China status quo. The US is not attacking China in pursuit of regime change, and China is not conquering Taiwan. The risk of the former seems minute; the risk of the latter does not. What if a 5% increase in the chance that this status quo holds were a greater net positive than all non-x-risk-related EA efforts combined?
Here are some of the possible negative outcomes if China tries to conquer Taiwan:
-conventional and nuclear war between China and the US, and their allies, with the possibility of up to several billion deaths;
-hundreds of satellite shootdowns causing Kessler syndrome, leading to the destruction of most other satellites, leaving us with little warning of impending natural disasters such as typhoons and drought;
-sidelining of AI safety concerns, in the rush to create AGI for military purposes;
-end to US-China biosecurity cooperation, and possible biowarfare by whichever side feels it is losing (which might be both sides at once - nuclear war would be a very confusing experience);
-wars elsewhere following the withdrawal of overburdened US forces, e.g. a Russian invasion of Eastern and Central Europe backed by the threat of nuclear attack, or an Israeli/Saudi/Emirati versus Iranian/Hezbollah war that destroys a substantial share of global oil production;
-economic catastrophe: a deep global depression; widespread blackouts; years of major famines and fuel shortages, leading to Sri Lanka-type riots in dozens of countries at once, with little chance of multinational bailouts;
-substantial decline in efforts to treat/reduce/vaccinate against HIV, malaria, antibiotic resistant infections (e.g. XDR/MDR tuberculosis), COVID-19, etc.
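To make that 5% comparison concrete, here is a minimal back-of-the-envelope sketch. Every number in it is a placeholder chosen purely for illustration, not an estimate; the point is only the structure of the expected-value comparison, not its conclusion.

```python
# Back-of-the-envelope expected-value comparison (illustrative only).
# Every number below is a hypothetical placeholder, not an estimate.

expected_deaths_if_war = 100_000_000   # placeholder: expected toll of a US-China war over Taiwan
status_quo_probability_shift = 0.05    # the "5% increase" in the chance the status quo holds

# Raising the probability of the status quo by 5 percentage points lowers
# the probability of war by the same amount, so expected deaths averted are:
expected_deaths_averted = expected_deaths_if_war * status_quo_probability_shift

# Placeholder for the combined impact of all non-x-risk EA efforts,
# expressed (very crudely) in the same unit of expected deaths averted.
non_xrisk_ea_impact = 1_000_000

print(f"Expected deaths averted by the 5% shift: {expected_deaths_averted:,.0f}")
print(f"Placeholder for non-x-risk EA impact:    {non_xrisk_ea_impact:,.0f}")
print("War-risk reduction dominates under these placeholders."
      if expected_deaths_averted > non_xrisk_ea_impact
      else "Non-x-risk EA work dominates under these placeholders.")
```

Whatever inputs you prefer, the comparison is simply the expected harm averted by the probability shift versus the expected good done by everything else; the disagreement is over the inputs, not the arithmetic.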
If your simplified approach to international relations is more realist than anything else, you probably believe that a major factor in whether war breaks out over Taiwan is the credibility of US deterrence.
How much of EA works on preserving, or else improving, the status quo between the US and China, whether through enhancing the credibility of US deterrence (the probable realist approach) or anything else? Very little. Is that due solely to calculation of risk? Is it also because the issue doesn't seem tractable? If so, that should at least be regularly acknowledged. Could the average EA's attitude to politics be playing a role?
To the extent that the US-China war risk is discussed in EA, I do not think it is done with the subtle political awareness that you find in non-EA national security circles. Compare e.g. the discussions here (https://forum.effectivealtruism.org/topics/great-power-conflict) with the writing of someone like Tanner Greer (https://scholars-stage.org/) and those he links to.
In case you are wondering, I have no strong opinion on which US political party would be better at avoiding WW3. There are arguments for both, and I continue to weigh them, probably incompetently. I do think it would be better if there were plenty of EAs in both parties.
I have no meaningful thoughts on how to decide whether unaligned AI or WW3 is a bigger threat. (Despite 30-40 hours of reading about AI in the past few months, I still understand very little.)
One alternative approach I've read that is well written and offered in good faith is Bruce Wydick's book "Shrewd Samaritan" [0].
It's a Christian perspective on doing good, and it arrives at many conclusions similar to those of effective altruism. The main difference is an emphasis on "flourishing" in a more holistic way than is typical of a narrowly focused effective charity like AMF. Wydick relates this to the Hebrew concept of Shalom, that is, holistic peace, wellbeing, and blessing.
In practical terms, this means that Wydick more strongly (compared to, say, GiveWell) recommends interventions that address more than one aspect of wellbeing: for example, child sponsorships or graduation approaches, in which poor people receive an asset (cash, a cow, or similar), plus the ability to save (e.g., a bank account), plus training.
I believe that these approaches fare pretty well when evaluated, and indeed there are some RCTs evaluating them [1]. These programs are more complex to evaluate, however, than programs that do one thing, like distributing bednets. That said, the rationale that "cash + saving + training > cash only" is intuitive to me, and so this might be an area where GiveWell/EA is a bit biased toward stuff that is more easily measurable.
[0]: https://www.goodreads.com/book/show/42772060-shrewd-samaritan
[1]: https://blog.brac.net/ultra-poor-graduation-the-strongest-case-so-far-for-why-financial-services-must-be-a-part-of-the-solution-to-extreme-poverty/