tl;dr: It actually seems pretty rare for people to care about the general good as such (i.e., optimizing cause-agnostic impartial well-being), as we can see by their hasty dismissals of EA concern for non-standard beneficiaries.
Introduction
Moral truisms may still be widely ignored. The moral truism underlying Effective Altruism is that we have strong reasons to do more good, and that it’s worth adopting the efficient promotion of the impartial good as one among one’s many life projects. Anyone who personally adopts that project (to any non-trivial extent) counts, in my book, as an effective altruist (whatever their opinion of the EA movement and its institutions). Many people don’t adopt this explicit goal as a personal priority to any degree, but still do significant good via more particular commitments (from parenthood to local community groups and causes). That’s fine by me, but I do think that even people who aren’t themselves effective altruists should recognize the EA project as a good one. We should all generally want people to be more motivated by efficient impartial beneficence (on the margins), even if we don’t think it’s the only thing that matters.
A popular (but silly) criticism of effective altruism is that it is entirely vacuous. As Freddie deBoer writes:
[T]his sounds like so obvious and general a project that it can hardly denote a specific philosophy or project at all… [T]his is an utterly banal set of goals that are shared by literally everyone who sincerely tries to act charitably.
This is clearly false. As Bentham’s Bulldog replies, most people merely pay lip service to doing good effectively. But then they go and donate to local children’s hospitals and puppy shelters, not to combating neglected tropical diseases or improving factory-farmed animal welfare. DeBoer himself dismisses without argument “weird” concerns about shrimp welfare and existential risk reduction, which one very clearly cannot dismiss as a priori irrelevant if one actually cares about promoting the impartial good.
The fact is, an open-minded, cause-agnostic concern for promoting the impartial good is vanishingly rare. As a result, the few people who sincerely have and act upon this concern end up striking everyone else as extremely weird. We all know that the way you’re supposed to behave is to be a good ally to your social group, and to do normal socially-approved things that signal conformity and loyalty (and perhaps a non-threatening degree of generosity towards socially-approved recipients). “Literally everyone” does this much, I guess. But what sort of weirdo starts looking into numbers, and argues on that basis that chickens are a higher priority than puppies? Horrible utilitarian nerds, that’s who! Or so the normie social defense mechanism seems to go (never mind that efficient impartial beneficence is not exclusively utilitarian, and ought rather to be a significant component of any reasonable moral view).
Let’s be honest
Everyone is motivated to rationalize what they’re antecedently inclined to do. I do plenty of suboptimal things myself, due to both (i) failing to care as much as would be objectively warranted about many things (from non-cute animals to distant people), and (ii) being akratic and failing to be sufficiently moved even by things I value, like my own health and well-being. But I try to be honest about it, and recognize that (like everyone) I’m just irrational in a lot of ways, and that’s OK, even if it isn’t ideal.
Vegans care more about animals than I do, and that’s clearly to their credit. I try to compensate through some high-impact donations, and I think that’s also good (and better than going vegan without the donations). I encourage others to do likewise.
Most people have various “rooted” concerns, linked to particular communities or causes to which they have a social or emotional connection. That’s all good. Those motivations are an appropriate response to real goods in the world. But we all know there are lots of other goods in the world that we don’t so easily or naturally perceive, and that could plausibly outweigh the goods that are more personally salient to us. The really distinctive thing about effective altruism is that it seriously attempts to take all those neglected interests into account. As I wrote in “Level-Up Impartiality”:
[Imagine] taking all the warmth and wonder and richness that you’re aware of in your personal life, and imaginatively projecting it into the shadows of strangers.
We glimpse but a glimmer of the world’s true value. It’s enough to turn our heads, and rightly so. If we could but see all that’s glimpsed by various others, in all its richness, depth, and importance, we would better understand what’s truly warranted. But even from our limited personal perspectives, we may at least come to understand that there is such value in everyone, even if we cannot always grasp it directly. And if we strive to let that knowledge guide our most important choices, our actions will be more in line with the reasons that exist—reasons we know we would endorse, if only we could see them as clearly as we do the ones in our more personal vicinity.
And yes, from the outside this may look like being moved by drab shadows rather than the vibrant values we grasp closer to home. But of course it isn’t really the shadows that move us, but the promise of the person beneath: a person every bit as complex, vulnerable, and vibrant as those you know and love.
Such impartiality involves a very distinctive—you might even say weird—moral perspective. I think it should be generally recognized as a good and admirable one, but I don’t think it is common. Few people who give to charity make any serious effort to do the most good they can with the donation. Few people who engage in political activism are trying to do the most good they can with their activism. Few people pursuing an “ethical career” are trying to do the most good they can with their career. And that’s all fine—plenty of good can still be done from more partial and less optimizing motives (and even EAs only pursue the EA project in part of their life). But the claim that the moral perspective underlying EA is “trivial” or already “shared by literally everyone” is clearly false.
I wonder if part of the resistance to EA may stem from people not wanting to admit that they actually aren’t much motivated by a cause-agnostic concern for the general good. Maybe it sounds like an embarrassing thing to admit, because surely the general good is a worthy thing to aim at!
Maybe it would help to make the implications more explicit. To have a cause-agnostic concern for the impartial good, you have to be open to the possibility that shrimp welfare might matter more than saving a human life. Most people probably don’t want to be open to that possibility. Maybe what they really want is more speciesist, like helping humans effectively. Further, maybe they don’t want to be open to the possibility that a 10% chance of saving a million lives is better than saving 1000 for certain (or that even the tiniest probabilities, if sufficiently well-grounded, could take priority over sure things). So maybe they really just want to do something like make a near-certain positive difference to human well-being. That’s a fine goal too, and maybe a sufficient basis to make use of some effective altruist resources like GiveWell. But again, it’s very different from simply caring about impartial value as such.1
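To make the expected-value arithmetic behind that comparison explicit (a simplified sketch that just multiplies probability by lives saved, setting aside risk-aversion and diminishing returns): 0.10 × 1,000,000 = 100,000 expected lives saved, versus 1,000 saved for certain. On plain expected-value reasoning the gamble wins by a factor of a hundred, and that is precisely the sort of verdict many people would rather not be open to.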
I don’t think anyone needs to be embarrassed about having narrower concerns along these lines. Lots of them are really good! But I do think a broader concern is better—just like saving distant kids from dying of malaria is better than providing local kids with more cultural opportunities (even though the latter is also good!). So I always like to encourage folks to question whether their current moral efforts are well-focused, and consider shifting their attention if they realize that they could do more good elsewhere. I think it’s especially worth noting that our collective moral attention often gets caught on relatively less-important (but highly divisive) issues, so it’s worth trying not to get sucked into that.
These recommendations flow from taking seriously the ideas of effective altruism, regardless of what you think of the actually-existing EA movement and institutions. If nothing else in your life regularly prompts you to think about whether your moral efforts are optimally allocated, then you should probably admit that the effective altruist project is actually pretty distinctive.
OK, but what about the actual movement/institutions?
I’m also a fan of those: I know the GWWC pledge has helped me to do far more good than I otherwise would have.
See Scott Alexander’s two-part defense of the EA movement as (i) having done a lot of actual good, and (ii) providing the social scaffolding to encourage people to actually put their beneficent motivations into practice.
For the best critical piece that I’m aware of, see Benjamin Ross Hoffman’s “Effective Altruism is Self-Recommending”. It’s a totally fair worry that it’s hard to assess whether the more speculative branches of EA are actually effective at achieving their goals or not. I’m generally pretty trusting of these folks, but have no beef with those who judge things differently.2
1. That said, there are probably many departures from strict “maximizing impartial expected value” that could still count as close enough for practical purposes. One could add prioritarian weighting for the worst-off, or some modest degree of risk-aversion or ambiguity-aversion, etc. So I don’t mean to be making any strict proclamations here about precisely where to draw the line to qualify as having some concern for “cause-agnostic impartial value” per se. My point is just that anything in this remote vicinity is pretty radically different from what most people are actually concerned with.
2. That said, anyone who thinks it’s obvious that any of the actually-existing branches of EA is “terrible” probably isn’t open to cause-agnostic value-promotion, since there’s a strong prima facie case to be made for all the main EA cause areas. You can certainly come to different verdicts at the end of the day, but if you think mainstream EAs are obviously mistaken in their priorities then I think there’s a good chance that you’re just being closed-minded in pre-judging the matter.
“So many Twitter critics confidently repeat some utterly conventional thought as though the mere fact of going against the conventional wisdom is evidence that EAs are nuts.”
If you replace the word “evidence” with “conclusive proof,” this accurately sums up literally every criticism of longtermism I’ve ever read.
I think I can explain how someone can be hostile to EA ideas despite their being obviously true. It's not that they're hostile to the ideas themselves, but rather to people consciously adopting them as goals. More generally, I think the EA criticisms you see are, low-decoupling style, mostly not criticisms of the ideas on their own merits, but rather (very strong and universal) claims about the psychological and social effects of people believing them: effects which, these critics think, cause problems that are not unique to current EA institutions but basically intrinsic to any human attempt to implement EA ideas.
The claim is something like this: the idea that you should seek to do cause-impartial beneficent good is such brain rot, and so terribly corrosive to human motivation, that even though on paper there seems to be no possible way that pursuing it could be worse than never thinking about it, in real life it just destroys you.
According to these critics, every time anyone tries to adopt this as a deliberate goal it's like picking up the One Ring, and you're near-guaranteed to end up in ruinous failure because...? There are a bunch of reasons offered, some of which contradict each other. One is that it smuggles in being okay with the status quo and not being okay with overthrowing modern civilization to produce something else. Another is that it sets you up to be easily manipulated, because it sets a goal so distant and broad that you can justify anything with your biases and/or be tricked. Another is that it gives you a sense of superiority over everyone else around you and lets you take actions only very distantly connected to doing good in the here and now, which means that you can always justify pretty much any bad thing you want to do as being part of the greater good. Another is that if you do believe in EA for real, it corrodes your soul, stops you from having close human relationships, and leads you to neglect side constraints and the instinctive warning signs that you're going wrong.
The claim isn't that any of these are intrinsic features of the ideas, but just that, because of the way human minds work, if you start believing strongly enough that you should do impartially beneficent good, you'll get captured and possessed by this mindset and turn into a moral monster no matter what.
So on this view, if you do care about impartial beneficent good, you have to do something like trick yourself into thinking that's not what you really want, and pursue some narrower local project with tighter feedback loops. BUT of course you have to forget that this is why you did it, and forget the act of forgetting... doublethink style.
And obviously no real evidence is given that this is how it necessarily goes, other than pointing at a few high-profile EA failures, as if there aren't also high-profile failures all over the place in more local and partial attempts to do good. (And as if the usually preferred alternative of starting an anti-capitalist revolution doesn't have every problem just listed, to a far greater extreme.)
It's essentially a conspiracy theory/genetic fallacy psychoanalysis argument. And this view also can't account for the good that EA has unequivocally done except to say something like "oh that all happened before you got fully corrupted/as an accidental contingent side effect on the way to full corruption".
And of course it's also diametrically opposite to the point you quote at the start of your post, i.e., that EA ideas are both obvious tautologies and so extreme and strange that taking them seriously cores open your brain and instantly turns you into a moral monster.