26 Comments
Dec 1, 2023Liked by Richard Y Chappell

“So many Twitter critics confidently repeat some utterly conventional thought as though the mere fact of going against the conventional wisdom is evidence that EAs are nuts.”

If you replace the word “evidence” with “conclusive proof,” this accurately sums up literally every criticism of longtermism I’ve ever read.

Dec 2, 2023·edited Dec 2, 2023Liked by Richard Y Chappell

I think I can explain how someone can be hostile to EA ideas despite their being obviously true. It's not that they're hostile to the ideas themselves but rather to people consciously adopting them as goals. More generally, I think the EA criticisms you see are, low-decoupling style, mostly not criticisms of the ideas on their own merits but rather (very strong and universal) claims about the psychological and social effects of believing them, effects which critics take to cause problems that are not unique to current EA institutions but basically intrinsic to any human attempt to implement EA ideas.

The claim is something like: the idea that you should seek to do cause-impartial beneficent good is such brain rot, and so terribly corrosive to human motivations, that even though on paper it seems like there's no possible way that pursuing this idea could be worse than never thinking about it, in real life it just destroys you.

According to these critics, every time anyone tries to adopt this as a deliberate goal it's like picking up the One Ring, and you're nearly guaranteed to end up in ruinous failure because...? There are a bunch of reasons offered, some of which contradict each other. One is that it smuggles in being okay with the status quo and not being okay with overthrowing modern civilization to produce something else. Another is that it sets you up to be easily manipulated, because it sets a goal so distant and broad that you can justify anything with your biases and/or be tricked. Another is that it gives you a sense of superiority over everyone else around you and lets you take actions only very distantly connected to doing good in the here and now, which means that you can always justify pretty much any bad thing you want to do as being part of the greater good. Another is that if you do believe in EA for real, it just corrodes your soul, stops you from having close human relationships, and lets you neglect side constraints and instinctive warning signs that you're going wrong.

The claim isn't that any of these are intrinsic features of the ideas, but just that if you start believing strongly enough that you should do impartially beneficent good, because of the way human minds work, you'll get captured and possessed by this mindset and turn into a moral monster no matter what.

So on this view if you do care about impartial beneficent good you have to do something like trick yourself into thinking that's not what you really want and pursue some more narrow local project with more tight feedback loops. BUT of course you have to forget that this is why you did it, and forget the act of forgetting... doublethink style.

And obviously there's no real evidence given that this is how it necessarily goes, other than pointing at a few high-profile EA failures, as if there aren't also high-profile failures all over the place in more local and partial attempts to do good. (And as if the usually preferred alternative of starting an anti-capitalist revolution doesn't have every problem just listed, to a far greater extreme.)

It's essentially a conspiracy theory/genetic fallacy psychoanalysis argument. And this view also can't account for the good that EA has unequivocally done except to say something like "oh that all happened before you got fully corrupted/as an accidental contingent side effect on the way to full corruption".

And of course it's also diametrically opposed to the point you quote at the start of your post: EA ideas are somehow both obvious tautologies and so extreme and strange that taking them seriously cores open your brain and instantly turns you into a moral monster.

Dec 1, 2023Liked by Richard Y Chappell

Hard agree, and as someone who took the devil's advocate view in your previous post, I should say clearly: while there's much to quibble with, I think the mainstream EA movement is obviously tremendously thoughtful about these issues, and has done a great job promoting some obviously excellent, previously neglected causes. There is a huge amount to admire, both on an intellectual level and a movement level.

author

Thanks! And I do like your other comment - https://rychappell.substack.com/p/why-not-effective-altruism/comment/44597996 - as it connects to Hoffman's reasonable worry about the more speculative stuff getting "captured" and not ultimately proving effective after all. (Though an important difference is that I take it there's *no* plausible pathway-to-impact for the "put-your-name-on-an-art-museum form of charity", whereas I think there are plausible pathways for how the speculative wings of EA could be doing really vital work.)

Dec 1, 2023Liked by Richard Y Chappell

Yeah, I agree with that as well. I think it's a feature of EA that they are *thinking* about longtermism and wild animal extinction and all that weird stuff, even if I think it's very likely a mistake to reorient one's charitable donations all that much in that direction.

Dec 4, 2023Liked by Richard Y Chappell

In my bubble, a prominent reason for not being impartially good is respect for one's parents and religion. Our parents teach us what is moral and we follow their example. When our parents cook a turkey on Thanksgiving, we appreciate that, and emulate it, and pass the tradition on to our own children. When our religion lays out moral rules, those who want to be good followers take these seriously, and put a lot of moral weight on widows and orphans and embryos in their local church.

Departure from these morals is difficult and often seen as disrespectful. When children become vegan and avoid the Thanksgiving tradition, that can lead to tough arguments and seems ungrateful. When people suggest that some number of shrimp would be more important than a human life, it's an attack on moral roots that are thousands of years old.

In my experience, and maybe contrary to your article, people are quite open to thinking about how to lead a good life. But they (and I myself) find it difficult to put those thoughts into practice when they go against tradition and the wisdom of our ancients.

Dec 2, 2023·edited Dec 2, 2023Liked by Richard Y Chappell

I think a lot of people (e.g. Ross Douthat) worry that there's a slippery slope from thinking about effectiveness to more "sinister" aspects of utilitarianism. Of course this is a slippery slope fallacy in Theory, but many who recognize this still think it's a healthy part of the Practice of avoiding "naive maximization".

author

I'm not even sure that there's a slope there at all, let alone a slippery one. Both theoretically and psychologically, violating constraints against harm is completely independent from favouring more good over less *when all else is equal*. It's like two completely different moral axes, and the only reason to treat one as tending towards the other is that we attached a label - "utilitarianism" - to their point of intersection.

Dec 2, 2023·edited Dec 2, 2023

I figured you would say that. But lack of constraints against harm is not the only thing that bothers people about utilitarianism. I think there are several axes of concern that are not orthogonal to the effectiveness one. For example, community obligations: Paying to efficiently feed starving people in Africa instead of paying for highly expensive care for elderly Americans (whether they're relatives or otherwise) seems appalling to many.

Of course, like most of my comments, I'm playing devil's advocate.

author

Special obligations strike me as similarly orthogonal to efficient beneficence? If it's really an obligation, then that's just another constraint that you're not allowed to violate. Still, *within the range of the morally permissible*, we should always prefer better outcomes over worse ones. This seems like a very stable, commonsensical sort of position!

Dec 3, 2023Liked by Richard Y Chappell

On a descriptive level, I don't think most people treat most of what get called "special obligations" as actual obligations, but rather as a weighting of moral patients, which works similarly to the way I weight moral patients on the basis of rough, uncertain estimates of their level of sentience and types of experience. Sure, there are a lot of beliefs about how to treat one's own children or spouse that look like side constraints, but the motivation to pay more for less benefit to compatriots versus foreigners looks like an indefensible weighting on the single axis of universal well-being.

author

Yeah, the only kind of partiality that seems remotely reasonable to me is grounded in close personal relationships. Partiality towards mere compatriots seems clearly unjustified. That said, there could be some special arrangements concerning pensions, healthcare, etc., that one thinks are owed by states to their citizens as a matter of political obligation that's distinct from general beneficence. (Not a fan of such a view myself, but I'm trying to be as ecumenical as possible here.)

Dec 2, 2023·edited Dec 2, 2023

A lot of what underlies "special obligations" ideology is just the belief that we should prioritize beings that are salient to us and that we feel empathy for. In that sense, people have *special obligations* to those cute dogs they see on tv ads but not to (less cute) broiler chickens that they don't see on tv ads.

Once you unpack "special obligations" in this way, you start to see that they're ubiquitous, and your "all else equal" clause looks much less innocuous.

Indeed, much of what you call mere "effectiveness" is really a substantive claim that our goals should involve counting the welfare of two different things equally. And that is often very radical (as you say), even in a mundane case like the dogs vs. chickens. In that sense, I don't think effectiveness and community/salience prioritization are totally orthogonal.

author

I think you're conflating genealogy with content. It's possible that claims about "special obligations" are subconsciously motivated by salience, limited empathy, etc. But if so, that sounds like a debunking explanation. I don't think defenders of special obligations would *endorse* that account. So I don't think they would claim that we have special obligations to puppies.

If it really did end up functioning in the way you suggest, as a kind of universal barrier to allowing *any* space for impartial beneficence, then I would have to argue against *that* form of not-so-special-anymore obligations.

But so long as they can be limited in scope, as I think their defenders usually allow, then there's no barrier to combining special obligations with beneficentrism.

Dec 2, 2023·edited Dec 2, 2023Liked by Richard Y Chappell

Well I think Bernard Williams would accept my account just fine - see for example his Princeton lecture where he said he's against aliens.

I'd guess many defenders of special obligations would admit that salience and limited empathy are at least part of what is involved in special obligations, and that there isn't always a sharp difference between those and whatever "deeper" forces they like to think ground the more noble instances of these obligations.

Thanks for your engagement. I feel that you have heard my points.


No one really believes in obligations. They're a codeword for higher priority on one thing vs. another. For this reason, it's often not useful to think of them in terms of constraints. Outside of quantum mechanics nothing is ever discrete.


I also think you're hiding a lot of substantive content in that *when all else is equal* clause.

Dec 2, 2023·edited Dec 2, 2023Liked by Richard Y Chappell

I'm not sure whether I care about animals more than you do. I'd guess that the major difference between myself and sentient-eating consequentialists is likely to be that I think the time is especially ripe for massive societal progress on the issue, particularly because almost all of the non-trivial tradeoffs at this point are the result of negative network effects (e.g. difficulty finding a restaurant, strain on relationships, time spent explaining), whereas other forms of moral low-hanging fruit have substantially more inelastic costs.


The case you are making only suffices to demonstrate that EA is yet another group that cares about doing good (which is great); but being a group that cares about doing good doesn't make it as unique as you seem to believe. (The "most people then just go and donate to the dog shelter" part of this makes it sound as if you think the "normies" are very simple-minded.)

For example, it isn't just weird utilitarian effective altruist nerds who care about factory farming. The anti-factory-farming movement has been around a lot longer than EA. Switzerland banned battery cages for hens in 1992, and the whole EU did in 2012 (under a law passed in 1999). Was that the effective altruists?

author

Some people were effective altruists (i.e., motivated by an optimizing impartial concern for the general good as such) before the term "Effective Altruism" was coined.

Also, some people have especially good, high-impact specific concerns without having any cause-agnostic concern for the general good.


Well, isn't this the point Freddie deBoer was making, that the good parts are mostly the parts that would have been considered good before the movement was started, and the weird parts are weird?

author

I'm rejecting the conflation of "good" with "socially conventional". We should want people to promote the good, not just to promote the good in conventionally-accepted ways.


So is EA a method for discovering what is good, even though social convention might not accept it, or is it just a way of effectively funding/advancing causes that people are inclined to accept as good? (or both?)

Dec 2, 2023·edited Dec 2, 2023

Both.


What has EA discovered that is good, that isn't conventionally accepted?
