
I appreciate the desire to be charitable and focus only on the literal truth or falsity of the criticism (or what it omits), but I fear that to explain what's going on with this reaction to EAs, it's necessary to look at the social motivations for why people like to rag on EA.

And, at its core, I think it's two-fold.

First, people can't help but feel a bit guilty in reaction to EAs. Even if EAs never call anyone out, if you've been donating to the Make-A-Wish Foundation it's hard to hear the EA pitch and not start to feel guilty rather than good about your donation. Why didn't you donate to people who needed it much more?

A natural human tendency in response to this is to lash out at the people who made you feel bad. I don't have a good solution to this issue but it's worth keeping in mind.

Second, EA challenges a certain way of approaching the world, echoing the tension between STEM folks and humanities folks (and the analytic/continental divide as well).

EA encourages a very quantitative, literal-truth-oriented way of looking at the world. Instead of looking at the social meaning of your donations, what they say about your values and how that comments on society, it asks us to count up the effects. At an aesthetic level, many people find this unpleasant and antithetical to how they approach the world. In other words, it's the framework itself that gives most of the offense, not the actual outcomes. You could imagine pitching the same conclusions differently, not as biting hard bullets but as raising the status of things people already approved of, and I think the reaction would be different.


And TBF to the critics, I do understand why they react so negatively when people justify weird longtermist projects, or spending money on AI safety research, as EA.

When it was just bed nets, I think most people weren't too bothered, but they see this other shit and think:

"Hey, those EA folks are lecturing us on doing what has the most impact, and it turns out they're just hypocritically using that to justify giving to whatever cause feels important to them."

And yes, there is a lot of truth to that. Of course, I'm inclined to see it as people at least agreeing on the right principle and then making the normal human mistakes about how best to interpret it.

But it's easy to see why this feels like an affront to the sort of person who tends to see the world less in the STEM/literal way and more in the commentary-on-values/groups way (probably better understood as high vs. low decouplers).

They aren't seeing the question of whether we should give in the way that makes the most impact as an independent question of fact. They tacitly assume that the only reason you'd say that is to criticize those who give based on what feels important to them.

And so they inherently see EAs as engaged in a kind of moral lecture (we're better than you) and as such respond with the normal anger people feel when a moral scold is revealed to be hypocritically engaged in the same kind of behavior.

--

Ofc I'd prefer philosophy not do this. But then again, I take a very high-decoupler approach: I see the only value of philosophy as trying to figure out which claims are true, and I tend to see the parts of the subject that don't embrace decoupling (the less analytic stuff) as simply mistakes to be eradicated.

So I'm hardly the one to say how to fix this problem, since I kinda embody the attitude that upsets the low decouplers in the first place, and I do see their approach as wrongheaded.


Yeah, that sort of low-decoupling is just inherently antithetical to philosophy (and academic ideals more generally), IMO.


I think Anscombe would disagree with that. Same with her followers: the neo-Aristotelians and (to a lesser extent) the Rawlsians. These people also appeal to Wittgensteinian phil of language. I've never read Wittgenstein, but perhaps he would also disagree.

Maybe Quine too. Wasn't his point that everything is connected?

They'd acknowledge that some decoupling is good, but not total decoupling.

This is why their theories can only be modeled through machine learning, not English.
