34 Comments
Apr 5 · Liked by Richard Y Chappell

I appreciate the desire to be charitable and focus only on the literal truth or falsity of the criticism (or what it omits), but I think that to explain this reaction to EAs it's necessary to look at the social motivations for why people like to rag on EA.

And, at its core, I think it's twofold.

First, people can't help but feel a bit guilty in reaction to EAs. Even if EAs never call anyone out, if you've been donating to the Make-A-Wish Foundation it's hard to hear the EA pitch and not start to feel guilty rather than good about your donation. Why didn't you donate to people who needed it much more?

A natural human tendency in response to this is to lash out at the people who made you feel bad. I don't have a good solution to this issue, but it's worth keeping in mind.

Second, EA challenges a certain way of approaching the world, echoing the tension between STEM folks and humanities folks (and the analytic/continental divide as well).

EA encourages a very quantitative, literal, truth-oriented way of looking at the world. Instead of looking at the social meaning of your donations, what they say about your values and how that comments on society, it asks us to count up the effects. At an aesthetic level this is something many people find unpleasant and antithetical to how they approach the world. In other words, it's the framework that gives most of the offense, not the actual outcomes. You could imagine pitching the same conclusions in a different way, not as biting hard bullets but as raising the status of things the audience approves of, and I think the reaction would be different.

Apr 5 · Liked by Richard Y Chappell

And, to be fair to the critics, I do understand why they react so negatively when people justify weird longtermist projects, or spending money on AI safety research, as EA.

When it was just bed nets, I think most people weren't too bothered, but they see this other shit and think:

"Hey, those EA folks are lecturing us on doing what has the most impact, and it turns out they're just hypocritically using that to justify giving to whatever cause feels important to them."

And yes, there is a lot of truth to that. Of course, I'm inclined to see that as at least people agreeing on the right principle and then making the normal human mistakes about how best to interpret it.

But it's easy to see why this feels like an affront to the sort of person who tends to see the world less in the STEM/literal way and more in the commentary-on-values/groups/etc. way (probably this is better understood as high vs. low decouplers).

They aren't seeing the question of whether we should give in the way that makes the most impact as just an independent question of fact. They tacitly assume that the only reason you'd say that is to criticize those who are just giving based on what feels important to them.

And so they inherently see EAs as engaged in a kind of moral lecture (we're better than you) and as such respond with the normal anger people feel when a moral scold is revealed to be hypocritically engaged in the same kind of behavior.

--

Of course, I'd prefer philosophy not do this. But then again, I take a very high-decoupler approach: I see the only value of philosophy as trying to figure out what claims are true, and I tend to see the parts of the subject that don't embrace decoupling (the less analytic stuff) as simply mistakes to be eradicated.

So I'm hardly one to say how to fix this problem, since I kinda embody the attitude that upsets the low decouplers in the first place and that sees their approach as wrongheaded.

Author

Yeah, that sort of low-decoupling is just inherently antithetical to philosophy (and academic ideals more generally), IMO.

Apr 6 · edited Apr 6

I think Anscombe would disagree with that. Same with her followers: neo-Aristotelians and (to a lesser extent) Rawlsians. These people also appeal to Wittgensteinian philosophy of language. I've never read Wittgenstein, but perhaps he would also disagree.

Maybe Quine too. Wasn't his point that everything is connected?

They'd acknowledge some decoupling is good but not total decoupling.

This is why their theories can only be modeled through machine learning, not English.

Apr 5 · Liked by Richard Y Chappell

"I think it’s almost always possible to find a responsible way to express your beliefs. And it’s usually worth doing so: even Good Things can be further improved, after all. (Or you might learn that your beliefs are false, and update accordingly.)"

I think the internet has gotten further and further into the unfortunate situation where honesty and nuance are disincentivized. EA is a reasonable and good moral idea. So, how can you attack it? Be ridiculous, inflammatory, and misleading; highlight -- and create (I'm thinking of Bostrom) -- scandals. It wouldn't be surprising if the thought process of some people went like this: EA is getting attention --> I can get attention critiquing EA --> I will find reasons to oppose EA --> oh, there aren't really very many; let me turn my snark up to 100 and highlight all their weirdness. Then people profit by getting clicks. A major issue of our time.


I also see that if someone commits strongly to socialism, making it their identity, then even if they are vegan they attack effective altruists for not taking "the systemic issues of capitalism" seriously. If they instead made "doing good" or "making the world a better place" their identity and searched for whatever system achieves that goal, they would not attack effective altruism. And, well, they would not attack real capitalism either, because real capitalism does more to maximize wellbeing than real socialism does.

Apr 6 · Liked by Richard Y Chappell

You've voiced something I think about every time I see people criticise EA in the way you describe. I don't think they realise that they are literally arguing against helping others. Do they consider that for everyone they convince, fewer people get life-saving medication, and more animals are subjected to torture? How do people (especially leftists) not realise that's crazy?


*Somebody* has to argue against helping others. Charitable giving/thinking has its place, but it can't really be the basis of society, so it's important that people take a balanced approach. And for people to take a balanced approach, it can be useful to hear more extreme voices on both sides.

Apr 5 · Liked by Richard Y Chappell

Another excellent post, Richard! Great work!


The phrase "things aren't as simple as [true statement]" is almost always misleading.


Apply the nearest and dearest test to vaccines. Before rolling them out, imagine that all of the victims are your family members and none of the people saved are. That's surely a responsible way to reason about things.

Wenar's article was just terrible. Good response!


Whether or not the relevant conditions hold, is it really a productive way to have a conversation to start out by comparing your opponents to anti-vaxxers? (I'm EA, btw.)

Author

It's the clearest way to bring out the substance of my criticism. My central aim here is to clearly communicate what's important and true, not to "have a conversation" with the misguided.


If you only talk to people who agree with you, then you didn’t communicate anything. Antagonizing “the misguided” doesn’t seem helpful. If the goal is to do good, then it seems to me to be valuable to carefully consider whether your rhetoric will best win allies.

Author · Apr 19 · edited Apr 19

My main target audience is the sympathetic undecided: those who don't already agree with me, but are receptive to thinking clearly once it's brought to their attention. Antagonizing the deeply misguided few (who are already extremely antagonized -- did you read Wenar's article?) strikes me as a trivial cost by comparison.

I don't optimize my communication for "winning allies". I optimize for communicating importance-weighted truth. This very series explains why I think it's valuable for academics to fill this role. It would absolutely reduce the truth content for me to refrain from making clear that Leif Wenar's article was both intellectually and morally atrocious in just the way I describe with the anti-vax analogies. I'm not going to refrain from forceful criticism when it is called for, and I don't think it would be "helpful" to fail to make my objections as clear and vivid as possible to the "sensible middle" (e.g. typical philosophers and philosophy-adjacent audiences) who are my target audience.

Apr 19 · Liked by Richard Y Chappell

Thank you for explaining. That makes sense.


(Which you can try to do without reducing the truth content)

User was temporarily suspended for this comment.
Author · Apr 13 · edited Apr 13

Comment sections are for (on-topic, relevant) replies to the post. If you want to write off-topic, do so on your own blog. (24 hour ban)


It's only off-topic if you do not consider a healthy democracy to be a fundamental instrument/condition for the minimization of suffering in the modern world. If you don't, then stating so would be quite illuminating for context. (We find a spectrum, not only on Scott's blog, but widely in influencer culture, from those who say tear it all down because it's too badly broken to those advancing authoritarian models -- with most of the in-crowd [Joe Rogan et al.] portraying pro-democracy voices as hysterical -- which means the culture is gearing up for the continued vandalism of liberal institutions.) Otherwise, my scope is fully and pragmatically relevant to the sweeping concerns you have been presenting here, given the current conditions on the ground. BTW, I have no blog or hidden agenda, which is why I participate in open public forums such as yours from time to time, when I have something to offer that others haven't already said. Merely stating my areas of agreement would be a waste of space.

Author

Actually, no, the topic of this post is *not* an invitation to brainstorm "fundamental conditions for the minimization of suffering", so that is still (very obviously) off-topic. The topic is "anti-philanthropic misdirection". Comments should engage with WHAT I WROTE. Period.

If you disagree with a specific claim I made in the post, then by all means explain your disagreement. But if you can't tell the difference between *engaging with the SPECIFICS of what I wrote* and *a random rant about Scott Alexander*, then I can only reiterate that this is not the comments section for you. (If you don't already have a blog, you can always start one.)


Bottom line, you're in charge of declaring which Russian nesting dolls you'll abide in this space. But I reject your insinuations that I'm too feebleminded to discern the topic at hand. My scope is not yours, but that doesn't mean it's "random." You were discussing the biased, harmful effects of warped "pragmatic" reasoning. And then you referenced a long, colorful Scott rant to bolster your argument about how despicable it is to use bad reasoning to avoid giving (or, I would say, to avoid helping). I pointed out that Scott provides a great living example of warped pragmatic reasoning, due to his uninterrogated biases, and that in so doing he undermines the broadly stated goals of EA, even while writing quite reasonably in certain nesting dolls -- incidentally exemplifying the ongoing fungibility/subjectivity challenges of EA.

Though Scott's misdirection is anti-democratic (rather than "anti-philanthropic"), my point is certainly relevant, enough for a comment section anyway, since the shape is exactly the same: missing the forest for the trees, obsessing over emotionalized, anecdotal grievances (like a bad action by a charity, or a "woke" student) to nihilistically dismiss institutions meant to reduce suffering. Importantly, this particular flavor of misdirection is currently at the center of the most salient fight of our age, and EA is becoming a safe harbor for those whose bad reasoning tells them it's okay not to care when bad actors want to blow up centuries of progress because hip influencers assure them that all that progress was just a horrible wrong turn.

I would also submit that when you ban a commenter, you are navigating your own biases, which means you risk carelessly limiting diversity of thought and experience in exchange for the comfort of intellectual/aesthetic familiarity and control. I don't think it's good for expansive dialogue, and I think it leads to a dialectically incestuous community, but that's your business, I've offered my two cents, and you're right, it's not the kind of space I find valuable enough to keep returning to.

User was temporarily suspended for this comment.
Author · Apr 13 · edited Apr 13

What a bizarre comment. You can see my interests on my academic webpage -- http://yetterchappell.net/Richard/ (Many academics have rather narrow interests; if anything I think that's *less* true of the EAs I'm familiar with.)

But most importantly: how is this relevant to anything? You could respond to literally any argument for anything by saying, "Maybe we'd take you more seriously if you talked more about the things *we* are interested in," but that's just stupid and a total non-sequitur. If you're not interested in a topic, then don't bother joining the conversation.

(24 hour ban)


I already had some misgivings about your “moral misdirection” article to begin with, and seeing you use it in this context has amplified them. On the other hand, when I see that quote from Scott Alexander, in your footnote, I find it fairly sympathetic. Thinking it over, I realise that I’m reading Scott as participating in an argument on a first-order level: you’ve said your views and now I am going to say mine. By contrast, I’m reading you as participating on a second-order level: you’ve said your views and now I am going to explain why you should agree that you’re not allowed to say them.

It’s not that I disagree entirely with the notion that people should consider the impression their words will give as well as their denotative content. On the contrary: I scorn the “high decoupling is always better than low decoupling” claim precisely because it implies that such impressions are irrelevant. There is, instead, a golden mean here: it’s good to care about connotations as well as denotations, AND it’s good to cultivate the ability to hear a point that someone is trying to make even if they say it in a way that raises (even potentially justified) alarm bells. It’s not that we should decouple bad implications from potentially useful facts in order to only hear the latter, but that we should strive to be able to see both at once.

However, when it comes to second-order argumentation, I worry that you’re trying to get agreement on more slippery topics (such as the accuracy of an overall impression) as a way of dodging disagreements on simpler topics (such as whether GiveWell’s method of evaluating charities is worth anything at all). Someone who disagrees with you on the latter is unlikely to agree with your conclusions on the former.

If “moral misdirection” arguments just rally your supporters around an agreement that we can dismiss people on second-order grounds without addressing first-order disagreements, then I’m against them. But of course it’s possible that I am reading you incorrectly, and that you are instead merely defending first-order arguments that also address connotations and impressions. Certainly, no reasonable person ought to object to the latter. However, precisely because no reasonable person would object to the latter, it would make more sense to simply take it as given and go ahead and make your first-order counter-arguments.

Author

I thought I was very explicit that you're (pretty much) always "allowed to say" your views -- see the sections on "how to criticize good things responsibly" (previous post) and "responsible criticism" (this post). I'm rather offering a general view about *which ways* of expressing one's views are more or less intellectually responsible.

This general view about responsible discourse is as open to evaluation as any other moral claim. I'm most interested in first-order disagreement about whether it is *true* or not. But if you think my way of expressing my view threatens to predictably reduce the importance-weighted accuracy of my audience's beliefs about the matter, that would also be important to note!


The problem is, when you stipulate that criticism has to be “responsible” in order to be permissible, you’re introducing a secondary debate about what “responsible” means in this context. People who disagree with you on object-level claims are more likely to also disagree with you on second-order claims about which statements are “responsible.” Accordingly, second-order requirements like this ought to be loosely and generously applied, recognising that subjective impressions of “responsible” tone can be even harder to agree on than object-level claims.

So let’s apply your requirements, loosely and generously, to the case of Leif Wenar’s article. You have two main points. Firstly, people who agree with TSPG (or “other important neglected truths”) should clarify that up front. Secondly, people should only give the impression that “EAs are stupid and wrong about everything” if they actually believe this, in which case they should explain why they think this is true.

Wenar has done both of these things. I don’t think he agrees with TSPG as EAs apply it in practice, but he does affirm an alternate neglected principle — namely, that of ethical cosmopolitanism. He writes “Hundreds of millions of people were living each day on less than what $2 can buy in America. Fifty thousand people were dying every day from things like malaria and malnutrition. Each of those lives was as important as mine.” He’s also clear that he thinks most Effective Altruists are well-meaning people who sincerely want to help. And, yes, I think he is sincere in believing that EA is wrong about most things. He relates having spent a long time in his own sincere effort to find out what kinds of large-scale international aid are truly effective. His conclusion seems to be that it mostly isn’t.

Accordingly, I think this is best dealt with as an object-level disagreement. Wenar is giving the impression that EAs are foolish and wrong because, based on his own attempts to do what they are doing, he sincerely believes that they are, in fact, foolish and wrong. Calling him “irresponsible” for accurately conveying this belief is missing the point.

Author

He explicitly disavows aid skepticism, and does not deny that GiveWell is effective. What he denies is that it *only* does good. But this is irrelevant. (As I discuss: he also, very stupidly, accuses them of dishonesty for not highlighting rare harms on their front page.)

I'm suggesting that it's irresponsible for him to highlight rare harms if he doesn't actually think (or at least can't credibly support the claim) that these harms outweigh the benefits.


Since Wenar's actual underlying beliefs are pretty important to evaluating whether he is being "responsible" by his own lights, I decided to take a look at "Poverty is no Pond." It's here: https://files.givewell.org/files/DWDA%202009/Interventions/Wenar%202010.pdf (Yes, I note the irony that GiveWell is hosting it. Let's send an appreciative nod their way for being the best resource on their own critics, shall we?)

In "Poverty is no Pond," Wenar says he "will make no overall assessment whatsoever about whether aid of any type does more good than harm, or more harm than good. Nor does the paper aim to discourage individuals in any way from acting to relieve poverty by discussing the challenges of doing so." That's complex. Later on, he suggests that between "believers" and "atheists" about aid, the correct move may be to be "agnostic."

So, you're right that I've overestimated his aid skepticism somewhat, and hence I must concede your point that he's given an overall impression that is more depressing than his actual beliefs. However, the distance between "aid skeptic" and "aid agnostic" is not as far as the distance between "aid skeptic" and "aid believer," so the magnitude of his moral misdirection may be less than you're claiming.

Economic, political and socio-cultural harms are not at all rare in Wenar's reckoning. Indeed, one of his main points is that these issues are known by responsible aid agencies, they happen all the time, and yet they aren't properly studied and GiveWell leaves them out of its calculations entirely.

Also on the topic of "rare harms," the WIRED article links to this paper, which suggests that mosquito net fishing accounts for 27.7% of coral reef fishing in Madagascar: https://onlinelibrary.wiley.com/doi/abs/10.1111/fme.12637 . That's not rare, and it would appear that small nets are more likely to lead to overfishing (because they catch the small fish too), thereby raising the chance of depleted food resources in the future.

From what I can see, then, Wenar genuinely believes that the question of whether the harms outweigh the benefits is one that can be very sensitively dependent on the underlying moral theory.

Author · Apr 6 · edited Apr 6

I don't really think that Wenar's "underlying beliefs" much matter. What matters is that his article *implicates* rather than *explicitly argues* for the important conclusion. Compare Don the xenophobe: he could genuinely believe that immigrants are dangerous and net-negative for society. But if all he does is suggestively point to examples of immigrant crime, that's classic misdirection. His contribution to the discourse is *predictably detrimental* to his audience's beliefs: not (just) because he's substantively mistaken, but because he's implicating things that go beyond what he's willing or able to explicitly defend or support with evidence -- which is a kind of discourse-contribution that will tend to generate and amplify mistakes.

> "these issues are known by responsible aid agencies, they happen all the time, and yet they aren't properly studied and GiveWell leaves them out of its calculations entirely"

This claim of his was very dishonest. Many of his examples he got *from reading GiveWell's own reports*:

https://www.givewell.org/charities/new-incentives#Potential_negative_or_offsetting_effects

Their section on "potential negative or offsetting effects" explains: "Below, we discuss potential negative and offsetting effects of the program that we believe to be too small or unlikely to have been included in our quantitative model."

He literally trawled their website for things they explicitly mention as too trivial to be worth quantifying, and then complained that they don't bother to quantify them. Utterly dopey.

re: net fishing, you can read GiveWell's analysis of the issue here:

https://blog.givewell.org/2015/02/05/putting-the-problem-of-bed-nets-used-for-fishing-in-perspective/

Again, it's just not true that they haven't thought carefully about these issues. If Wenar just disagrees on the first-order merits of whether these interventions are net good, he's certainly free to think that. But it wouldn't justify his vitriol, and the way he *very misleadingly* portrays GiveWell in his article. (He evidently misled many readers, yourself included, into thinking that he was supporting a stronger aid-skeptical conclusion than he really does. Quibbling about "the magnitude of his moral misdirection" at this point is just too precious.)


The way I would put it is that Wenar has, you might say, a pre-existing Bayesian prior, for which he has given detailed arguments elsewhere (namely, in “Poverty is no Pond,” to which he points in his article) that international aid has pervasive and troubling side effects. The time he spent coming to that prior conclusion seems to have been nontrivial, his conclusion is sincere, and he summarises his arguments in the WIRED article without giving as much detail on them (because the WIRED article doesn’t have space and he has other things he wants to say).

Given that prior, he concludes that the individual instances that GiveWell believes to be too trivial to consider are in fact symptoms of pervasive problems that require deeper study. Because he has already explained to GiveWell why he believes such problems to be likely, he concludes that they ought to know to examine them more closely instead of dismissing them. Hence, vitriol.

I think Wenar’s precise motivations and beliefs matter very deeply, in this case, to understanding what he intends to say and the type of argument he is intending to make. Whether he succeeds in making the kind of argument he intends to make is also a valid question, but at this point I have enough detail to conclude that failures in that department are probably more likely due to the difficulty of the task rather than to negligence.

This isn’t quibbling, in my worldview. It’s essential argumentative charity, without which communication across substantial differences becomes much harder.

User was temporarily suspended for this comment.
Author · Apr 5 · edited Apr 5

Off-topic thread-hijacking isn't welcome. (24 hour ban)
