I already had some misgivings about your “moral misdirection” article to begin with, and seeing you use it in this context has amplified them. On the other hand, when I see that quote from Scott Alexander, in your footnote, I find it fairly sympathetic. Thinking it over, I realise that I’m reading Scott as participating in an argument on a first-order level: you’ve said your views and now I am going to say mine. By contrast, I’m reading you as participating on a second-order level: you’ve said your views and now I am going to explain why you should agree that you’re not allowed to say them.
It’s not that I disagree entirely with the notion that people should consider the impression their words will give as well as their denotative content. On the contrary: I scorn the “high decoupling is always better than low decoupling” claim precisely because it implies that such impressions are irrelevant. There is, instead, a golden mean here: it’s good to care about connotations as well as denotations, AND it’s good to cultivate the ability to hear a point that someone is trying to make even if they say it in a way that raises (even potentially justified) alarm bells. It’s not that we should decouple bad implications from potentially useful facts in order to only hear the latter, but that we should strive to be able to see both at once.
However, when it comes to second-order argumentation, I worry that you’re trying to get agreement on more slippery topics (such as the accuracy of an overall impression) as a way of dodging disagreements on simpler topics (such as whether GiveWell’s method of evaluating charities is worth anything at all). Someone who disagrees with you on the latter is unlikely to agree with your conclusions on the former.
If “moral misdirection” arguments just rally your supporters around an agreement that we can dismiss people on second-order grounds without addressing first-order disagreements, then I’m against them. But of course it’s possible that I am reading you incorrectly, and that you are instead merely defending first-order arguments that also address connotations and impressions. Certainly, no reasonable person ought to object to the latter. However, precisely because no reasonable person would object to the latter, it would make more sense to simply take it as given and go ahead and make your first-order counter-arguments.
I thought I was very explicit that you're (pretty much) always "allowed to say" your views -- see the sections on "how to criticize good things responsibly" (previous post) and "responsible criticism" (this post). I'm rather offering a general view about *which ways* of expressing one's views are more or less intellectually responsible.
This general view about responsible discourse is as open to evaluation as any other moral claim. I'm most interested in first-order disagreement about whether it is *true* or not. But if you think my way of expressing my view threatens to predictably reduce the importance-weighted accuracy of my audience's beliefs about the matter, that would also be important to note!
The problem is, when you stipulate that criticism has to be “responsible” in order to be permissible, you’re introducing a secondary debate about what “responsible” means in this context. People who disagree with you on object-level claims are more likely to also disagree with you on second-order claims about which statements are “responsible.” Accordingly, second-order requirements like this ought to be loosely and generously applied, recognising that subjective impressions of “responsible” tone can be even harder to agree on than object-level claims.
So let’s apply your requirements, loosely and generously, to the case of Leif Wenar’s article. You have two main points. Firstly, people who agree with TSPG (or “other important neglected truths”) should clarify that up front. Secondly, people should only give the impression that “EAs are stupid and wrong about everything” if they actually believe this, in which case they should explain why they think this is true.
Wenar has done both of these things. I don’t think he agrees with TSPG as EAs apply it in practice, but he does affirm an alternative neglected principle — namely, that of ethical cosmopolitanism. He writes: “Hundreds of millions of people were living each day on less than what $2 can buy in America. Fifty thousand people were dying every day from things like malaria and malnutrition. Each of those lives was as important as mine.” He’s also clear that he thinks most Effective Altruists are well-meaning people who sincerely want to help. And, yes, I think he is sincere in believing that EA is wrong about most things. He relates having spent a long time in his own sincere effort to find out which kinds of large-scale international aid are truly effective. His conclusion seems to be that most of it isn’t.
Accordingly, I think this is best dealt with as an object-level disagreement. Wenar is giving the impression that EAs are foolish and wrong because, based on his own attempts to do what they are doing, he sincerely believes that they are, in fact, foolish and wrong. Calling him “irresponsible” for accurately conveying this belief is missing the point.
He explicitly disavows aid skepticism, and does not deny that GiveWell is effective. What he denies is that it *only* does good. But that denial is beside the point. (As I discuss: he also, very stupidly, accuses them of dishonesty for not highlighting rare harms on their front page.)
I'm suggesting that it's irresponsible for him to highlight rare harms if he doesn't actually think (or at least can't credibly support the claim) that these harms outweigh the benefits.
Since Wenar's actual underlying beliefs are pretty important to evaluating whether he is being "responsible" by his own lights, I decided to take a look at "Poverty is no Pond." It's here: https://files.givewell.org/files/DWDA%202009/Interventions/Wenar%202010.pdf (Yes, I note the irony that GiveWell is hosting it. Let's send an appreciative nod their way for being the best resource on their own critics, shall we?)
In "Poverty is no Pond," Wenar says he "will make no overall assessment whatsoever about whether aid of any type does more good than harm, or more harm than good. Nor does the paper aim to discourage individuals in any way from acting to relieve poverty by discussing the challenges of doing so." That's a carefully hedged position. Later on, he suggests that between "believers" and "atheists" about aid, the correct move may be to be "agnostic."
So, you're right that I've overestimated his aid skepticism somewhat, and hence I must concede your point: he's given an overall impression that is more pessimistic than his actual beliefs. However, the distance between "aid skeptic" and "aid agnostic" is not as great as the distance between "aid skeptic" and "aid believer," so the magnitude of his moral misdirection may be smaller than you're claiming.
Economic, political and socio-cultural harms are not at all rare in Wenar's reckoning. Indeed, one of his main points is that these issues are known by responsible aid agencies, they happen all the time, and yet they aren't properly studied and GiveWell leaves them out of its calculations entirely.
Also on the topic of "rare harms," the WIRED article links to this paper, which suggests that mosquito-net fishing accounts for 27.7% of coral reef fishing in Madagascar: https://onlinelibrary.wiley.com/doi/abs/10.1111/fme.12637. That's not rare, and fine-meshed nets appear more likely to lead to overfishing (because they catch the small and juvenile fish too), raising the chance of depleted food resources in the future.
From what I can see, then, Wenar genuinely believes that whether the harms outweigh the benefits can depend very sensitively on the underlying moral theory.
I don't really think that Wenar's "underlying beliefs" much matter. What matters is that his article *implicates* rather than *explicitly argues* for the important conclusion. Compare Don the xenophobe: he could genuinely believe that immigrants are dangerous and net-negative for society. But if all he does is suggestively point to examples of immigrant crime, that's classic misdirection. His contribution to the discourse is *predictably detrimental* to his audience's beliefs: not (just) because he's substantively mistaken, but because he's implicating things that go beyond what he's willing or able to explicitly defend or support with evidence -- which is a kind of discourse-contribution that will tend to generate and amplify mistakes.
> "these issues are known by responsible aid agencies, they happen all the time, and yet they aren't properly studied and GiveWell leaves them out of its calculations entirely"
This claim of his was very dishonest. Many of his examples he got *from reading GiveWell's own reports*:
https://www.givewell.org/charities/new-incentives#Potential_negative_or_offsetting_effects
Their section on "potential negative or offsetting effects" explains: "Below, we discuss potential negative and offsetting effects of the program that we believe to be too small or unlikely to have been included in our quantitative model."
He literally trawled their website for things they explicitly mention as too trivial to be worth quantifying, and then complained that they don't bother to quantify them. Utterly dopey.
re: net fishing, you can read GiveWell's analysis of the issue here:
https://blog.givewell.org/2015/02/05/putting-the-problem-of-bed-nets-used-for-fishing-in-perspective/
Again, it's just not true that they haven't thought carefully about these issues. If Wenar just disagrees on the first-order merits of whether these interventions are net good, he's certainly free to think that. But it wouldn't justify his vitriol, and the way he *very misleadingly* portrays GiveWell in his article. (He evidently misled many readers, yourself included, into thinking that he was supporting a stronger aid-skeptical conclusion than he really does. Quibbling about "the magnitude of his moral misdirection" at this point is just too precious.)
The way I would put it is that Wenar has, you might say, a pre-existing Bayesian prior that international aid has pervasive and troubling side effects, for which he has given detailed arguments elsewhere (namely, in “Poverty is no Pond,” which he cites in his article). The time he spent reaching that prior conclusion seems to have been nontrivial, his conclusion is sincere, and he summarises his arguments in the WIRED article without giving as much detail on them (because the WIRED article doesn’t have the space and he has other things he wants to say).
Given that prior, he concludes that the individual instances GiveWell believes too trivial to consider are in fact symptoms of pervasive problems that require deeper study. Because he has already explained to GiveWell why he believes such problems to be likely, he concludes that they ought to know better than to dismiss them, and should examine them more closely. Hence, vitriol.
I think Wenar’s precise motivations and beliefs matter very deeply, in this case, to understanding what he intends to say and the type of argument he is trying to make. Whether he succeeds in making that kind of argument is also a valid question, but at this point I have enough detail to conclude that any failures in that department are more likely due to the difficulty of the task than to negligence.
This isn’t quibbling, in my worldview. It’s essential argumentative charity, without which communication across substantial differences becomes much harder.
Reading the WIRED article, I was also wondering: "Isn't this exactly what GiveWell does, also considering the side effects of its recommended charities' actions?" Maybe the resolution of your debate comes down to that 27.7% figure. One of the sides must have blundered here, as the difference between that number and "insubstantial" is quite considerable.
Either way, I DO find it great that their critics also get a voice on the GiveWell website, and we should take criticism seriously even if we believe it is unsubstantiated.