Peter Singer--the quintessential real-world effective altruist--isn't a real EA because he doesn't conform to Crary's ridiculous strawmen. What a joke!
> Crary’s [...] observation that EA doesn’t tend to fund the social justice advocacy of her political allies.
I think Crary's objection boils down to "Oh no! Those EAs are helping people, and being effective at it. This means that they'll get some status that I'd rather went to me and my friends."
I realise that's a maximally uncharitable take on Crary. But I also think it is the most parsimonious explanation.
Devastating!
Your next post after this one has a paywalled section, which is completely reasonable. But unfortunately, it seems that footnotes by default are placed below the paywall, even if the positions in the text they correspond to are above it. I was worried that Substack had changed the footnote feature, but the paywall seems to be the explanation for why I couldn’t see the footnotes.
Oh, that's annoying! Thanks for letting me know. I'll try contacting Substack support to see if there's any way to change that.
In the meantime, I've screenshotted the most substantive ones here:
https://substack.com/profile/32790987-richard-y-chappell/note/c-67844544
‘It positions rich people as “saviours” of poor people’
Help is help! I don’t believe anyone in a hard situation would reject help just because the person giving it is wealthy. Very strange reasoning indeed.
Hey Richard, I think you are broadly right in most of your statements, but I also want to express disappointment at the way you wrote about this and at your tone. I don’t feel you fairly portrayed her arguments or responded to them meaningfully, and in spots you resorted to generalizations and personal attacks. Perhaps it is true that much of the critique against EA is sharply worded and does not present as fair a picture of EA as it should, but I think that would in no way justify turning around and doing the same.
I wish you had presented her claims charitably, indeed at their strongest (a steelman), and responded to those. I actually get the impression from the way you present her arguments and style that you think there is _nothing_ valuable about her criticisms and arguments. Would you agree?
Criticism, however spitefully worded (I thought the debate itself was very civil), should not distract from truths it might convey, and at least personally, I think much of the nuance she is trying to bring to EA is very helpful, even if it might be wrong. I’m disappointed you didn’t even attempt to discuss points on which you think she does contribute something meaningful.
If you were an average person, I don’t think I would have written this, but you present yourself as a philosophy professor on this platform, and setting this kind of standard seems to go against what it means to study philosophy and have these kinds of discussions.
> "I actually get the impression from the way you present her arguments and style that you think there is _nothing_ valuable about her criticisms and arguments. Would you agree?"
Correct. I can't discuss points on which she "does contribute something meaningful" because I do not believe there are any.
Tone policing isn't productive - you can surely anticipate that I disagree with your assertions. (I think it's valuable to convey my honest philosophical judgment, even if it is negative. You seem to be presupposing a kind of "in-betweenism" that I find objectionable: https://www.goodthoughts.blog/p/against-confidence-policing#%C2%A7open-mindedness-in-betweenism ) You'd do better to simply explain what valuable "nuance" you think she's offering that I'm overlooking.
> “You'd do better to simply explain what valuable "nuance" you think she's offering that I'm overlooking.”
Sure -- the main gist of her argument, to me, is that EA is too narrow-minded in the impact it considers and is missing many systemic causes and impacts. This means that EA is both missing better possible avenues for impact and missing impacts that its current activities are having, some of which are negative. This, she thinks, actually makes EA net negative.
My take is that she’s right about the importance of considering systemic factors. And, as Singer points out (25:54), these are not incompatible with the core EA idea. The valuable nuance I think Crary brings is in pointing out the importance of these systemic factors, and that the things EA has historically done haven’t paid enough attention to them. In fact, I think it is plausible that in some places this lack of attention may well have caused EA to do more harm than good (which Singer also seems to acknowledge in response to your question).
What do you think? Perhaps EA considers systemic factors much more than I thought (I know, for example, that there are more recent efforts to work on AI policy), and I would love to be wrong here. Perhaps you do not think systemic factors are as important as she and I make them out to be. I’d love to hear why.
Sidenote: What I agree can be maddening is her definition of EA, which, as you point out, restricts it to only being able to use RCTs. Taking this definition as given shows how her argument is circular, and thus fallacious (although I think she might also disagree that this is her definition; maybe not). But this specific thread shouldn’t discount the nuance I think she does bring in pointing to the importance of systemic impacts.
On tone, I agree with your linked post. Policing _confidence_ is not useful, and that isn’t what I’m trying to do here. I am very glad that you present your beliefs confidently, such as in explicitly stating that you think there is nothing valuable she contributes to the discussion. Thank you for that, and I also hope for more well-founded confidence in the world.
What I want to discourage is the lack of evidence or concrete arguments backing up your confidence. I don’t think you presented her views fairly, and your rebuttals seemed lazy, relying on personal attacks more than actual arguments. For example, in rebutting her circular definition of EA, your argument is just a personal attack:
“Her underlying reasoning, as emerged at a few points, rested on the observation that EA doesn’t tend to fund the social justice advocacy of her political allies. Apparently the only possible explanation is that EA is blinded by an RCT-obsessed methodology.”
I agree with most of your conclusions, and I do not think it was wrong to present them confidently. I am criticizing the strength and tone of the arguments you made, which I do not think were convincing or in good spirit, and which create bad epistemic norms.
Nobody denies that systemic factors can be important - that's not "nuance", it's utterly trite. The challenge is in identifying promising systemic *interventions* (that are likely to do good, and aren't even more likely to do harm). And I think EA does this far, far better than Crary does.
I'm not sure why you think my analysis of Crary's motivations is "just a personal attack", or that I lack supporting evidence. I've read a lot of what she's written on the topic, and I shared what I sincerely judge to be the best explanation of her behavior. Consider her article on the OUP blog:
https://blog.oup.com/2022/12/the-predictably-grievous-harms-of-effective-altruism/
There, Crary et al. lament EA funding priorities in the animal welfare sphere. EA funders want systemic change, and fund things (like corporate campaigns and transformative alt-meat research) that have a chance of achieving and accelerating the needed big changes. But Crary et al. complain that EA won't fund sanctuaries for individual rescued animals: "covering the costs of caring for survivors of industrial animal farming in sanctuaries is seen as a bad use of funds."
This is the *opposite* of advocating for systemic change. Crary's attacks on EA are thus seen to be opportunistic rather than principled. She explicitly *doesn't want* to prioritize funding for systemic change (of a sort that her friends aren't into). She wants funding for her friends and co-authors, and she doesn't want it to have to pass any kind of rigorous evaluation for cost-effectiveness relative to competing uses of the available funds.
Have you read Crary et al's book? As they tell the story, it grew out of a conference where a bunch of political allies were grousing together about how EA wouldn't fund their work, and so they decided that this proved that EA was too closed-minded, racist (!!), and "grievously harmful". They wrote the book to share their collective complaints. (I reviewed it.) It's all quite transparent. It's hardly "bad epistemics" to notice when people are engaged in transparently motivated reasoning.
Anyway, I won't have time to pursue this disagreement further. I'll just close by flagging that your criticisms rest on baseless, unsupported assumptions. You imagine that you're in a position to judge that I "lack evidence" for my negative judgment of Crary's motivations. But you don't know all my evidence. All you can say is that I didn't, in my post, share enough of my background evidence to convince you. But that's rather different.
You're exactly right though. I'm not saying you lack evidence _in general_ for your negative judgement of Crary. I know you've engaged very substantially with her work and have put forth strong arguments elsewhere, including in this specific comment. I'm saying you lack evidence in this specific post, and that your tone in this post was overly harsh. That is bad epistemics. I do think there's a bit of a 'devil effect' (the opposite of the halo effect) going on too, but perhaps that's another argument.
> "And I think EA does this far, far better than Crary does."
And to this, I also agree. But the point, to me, is not comparing how well each does it, but how each could improve. The value I see in engaging with Crary's argument is realizing that EA should engage with even more systemic factors, _especially_ those which are not easily captured through EA's methodology (as Singer points out in response to your question).
Anyways, I'm also happy to close things off here. Thank you for engaging. I appreciate this and have gotten value out of it.
Sorry, I haven't had time to cite examples, but I don't really like your tone in some of your writing re social justice-type professors. It seems kind of mean, and it doesn't seem like you're really trying to understand their perspective. That's a comment on a number of things you've written, not just this article. I'm not gonna elaborate right now (maybe later), but for now just take it or leave it.
Ok, I appreciate the feedback. If you do end up expanding upon your concern, I'd be curious about the extent to which this is just a matter of my responding very critically to the (IMO unreasonable on their part) hostility that's very publicly promoted by Crary, Wenar, etc., or in what cases you think my response is actually unwarranted.
(I assume that it's warranted to be hostile towards those who unreasonably initiate hostilities. And I'm sure you're aware that social justice ideology is both (i) overwhelmingly dominant in academia, and (ii) not exactly known for its understanding or openness to diverging viewpoints.)
I really don't understand the conflict between EA and leftism/social justice. It seems to me EA fits nicely with leftist ideals.
I think it's sick that we have a system that produces billionaires with so much money they build space rockets for fun, alongside millions of children suffering from something as easy to fix as a vitamin A deficiency. But I don't see any tension between (1) wanting that system to change and fighting to change it, and (2) thinking, in the meantime, it's good to donate to alleviate suffering caused by that system.
I also see no conflict between wanting democracy extended to the workplace, or wanting massive reforms to the criminal justice system, etc., and giving to orgs that distribute anti-malaria nets.
All of the hostility is very disappointing; leftists should be allies to EA.
Yes, I very much agree. As a result, basically the only way I can make sense of it is as a kind of group politics / perceived status threat. E.g., Crary seems very upset that many of the best and most idealistic students on campus are now getting excited about EA and listening to the likes of Will MacAskill rather than just deferring to her & her friends. That doesn't seem like it should be hugely bothersome if one thinks about it in terms of "Are these students going to be doing good things as a result?" But if you attribute less high-minded motivations to the critics, it becomes easier to understand their behavior. Humans are social animals, after all.
I think the main point of conflict is that standard left-wing movements worry that their preferred concerns and solutions won't be very legible to EA--think of something like Communist revolution being analyzed in the classic EA way--and so the worry is that the two movements will compete for people with broadly similar worldviews, but in a way that draws people away from the left's preferred viewpoint.
Charitably, the worry is analytical: EA isn't equipped to come to the "correct" solution. Uncharitably, it's just worry over being outcompeted as a movement. Realistically, it's a bit of both.
I would be a lot more sympathetic to the charitable interpretation here if those critics would offer some sort of argument/evidence that their preferred views *really are* the "correct" ones (i.e., a first-order argument with which EA could then engage), rather than just *presupposing* this or treating it as self-evident. If the idea is that their view is correct but one can only know this through direct revelation, not reason, then that seems a real problem! On the other hand, if they really do have good reasons then they should try explicitly sharing them, rather than just obliquely complaining that EA is not equipped to recognize their superiority.
It seems to me that one of the things that's going on in lots of leftist/hardcore progressive thought is a shared view that the status quo is really deeply unacceptable--inhumane--to the point where trying facially plausible alternatives that don't have evidence in their favor is a better move than letting things continue as-is.
Think of how one might try to give a defense of the French Revolution in terms a bit more consequentialist than Paine's defense, and with reasonable intellectual humility. You might say: "Look, in the present era there is no robust empirical or theoretical understanding of economics or sociology... but come on, a more egalitarian society has to be a better bet than dehumanizing serfdom!"
I think that's where a lot of people are about "global capitalism."
Oh sure, and I'm more comfortable extending that charity to critics who aren't professional philosophers (the group I think you're more concerned with). I have in mind the average social justice-minded person who otherwise isn't thinking too deeply about any of this.
We need a name for this, as it seems to be responsible for around 98% of bad left-wing arguments by activists, and a lot of the bad right-wing arguments too.
"Guilt by rhetorical association"? Maybe someone can suggest a catchier name. I agree it's maddeningly common.
[Edited to remove 'arbitrary' from the suggested phrase. Brief is better.]