Prof. Chappell's reading of the essays in this book, which I contributed to, is not merely ungenerous but tendentious--he rejects the authors' positions mainly because we reject the faulty epistemic and political terrain of EA itself, i.e. cost-benefit analysis. Of my own chapter ("Effective Altruism and the Reified Mind"), Chappell writes:
"Elsewhere, we’re informed that EA 'misidentifies the biggest problems today as global health, factory farming, and existential threats' when really 'the global poor suffer from adverse health outcomes because of capitalist social relations.'" (p. 218)
Nu? Prof. Chappell appears to disagree with me, but does not say so. Instead, he merely highlights the words "capitalist social relations," as though that were evidence of my public idiocy, rather than of his own ignorance of critical theory and sociology. He continues:
"For a moment, I wondered whether the low quality of this book might constitute positive evidence in support of effective altruism.... Unfortunately, many of the authors seem so ideologically opposed to cost-effectiveness evaluation that I expect they would’ve written the same tripe even if there was strong evidence available that EA interventions really were worse in expectation."
Again, rather than grapple with my argument--which concerns reification--he simply dismisses it as "tripe." Well, perhaps it *is* tripe--far be it from me to deny it. But Prof. Chappell, alas, hasn't shown it. Here as in the rest of this lazy essay, Prof. Chappell shows he is less interested in exploring the positions than in circling the wagons around his own utilitarian cult. Like others in EA, he is unable to see past his own bad methodology, and his even worse ideology. I gasped when I read this: "What if billionaires and financiers could actually do more good than grass-roots activists and radicals? This thought is verboten." What is he talking about? No, it isn't verboten--on the contrary, it's the banal self-understanding of the bourgeoisie and therefore of society at large. In reality, billionaires and financiers are destroying the Earth and all of its life forms, but Prof. Chappell hasn't yet gotten the memo.
It is in fact apparent that Prof. Chappell has no "empirical" comprehension of the state, society, economics, or the nature of politics, and is blithely unaware of the fact. That too is a symptom of reification--the Effective Altruist's failure to heed the ancient dictum, "know thyself."
-- John Sanbonmatsu, Worcester Polytechnic Institute
Hmm. There's something very odd about the dialectic here. My blog post laments that an OUP-published book of "critical essays" contains no real arguments or evidence that EA is bad, just ideological posturing and unsupported assertions. I didn't notice anything new, informative, or challenging in the book. Just bare assertions, with no reasons to indicate why I should take them seriously. So, for one who doesn't share the authors' ideology, there's just not enough *substance* here to constitute a challenge or prompt constructive dialogue.
If you think I've overlooked a substantive, non-question-begging argument, you're welcome to take another stab at explaining what your argument is supposed to be. But simply throwing insults ("lazy", "cult", etc.) doesn't provide any *reason* to change my mind. Indeed, this is the fundamental problem with the book as a whole. All insults, no persuasion.
re: "verboten", I was talking about what is apparently unthinkable *to the book authors*. Merely calling something "bourgeoisie" doesn't show that it's false. Nor does merely asserting the contrary view, and pompously declaring that anyone who doesn't agree with you "hasn't yet gotten the memo."
> "we reject the faulty epistemic and political terrain of EA itself, i.e. cost-benefit analysis."
This point is worth dwelling on. How can you claim that EA is "bad" if you cannot show that its costs outweigh its benefits? Rejecting the very *idea* of cost-benefit analysis is incoherent. It's like rejecting the very idea of *rational argumentation*. It's going to be very hard for you to establish that anyone has any reason to take you seriously, if you do not even attempt to make rational arguments or to analyze costs vs benefits when offering practical prescriptions.
To be clear, my arguments about EA are in my chapter in the book--I made no attempt to repeat them here. I was merely pointing out that here you've ironically done what you accuse me of: viz., criticizing my argument without actually bothering to explain or understand it. In any event, I am not rejecting all forms of consequentialist reasoning (which, indeed, I say in my piece), but taking issue with the positivistic, ahistorical, and ideologically laden way that EA approaches social problems. In my opinion, EA has a childishly naive conception of the world--and that makes it dangerous. Or, if you prefer, its costs outweigh its benefits.
Anyone interested in forming their own conclusions about my views on EA can find a PDF of my chapter on my website: https://www.johnsanbonmatsu.com/articles--essays.html.
Best,
JS
Professor Sanbonmatsu,
I've had a very quick skim of your article (apologies, I'm short on time). I think it is useful to distinguish two kinds of criticism of EA: 1) criticism of its objectives (normative), and 2) criticism of its strategies for pursuing those objectives (positive). Your passage below seems to fall within #1.
> "Today we find these same asocial assumptions embedded in EA discourse as well. MacAskill’s morally repugnant call for an increase in the number of sweatshops in the Third World (2016, 128–132) is merely the artifact of a utilitarian ideology incapable of recognizing exploitation as a moral or social problem."
You think that EA should add reducing exploitation to its list of objectives. You may also want it to give extra weight to avoiding actively supporting exploitation, as opposed to simply failing to prevent it. So, for example, even if sweatshops would hypothetically decrease exploitation in the long run, it might still not be worth supporting them, given this doing/allowing distinction.
Can you recommend what you think is the best approximate conceptual analysis of exploitation, or at least of aspects thereof? [I know John Roemer had an early book on this sort of thing; I hope to read it sometime.] I think such an analysis would be very useful when deciding how best to allocate billions of dollars and thousands of people's careers towards the prevention of exploitation.
Do you also have criticisms along the lines of #2? I view 1 and 2 as totally separate; do you see them as more interlinked? I'm aware that those in the tradition of G.E.M. Anscombe often deny that there is a sharp distinction between positive and normative issues.