A year ago, I wrote a post lamenting the lack of “cross-camp” engagement in philosophy, and highlighting the challenges I’d most like to see addressed (by non-consequentialists, opponents of effective altruism, and proponents of “neutrality” in population ethics).
I was reminded of this by a recent comment requesting “more back and forth with… opponents.” I could see three main benefits to this (none of which require radically changing the mind of either participant): (i) It might at least improve understanding — I think most opponents of utilitarianism have demonstrable misunderstandings, and so should at least be persuadable that my view is better than the stereotype they had in mind; others might similarly improve my understanding of their view. (ii) If nothing else, I find it helpful to identify “cruxes” or “sticking points” where more argumentative attention may be needed. And (iii) as Huemer notes, often the main target worth persuading is not the participants themselves so much as the audience of open-minded “undecideds” (though it remains difficult to predict one’s prospects for changing minds).
My Comments Policy
Just to make it explicit: I welcome polite, reasoned, on-topic disagreement in the comments of this blog. If you ever think a post of mine is poorly reasoned, or makes some mistaken claims, you should feel very welcome to explain why. (I’m not so interested in brute assertions of disagreement. As I explain to my students, I don’t really care what they think—the mere fact that an opinion is theirs doesn’t make it special—but if they can suggest good reasons why the rest of us plausibly ought to agree that a claim is true, then that could be interesting to hear.)
Two restrictions:
(1) I can be a stickler for staying “on-topic”. I may issue a temporary ban if a commenter engages in what strikes me as thread-hijacking or off-topic ranting.
(2) I draw a hard distinction between attacking ideas vs people. I’m generally unbothered by forceful disagreement with ideas that I happen to agree with. But I have very low tolerance for personal attacks or criticism, and may issue longer bans if I feel that someone’s being rude or personally unpleasant to interact with.1 A good rule of thumb: if you wouldn’t say something while invited to my living room for a philosophy chat over afternoon tea, you probably shouldn’t write it here either. (Or: you should find a way to say it that you’d still be comfortable saying in person, and that wouldn’t get you kicked out of my house.)
An Open Thread
For comments on this post (only), I’m setting an unusually broad topic. Feel free to (politely) expound upon whatever you think I’m most wrong about. You might pick what you take to be my most obvious mistake, or the issue on which I seem furthest from the truth, or else what you take to be my most important mistake.
I’d especially welcome thoughts on the latter, given the priority I place on importance-weighted accuracy. It would be helpful to also flag whether you expect that it’s a bedrock, irresolvable disagreement between us, or whether you think that with sufficient time, argument, and reflection, you could eventually bring me around. If the latter, what is the most helpful and persuasive point you can make now to start me down the right track?
What I think I’m most right about
You might find some fodder for disagreement in my review of my “big ideas”. But more than any particular first-order conclusion—even the central importance of beneficence—I guess I’m most wedded to my general philosophical orientation, on which:
Reasoned inquiry is valuable and the best means we have of getting at the truth (including in ethics and politics). The higher the stakes, the more important open, critical inquiry becomes.
Part of the reason for this is that the world is complicated, and people are biased and extremely subject to groupthink (doubly so when it comes to politics). So the radical “everything is politics/ideology” crowd tend to strike me as intellectually irresponsible.2 People are clever apes, naturally drawn to demonizing the outgroup and rationalizing in-group interests and status. Unless you’re strongly guided by an asymmetric weapon like rigorous critical inquiry, your ethical reasoning could easily be worse than useless.3
At least for academics, intellectual virtue is the most important virtue. As I explain here: “It’s obviously valuable for society to have truth-seeking institutions and apolitical ‘experts’ who can be trusted to communicate accurate information about their areas of expertise. When academics behave like political hacks for short-term political gain, they are undermining one of the most valuable social institutions that we have.”
As the pandemic illustrated, trusted authorities (like the medical establishment) are morally incompetent. They can’t even get no-brainers like vaccine challenge trials right.4 So there’s a lot of work for moral philosophers to do to help improve society’s ethical understanding.
Status quo bias is a powerful force for ill in the world. Most people are bad at thinking—including about ethics—and significant cultural stupidity (or at least inflexibility) is baked into the unreflective verdicts of “common sense morality”. It’s really vital to think critically and to be open to principled revisions to ordinary thought. We all know that past generations were horrendously misguided on many important points; we should fully expect the “common sense” of our own time to also contain very significant moral mistakes.
Accordingly, anyone who unreflectively dismisses moral pioneers, without argument, just for “sounding weird”, is being intellectually vicious. Don’t do this. Philosophers, especially: you have a professional obligation to assess the arguments, not just sneer and mock people who think differently.
Omission bias is epistemic as well as practical. People are too scared of being wrong, and not worried enough about overlooking what’s right. Too focused on procedural confidence-policing, and not enough on determining what credence is substantively best justified. Errors of omission are less subject to social sanction; but if you care about what’s important, and not just what you can get away with, you should find it all the more concerning—because you know your ape brain isn’t so alert to this kind of risk.
Words don’t matter: rather than getting hung up on semantics, we should focus on the first-order issues and try to get really clear on what’s at stake in any given debate: what follows from accepting one view rather than another.
Constructive Disagreement
Generally speaking, I expect disagreements to be most fruitful when they start from significant common ground. So I’d most welcome disagreements from those who share my general philosophical orientation, as outlined above, and just have various first-order disagreements to pursue. (You’re welcome to try regardless, if you’re interested—though if you don’t value rational inquiry, I’m not sure why you would be.)
Maybe you think I’ve missed (or underestimated) an important objection to utilitarianism. Or you have some more fundamental objection to my whole approach to moral theory. Whatever your main objection, feel free to make your case below. (And again, bonus points if you can frame it in a way that I might conceivably find persuasive.)
1. Here’s a recent example. Calling me “dishonest” or an “ideological fanatic” is a quick way to wear out your welcome.
2. Relatedly, a lot of popular “political” stances from activist types strike me as painfully stupid and bad for the world. My previous post on ‘Utopian Enemies of the Better’ gives a sense of why.
3. Rigorous critical reasoning is also fallible, of course, but it at least improves our chances of actually getting things right, relative to any alternative method. (This is a slight oversimplification. Some degree of deference to expert opinion and/or “tried and true” methods will often be a better idea than trying to work out everything for oneself from first principles. But this still requires that the experts engage in rigorous critical inquiry.)
4. See ‘Imagining an Alternative Pandemic Response’ for how I think a competent society would’ve responded.
I believe you are most wrong about moral realism. I don't know exactly where you stand on every auxiliary issue, but:
(a) I don't think there are any good arguments for moral realism, and I think much of moral realism's appeal stems from misleading implications about the supposed practical consequences of antirealism. Many of these framings are rhetorical in nature, rooted in biases and misunderstandings of antirealism, and quite similar to the way critics of utilitarianism frame it as a uniquely monstrous and absurd position.
(b) I don't think most nonphilosophers are moral realists or antirealists, and more generally I don't think moral realism enjoys any sort of presumption in its favor, e.g., I don't grant that it's "more intuitive" or that it does a better job of capturing how people are typically disposed to speak or think than antirealism (though I also don't think people are antirealists).
You ask what the most helpful or persuasive point I can make to start you down the right track would be. I don’t know for sure, since I am not super familiar with many of your background beliefs, but I’d start with this: I think there is little good reason to think that most nonphilosophers speak, think, or act like moral realists. Rather, I think moral realism is a position largely confined to academics and people influenced by academics. Whether people are moral realists or not is an empirical question, and the empirical data simply doesn’t support the notion that moral realism is a “commonsense” view. I don’t know where you stand on this issue, but I think it’s an important place to start.
I came to this conclusion after many years of specifically focusing on the psychology of metaethics and in particular the question of whether nonphilosophers are moral realists or not. Early studies suggested that most people would give realist responses to some questions about metaethics and antirealist responses to other questions. However, I came to question the methods used in these studies and launched a large project (which culminated in my dissertation) to evaluate how participants interpreted these questions. I came to the conclusion that most people were not interpreting them as researchers intended (and frequently didn’t interpret them as questions about metaethics at all). I suspect the best explanation for this is that ordinary people don’t have explicit stances or implicit commitments to metaethical theories, and that metaethics has very little to do with ordinary moral thought and language. The case for this is largely derived from my own data and my critiques and analyses of research on this topic. It’d be very difficult to summarize it but I could elaborate on specific points.
The more general takeaway is that I don’t think moral realism enjoys any special kind of intuitive priority, and I suspect that the reason some people are disposed towards moral realism has more to do with path-dependent idiosyncrasies in their particular cultural backgrounds and educations than with any intuitive pull of the view itself.
I'll preface this by saying I'm a new reader, so if you have written on this topic elsewhere, I apologize. Can you explain to me how utilitarians think about utility and preferences?
It is intuitive to me that an individual can have a complete and transitive preference ordering across the infinitely many possible states of the world. But from Arrow's impossibility theorem, we know that you cannot aggregate such individual (ordinal) preferences into a social preference ranking without violating at least one of a set of seemingly reasonable conditions (unrestricted domain, the Pareto principle, independence of irrelevant alternatives, non-dictatorship). So in order to determine the "greatest good for the greatest number of people," I think you have to accept that cardinal utility exists.
Unlike rankings (ordinal utility), I find the notion of cardinal utility, in the sense of preferring world-state X over world-state Y by some definite amount (say, z percent more), much less intuitive. I suppose you might be able to deduce your own cardinal utility over world-states by ranking gambles across states, but I don't know how this can be applied across people.
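To make the gamble idea concrete, here is a minimal sketch of the standard von Neumann–Morgenstern construction as I understand it (my own illustration, with "best" and "worst" just serving as arbitrary anchor outcomes):

$$ u(\text{best}) = 1, \qquad u(\text{worst}) = 0 $$

$$ X \sim \big[\text{best with probability } p;\ \text{worst with probability } 1-p\big] \;\Longrightarrow\; u(X) = p $$

That is, an outcome's utility is the probability p at which you are indifferent between having that outcome for sure and taking a gamble between the two anchors. But the resulting numbers are fixed only up to the choice of anchors (a positive affine transformation), which is exactly why I don't see how they can be compared across people.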
For a concrete concern, what determines the pleasure and pain scale? Suppose a painless death, or the state of non-existence (absent considerations of an afterlife), is assigned zero. Then a slightly painful death might be assigned negative one, which is infinitely(?) worse than a painless death. A slightly more painful death is negative two, which is twice(?) as bad as a slightly painful death. I suppose a state of infinite pain could be assigned zero, but that is problematic because a state of infinite pain doesn't exist, in the sense that there can always be a worse state of pain.
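To spell out the arithmetic I have in mind, with purely illustrative numbers:

$$ u(\text{painless death}) = 0, \quad u(\text{slightly painful death}) = -1, \quad u(\text{more painful death}) = -2 $$

The differences here are well-defined, but the ratio comparisons hang entirely on where zero is placed: shift every value by a constant (say +10, giving 10, 9, 8) and the ranking and the differences are untouched, while "infinitely worse" and "twice as bad" no longer hold. That is the sense in which I don't see what fixes the scale.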
This is less an objection and more an expression of my own curiosity over how utilitarians think about this stuff.