Ah, thanks for clarifying. My point was just that insofar as *any* pattern of verdicts on the puzzle cases will involve biting some bullet or other, there isn't really any "problem" here for utilitarianism that is "solved" by anyone else. The critics who sneer at biting bullets haven't appreciated that *they too* would have to "choose one of the usual bullets to bite" if they were to consider the full range of cases. And it's no distinctive virtue of a theory that it refuses to even *consider* a problem case.
You suggest that the particularist can "simply shrug and move on", but I think much the same is true of the systematic theorist. It's not as though pondering the repugnant conclusion forces us to make terrible decisions in any real-life cases. Some further argument would be needed to show that "systematic theorizing may in fact be detrimental to moral thinking"; I'm not aware of any evidence for that claim. (Quite the opposite, given the track record of utilitarians like Bentham and Mill being ahead of their time on moral issues like women's rights, animal welfare, and the wrongness of anti-sodomy laws.)
https://www.utilitarianism.net/introduction-to-utilitarianism#track-record
>"I think much the same is true of the systematic theorist. It's not as though pondering the repugnant conclusion forces us to make terrible decisions in any real-life cases."
I don't see how we can be confident this is correct. If we know that a moral theory works well for everyday cases but badly when extended to weird hypothetical cases, it seems there must be some region of the spectrum between "everyday life" and "weird hypotheticals" in which the theory's stipulations start to diverge from what we consider morally sound. But we don't need a systematic theory for everyday cases where our intuitions are obvious, or for weird hypothetical cases we will never encounter: we need it for the cases in between. If a systematic moral theory is to have any practical implications at all, it is precisely in this intermediate region that we hope it might provide us with some useful guidance. And it is in this intermediate region that we can never be sure whether we believe what the theory is telling us.
> "Some further argument would be needed to show that 'systematic theorizing may in fact be detrimental to moral thinking'; I'm not aware of any evidence for that claim."
The history of the 20th century? Ok, that's perhaps a little too glib — but you are surely aware that Bentham, Mill, and yourself occupy only one very small corner in the vast realm of systematic theorizing. I won't try to defend or refine the claim any further here, though, since I don't think substack comments are a suitable medium for that discussion.
I think we are starting from very different intuitions about what moral philosophy can achieve. You seem optimistic about the possibility of developing a logically consistent, systematic theory that preserves our most basic moral intuitions and can serve to guide action. I start from the assumption that our most basic moral intuitions are irresolvably inconsistent, so it is *only* by "leaving details blank" that moral reasoning can provide any practical guidance at all.
Why would you trust *irresolvably inconsistent* intuitions (/implicit principles) to give you any useful guidance at all? My stance is very much to insist upon solving the inconsistency, and work through which intuitions are least costly to give up (and hence the implicit principles they represent seem least likely to be true).
> "Bentham, Mill, and yourself occupy only one very small corner in the vast realm of systematic theorizing"
But surely the most relevant corner if you're wanting to argue that the kind of systematic theorizing that I'm engaged in is likely to be "detrimental". It seems, on the contrary, that systematic theorizing *by utilitarian philosophers* has been straightforwardly extremely good for the world, and so we should all want to see more of it! See also: https://rychappell.substack.com/p/is-non-consequentialism-self-effacing
Ok, here we go...
> "insist upon solving the inconsistency, and work through which intuitions are least costly to give up"
The problem with this is that I don't think there is any principled way to decide "which intuitions are least costly to give up" — only a sort of meta-intuition about which intuitions we hold more strongly than others. The best that systematic theorizing can offer is thus a choice of bullets to bite: it always comes down to modus ponens vs. modus tollens.
Take the dilemma presented in your post, which (plausibly) assumes the reader holds two fairly widespread moral intuitions that turn out to be surprisingly difficult to reconcile systematically:
(1) Utopia is better than World Z.
(2) Utopia is better than a barren rock.
You hold intuition (2) strongly and intuition (1) weakly, so you accept utilitarianism as the best systematization of your intuitions (and you organize your life accordingly). But if I hold intuition (1) strongly and intuition (2) weakly, your proposed metaethical procedure would lead me to accept something like "annihilation indifference" as the best systematization of my intuitions (and I would organize my life accordingly). It is not that I am genuinely indifferent between Utopia and a barren rock, any more than you genuinely believe World Z is preferable to Utopia; it is just that the procedure of translating intuition (1) into principles (e.g. the "neutrality principle") and working through the logical implications leads me to this conclusion, and I am forced to accept it in order to avoid another that I find even more unpalatable.
So, following your proposal to "insist upon solving the inconsistency, and work through which intuitions are least costly to give up," I end up indifferent (at best) when contemplating the annihilation of all sentient life. But if I resist the temptation to formulate moral principles and work through their implications, I can preserve my more conventional moral intuitions (including utilitarian ones) — and, on a good day, perhaps even act on them.
This, in a nutshell, is why I am happy to abandon moral principles and systematic theorizing. A little inconsistency seems a small price to pay for the preservation of the universe.
Ha, fair enough! It's true that if one starts with the wrong intuitions (e.g. only holding (2) "weakly"), then systematization could lead one further from the truth.
Though I guess I am optimistic -- at least more so than you are, by the sounds of it -- that there are better alternative systematizations available for those who remain strongly committed to (1), as I tried to indicate in the OP. I certainly don't think it's a forced choice between Total Utilitarianism and Neutrality.
But I take your point that for many individuals, sticking (however inconsistently) to common sense could yield better results than where their inclinations would lead them if they tried to be more systematic. So I should restrict myself to the more limited claim that it's important and valuable for moral philosophers to pursue the task of systematic moral philosophy, to try to discover and clarify these options, and that others shouldn't be dismissive of that task. And it should be appreciated that being prima facie counterintuitive in some of its verdicts really does little to suggest that a view isn't nonetheless correct. These puzzles suggest that if there is any moral truth at all, it will have to in some way surprise us (be counterintuitive) in these cases.
>"It's true that if one starts with the wrong intuitions (e.g. only holding (2) "weakly"), then systematization could lead one further from the truth."
I can agree with most of what you say here, but this strikes me as too hasty a dismissal of people who have the "wrong intuitions." Why is it so obvious to you that rejecting the RC is "less costly" than rejecting annihilation indifference? What would it take for you to change your mind? If you can't imagine changing your mind on this point, why might someone with the opposite convictions ever change their mind?
Afterthought: Under what circumstances would you be willing to say to someone, "Thinking systematically about moral philosophy is good, but you in particular would be better off sticking to common sense and leaving systematic thought to the philosophers."?
Further afterthought: Under what circumstances would you be willing to say this to yourself?
Lots going on here!
(1) In general, you can only "change someone's mind" on philosophical matters by showing that an alternative view better captures the bulk of what they consider importantly true. So I wouldn't expect to ever convince someone with literally "opposite" convictions. But if someone has convictions that are in many respects close to mine, but they've just been misled into accepting a less-coherent branch view (without realizing the inconsistencies with their other commitments), then my arguments could help them with their pruning.
(2) On the substantive issue of whether RC or annihilation-indifference is worse, one reason I'm confident that the latter is worse is that it violates the extremely weak general principle that *positive value is possible*, whereas RC, though counterintuitive in itself, doesn't violate any such incredibly obvious principle. I think we should be firmly committed to anti-nihilism, and generally open to revising our initial verdicts about particular cases (especially puzzle cases).
(3) Distinguish personal accuracy from contributing to collective inquiry. Any individual (including professional philosophers!) would be better off (in terms of personal accuracy) sticking to common sense whenever their attempts at systematizing would lead them further astray. (I can only judge that based on my own beliefs, of course. Absent magic oracles saying so, it's hard to imagine first-personal evidence that further thinking would be counterproductive, unless perhaps one has a track record of going out on a (theoretical) limb and then later regretting it.)
But even wrongheaded systematic philosophers may be doing important work by clarifying logical space. So I'm generally happy to see people systematically exploring views, even ones that I think are deeply misguided. (One reason for this is that I might be mistaken, and clarifying the alternative options might ultimately serve to bring a better alternative view to my attention!)
>"Why would you trust *irresolvably inconsistent* intuitions (/implicit principles) to give you any useful guidance at all?"
I'm not sure I accept the substitution of "implicit principles" for "intuitions." This substitution seems to be smuggling in the assumption that ethical thinking must always be based on some sort of principles, which is precisely what I am questioning here. (I don't think this terminological difference lies at the heart of our disagreement, but it is worth noting in passing.) As for why I should trust them — I don't! But it doesn't really matter. They are going to guide my moral thinking, whether I trust them or not.
> "insist upon solving the inconsistency, and work through which intuitions are least costly to give up"
[I think this is our most important point of disagreement, so I'll address it in a separate comment.]
> "if you're wanting to argue that the kind of systematic theorizing that I'm engaged in is likely to be 'detrimental'."
I should clarify a linguistic ambiguity in my original comment — my intended meaning was "there may be certain situations where commitment to systematicity leads to bad moral thinking", not "it may be the case that thinking systematically is (generally) detrimental to moral thought". I agree completely with your suggestion that the world would be a better place if more people thought about ethics more systematically, and in more explicitly consequentialist terms, than they currently do.