Why would you trust *irresolvably inconsistent* intuitions (/implicit principles) to give you any useful guidance at all? My stance is very much to insist upon solving the inconsistency, and work through which intuitions are least costly to give up (and hence which of the implicit principles they represent seem least likely to be true).
> "Bentham, Mill, and yourself occupy only one very small corner in the vast realm of systematic theorizing"
But surely the most relevant corner if you're wanting to argue that the kind of systematic theorizing that I'm engaged in is likely to be "detrimental". It seems, on the contrary, that systematic theorizing *by utilitarian philosophers* has been straightforwardly extremely good for the world, and so we should all want to see more of it! See also: https://rychappell.substack.com/p/is-non-consequentialism-self-effacing
Ok, here we go...
> "insist upon solving the inconsistency, and work through which intuitions are least costly to give up"
The problem with this is that I don't think there is any principled way to decide "which intuitions are least costly to give up" — only a sort of meta-intuition about which intuitions we hold more strongly than others. The best that systematic theorizing can offer is thus a choice of bullets to bite: it always comes down to modus ponens vs. modus tollens.
Take the dilemma presented in your post, which (plausibly) assumes the reader holds two fairly widespread moral intuitions that turn out to be surprisingly difficult to reconcile systematically:
(1) Utopia is better than World Z.
(2) Utopia is better than a barren rock.
You hold intuition (2) strongly and intuition (1) weakly, so you accept utilitarianism as the best systematization of your intuitions (and you organize your life accordingly). But if I hold intuition (1) strongly and intuition (2) weakly, your proposed metaethical procedure would lead me to accept something like "annihilation indifference" as the best systematization of my intuitions (and I would organize my life accordingly). It is not that I am genuinely indifferent between Utopia and a barren rock, any more than you genuinely believe World Z is preferable to Utopia; it is just that the procedure of translating intuition (1) into principles (e.g. the "neutrality principle") and working through the logical implications leads me to this conclusion, and I am forced to accept it in order to avoid another that I find even more unpalatable.
So, following your proposal to "insist upon solving the inconsistency, and work through which intuitions are least costly to give up," I end up indifferent (at best) when contemplating the annihilation of all sentient life. But if I resist the temptation to formulate moral principles and work through their implications, I can preserve my more conventional moral intuitions (including utilitarian ones) — and, on a good day, perhaps even act on them.
This, in a nutshell, is why I am happy to abandon moral principles and systematic theorizing. A little inconsistency seems a small price to pay for the preservation of the universe.
Ha, fair enough! It's true that if one starts with the wrong intuitions (e.g. only holding (2) "weakly"), then systematization could lead one further from the truth.
Though I guess I am optimistic -- at least more so than you are, by the sounds of it -- that there are better alternative systematizations available for those who remain strongly committed to (1), as I tried to indicate in the OP. I certainly don't think it's a forced choice between Total Utilitarianism and Neutrality.
But I take your point that for many individuals, sticking (however inconsistently) to common sense could have better results than their inclinations would lead them to if they tried to be more systematic. So I should restrict myself to the more limited claim that it's important and valuable for moral philosophers to pursue the task of systematic moral philosophy, to try to discover and clarify these options. So I don't think others should be dismissive of that task. And it should be appreciated that a view's being prima facie counterintuitive in some of its verdicts really doesn't do much to suggest that it isn't nonetheless correct. These puzzles suggest that if there is any moral truth at all, it will have to in some way surprise us (be counterintuitive) in these cases.
>"It's true that if one starts with the wrong intuitions (e.g. only holding (2) "weakly"), then systematization could lead one further from the truth."
I can agree with most of what you say here, but this strikes me as too hasty a dismissal of people who have the "wrong intuitions." Why is it so obvious to you that accepting the Repugnant Conclusion (RC) is "less costly" than accepting annihilation indifference? What would it take for you to change your mind? If you can't imagine changing your mind on this point, why might someone with the opposite convictions ever change their mind?
Afterthought: Under what circumstances would you be willing to say to someone, "Thinking systematically about moral philosophy is good, but you in particular would be better off sticking to common sense and leaving systematic thought to the philosophers"?
Further afterthought: Under what circumstances would you be willing to say this to yourself?
Lots going on here!
(1) In general, you can only "change someone's mind" on philosophical matters by showing that an alternative view better captures the bulk of what they consider importantly true. So I wouldn't expect to ever convince someone with literally "opposite" convictions. But if someone has convictions that are in many respects close to mine, but they've just been misled into accepting a less-coherent branch view (without realizing the inconsistencies with their other commitments), then my arguments could help them with their pruning.
(2) On the substantive issue of whether the RC or annihilation-indifference is worse, one reason I'm confident that the latter is worse is that it violates the extremely weak general principle that *positive value is possible*, whereas the RC, while counterintuitive in itself, doesn't violate any such incredibly obvious principle. I think we should be firmly committed to anti-nihilism, and generally open to revising our initial verdicts about particular cases (especially puzzle cases).
(3) Distinguish personal accuracy from contributing to collective inquiry. Any individual (including professional philosophers!) would be better off (in terms of personal accuracy) sticking to common sense whenever their attempts at systematizing would lead them further astray. (I can only judge that based on my own beliefs, of course. Absent magic oracles saying so, it's hard to imagine first-personal evidence that further thinking would be counterproductive, unless perhaps one has a track record of going out on a (theoretical) limb and then later regretting it.)
But even wrongheaded systematic philosophers may be doing important work by clarifying logical space. So I'm generally happy to see people systematically exploring views, even ones that I think are deeply misguided. (One reason for this is that I might be mistaken, and clarifying the alternative options might ultimately serve to bring a better alternative view to my attention!)
I'll be just a bit nitpicky here — not because I think you haven't considered these details, but just to clarify my own position — and note that rejecting annihilation indifference requires more than just accepting the possibility of positive value:
(1) It requires accepting that positive value, in appropriate forms and sufficient quantities, can offset the continuing presence of whatever negative value(s) we might be able to eliminate through the hypothetical annihilation. (So for example, although I don't think lexical negative utilitarianism offers a good way of thinking about the relationship between positive and negative aspects of individual welfare, I don't believe an "extremely weak" set of intuitions/assumptions is sufficient to reject it.)
(2) It requires some account of the relationship between positive value and sentience or consciousness. Stipulating that "positive value is possible" is consistent with the claim that there is some special positive value in barrenness, so that the barren rock is in fact the ideal state of the universe. We need additional intuitions or assumptions to explain how and why conscious or sentient life has more value than the barren rock. There are many possible arguments to support the latter position, but I don't think the premises of any of them can be called "extremely weak."
>"Why would you trust *irresolvably inconsistent* intuitions (/implicit principles) to give you any useful guidance at all?"
I'm not sure I accept the substitution of "implicit principles" for "intuitions." This substitution seems to be smuggling in the assumption that ethical thinking must always be based on some sort of principles, which is precisely what I am questioning here. (I don't think this terminological difference lies at the heart of our disagreement, but it is worth noting in passing.) As for why I should trust them — I don't! But it doesn't really matter. They are going to guide my moral thinking, whether I trust them or not.
> "insist upon solving the inconsistency, and work through which intuitions are least costly to give up"
[I think this is our most important point of disagreement, so I'll address it in a separate comment.]
> "if you're wanting to argue that the kind of systematic theorizing that I'm engaged in is likely to be 'detrimental'."
I should clarify a linguistic ambiguity in my original comment — my intended meaning was "there may be certain situations where commitment to systematicity leads to bad moral thinking", not "it may be the case that thinking systematically is (generally) detrimental to moral thought". I agree completely with your suggestion that the world would be a better place if more people thought about ethics more systematically, and in more explicitly consequentialist terms, than they currently do.