
I was thinking mostly about your last paragraph, which suggests that our only possibilities when faced with moral questions are either "systematic theorizing" or "stop thinking." If there are no universal moral principles, we can indeed think about ethical cases without systematic theorizing, and systematic theorizing may in fact be detrimental to moral thinking. So the proposed solution is not "stop thinking," but "stop thinking systematically."

I don't think it is correct to say that particularists "need to offer verdicts on the full array of cases." A general theory needs to do this (because if it can't, it isn't a general theory). But why would a particularist spend time thinking about these particular hypothetical cases, out of the infinitely many possible moral questions that they might spend time thinking about? Even if they agree to consider the case and choose one of the usual bullets to bite, after doing so they can simply shrug and move on, since the choice they make in that particular hypothetical case has no strong implications for the choices they might make elsewhere.


Ah, thanks for clarifying. My point was just that insofar as *any* pattern of verdicts on the puzzle cases will involve biting some bullet or other, there isn't really any "problem" here for utilitarianism that is "solved" by anyone else. The critics who sneer at biting bullets haven't appreciated that *they too* would have to "choose one of the usual bullets to bite" if they were to consider the full range of cases. And it's no distinctive virtue of a theory that it refuses to even *consider* a problem case.

You suggest that the particularist can "simply shrug and move on", but I think much the same is true of the systematic theorist. It's not as though pondering the repugnant conclusion forces us to make terrible decisions in any real-life cases. Some further argument would be needed to show that "systematic theorizing may in fact be detrimental to moral thinking"; I'm not aware of any evidence for that claim. (Quite the opposite, given the track record of utilitarians like Bentham and Mill being ahead of their time on moral issues like women's rights, animal welfare, and the wrongness of anti-sodomy laws.)

https://www.utilitarianism.net/introduction-to-utilitarianism#track-record


>"I think much the same is true of the systematic theorist. It's not as though pondering the repugnant conclusion forces us to make terrible decisions in any real-life cases."

I don't see how we can be confident this is correct. If we know that a moral theory works well for everyday cases but badly when extended to weird hypothetical cases, it seems there must be some region of the spectrum between "everyday life" and "weird hypotheticals" in which the theory's stipulations start to diverge from what we consider to be morally sound. But we don't need a systematic theory for everyday cases where our intuitions are obvious, or for weird hypothetical cases we will never encounter: we need it for the cases in between. If a systematic moral theory is to have any practical implications at all, it is precisely in this intermediate region that we hope it might provide us with some useful guidance. And it is in this intermediate region that we can never be sure whether to believe what the theory is telling us.

> "Some further argument would be needed to show that 'systematic theorizing may in fact be detrimental to moral thinking'; I'm not aware of any evidence for that claim."

The history of the 20th century? Ok, that's perhaps a little too glib. But you are surely aware that Bentham, Mill, and yourself occupy only one very small corner in the vast realm of systematic theorizing. I won't try to defend or refine the claim any further here, though, since I don't think Substack comments are a suitable medium for that discussion.

I think we are starting from very different intuitions about what moral philosophy can achieve. You seem optimistic about the possibility of developing a logically consistent, systematic theory that preserves our most basic moral intuitions and can serve to guide action. I start from the assumption that our most basic moral intuitions are irresolvably inconsistent, so it is *only* by "leaving details blank" that moral reasoning can provide any practical guidance at all.


Why would you trust *irresolvably inconsistent* intuitions (/implicit principles) to give you any useful guidance at all? My stance is very much to insist upon solving the inconsistency, and work through which intuitions are least costly to give up (and hence the implicit principles they represent seem least likely to be true).

> "Bentham, Mill, and yourself occupy only one very small corner in the vast realm of systematic theorizing"

But surely the most relevant corner if you're wanting to argue that the kind of systematic theorizing that I'm engaged in is likely to be "detrimental". It seems, on the contrary, that systematic theorizing *by utilitarian philosophers* has been straightforwardly extremely good for the world, and so we should all want to see more of it! See also: https://rychappell.substack.com/p/is-non-consequentialism-self-effacing


Ok, here we go...

> "insist upon solving the inconsistency, and work through which intuitions are least costly to give up"

The problem with this is that I don't think there is any principled way to decide "which intuitions are least costly to give up" — only a sort of meta-intuition about which intuitions we hold more strongly than others. The best that systematic theorizing can offer is thus a choice of bullets to bite: it always comes down to modus ponens vs. modus tollens.

Take the dilemma presented in your post, which (plausibly) assumes the reader holds two fairly widespread moral intuitions that turn out to be surprisingly difficult to reconcile systematically:

(1) Utopia is better than World Z.

(2) Utopia is better than a barren rock.

You hold intuition (2) strongly and intuition (1) weakly, so you accept utilitarianism as the best systematization of your intuitions (and you organize your life accordingly). But if I hold intuition (1) strongly and intuition (2) weakly, your proposed metaethical procedure would lead me to accept something like "annihilation indifference" as the best systematization of my intuitions (and I would organize my life accordingly). It is not that I am genuinely indifferent between Utopia and a barren rock, any more than you genuinely believe World Z is preferable to Utopia; it is just that the procedure of translating intuition (1) into principles (e.g. the "neutrality principle") and working through the logical implications leads me to this conclusion, and I am forced to accept it in order to avoid another that I find even more unpalatable.

So, following your proposal to "insist upon solving the inconsistency, and work through which intuitions are least costly to give up," I end up indifferent (at best) when contemplating the annihilation of all sentient life. But if I resist the temptation to formulate moral principles and work through their implications, I can preserve my more conventional moral intuitions (including utilitarian ones) — and, on a good day, perhaps even act on them.

This, in a nutshell, is why I am happy to abandon moral principles and systematic theorizing. A little inconsistency seems a small price to pay for the preservation of the universe.


Ha, fair enough! It's true that if one starts with the wrong intuitions (e.g. only holding (2) "weakly"), then systematization could lead one further from the truth.

Though I guess I am optimistic -- at least more so than you are, by the sounds of it -- that there are better alternative systematizations available for those who remain strongly committed to (1), as I tried to indicate in the OP. I certainly don't think it's a forced choice between Total Utilitarianism and Neutrality.

But I take your point that for many individuals, sticking (however inconsistently) to common sense could produce better results than trying to be more systematic would. So I should restrict myself to the more limited claim that it's important and valuable for moral philosophers to pursue the task of systematic moral philosophy, to try to discover and clarify these options, and that others shouldn't be dismissive of that task. And it should be appreciated that a view's being prima facie counterintuitive in some of its verdicts really does little to suggest that it isn't nonetheless correct. These puzzles suggest that if there is any moral truth at all, it will have to surprise us in some way (be counterintuitive) in these cases.


>"It's true that if one starts with the wrong intuitions (e.g. only holding (2) "weakly"), then systematization could lead one further from the truth."

I can agree with most of what you say here, but this strikes me as too hasty a dismissal of people who have the "wrong intuitions." Why is it so obvious to you that accepting the repugnant conclusion is "less costly" than accepting annihilation indifference? What would it take for you to change your mind? If you can't imagine changing your mind on this point, why might someone with the opposite convictions ever change their mind?

Afterthought: Under what circumstances would you be willing to say to someone, "Thinking systematically about moral philosophy is good, but you in particular would be better off sticking to common sense and leaving systematic thought to the philosophers"?

Further afterthought: Under what circumstances would you be willing to say this to yourself?


>"Why would you trust *irresolvably inconsistent* intuitions (/implicit principles) to give you any useful guidance at all?"

I'm not sure I accept the substitution of "implicit principles" for "intuitions." This substitution seems to be smuggling in the assumption that ethical thinking must always be based on some sort of principles, which is precisely what I am questioning here. (I don't think this terminological difference lies at the heart of our disagreement, but it is worth noting in passing.) As for why I should trust them — I don't! But it doesn't really matter. They are going to guide my moral thinking, whether I trust them or not.

> "insist upon solving the inconsistency, and work through which intuitions are least costly to give up"

[I think this is our most important point of disagreement, so I'll address it in a separate comment.]

> "if you're wanting to argue that the kind of systematic theorizing that I'm engaged in is likely to be 'detrimental'."

I should clarify a linguistic ambiguity in my original comment — my intended meaning was "there may be certain situations where commitment to systematicity leads to bad moral thinking", not "it may be the case that thinking systematically is (generally) detrimental to moral thought". I agree completely with your suggestion that the world would be a better place if more people thought about ethics more systematically, and in more explicitly consequentialist terms, than they currently do.
