40 Comments
Sep 10, 2022 · Liked by Richard Y Chappell

I loved this post - this is a big pet peeve of mine as well and I think you nailed it.

However, I think a lot of times when I see similar arguments 'in the wild', even if they are initially framed narrowly as critiques of utilitarianism they are in fact motivated by a broader feeling that there are limits to moral reasoning. Something like, we shouldn't expect our theories to have universal domain, and we don't get much leverage by trying to extend our theories far beyond the intuitions that initially motivated them.

The main example I have in mind is Tyler Cowen's recent conversation with Will. Tyler raises a number of objections to utilitarianism. At times I found this frustrating, because viewed through the lens of figuring out the best moral theory, he is making isolated demands for rigor. But I think Tyler's point is instead something more like the above: that we shouldn't rely too much on our theories outside of everyday contexts.

You do touch on this in the post, but only briefly. I'd be interested to hear more about your thoughts on this issue.

author

I think something in that vicinity is often reasonable, at least from a purely practical perspective.

In particular, I would be pretty worried about anyone trying to actually bring about world Z, or making other practical recommendations that truly *depended* upon taking the repugnant conclusion to be acceptable after all. I think we should be *immensely uncertain* about all of this.

(Fortunately, one's choice of poison in these hypothetical cases doesn't really make much practical difference to real-life policy disputes, as far as I can tell. The main thing, I think, is just to be clear that we should reject the sort of strict "neutrality" that implies that extinction wouldn't really matter. But that leaves open a very wide array of remaining options!)

But such critics go wrong when they make the stronger claim that there's something inherently bad or misguided about trying to identify the best systematic theory. It's not that there are strict limits in principle to moral reasoning, but just that it gets really *difficult*, so we shouldn't have much confidence in our verdicts when reasonable theories diverge. But it's not like we can instead just be supremely confident in our gut reactions or untutored moral assumptions -- there's a long history of those going quite horribly wrong.

So yes, we shouldn't "rely too much on our theories" outside of everyday contexts. But nor should we rely too much on anything else! We should tread carefully.

Sep 13, 2022 · edited Sep 13, 2022 · Liked by Richard Y Chappell

Why do such critics go wrong when they make the stronger claim? In particular, why are you so quick to rule out the claim "there are limits in principle to systematic moral theorising"? That claim seems to fall out quite naturally from arguments like Cowen's, which don't necessarily entail abandoning all attempts at constructing or evaluating systematic theories but only insist on an awareness of the context and limits of that enterprise. Meanwhile, the claim "we should be careful about systematic moral theorising, because it is a difficult enterprise", while it may be true, is just part of a different discourse and irrelevant to arguments like Cowen's.

More precisely, arguments like Cowen's are based on premises of the following form: provably, all attempts at systematic moral theorising in domain X run into absurdities or paradoxes or contradictions. This is true of both population axiology and decision theory. This premise is completely irrelevant to the claim "we should be more careful about our moral reasoning in domain X": we have *proven* that these paradoxes are not due to sloppy reasoning, that they are in some sense inevitable; maybe being careful will prevent us from making additional mistakes, but that is irrelevant here. However, Cowen's premise could be interpreted as good evidence for the claim 'there are limits to moral reasoning in domain X': the paradoxes are taken to illustrate more-or-less exactly where the limits lie. If you believe that one of the paradoxes is not so paradoxical, or one of the absurdities not so absurd, you can reasonably reject this argument; but it's at least not obviously wrongheaded.

Indeed, arguments of the form 'paradoxes are inevitable when we apply X widely; therefore, there must be limits on the application of X' are in general perfectly good arguments, even if not strictly deductively valid. Consider the argument that there are limits to the principle of set-theoretic comprehension, because paradoxes are inevitable when it is applied too widely - a near-universally accepted inference in the philosophy of mathematics. By contrast, arguments of the form 'paradoxes are inevitable when we apply X widely; therefore, we should apply X widely but be careful when we do so, lest we become too confident in one view over another' seem obviously invalid - the premise is just irrelevant to the conclusion.

author

I think your argument here rests on conflating very different senses of "paradox". The unrestricted set-theoretic paradoxes involve showing that the (unrestricted) principle straightforwardly entails contradictions, and so cannot be true. The "paradoxes" of population ethics instead show that "common sense" moral verdicts are inconsistent, and so we must bite some bullet or other. In such a context, I think the putative "absurdity" of rejecting a prima facie intuitive verdict is undermined. We can try to judge which verdict is least bad, and there's no great cost to accepting one counter-intuitive verdict in order to avoid others -- or in order to *enable* us to make more positively intuitive, plausible claims.

In fact, I'd say there are three major reasons to reject your analogy here:

(1) It's conceivable (broadly speaking) that one might actually be faced with the choices described in puzzle cases, so a complete theory should give an answer as to what the correct choice to make in such a scenario would be. By contrast, you could never find yourself in a town where the barber shaves all and only those who do not shave themselves. Logical paradoxes limit the space of possibilities. Moral ones plainly don't; instead, they simply make it challenging to *know what to say* about the possibilities in question.

To say "there are limits to systematic theorizing" doesn't answer what one should do if one finds oneself in the actual situation described in a puzzle case. Rather, it is simply to give up. I don't see any reason to do that.

(2) Silence in puzzle cases (due to restricting the domain of one's moral verdicts) doesn't avoid absurdity. As mentioned in the OP, it's not just that we want to *refrain* from asserting that Z is better than A; it's that we *positively* want to say that A is better than Z. Silence can't achieve that.

(3) As indicated above, I don't think biting the bullet on one of these cases is all that big a cost, in context. There's a big difference between *prima facie counterintuitive* and *outright absurd*, and I think it takes systematic theorizing to determine which category one falls into. Being supported by overwhelmingly plausible principles is precisely how we can show that a prima facie counterintuitive result need not be counterintuitive all things considered, or at the end of reflective equilibrium. In other cases, we might take the putative counterexample to illuminate what is now an obvious flaw or oversight in a principle that turned out to only be prima facie plausible. So it all depends on the details. (And that's just as it should be.)

Sep 14, 2022 · edited Sep 14, 2022 · Liked by Richard Y Chappell

[this comment was based on a misreading of Richard's comment, I'm not sure I hold to this.]

I'll grant you that the analogy to unrestricted comprehension was perhaps specifically poor; but there are related set-theoretic paradoxes that I could have used instead, and the analogy stands. Think, perhaps, of the rejection of non-well-foundedness. If we think of a paradox as a collection of individually intuitive premises that jointly entail a contradiction, then the paradoxes of population ethics and the paradoxes of set theory are on a par: Russell's paradox is special in requiring only one (non-definitional) premise to generate a contradiction, but that isn't true of all set-theoretic paradoxes. The argument schema I proposed still stands, as one that is often quite good albeit definitely not deductively valid. Going point-by-point:

>To say "there are limits to systematic theorizing" doesn't answer what one should do if one finds oneself in the actual situation described in a puzzle case. Rather, it is simply to give up. I don't see any reason to do that.

This seems completely wrong to me. Systematic theorising (in your sense, which I think means 'trying to figure out a complete, globally consistent ranking across decisions') is indeed one way in which philosophy can help us make decisions in puzzle cases. But it's hardly the only one. Virtue ethics here is the obvious example: rather than trying to identify some algorithmic rule that you apply in puzzle cases, philosophy tells you how to cultivate dispositions that you will then rely on to make your decisions. (To be clear, I'm not a virtue ethicist and I think you can reasonably say that this is an unhelpful way to approach ethics - it's just an example.) Approaches that emphasise the need for context-specific judgment - including particularism, but also certain strains of Kantian thinking that draw from Theory and Practice and the Third Critique - also fit in here. To be sure, if you take any of these routes, you can no longer answer the question 'what is the content of the global better-than relation on decisions?'. But I don't think that's an objection, it's just a redescription of the position 'there are limits to systematic theorising'. I'm 'giving up' on the project of systematic theorising, but I'm not 'giving up' on thinking (as you imply), or even on the project of trying to help people make tricky ethical decisions.

>Silence in puzzle cases (due to restricting the domain of one's moral verdicts) doesn't avoid absurdity. As mentioned in the OP, it's not just that we want to *refrain* from asserting that Z is better than A; it's that we *positively* want to say that A is better than Z. Silence can't achieve that.

I agree with this, but I don't see the relevance. As mentioned in another comment of mine, a sufficiently weak deontic logic could get around this. Or if you don't want to go down that route, you can assert 'A is better than Z' and 'utopia is better than a barren rock' but then remain silent about the general principles that are required to bridge the gap between 'utopia is better than a barren rock' and 'A is not better than Z'. (Yes, this is a 'gappy' moral view, so it's not a systematic moral theory - but again, my entire position is that this is not necessarily a bad thing, so it's question-begging to call this 'giving up' or 'not thinking'.) Or you can invoke a certain kind of value pluralism: these various principles all represent real values, but they are incommensurable and can conflict. General, all-purpose silence about every ethical question is indeed absurd; reasonable silence on certain questions based on informed judgments about the limits of systematic theorising need not be.

>There's a big difference between *prima facie counterintuitive* and *outright absurd*, and I think it takes systematic theorizing to determine which category one falls into.

Certainly, *one* of the things that can help us distinguish those two categories is systematic theorising. But it's not clear that it is the *only* way to do that. David Lewis makes an interesting point somewhere writing about Graham Priest: while Priest may have pushed those who believe in the law of non-contradiction (LNC) towards greater clarity in their systematic theorising, ultimately there was no systematic way to decide 'yay' or 'nay' for LNC. Yet Lewis (and I!) still regard ~LNC as outright absurd, not just prima facie counterintuitive; and I think we're justified in that. When it comes to population ethics, I think the same is true of the Absurd Conclusion (for example).

--------------------------------------------------------

Anyway, the analogy with set theory was not one on which my argument turned. My point was just that insisting on the limits of systematic theorising is a *reasonable position* here (it happens to also be my position, but I wasn't directly defending it), and that it is invalid to argue as you did:

> such critics go wrong when they make the stronger claim that there's something inherently bad or misguided about trying to identify the best systematic theory. It's not that there are strict limits in principle to moral reasoning, but just that it gets really *difficult*, so we shouldn't have much confidence in our verdicts when reasonable theories diverge.

This is just a non-sequitur; the critics' stronger claim really is a defensible response to the philosophical difficulties (although certainly not one anyone should be 100% confident in), one that is not defused either by your post or by your invocation of the difficulties of moral philosophy.

author

What you quote there isn't an *argument*, but just a presentation of my view. I'm just explaining two things I think, not suggesting that one logically follows from the other!


Ah, then I misread - my apologies!


I think the repugnant conclusion may actually be just an artifact of oversimplification. The clearest argument I've heard for it involved oscillating between subtly contradictory premises. If you set things up as an agent-based model, with incomplete information, something to represent the difficulty of enforcing complex policies on those who don't personally benefit from them, and basic physics constraints like conservation of mass, the problem mostly goes away.

May 17, 2023 · edited May 17, 2023 · Liked by Richard Y Chappell

Caveat: I'm not a philosopher, but rather an economist.

I think many of these paradoxes (Quinn's Self-Torturer, Parfit's "mere addition," etc.) have the following form:

> Start from state S. Operation O(S) is locally preferable (i.e., it produces a preferred state S'). But if we iterate ad infinitum, we end up with a state S* that is not preferable to S.

The conclusion is usually either that S* actually _is_ preferable to S (i.e., our preferences are "rational" and therefore transitive), or that our preferences are seriously suspect, to the point where "maximizing" them is a hopelessly muddled concept.
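
To make the schema concrete, here is a toy numerical sketch of the mere-addition version (the population sizes and utility levels are invented for illustration; "locally preferable" is cashed out as "total utility rises"):

```python
# Toy model of the iterated-improvement schema (all numbers invented).
# Each step O adds people whose lives are barely worth living, then
# redistributes equally. Every step raises total utility, yet iteration
# drives average utility toward the "barely worth living" level.

def mere_addition_step(population, n_new=100, low=0.1):
    """Add n_new people at utility `low`, then redistribute equally."""
    grown = population + [low] * n_new
    avg = sum(grown) / len(grown)
    return [avg] * len(grown)

world = [100.0] * 10  # state S: a few people at very high welfare

for _ in range(50):
    new_world = mere_addition_step(world)
    assert sum(new_world) > sum(world)  # O(S) is "locally preferable"
    world = new_world

# The endpoint S*: a huge population with near-zero average welfare.
print(len(world), round(sum(world) / len(world), 2))  # 5010 people, avg 0.3
```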

I think there's another way to approach this. Behavioral economics deals with such problems ("time-inconsistent preferences") routinely. Consider a would-be smoker. He doesn't smoke his first cigarette, because he knows that his preferences display habit formation --- his first cigarette leads to the second, and so on.

In other words, the time 0 self has a genuinely different axiology than the time _t_ self. (Equivalently, preferences are state-dependent.) It would definitely be _cleaner_ if our rankings of future worlds were invariant to where we are today, but if the choice is between axiomatic hygiene and uncomfortable paradoxes, I'll take the mess.

(I think this also has something to say about, e.g., the demandingness objection. It's always locally preferable to save one more child, but the agent is justifiably wary of committing to a sequence of operations which turns him into a child-rescuing drone.)
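
And a minimal sketch of the habit-formation story (all parameters invented): the time-t self's valuation of the next cigarette depends on the habit stock, while the time-0 self evaluates the whole path with time-0 preferences.

```python
# Toy habit-formation model (all parameters invented for illustration).
HEALTH_COST = 1.0  # fixed cost per cigarette, on an arbitrary scale

def marginal_pleasure(k, base=0.8, habit=0.15):
    """Pleasure of the next cigarette after k already smoked.
    State-dependent: it rises with the habit stock k."""
    return base + habit * k

# Once the habit forms, each *current* self prefers one more cigarette:
for k in range(4):
    print(k, marginal_pleasure(k) > HEALTH_COST)  # False, False, True, True

# But the time-0 self values every cigarette at only `base`, so the whole
# path looks like a loss from time 0, and the first cigarette is refused:
n = 10
print(n * 0.8 - n * HEALTH_COST < 0)  # True: never start
```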

Sep 10, 2022 · edited Sep 10, 2022 · Liked by Richard Y Chappell

This is a great article! You don't need to accept the repugnant conclusion to be an effective altruist. You just need to think helping people is important; the most obvious conclusion ever. Rejecting this trivial conclusion would be the really repugnant conclusion.

Sep 11, 2022 · Liked by Richard Y Chappell

The best argument for "stop thinking" might be Joseph Henrich's: that for most of human existence, trying to think for yourself rather than imitating tradition was one of the worst things you could do. Of course, he had to do a lot of research and thinking to arrive at that abstract point!

https://slatestarcodex.com/2019/06/06/asymmetric-weapons-gone-bad/

Sep 10, 2022 · Liked by Richard Y Chappell

I would say that I am against systematic theorizing of sorts, but I wouldn't say I've stopped thinking. My views are largely in line with Huemer, who doesn't have a clearly defined axiomatic system but clearly hasn't stopped thinking. (Unless I misunderstand what you mean). But I do accept the Repugnant Conclusion like Huemer. Actually, your article on population ethics on utilitarianism.net was influential in that regard. In fact, I mistakenly thought you took the total view because of that article. Whoops! (But great article)

I found some of Hoel's arguments weak, and I am saddened to see that he deleted your comment. It's also hyperbolic to analogize utilitarianism to a poison, even if you disagree with it. I recall seeing your comment, but now I can't find it. Very disappointing behavior.

Hoel's critique isn't the best. He doesn't allow for tradeoffs between certain values, which results in some absurdities (for example, hiccups versus a shark attack). But clearly we always make this tradeoff probabilistically. I responded:

"I also want to provide a possible critique of the shark example. Surely, you would acknowledge that when people go swimming they risk being ripped to shreds by a shark. If you don't find it immoral for little girls to swim in the ocean, it means there is a probability of a little girl getting eaten by a shark that you find acceptable to trade off for playing in the ocean. Perhaps it's as small as 0.00001%. But what this says is that we can make these sorts of comparisons between something horrific and something trivial like hiccups. Unless, you don't think little girls should be allowed to swim in the ocean or that swimming in the ocean is a higher good or something."

author

Oh, yes, I don't mean to suggest that very simple axiomatic theories are the only option! I just mean that one needs to be willing to think about the whole range of possibilities, and to take verdicts in one case to constrain what you can say about others.

I don't really know of many philosophers to whom my critique would apply wholesale. But there are at least particular objections/arguments that seem to rest on not fully thinking through the costs of the alternatives, which is what I'm really trying to highlight here.

Sep 10, 2022 · Liked by Richard Y Chappell

I'm basically sympathetic to your arguments here, but I really don't grasp intuitively why rejecting

(ii) utopia is better than a barren rock

is so repugnant.

Or, maybe more to the point: the reason to prefer Utopia to a barren rock is the preferences of people who actually exist... But in the absence of people to hold such a preference, indifference feels much less nihilistic.

Preferences also seem to me to solve the intrapersonal version of the neutrality paradox: we prefer future moments of value to future non existence because of our preferences, not because of some overriding reason that the former is better than the latter.

I think I can imagine someone who genuinely feels like they have gotten all they want out of life, and is indifferent between continuing to live or dying, and while I might find that alien or unfamiliar, I don't find it _wrong_.

Am I missing something here?

author

re: utopia, I guess it could partly be an extension of my feeling very glad that the people I love have gotten to exist. If I imagine Amun-Ra offering Cleopatra a massive party if she agrees to subsequently blow up the world, I think it's really important that she answer 'No' to that -- otherwise none of us would get to exist, and that's not a matter of indifference!

Another thought is that it is very literally nihilistic to deny positive value. If utopia is no better than a barren rock, then you're saying utopia has *no value*. That's nihilism! Seems bad to me. I think we should appreciate good lives as something that's genuinely good, not just kinda-pretend-good-while-those-people-are-around-to-complain-about-our-saying-otherwise.

author

Interesting suggestion, thanks!

I think one major concern with an exclusively preferentist account of the harm of death comes from cases of temporary depression. Suppose a teenager falls into a funk and, for the moment, is truly indifferent about their future. But suppose their depression will in fact pass, and they would have a really happy, flourishing future. I think it's bad for the teenager to die! It deprives them of a valuable future, that would genuinely make their life better.

That said, I'm not committed to the claim that it's *always* better to add more life. Just that it *can* be good. For all I've said, it might be perfectly reasonable for an elderly person to reject further life extension, especially if their additional time would be below average compared to their past life.

Though I guess I do think it would always be irrational to reject above-average additional time. E.g. imagine an elderly person who abandoned their family and lived pretty miserably, and now doesn't care about anything at all. But if they kept living, they would not only meet their grandkids but also form a genuine bond and feel really good about it. In that case, like the temporarily depressed teenager, I think it's important that they not be deprived of the valuable future! I think what's often going on in less well-described cases is that we implicitly imagine an old person with a drab future that wouldn't seem to add any real value to their life, and might even detract from it. And then, sure, there doesn't seem any moral reason to add something that doesn't appear to have any real value.

Sep 12, 2022 · Liked by Richard Y Chappell

Thanks for the responses!

I agree with that concern, although I think your example of the depressed teenager has some features that muddle the issue a bit. In particular, the fact that their depression is temporary, and can be regarded as a sort of impairment of their thinking and judgement, strikes me as important in shaping my intuition.

Basically, I think the problem is in deciding what counts as a preference, and how to prioritize or decide which preferences to give weight to: we don't want to just count our moment-to-moment preferences, and let them override any other considerations, precisely because my preference at a particular moment may be a poor representation of my preferences overall, or may find me in a moment of poor judgement.

If you show me someone for whom the prospect of a happy future has _never_ motivated them to want to live beyond a certain point, even when they are not depressed, I am less convinced that it is bad for them to stop living at that point.

Same goes for the elderly person meeting their future grandchild: if an elderly person tells me that yes, they know they will be able to form a valuable and meaningful bond with their grandkids, and yes, they know this will give them joy and make their life feel worth living; but still, they don't want to go on living, I find this _strange_ and I would definitely want to make sure we're understanding each other properly and all that, but if they stick to their guns, I'm not sure I feel the intuition that it's _bad_ for them to do this, just really weird.

re: Utopia vs. Barren Rock

"otherwise none of us would get to exist, and that's not a matter of indifference!"

I guess this is where we disagree; it's not a matter of indifference _to us, because we happen to now exist and have interests and preferences_; but 2000 years ago, when we didn't exist and have preferences, I am really not sure I have the same intuition as you--it does feel to me like a matter of indifference.

"Seems bad to me. I think we should appreciate good lives as something that's genuinely good, not just kinda-pretend-good-while-those-people-are-around-to-complain-about-our-saying-otherwise."

I guess maybe I am a little sympathetic to nihilism on this front, though I'm not sure it commits to me to the view that good lives aren't good: I think good lives are good to the people who have them, and to the people who can imagine them in the future; but if there's no one around to have good lives, and no one around to anticipate future good lives, there's no one for whom those lives are good.

This sounds a little close to a person-affecting view, and I know there are problems with those, but at least as a matter of intuition, it really does feel right to me: in a world with no people, it is a matter of indifference whether there will be people in the future.

Anyway, I appreciate the responses, and I really enjoy the blog in general, so thanks!


I don't recall the comment you're referencing, but likely the reason your comment was deleted was because it was against the moderation policy of The Intrinsic Perspective, which disallows hostility, yelling at people IN ALL CAPS, name-calling, or just general cantankerousness.


I tried to leave a comment linking to here, and I don't know what policy I violated.


Well said. Parfit literally discusses the repugnant conclusion and the non-identity problem as two sides of the same coin, and it's not as if deontology doesn't run headfirst into the latter through the usual slave-creation problems.

For what it's worth, I don't believe that total utilitarianism commits you to biting the bullet on the repugnant conclusion (as opposed to the non-identity problem). You can be a total utilitarian (as I am) but just have a restricted scope on what you think is valuable/important and hence in need of maximization.

I believe it really does depend on your substantive metaethical views. If you are in some sense constructivist about value (i.e., you believe things are good because we value them, and not vice versa), then there is no non-question-begging argument for creating people (see: people should create people because life is valuable; but their lives are valuable only if these people value their lives; but they would only value their lives if they existed in the first place; and they would only exist if we should create them, and we're back at the beginning).

author

Depends a lot on the details of the constructivism! E.g. it could be that *our* (or anyone's) valuing sentient lives in general is enough to make such lives valuable. So I don't see that constructivism per se forces you in any particular direction here. I'd think it should more come down to which first-order moral verdicts you find most plausible on their merits.


I think this post overlooks the way that utilitarian systems only work if they can solve all problems, while many other systems of ethics continue to work even if their axiom sets are incomplete. Utilitarianism claims that a simple set of axioms can answer all moral problems. Before you can use utilitarianism to answer any questions, you need to choose a set of axioms. These then necessarily apply to all problems - otherwise, you would need some rule saying where they stop applying, and I don't recall seeing any serious utilitarians proposing one. Without a stopping point given in an explicit rule, utilitarianism ends up depending on something repugnant.

Most philosophical systems I know have some kind of repugnant consequence - that's why philosophy isn't solved. But the repugnant conclusions arising from those systems are different. Kant's categorical imperative says you should tell a murderer where their next victim is, and you can't accept the categorical imperative unless you're okay with that. But you can accept the categorical imperative without having clear opinions about extreme world-states. Utilitarianism has a different trade-off. You can't accept any specific set of utilitarian axioms unless you're okay with the corresponding claims about extreme world-states.

Silence isn’t necessarily a virtue, but sometimes it’s better than being wrong.


What are your thoughts on moral particularism? Is it so obviously misguided as to not even require explicit rejection?

author

How do you see this as connected to the OP? Particularists don't think there are useful universal/exceptionless moral principles. But they still need to offer verdicts on the full array of cases. So they will still need to bite some unpalatable bullet or other when stepping through the cases in Parfit's mere addition paradox, or Beckstead & Thomas' spectrum arguments. I don't see that it offers any easy escape here.


I'm not sure particularists actually do need to offer verdicts on the full array of cases - part of the attraction of their moral epistemology is that it seems to allow gappiness. But even if we reject that, they don't have to offer theoretically consistent verdicts on all cases (if their verdicts were theoretically consistent, they would implicitly have a general moral theory!). On the spectrum argument and the mere addition paradox, I think the particularist can and should do what Scott Alexander does (https://astralcodexten.substack.com/p/book-review-what-we-owe-the-future) and simply reject the train of logic at the price of ethical inconsistency.

More formally, particularists can - and I think should - use a deontic logic weaker than K+D. As such, they can allow a certain kind of ethical inconsistency without it infecting their overall belief system with logical inconsistency. You might say that this logic is too weak, but the particularist can supplement universal rules of logic with context-specific ethical judgments to get an *overall* ethical epistemology that is sufficiently strong. I read Bernard Williams (a particularist, although he didn't use the label) as arguing exactly this point in 'Ethical Consistency' and in parts of Shame and Necessity, although obviously he doesn't use the language of deontic logic.
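
For reference, the standard statements of the two axiom schemas at issue (this formalization is my gloss, not the commenter's; read O as "it is obligatory that"):

```latex
% The deontic logic K+D referenced above, in its usual axiomatization:
\[
  \textbf{(K)}\quad O(\varphi \rightarrow \psi) \rightarrow (O\varphi \rightarrow O\psi)
  \qquad
  \textbf{(D)}\quad O\varphi \rightarrow \neg O\neg\varphi
\]
% Weakening or dropping (D), together with the agglomeration rule from
% O(phi) and O(psi) to O(phi and psi), permits genuine moral dilemmas
% (O(phi) and O(not phi) holding at once) without logical explosion.
```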


I was thinking mostly about your last paragraph, which suggests that our only possibilities when faced with moral questions are either "systematic theorizing" or "stop thinking." If there are no universal moral principles, we can indeed think about ethical cases without systematic theorizing, and systematic theorizing may in fact be detrimental to moral thinking. So the proposed solution is not "stop thinking," but "stop thinking systematically."

I don't think it is correct to say that particularists "need to offer verdicts on the full array of cases." A general theory needs to do this (because if it can't, it isn't a general theory). But why would a particularist spend time thinking about these particular hypothetical cases, out of the infinitely many possible moral questions that they might spend time thinking about? Even if they agree to consider the case and choose one of the usual bullets to bite, after doing so they can simply shrug and move on, since the choice they make in that particular hypothetical case has no strong implications for the choices they might make elsewhere.

author

Ah, thanks for clarifying. My point was just that insofar as *any* pattern of verdicts to the puzzle cases will involve biting some bullet or other, there isn't really any "problem" here for utilitarianism that is "solved" by anyone else. The critics who sneer at biting bullets haven't appreciated that *they too* would have to "choose one of the usual bullets to bite" if they were to consider the full range of cases. And it's no distinctive virtue of a theory that it refuses to even *consider* a problem case.

You suggest that the particularist can "can simply shrug and move on", but I think much the same is true of the systematic theorist. It's not as though pondering the repugnant conclusion forces us to make terrible decisions in any real-life cases. Some further argument would be needed to show that "systematic theorizing may in fact be detrimental to moral thinking"; I'm not aware of any evidence for that claim. (Quite the opposite, given the track record of utilitarians like Bentham and Mill being ahead of their time on moral issues like women's rights, animal welfare, and the wrongness of anti-sodomy laws.)

https://www.utilitarianism.net/introduction-to-utilitarianism#track-record


>"I think much the same is true of the systematic theorist. It's not as though pondering the repugnant conclusion forces us to make terrible decisions in any real-life cases."

I don't see how we can be confident this is correct. If we know that a moral theory works well for everyday cases but badly when extended to weird hypothetical cases, it seems there must be some region of the spectrum between "everyday life" and "weird hypotheticals" in which the theory's stipulations start to diverge from what we consider morally sound. But we don't need a systematic theory for everyday cases where our intuitions are obvious, or for weird hypothetical cases we will never encounter: we need it for the cases in between. If a systematic moral theory is to have any practical implications at all, it is precisely in this intermediate region that we hope it might provide us with some useful guidance. And it is in this intermediate region that we can never be sure whether we believe what the theory is telling us.

> "Some further argument would be needed to show that 'systematic theorizing may in fact be detrimental to moral thinking'; I'm not aware of any evidence for that claim."

The history of the 20th century? Ok, that's perhaps a little too glib — but you are surely aware that Bentham, Mill, and yourself occupy only one very small corner in the vast realm of systematic theorizing. I won't try to defend or refine the claim any further here, though, since I don't think substack comments are a suitable medium for that discussion.

I think we are starting from very different intuitions about what moral philosophy can achieve. You seem optimistic about the possibility of developing a logically consistent, systematic theory that preserves our most basic moral intuitions and can serve to guide action. I start from the assumption that our most basic moral intuitions are irresolvably inconsistent, so it is *only* by "leaving details blank" that moral reasoning can provide any practical guidance at all.

author

Why would you trust *irresolvably inconsistent* intuitions (/implicit principles) to give you any useful guidance at all? My stance is very much to insist upon solving the inconsistency, and work through which intuitions are least costly to give up (and hence the implicit principles they represent seem least likely to be true).

> "Bentham, Mill, and yourself occupy only one very small corner in the vast realm of systematic theorizing"

But surely the most relevant corner if you're wanting to argue that the kind of systematic theorizing that I'm engaged in is likely to be "detrimental". It seems, on the contrary, that systematic theorizing *by utilitarian philosophers* has been straightforwardly extremely good for the world, and so we should all want to see more of it! See also: https://rychappell.substack.com/p/is-non-consequentialism-self-effacing

Sep 14, 2022 · Liked by Richard Y Chappell

Ok, here we go...

> "insist upon solving the inconsistency, and work through which intuitions are least costly to give up"

The problem with this is that I don't think there is any principled way to decide "which intuitions are least costly to give up" — only a sort of meta-intuition about which intuitions we hold more strongly than others. The best that systematic theorizing can offer is thus a choice of bullets to bite: it always comes down to modus ponens vs. modus tollens.

Take the dilemma presented in your post, which (plausibly) assumes the reader holds two fairly widespread moral intuitions that turn out to be surprisingly difficult to reconcile systematically:

(1) Utopia is better than World Z.

(2) Utopia is better than a barren rock.

You hold intuition (2) strongly and intuition (1) weakly, so you accept utilitarianism as the best systematization of your intuitions (and you organize your life accordingly). But if I hold intuition (1) strongly and intuition (2) weakly, your proposed metaethical procedure would lead me to accept something like "annihilation indifference" as the best systematization of my intuitions (and I would organize my life accordingly). It is not that I am genuinely indifferent between Utopia and a barren rock, any more than you genuinely believe World Z is preferable to Utopia; it is just that the procedure of translating intuition (1) into principles (e.g. the "neutrality principle") and working through the logical implications leads me to this conclusion, and I am forced to accept it in order to avoid another that I find even more unpalatable.

So, following your proposal to "insist upon solving the inconsistency, and work through which intuitions are least costly to give up," I end up indifferent (at best) when contemplating the annihilation of all sentient life. But if I resist the temptation to formulate moral principles and work through their implications, I can preserve my more conventional moral intuitions (including utilitarian ones) — and, on a good day, perhaps even act on them.

This, in a nutshell, is why I am happy to abandon moral principles and systematic theorizing. A little inconsistency seems a small price to pay for the preservation of the universe.

Sep 14, 2022 · Liked by Richard Y Chappell

>"Why would you trust *irresolvably inconsistent* intuitions (/implicit principles) to give you any useful guidance at all?"

I'm not sure I accept the substitution of "implicit principles" for "intuitions." This substitution seems to be smuggling in the assumption that ethical thinking must always be based on some sort of principles, which is precisely what I am questioning here. (I don't think this terminological difference lies at the heart of our disagreement, but it is worth noting in passing.) As for why I should trust them — I don't! But it doesn't really matter. They are going to guide my moral thinking, whether I trust them or not.

> "insist upon solving the inconsistency, and work through which intuitions are least costly to give up"

[I think this is our most important point of disagreement, so I'll address it in a separate comment.]

> "if you're wanting to argue that the kind of systematic theorizing that I'm engaged in is likely to be 'detrimental'."

I should clarify a linguistic ambiguity in my original comment — my intended meaning was "there may be certain situations where commitment to systematicity leads to bad moral thinking", not "it may be the case that thinking systematically is (generally) detrimental to moral thought". I agree completely with your suggestion that the world would be a better place if more people thought about ethics more systematically, and in more explicitly consequentialist terms, than they currently do.


This is a great post! Somehow Utilitarianism has become an easy target for difficult problems, likely as you say because it is sufficiently rigorous to surface them.

I'm curious as to whether anyone has done work around moral uncertainty and randomness for some of these cases: for example, with the Repugnant Conclusion, what does it get us to recognise that we are going to be uncertain about the actual day-to-day experience of future people? And that it will, in fact, vary from hour to hour in any case - as ours does every day? So by pushing a vast number of people on average close to the "barely worth living" line, at any particular time many of them will actually be under that line, due to the stochastic nature of human experience.

Does it buy us anything to say that this world is, at any particular time, clearly worse for (say) the current bottom 10% than an alternative world with fewer, happier, people, and that this bottom 10% might in practice represent a very considerable number? How might we account for this in our reasoning?
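
One way to make the stochastic point concrete is a quick simulation (parameters invented): model each person-hour in the large world as a noisy draw around a mean just above the "barely worth living" line, and check how much of experience falls below it.

```python
# Sketch with invented parameters: average welfare sits just above the
# "barely worth living" line (zero), but hour-to-hour experience is noisy.
import random

random.seed(0)
MEAN, SD = 0.1, 1.0   # assumed: mean barely positive, substantial noise
HOURS = 100_000       # person-hours sampled

draws = [random.gauss(MEAN, SD) for _ in range(HOURS)]
below = sum(d < 0 for d in draws) / HOURS
print(f"{below:.0%} of person-hours fall below the line")  # roughly 46%
```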

Comment deleted
author

For more on the advantages and disadvantages of different theories of well-being, see: https://www.utilitarianism.net/theories-of-wellbeing

Removed (Banned) · Sep 14, 2022
author
Sep 14, 2022 · edited Sep 14, 2022

Passio, please stop creating new accounts in order to circumvent my ban. I'm not interested in interacting with you, and I do not want to see you in my personal online space. [Updated to tone down my annoyance since the circumvention was not intentional.]


Hi, I comment on substack by typing in a name and then a random email. That's it. Haven't seen any prior ban. On a previous post you wrote "this post is not the place for that" and removed a comment. Here you did make neutralism the topic so I thought you'd be open to counterarguments and tried twice. Sorry for that mistake, I'll leave now before you run out of insults.
