42 Comments

If anyone’s interested in seeing a defence of the full bullet-biting totalist view, here is one: https://benthams.substack.com/p/utilitarianism-wins-outright-part-67f

author

How does the iteration argument work? Suppose I'm OK with replacing population-1 with (vastly better-off) population-99, but not with replacing population-1 with population-2 (or likewise for any other consecutively numbered pair). Are you assuming that replacing p1 directly with p99 is morally equivalent to replacing p1 with p2, then p2 with p3, then ... then p98 with p99? Notice that the latter process involves murdering like a hundred times as many people. I think that makes it a lot worse!
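
(To make the body count explicit, on the simplifying assumption that each of the populations p1, ..., p99 has the same size N: the direct replacement kills the N members of p1, while the stepwise route kills an entire population at each of its 98 steps.)

$$
\text{direct } (p_1 \to p_{99}):\; N \text{ deaths} \qquad \text{vs.} \qquad \text{stepwise } (p_1 \to p_2 \to \cdots \to p_{99}):\; 98N \text{ deaths}
$$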


The iteration argument just says that constant replacement would come out bad on your view even in cases where it seems good. If, after a long series of painless replacements, everyone is living lives so good that every moment is as good as horrific torture is bad, the claim that the iterated replacement was bad seems implausible -- but the intuition is in the mind of the beholder.


Your first argument against totalism—that it can’t account for what you see as an intrinsic harm in some deaths—is axiological. You don’t think it’s a decisive argument, however, so you propose a second argument that you think is stronger—that totalism can’t account for us having both “person-directed” and “undirected” reasons, with person-directed reasons being the stronger of the two.

However, this second argument is about moral reasons, not value. It seems you've given up on rejecting totalism as an axiology and are content to only reject it as a moral theory. That would be fine for a non-utilitarian who can accept a significant divergence between axiology and morality, but a utilitarian should, I think, be able to give an axiological account of our moral reasons. If death’s intrinsic harm can't explain why person-directed reasons are stronger than undirected reasons, we need some other axiological explanation.

Among other things, axiological total utilitarianism implies: (a) there is no distinction in value between ceasing to exist and failing to come into existence, and (b) there is no distinction in value between coming into an existence with wellbeing x and continuing a prior existence which then goes on to provide wellbeing x. (There's also no distinction in value between creating a new life with wellbeing x and adding wellbeing x to a prior existence while holding the lifespan of this prior existence fixed.)

In your post, you argue against (a) but not (b). Since you don’t think you have a strong argument against (a), and since death’s harm being partially intrinsic is controversial anyway, maybe you can make up for this with an argument against (b).

Jeff McMahan thinks death is a purely comparative harm, so he wouldn’t agree with your argument against (a). However, he argues against (b) in “Causing People to Exist and Saving People’s Lives.” Like you, he wants to defend a hybrid view. He thinks we have some moral reason to create new happy lives, which is significant but weaker than the moral reason to benefit already existing lives. He just calls these “narrow individual-affecting reasons” and “wide individual-affecting reasons” instead of “person-directed reasons” and “undirected reasons.” I don’t agree with McMahan's argument against (b), but maybe you do, or maybe you could come up with a different argument against it. Or there might be some other key component to axiological totalism that you could identify and dispute in order to give an axiological basis to a weak asymmetry.

author

Thanks for flagging McMahan's paper -- clearly very relevant here!

fwiw, I'm not too concerned about whether the resulting view qualifies as strictly "utilitarian" or not. I'm opposed to the standard non-consequentialist ways of divorcing morality from axiology (constraints, anti-aggregative principles, etc.), but supplementing impersonal beneficence with person-directed beneficence seems unobjectionable to me.


Out of curiosity, do you mostly agree with totalism as an axiological theory, aside from its ignoring the intrinsic harm of death?

author

I'm very torn! My sympathies lie more with variable value views -

https://www.utilitarianism.net/population-ethics#variable-value-theories

- but they're difficult to make work. So I can certainly see the appeal of totalism (especially if we add that individual lives contain some "value blur" around the neutral point, yielding a kind of critical range view).

Comment deleted

author

Yeah, definitely world 2. I think it's fine to give moderately more weight to relieving suffering than to promoting the good, but not lexical priority or anything close to it.

If you actually vividly imagine world 2, picture some of those wonderful lives, and then ask yourself: would it be morally better for 99% of those wonderful lives never to have existed, just to prevent this *one* other bad life? That doesn't seem "intuitive" to me at all.


To clarify the position I've defended: what Rebecca and I said in our paper was not that World 1 was better (Richard and I agree that World 2 is better) but that, if you can actualize any one of an infinitely ascending hierarchy of better and better worlds--including World 3, which is just like World 2 except that the suffering person instead gets a great life--it seems objectionable to create World 2, screwing over that person for no reason, in a way that it doesn't seem objectionable to create World 1, even though World 2 is better than World 1. To get that, you need person-directed reasons to factor into deliberation in some special way that's different from how undirected reasons factor in--which Richard agrees with. We were thinking that's not really compatible with consequentialism in the sense people usually mean it, but I guess Richard suggests above that he's not too worried about that--so there may not really be a deep difference here.

Comment deleted

Dec 15, 2022 · Liked by Richard Y Chappell

Off Topic:

I like this blog including the comment section. I wonder why there are not more philosophy blogs and forums (I know there are some) for scholars or well-versed amateurs to discuss things like normative ethics or other philosophical topics.

I suspect one reason we don't see more online discourse by philosophers is that they worry that non-philosophers (like me) will come in and try to get a say in things just to feel as important as the real philosophers.

If so, some degree of exclusivity could be introduced. Maybe there could be an application process so that only professors, or only those with (e.g.) philosophy PhDs, could comment. Or maybe people could earn reputation through upvotes or something (I believe such systems have been tried in other settings).

As it is, it's very difficult to figure out what others think.


Here's one weird implication: causing someone definitely to exist with well-being level 0 and separately increasing their well-being by 5 is better than creating someone with well-being level 5.

author

Part of the issue there is the shifting context for evaluating "better"-ness. Once someone exists, you have full-strength reasons to want to improve their well-being (or, equally, to want it to have been higher all along). But it's not as though you have any reason beforehand to prefer the first prospect over the second one. So it's not really any better (assuming that betterness entails reasons to prefer), once the context is held fixed.


But you think that increasing someone's well-being by 5 is more valuable than creating a person with well-being of 5. Thus, disaggregating would increase the value: creating someone with well-being of 0 isn't bad, and increasing their well-being by 5 is more valuable than just creating a person at well-being level 5 from the start?

author

Value is time-relative, on this account, so those inferences don't go through. You have to think separately about what's preferable from the perspective of t1 (before creation) and what's preferable from the perspective of t2 (after creation). Increasing from welfare level zero to five makes the t2 perspective more salient, which is the perspective from which the individual has full moral weight, whereas the "create at level 5" option makes the prior time t1 more salient. But there is no time from which it is preferable (or more valuable) to separately create + increase well-being than to just create the individual at the higher welfare level to begin with.


So then if the second benefit -- the well-being boost -- occurred after creation, wouldn't the inference go through?

author

I don't see how. At any given time, either the person's existence is settled, or it isn't. If it is, then the person has full weight, and the two options (that affect their interests equally) have equal value. If it isn't settled, then there are only impersonal reasons in play, and the two options (that result in equal impersonal value) again have equal value. In neither case do the two options look any different, when assessed from the same point in time.

It's true that, once the person exists at welfare 0, you (now) have more reason to boost their welfare to 5 than you (previously) had reason to create them at welfare level 5. But again, that's just an artifact of switching contexts. I don't think it's counterintuitive once we're clear that each point in time evaluates the two options equivalently.
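
Here is a minimal toy model of that "fixed point in time" evaluation (the 0.5 weight on not-yet-settled people is a made-up illustrative number; the post only claims that person-directed reasons are stronger than impersonal ones, not by how much):

```python
FULL_WEIGHT = 1.0        # weight on someone whose existence is settled
IMPERSONAL_WEIGHT = 0.5  # hypothetical weaker weight on merely possible people

def option_value(lifetime_welfare, existence_settled):
    """Value of giving one person this lifetime welfare, assessed from a
    perspective where their existence either is or isn't already settled."""
    weight = FULL_WEIGHT if existence_settled else IMPERSONAL_WEIGHT
    return weight * lifetime_welfare

# Both options give the same person a lifetime welfare of 5:
#   (a) create at welfare 0, then boost to 5;   (b) create at welfare 5.
for settled in (False, True):
    # From any one fixed perspective, the two options come out equal.
    assert option_value(5, settled) == option_value(5, settled)

# The apparent asymmetry only shows up when perspectives are mixed:
print(option_value(5, True))   # 5.0 -- the t2 reason to boost from 0 to 5
print(option_value(5, False))  # 2.5 -- the t1 reason to create at 5
```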


"It's true that, once the person exists at welfare 0, you (now) have more reason to boost their welfare to 5 than you (previously) had reason to create them at welfare level 5. But again, that's just an artifact of switching contexts. I don't think it's counterintuitive once we're clear that each point in time evaluates the two options equivalently."

This seems really unintuitive. If you first create a person and then separately enable their good experiences later, that clearly isn't better than just creating them with the good experiences from the start.


OK, but then how do you avoid the problem of Average Utilitarianism demanding the deaths of miserable homeless people who have no social connections, have negative expected utility over the remainder of their lives, and will be missed by no one after death?

author

Are you sure? Your source says that "The value of a world is a function (namely, the average) of the welfare values of each individual's whole life." and then "The 'killing to promote average utility' objection only makes sense against the type-1, momentary view. On the second view, where we take a timeless perspective, killing someone does not reduce the (eternal) population. It merely makes one of the lives shorter than it otherwise would be. "

This exchanges one problem for another, because that kind of utilitarianism encourages us to kill even happy people whose remaining lives are expected to be clearly less happy than their past. Doing so raises the lifetime average utility of the person who has been killed.
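
To make the worry concrete with made-up numbers: suppose someone's past moments averaged 8 units of momentary utility, and their remaining moments would average only 4. If lifetime well-being were the average of momentary utilities, killing them now would freeze their lifetime figure at 8, whereas letting them live would drag it down:

$$
\text{killed now: } \bar{u} = 8 \qquad \text{vs.} \qquad \text{lives on: } \bar{u} = \frac{8\,T_{\text{past}} + 4\,T_{\text{future}}}{T_{\text{past}} + T_{\text{future}}} < 8.
$$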

author

Lifetime well-being isn't given by the average of one's momentary utilities. That'd be a terrible view, for just the reason you point out.

For the record, I do think that average utilitarianism is a bad view in population ethics, for the (first and third) reasons explained here - https://www.utilitarianism.net/population-ethics#the-average-view - but I don't think it has the particular implication you attributed to it, since average utilitarians can care about the average lifetime (not momentary) well-being of the eternal population.

But my original post wasn't arguing for average utilitarianism (it defends a variant of the Total view), so I'm a bit puzzled about why you brought this up at all?


Forgive me; I'm trying to understand your position. It looked like you were criticizing total utilitarianism in favor of a hybrid view that shifted more towards average utilitarianism. Looking again, I do see that this isn't what your essay is trying to do, but now I don't understand why you say "Lifetime well-being isn't given by the average of one's momentary utilities" when your source has "The value of a world is a function (namely, the average) of the welfare values of each individual's whole life." That source uses "welfare" rather than "utility." Would you be willing to clarify what you really believe?

author

That's talking about the average across the population of whole lives, not the average of times within a life. That is, according to the average view in population ethics, the value of a world is given by the average of x1, x2, x3 .... where each xn is the lifetime well-being of a different individual who ever lives in that world. And each xn value is NOT itself an average.
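
In symbols, with N people ever living (and reading each x_n additively, purely as an illustration rather than anything the comment commits to):

$$
V(\text{world}) = \frac{1}{N}\sum_{n=1}^{N} x_n, \qquad x_n = \sum_{t} u_n(t) \;\text{ (a lifetime total, not the per-moment average } \tfrac{1}{T_n}\sum_{t} u_n(t)\text{)}.
$$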


OK, I think I follow you. So, roughly speaking, what do you propose each x_n to be, if it differs significantly from just being that individual's lifetime average utility?


Unless I'm misunderstanding the case, this seems to imply that a glass bottle that will cut open someone's foot is less bad if the person whose foot it will cut hasn't been born yet. This doesn't seem plausible.

author
Dec 14, 2022 · edited Dec 14, 2022

Depends whether that same person would end up existing in the absence of the glass bottle. If so, they qualify as "antecedently actual" (i.e. existing independently of the choice under consideration): the bottle makes them worse off than they otherwise would have been. But if not -- if it's an "identity-affecting" act -- then yeah, that makes it somewhat less bad, because no-one has been counterfactually harmed, or made worse-off than they otherwise would have been.

Update: Actually, that might not be quite right. In 'Rethinking the Asymmetry', I suggest a principle *proscribing the predictably regrettable*, according to which we may discount possible people’s interests in existence, but not their interests in *non-existence*, since violating the latter would end up creating actual people whose interests at that point would speak with full force. If that's right, then we shouldn't think there's a difference in the moral force of suffering between antecedently actual vs future contingent people.


Okay, so then if you create someone and then make them step on a bottle, before disappearing, would that be less bad than creating a bottle for an existing person to step on? That seems unintuitive.

author

See update to previous comment!


"For example, we clearly have much stronger moral reasons to save the life of a young child (e.g. by funding anti-malarial bednets) than to simply cause an extra child to exist (e.g. by funding fertility treatments or incentivizing procreation)."

Just to make sure I'm understanding, you mean that's clear based on moral intuition?

author

Right.


Hi Richard, thanks for the excellent article. Having read it though, I'm not sure I understand your basis for "failing to create is importantly morally different from killing"? I think this could be rephrased as asking you to expand on "person-directed reasons explain this common-sense distinction: we have especially strong reasons not to harm or wrong particular individuals."

I believe we are, by construction, ignoring the additional harm that killing a particular individual would cause to their friends and family, which would not occur when failing to create a new individual -- but correct me if I'm wrong?

Thanks!

author
Dec 14, 2022 · edited Dec 14, 2022

Yes, just considering the directly affected individual: killing makes them worse off than they otherwise would be. Failing to create them does not make them worse off, because in that case there is no "them" that ever exists. The world has one less person than it might have had. But there is no particular person who might have existed but doesn't.

The key passage: "We have weak impersonal reasons to bring an extra life into existence, while we have both impersonal and person-directed reasons to aid an existing individual."


Understood. That still leaves me with the question of what the basis is for valuing the moral worth of the already-here more than the could-be-here, i.e. what actually are the person-directed reasons, and why might they be stronger than the impersonal reasons? Let me know if you've answered this in a previous article, or if I've missed something in this one.

author

It's not that there are two particular individuals, and you should care about one of them more than the other. Rather, the worry is that mere possibilia are not existing entities at all, so there is no-one there to care about. See: https://www.philosophyetc.net/2009/01/reifying-possibilia.html

Now, we've some ("impersonal") reason to care about the world in general, and want it to go better, such as by having more wonderful future lives in it. For those who end up existing with positive lives, it is good for them to exist, and we can rightly be moved by the anticipation of their being happy to have been brought into existence. But if we fail to act on this reason, there is no-one with a complaint against us, in the way that there is basis for complaint when we make an existing person worse off. And I think that plausibly makes *some* difference. All else equal, it is worse to wrong someone than to simply fail to make the world better. That's the rough thought, anyway.


Appreciate the responses, Richard. I think it would be wrong to interpret "complaint" literally (and doing so would open us up to counter-arguments, e.g. mute babies). That being said, if complaint is non-literal, I see no reason why future people (even if their existence is unrealised or merely probabilistic) can't complain.

Also, to the extent that we are talking not about existence but about welfare: if I produce a load of emissions today, future people will (in the literal sense) complain about me if their quality of life is lower as a result, even though they don't exist today.

I know nothing about the philosophy of the self, but I also wonder whether there are good arguments for saying that future people don't exist in just the same way that my future self (or the future selves of existing people) doesn't exist?

I think a religious philosophy (e.g. existing people have the soul of God in them) would be a coherent reason for weighting the moral value of existing people more than future people (or possibilia), but I'm struggling to get at a good secular & consequentialist reason.

Comment deleted

author

Yes, that's definitely an important further question! I think most people probably undervalue new lives. So while I've here defended the idea that we have *more* reason to save or improve existing lives, I certainly wouldn't want the prospect of new good lives to be devalued too severely. But it does just seem a really difficult question -- similar to tricky questions about the relative weights of human vs non-human interests. I have a wide range of uncertainty, and don't have a good sense of how to make further progress on the question.
