27 Comments

Thanks for the discussion of my post, Richard! This is an interesting argument in favor of the second horn, if we're willing to pay the cost. I don't think I accept the general principle the post relies on, but I do interpret my advice to avoid creating AI of disputable moral status as defeasible policy advice rather than a strict requirement, which can be outweighed in the right circumstances.

I'm inclined to think there's an important difference between the human farming case and the baby farm case, though. I discuss my own version of this in the "Argument from Existential Debt" section of Schwitzgebel & Garza 2015. The case: Ana and Vijay would not have a child except on the condition that they can terminate the child's life at will. They raise the child happily for 9 years, then kill him painlessly. Is it wrong to have the child under these conditions? And is it wrong, after having had the child, to kill him — or is it like the humane meat case (as interpreted by defenders of that practice)?

author
Mar 16, 2023 · edited Mar 16, 2023

Thanks! To be clear, I don't think that "existential debt" makes it permissible to kill anyone. Rather, my suggestion is that the fact that someone (e.g. Ana and Vijay) would -- even *wrongly* -- kill X, is not a reason to prevent X from coming into existence.

E.g., if Ana and Vijay are wondering whether to conceive, given their plan to (wrongly) kill the child after 9 years, the principle of acceptable harm implies that it is permissible for them to conceive this child (so long as the child's life will be positive).

They shouldn't kill, but they also shouldn't make other decisions with an eye to preventing this killing in a way that equally prevents the better (long-lived) future by comparison to which the killing was deemed wrong in the first place.


Let’s stipulate that it’s a package deal: Either Ana and Vijay don’t have the child, or they have the child and then kill. On utilitarian grounds, the second choice is better, no?

author
Mar 16, 2023 · edited Mar 16, 2023

Sure, they should have the child. But you can still criticize the killing part of the "package", unless you construct the case so that there literally is no separate action of killing. For example, if any genetic child of theirs would automatically die after nine years (due to a genetic defect), then they do nothing wrong at any point in time. Alternatively, if they deliberately select a defective gamete when they could have instead chosen a healthy one, then that part of the choice is wrong.

(I should add that I don't take my original argument to assume utilitarianism. Even deontologists should accept the principle of acceptable harms, as suggested by the examples in the section, 'Why the Distinction Matters'. It would be really costly to reject this principle!)


Right. On your view, have they overall done a good thing / are they overall praiseworthy? Arguably, the world is better for having the child in it for 9 years, right? They didn't do the best combination of things, yes. But they did a good thing (bringing the child into the world for 9 years) plus a bad thing (killing the child), and since the fact that murder is bad gets no weight on your account, it looks like the sum total value of their actions is good, no? Would this be like choosing B in your original formulation, except divided into two separate actions?

author

"Overall good" is compatible with being blameworthy, i.e. below minimal expectations. Suppose there are two kids drowning and you save just one of them, and deliberately watch the other drown. If that's the "do a little good" option in my original "deontic leveling down" argument, then yes, I think your case is roughly parallel. We shouldn't prefer an even worse outcome. But we also shouldn't minimize how bad this choice is, and how poorly it reflects on the agent (who seems deeply vicious).

If they were to pre-empt the whole choice situation (e.g. by becoming paralyzed in my original case, or by refraining from conceiving in your variation), then they would avoid acting in such a blameworthy manner. But that would be a mistake, because we should not take the avoidance of such blameworthiness as a goal, esp. when it would lead to an even worse result (e.g. both kids drowning, or the child never existing).


Thanks, that clarifies! Probably you've written about this elsewhere and I didn't see or don't recall, but I wonder about the good of creating lives. Does this run you into Parfit's Repugnant Conclusion? If creating good lives is not good, or if it somehow has less value than improving existing lives, that also might make it harder to justify taking the second horn of the dilemma posed in my original post, since the good of creating possibly good AI lives will be discounted.

Apr 28, 2023 · Liked by Richard Y Chappell

I really enjoyed this post! Hilary Greaves has a paper, "Against 'the badness of death'", that also discusses how focusing on merely comparative harms can warp our thinking.

author

Cool, thanks for the pointer!


"I’m not aware of any discussions of the issue at the broad level of generality and applicability that I’ve attempted here; but further references would be most welcome!"

See "Your death might be the worst thing ever to happen to you (but maybe you shouldn't care)" by Travis Timmerman. His Sick Sally and Naive Ned case gets at the same idea. Short version: Sick Sally is inevitably dying of some condition by some particular date. Her friend Naive Ned wants to help her out, so he promises to torture her starting that particular date, if she somehow survives the condition. By promising this, Naive Ned makes Sick Sally's death comparatively good for her instead of comparatively bad for her, since her death allows her to escape his torture. But of course it's completely pointless for Naive Ned to promise to torture her to make her death comparatively good for her. He's not benefitting her at all, because he doesn't change the intrinsic/non-comparative/absolute value outcomes for her. The comparative value of her death doesn't matter. Only the intrinsic/non-comparative/absolute value of her life does.

I wrote about this as well in my BPhil thesis. I call this idea that comparative value doesn't matter in itself the "Axiological Grounding for the No-Difference View." Basically, I claim this assumption is what underlies Parfit's deontic No-Difference View. Coincidentally, today I have been writing a conference abstract on the Axiological Grounding for the No-Difference View! I am close to finishing the abstract but got distracted and then saw this post.

By the way, all this contradicts what you say about death in Value Receptacles. I have a draft of something somewhere in which I refer to your view about death's intrinsic harm in Value Receptacles as a form of opposition to the Axiological Grounding for the No-Difference View which I needed to refute. It sounds like you've come around!

Mar 15, 2023 · edited Mar 15, 2023 · Liked by Richard Y Chappell

Also relevant is Frances Kamm's discussion of the harm of death in "Creation and Abortion" (which I believe I mentioned to you in an email years ago, in response to Value Receptacles). She also argues that it's not worth preventing short, "experientially adequate" lives from coming into existence even if they quickly die and so are deprived of a lot, because death is not the sort of harm that makes it better to never exist and have nothing (and be deprived of nothing) rather than to briefly exist and have a little bit and be deprived of a lot. If death's harm is the comparative deprivation of good things, it doesn't help anything to fail to create someone so they can avoid deprivation, since this does not increase the amount of absolute goods being enjoyed.

author

Very cool - thanks for the references!

I remain open to the possibility that there might be some slight absolute badness to death (via thwarted preferences and such), but as discussed in an earlier post, we can't give it very much weight. So I now prefer to distinguish killing vs failing to create via appeal to supplemental person-directed reasons that apply only in the former case: https://rychappell.substack.com/p/killing-vs-failing-to-create

Still, as I emphasize in the present post, it isn't worth preventing someone's existence *merely* to ensure that you don't subsequently violate these reasons (i.e. wrongly kill the individuals in question).

Do you recall whether any of those other authors discuss this sort of case? i.e., not just short lives, but specifically whether it would be worth preventing someone's existence in order to prevent them from being *wrongfully killed*?

Mar 16, 2023 · edited Mar 16, 2023 · Liked by Richard Y Chappell

I think Travis Timmerman took that kind of stance in a talk he gave on Zoom which I believe related to the logic of the larder. If I remember right, his stance was that creating happy animals and killing them for food is wrong, but the practice makes the world better and we should hope it occurs. I think when I asked him more about this, he said that wrongness occurring does not in itself affect the value of worlds. If we are choosing between creating two worlds, and the only difference between them is that one includes wrongness and the other lacks it, we can flip a coin to decide which of the two worlds we create. (I'm not certain he said this in the Q&A for that talk, but I'm pretty sure he has told me something like this at some point.) At the time he said he had no plans to turn this talk into a paper.

This makes me think of Podgorski's "The Diner's Defence: Producers, Consumers, and the Benefits of Existence." In this paper Podgorski argues that people who buy animal products from farms that give animals good lives do nothing wrong because, in purchasing the products, they are causally responsible for creating happy animals who will be wrongly killed, but they are not causally responsible for the wrongful killing. So he seems to think it's not wrong to create someone whom we know will be wrongfully killed. I can't remember how directly he addresses that question, but that answer is at least clearly implied.

As for Kamm, I don't know, but I wouldn't be surprised if she thought we should prevent very short lives from popping into existence if the reason these lives would be short is their wrongful killing. However, she made this point about the harm of death in the context of abortion, which does involve killing, so maybe I'm wrong about that.


Here's a relevant passage from "The Diner's Defence":

"The implication of this argument is that it is not wrong for harm-based reasons to cause someone to exist who is then abused by someone else, provided that her life is worth living, and there was no alternative act that would have caused her to exist with a better life. This is the typical position of the diner in relation to the animals that their purchase affects.

"This principle, I claim, is plausible even when applied to uncontroversial full-moral-status human beings..."

author

Perfect, thanks! I'll be sure to cite this if I end up expanding this post into a paper.

Mar 17, 2023 · edited Mar 17, 2023

(a) Generally agree that merely comparative harms shouldn't bother us.

(b) However, on the specific matter of the creation of lives - apologies for beating the same drum again, but it really depends on meta-ethics. I made the same point in the article on Don't Valorize the Void, but basically: if welfare is good because (and only because) we as individuals care about our welfare (Premise C), then things being good requires actual people (whether past/present/future) to exist in the first place - otherwise there is no source of value, and no basis on which to judge what is good and what is not.

I've made the main arguments for Premise C before, so I won't rehash them so much as append them as an annex below for anyone who wants to consider them. One other interesting consideration, however, is what you can call the Harry Potter Argument.

(Premise 1: Totalist View) The interests of merely contingent people (specifically, people whose existence is contingent on us choosing to create them, vs them existing anyway) matter.

(Premise 2: Non-Experiential Interests/Preferences) People have non-experiential welfare interests/preferences. For example, an author might reasonably want his book to have success even after his death; or someone might want their partner to remain faithful even if the infidelity is something they will never know about (or, in more extreme cases, have a preference that they/their partner not remarry after one partner's death, on the basis that this is more romantic & better honours the concept of love). I don't believe any of this is too controversial, for those of us who reject hedonism - there's no reason why what we care about has to overlap only with the class of things that affect what we experience.

(Conclusion: Implausible Result) But if the interests of merely contingent people matter, and these people have non-experiential interests, then the non-experiential interests of merely contingent people matter, and we would have reason to advance them, even in the sub-set of cases where *these merely contingent people end up not existing at all*. For example, suppose a merely contingent person (call him Harry Potter) would, if he existed, end up marrying a non-contingent real person (call her Ginny Weasley). Harry has a strong and selfish preference that Ginny not marry anyone else after he is dead - and this also extends to a preference that she not marry anyone else even if he never existed at all. If we took the whole argument seriously, we would have to say that real person Ginny would have at least *some* pro tanto reason not to marry, based on the wishes of a merely hypothetical person - and this, I advance, is implausible.

----- Annex: Main Arguments for Premise C -----

In any case, the main arguments for Premise C are two-fold:

(1) At a positive level, we do obviously care about our own lives/freedom/happiness/etc., and as a result these things are good (possess ought-to-be-ness, have reason for existence, whatever). And if you take a step back and ask what would happen if you didn't care about these things, there *doesn't seem to be any reason for the universe to care* - there doesn't appear to be any reason, separate from your caring, for these things to matter.

(2) It would be an extremely implausible metaphysical coincidence that our welfare just happens to be good from the point of view of the universe, separately from us caring about it. For the sake of argument, consider that there metaphysically could be a planet of anti-humans - with the residents there telically desiring the anti-welfare of humans (i.e. that we die, are made slaves, are unhappy, etc.), and having the same pro-attitudes towards the inverse of the things we have pro-attitudes to. And it's just hard to justify why we would be cosmically right and them cosmically wrong - why it just happens that the stuff we value (and not the stuff the anti-humans value) is what the universe also values in the Mackie/Platonic sense. In other words, debunking arguments are compelling, unless you have Premise C and some sort of meta-ethical view that links valuation to objective value in a way that (a) avoids the coincidence, and (b) still gets you a sufficiently strong sense of mind-independence as to defeat the radical moral sceptic who keeps asking why we should care about others.

You previously raised the problem of temporary depressives, but (i) the most plausible desire-based theories of welfare will be global/life-cycle ones (i.e. what does X person, as a chain of continuous consciousness, intrinsically want?). That is to say, from the perspective of who we are over time, XYZ things matter, and normal human blips from depression/fatigue/etc. don't change what these XYZ things are. Moreover, this gets around logical issues like there being infinitely many people at t1, t1.25, t1.5, etc., wanting the same thing Y, such that Y has infinite value.

(ii) I'm not even certain that it makes sense to say that a depressed person doesn't want to be happy. They may not see the meaning of life, because life isn't happy - that doesn't mean they don't see the value of happiness. If you asked any depressed person if they would press a button that would make them magically happy, they would! The upshot is that this wouldn't be a fair/useful intuition pump.

author

The Harry Potter argument is confused. People only have well-being levels when they exist. Fulfilling the hypothetical desires of a hypothetical person doesn't actually do anyone any good: no person newly has positive well-being as a result. By contrast, bringing a happy person into existence *does* do some good (the created person now has positive well-being that they otherwise would not have had). These are not the same.


I’m not that sure what I mean either. But something seems to be missing. I’m trying to put my finger on it.

When we discuss the moral assessment of large-scale policies, we can consider how we know a policy is best and what institutional structure allows us to implement it, among other things. The topic implicitly addresses questions such as: when is it good to take an action that affects moral agents without their knowledge? Without their consent? If someone became dictator, how would they constrain themselves, or even know whether they were acting as a benevolent dictator? The discussion seems to assume that everything being discussed is independent of such issues, but I don't think it is.

Asking moral questions as if once we knew the answer, we would then be justified in unilaterally imposing it on the world seems odd to me, unless I were much more confident of my own judgement and others' ability to be persuaded by good reason.

When would it be wise to impose the best policy on a population that unanimously opposed and misunderstood it? What effect would treating moral agents as moral patients have on them, if we don’t assume they agree with our conclusions and consent? How is the best still the best in such circumstances? I think they would have to be very unusual.

Beings can't consent to being created. But this means that whoever is creating them is acting as their proxy. Such proxies could ask whether they could *expect* that their creations would subsequently grant their retrospective consent, or whatever else they thought respected them as potential moral agents and actual moral patients. What do the proxies owe to their creations if they either calculate incorrectly or decide by some entirely different criteria?

author

"Asking moral questions as if once we knew the answer, we would then be justified in unilaterally imposing it on the world seems odd to me..."

I strongly disagree that this "as if" claim is apt. Asking moral questions does NOT imply that "once we knew the answer, we would then be justified in unilaterally imposing it on the world." We can ask moral questions as part of ordinary democratic discourse. That's precisely what I'm doing here: offering public reasons that (ideally) could convince *everyone* that one policy approach is better than another (and inviting correction if my arguments are mistaken).


So there is a presumption that the decision is being made within an institutional structure that preserves the moral agency of the participants. But… this is framed in consequentialist, not deontological, terms, and the constraints this imposes are not addressed within the analysis. This could be explained in a couple of ways: either the safeguards embedded in the process are so reliable that we can feel confident no solution will be implemented that violates them, or we don't need safeguards. I think this ambiguity has made me uncomfortable.

author

I'm presuming the former, and find it weird that you find this "ambiguous". It would never occur to me, in reading a philosopher argue that we (society) should do X, to imagine that they mean we should impose a dictator who will do X against everyone else's will. That's just a ridiculously uncharitable reading of any ordinary moral-political argument.

For future reference, whenever I am arguing for a policy, you should take it as given that I am arguing for it to be implemented via the usual democratic processes.

P.S. Footnote 4 explicitly notes that I take my arguments here to be compatible with deontology.


Well, it's a bit of a pet peeve with me. I don't see the usual processes as particularly democratic. The US has weak safeguards, and philosophers tend to ignore this. A solution that depends on the existence of an adequately non-corrupt state doesn't make a lot of sense in an environment that lacks this prerequisite. It is much easier to imagine a benevolent dictator than to deal with the actual obstacles to implementation.

I didn’t understand footnote 4, so I am not sure what it means to be compatible with deontology here.


I’m trying to put my finger on the reason the point about relative and absolute harms seems so unimportant to me. The post discusses morality. But it never mentions consent.

I see consent as central, but not determinative. Every moral theory has implications regarding consent, or premises about it. The usual approach of this blog dismisses extreme interpretations of utilitarianism, but that depends on it all working out somehow. But does it?

author

I'm not sure what you mean. I'm primarily interested here in the moral assessment of large-scale policies like whether to ban (even humane) animal farming, or the development of sentient AI. Where would consent enter the picture? Beings can't consent to being created. The best we can do is ask whether we could *expect* that they would subsequently grant their retrospective consent, if we were to bring them into existence under such-and-such conditions. But that presumably reduces to the question of whether their existence is good for them, or has positive welfare value. (At least, I'm not sure what other basis there would be for making a prediction either way.)


Consent enters the picture if there is not unanimity regarding the policy. A justification for imposing "the best" on unwilling participants is perhaps implicit. I want to make it explicit.

When are we justified in imposing a policy on unwilling persons? What effect does it have on them to treat moral agents as if they had no such agency? How much confidence in our judgement must we have before we are justified in overruling either democratic or personal choices?

Beings can't consent to being created. In the ideal circumstance, their creators act as their proxies, and perhaps they ask whether they could *expect* that their creations would subsequently grant their retrospective consent. What would they owe their creations if they miscalculate, or if they use some other criteria? If they abuse their creations, they are accountable as in other cases where a moral patient's rights are violated. But what would they owe for the act of creating them under mistaken or poorly judged expectations?

I responded to this earlier, but I think my internet connection swallowed it. If it shows up later, I hope I can delete one of them.
