
Thanks for the discussion of my post, Richard! This is an interesting argument in favor of the second horn, if we’re willing to pay the cost. I don’t think I accept the general principle the post relies on, but I do interpret my advice to avoid creating AI of disputable moral status as defeasible policy advice rather than a strict requirement, which can be outweighed in the right circumstances.

I’m inclined to think there’s an important difference between the human farming case and the baby farm case, though. I discuss my own version of this in the “Argument from Existential Debt” section of Schwitzgebel & Garza 2015. The case: Ana and Vijay would not have a child except on the condition that they can terminate the child’s life at will. They raise the child happily for 9 years, then kill him painlessly. Is it wrong to have the child under these conditions? And is it wrong, after having had the child, to kill him — or is it like the humane meat case (as interpreted by defenders of that practice)?

Thanks! To be clear, I don't think that "existential debt" makes it permissible to kill anyone. Rather, my suggestion is that the fact that someone (e.g. Ana and Vijay) would -- even *wrongly* -- kill X, is not a reason to prevent X from coming into existence.

E.g., if Ana and Vijay are wondering whether to conceive, given their plan to (wrongly) kill the child after 9 years, the principle of acceptable harm implies that it is permissible for them to conceive this child (so long as the child's life will be positive).

They shouldn't kill, but they also shouldn't make other decisions with an eye to preventing this killing in a way that equally prevents the better (long-lived) future by comparison to which the killing was deemed wrong in the first place.

Let’s stipulate that it’s a package deal: Either Ana and Vijay don’t have the child, or they have the child and then kill. On utilitarian grounds, the second choice is better, no?

Sure, they should have the child. But you can still criticize the killing part of the "package" — unless you construct the case so that there literally is no separate action of killing. For example, if any genetic child of theirs would automatically die after nine years (due to a genetic defect), then they do nothing wrong at any point in time. Alternatively, if they deliberately select a defective gamete when they could have instead chosen a healthy one, then that part of the choice is wrong.

(I should add that I don't take my original argument to assume utilitarianism. Even deontologists should accept the principle of acceptable harms, as suggested by the examples in the section, 'Why the Distinction Matters'. It would be really costly to reject this principle!)

Right. On your view, have they overall done a good thing / are they overall praiseworthy? Arguably, the world is better for having the child in it for 9 years, right? They didn't do the best combination of things, granted. But they did a good thing (bringing the child into the world for 9 years) plus a bad thing (killing the child), and since the badness of the murder gets no weight on your account, it looks like the sum total value of their actions is good, no? Would this be like choosing B in your original formulation, except divided into two separate actions?

"Overall good" is compatible with being blameworthy, i.e. below minimal expectations. Suppose there are two kids drowning and you save just one of them, and deliberately watch the other drown. If that's the "do a little good" option in my original "deontic leveling down" argument, then yes, I think your case is roughly parallel. We shouldn't prefer an even worse outcome. But we also shouldn't minimize how bad this choice is, and how poorly it reflects on the agent (who seems deeply vicious).

If they were to pre-empt the whole choice situation (e.g. whether by becoming paralyzed in my original case, or by refraining from conceiving in your variation) then they would avoid acting in such a blameworthy manner. But that would be a mistake, because we should not take the avoidance of such blameworthiness as a goal, esp. when it would lead to an even worse result (e.g. both kids drowning, or the child never existing).

Thanks, that clarifies! Probably you've written about this elsewhere and I didn't see or don't recall, but I wonder about the good of creating lives. Does this run you into Parfit's Repugnant Conclusion? If creating good lives is not good, or if it somehow has less value than improving existing lives, that also might make it harder to justify taking the second horn of the dilemma posed in my original post, since the good of creating possibly good AI lives will be discounted.

Creating good lives can be non-instrumentally good. This no more commits us to the repugnant conclusion than does believing that it can be non-instrumentally good to add good moments to an existing life (as we surely all accept). I'm honestly baffled that so many philosophers have the impression that "neutrality" about lives is some kind of solution to the general problem of "repugnant" quality-quantity tradeoffs. It just isn't: https://rychappell.substack.com/i/85599184/conclusion

> "We all agree that you can harm someone by bringing them into a miserable existence, so there’s no basis for denying that you can benefit someone by bringing them into a happy existence. It would be crazy to claim that there is literally no reason to do the latter. And there is no theoretical advantage to making this crazy claim. (As I explain in ‘Puzzles for Everyone’, it doesn’t solve the repugnant conclusion, because we need a solution that works for the intra-personal case — and whatever does the trick there will automatically carry over to the interpersonal version too.)"

As the rest of that linked post explains, I'm personally sympathetic to a hybrid view that allows for some extra weight being given to already-existing people's interests.*

So I've no beef with the view that we shouldn't create sentient AI if it wouldn't be in humanity's interests to do so -- if the consequences of giving them rights, etc, would be too burdensome to us in absolute terms, actually outweighing the benefits we receive from them in return. I'd just caution that you need to make that comparison to the benefits. Simply pointing to the costs alone won't do it, especially if some of the costs are "merely comparative" -- costly relative to a carefree implementation, but *not* relative to the proposed alternative of not creating these (presumably quite useful?) beings at all.

* = But note that this is independent of the question of how to adjudicate quality-quantity tradeoffs, on which I don't have a settled opinion, but I like aspects of both 'variable value' and 'critical range' views, as surveyed here: https://www.utilitarianism.net/population-ethics/

Thanks for that clarification and link, Richard! That makes sense. I’m inclined to agree with you against neutrality but in favor of a discount relative to existing lives.
