Right. On your view, have they overall done a good thing / are they overall praiseworthy? Arguably, the world is better for having had the child in it for 9 years, right? Granted, they didn't do the best combination of things. But they did a good thing (bringing the child into the world for 9 years) plus a bad thing (killing the child), and since the fact that murder is bad gets no weight on your account, it looks like the sum total value of their actions is good, no? Would this be like choosing B in your original formulation, except divided into two separate actions?

"Overall good" is compatible with being blameworthy, i.e. below minimal expectations. Suppose there are two kids drowning and you save just one of them, and deliberately watch the other drown. If that's the "do a little good" option in my original "deontic leveling down" argument, then yes, I think your case is roughly parallel. We shouldn't prefer an even worse outcome. But we also shouldn't minimize how bad this choice is, and how poorly it reflects on the agent (who seems deeply vicious).

If they were to pre-empt the whole choice situation (whether by becoming paralyzed in my original case or by refraining from conceiving in your variation), then they would avoid acting in such a blameworthy manner. But that would be a mistake, because we should not take the avoidance of such blameworthiness as a goal, esp. when it would lead to an even worse result (e.g. both kids drowning, or the child never existing).

Thanks, that clarifies! You've probably written about this elsewhere and I either missed it or don't recall, but I wonder about the good of creating lives. Does this run you into Parfit's Repugnant Conclusion? If creating good lives is not good, or if it somehow has less value than improving existing lives, that might also make it harder to justify taking the second horn of the dilemma posed in my original post, since the good of creating possibly good AI lives will be discounted.

Creating good lives can be non-instrumentally good. This no more commits us to the repugnant conclusion than does believing that it can be non-instrumentally good to add good moments to an existing life (as we surely all accept). I'm honestly baffled that so many philosophers have the impression that "neutrality" about lives is some kind of solution to the general problem of "repugnant" quality-quantity tradeoffs. It just isn't: https://rychappell.substack.com/i/85599184/conclusion

> "We all agree that you can harm someone by bringing them into a miserable existence, so there’s no basis for denying that you can benefit someone by bringing them into a happy existence. It would be crazy to claim that there is literally no reason to do the latter. And there is no theoretical advantage to making this crazy claim. (As I explain in ‘Puzzles for Everyone’, it doesn’t solve the repugnant conclusion, because we need a solution that works for the intra-personal case — and whatever does the trick there will automatically carry over to the interpersonal version too.)"

As the rest of that linked post explains, I'm personally sympathetic to a hybrid view that allows some extra weight to be given to already-existing people's interests.*

So I've no beef with the view that we shouldn't create sentient AI if it wouldn't be in humanity's interests to do so -- if the consequences of giving them rights, etc., would be too burdensome to us in absolute terms, actually outweighing the benefits we receive from them in return. I'd just caution that you need to make that comparison to the benefits. Simply pointing to the costs alone won't do it, especially if some of the costs are "merely comparative" -- costly relative to a carefree implementation, but *not* relative to the proposed alternative of not creating these (presumably quite useful?) beings at all.

* = But note that this is independent of the question of how to adjudicate quality-quantity tradeoffs, on which I don't have a settled opinion, but I like aspects of both 'variable value' and 'critical range' views, as surveyed here: https://www.utilitarianism.net/population-ethics/

Thanks for that clarification and link, Richard! That makes sense. I'm inclined to agree with you: against neutrality, but in favor of a discount relative to existing lives.