Creating good lives can be non-instrumentally good. This no more commits us to the repugnant conclusion than does believing that it can be non-instrumentally good to add good moments to an existing life (as we surely all accept). I'm honestly baffled that so many philosophers have the impression that "neutrality" about lives is some kind of solution to the general problem of "repugnant" quality-quantity tradeoffs. It just isn't: https://rychappell.substack.com/i/85599184/conclusion
> "We all agree that you can harm someone by bringing them into a miserable existence, so there’s no basis for denying that you can benefit someone by bringing them into a happy existence. It would be crazy to claim that there is literally no reason to do the latter. And there is no theoretical advantage to making this crazy claim. (As I explain in ‘Puzzles for Everyone’, it doesn’t solve the repugnant conclusion, because we need a solution that works for the intra-personal case — and whatever does the trick there will automatically carry over to the interpersonal version too.)"
As the rest of that linked post explains, I'm personally sympathetic to a hybrid view that allows for giving some extra weight to already-existing people's interests.*
So I've no beef with the view that we shouldn't create sentient AI if it wouldn't be in humanity's interests to do so -- if the consequences of giving them rights, etc., would be too burdensome to us in absolute terms, actually outweighing the benefits we receive from them in return. I'd just caution that you need to make that comparison against the benefits. Simply pointing to the costs alone won't do it, especially if some of the costs are "merely comparative" -- costly relative to a carefree implementation, but *not* relative to the proposed alternative of not creating these (presumably quite useful?) beings at all.
* = But note that this is independent of the question of how to adjudicate quality-quantity tradeoffs, on which I don't have a settled opinion, though I like aspects of both 'variable value' and 'critical range' views, as surveyed here: https://www.utilitarianism.net/population-ethics/
Thanks for that clarification and link, Richard! That makes sense. I’m inclined to agree with you against neutrality but in favor of a discount relative to existing lives.