I’m not convinced by the supposed reasons to discount the people that don’t exist yet. It seems to me that “resolving to have another child” is not an “adequate replacement” for saving the currently drowning child for two reasons. First, it’s unlikely to work nearly as well: a currently existing child likely already has a loving family and broader support network that would be glad to make their life good, and all you need to do to activate that is a few minutes of work right now; whereas “resolving” to have an additional child is committing to a long future course of action that you will probably only follow through on if you were likely to do so independently of this resolution, so it’s not really an addition at all. Second, there are all the general reasons we don’t think that doing one good thing “adequately replaces” doing another good thing you could equally have done: if I regularly walk past a lake that frequently has drowning children, it wouldn’t be appropriate for me to say, “eh, I don’t feel like saving this child - I’ll come back tomorrow and save the next child instead.”
Yeah, I'm certainly much *more* confident that the hybrid view is an improvement over narrow person-affecting views than that it's an improvement over the total view. If someone rejects totalism, they should embrace this sort of moderate pluralism instead (much as anyone who rejects utilitarianism should still include a beneficence component in their overall view).
You suggest a nice account of how the totalist could either accommodate or explain away my anti-replacement intuitions without needing to discount future people's existential benefits at all. One additional feature I like about the hybrid view is that it strikes me as *theoretically* well-motivated to recognize narrowly person-directed reasons as having some weight, which seems to imply that there's *more* reason to provide comparative benefits than existential benefits, all else equal. But I don't think this point is decisive or anything. (I could imagine coming to believe that this thought is ultimately just confused.)
Well, what if the drowning child is only 2 years old and you think they're gonna be more work to raise than the average new child you could make (even with the extra 2+ years of work to start over)? [I have no idea what your ethical views are, so maybe you want to appeal to some deontological stuff here about caring about your kid or just some instrumental utilitarian stuff about not letting your kid die?]
There’s going to be versions of stipulations where I bite a bullet, but those versions are going to need someone to have weird bits of knowledge, like knowing that this particular drowning kid doesn’t have anyone who cares about them and is going to raise them, while also knowing that you’ll be able to raise your own kid well only if you don’t save this particular drowning kid.
"We can instead combine the life-affirming aspects of total utilitarianism with extra weight for those who exist antecedently." I don't like that approach.
Sounds like you're advocating what I call partial-weight presentism -- essentially, welfare of future people gets partial weight and welfare of existing people gets full weight? [If that's not your position, please skip to the last full paragraph "Or ...".] Here's a thought experiment that I think makes partial-weight presentism look bad.
Suppose you know that next week 1 million new people will be instantaneously created in Antarctica (out of thin air). To get rid of issues about fetuses/infants, assume they will begin life with the maturity of a 5-year-old.* Assume that if very expensive supplies are not brought to Antarctica in advance of their creation, they will die a painful death.
Do you continue to give them a fixed partial weight level (e.g. 2/3 weight) up until the exact instant they are created at which point you discontinuously switch to giving them full weight? If so, that seems like a very strange type of dynamic inconsistency. But isn't that what partial weight presentism does? Or does it change the weight gradually as their creation time approaches?
Now, to incorporate uncertainty, change the thought experiment so that there's only a 50-50 chance that they will be created --- 50% that none will be created. How much weight do you give them before they (might) exist?
Another version: 50% chance that such people will be created at the south pole, and 50% that a "different" group of people (but a similar number) will be created at the north pole. Either one or the other.
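Just to make the bookkeeping concrete, here's a tiny numeric sketch (my own construction; the 2/3 weight and the welfare numbers are invented for illustration) of what a fixed partial-weight view says about the Antarctica case and its 50-50 variant:

```python
# Hypothetical partial-weight presentism: welfare of people who do not yet
# exist at the evaluation time gets a fixed partial weight (here 2/3);
# welfare of already-existing people gets full weight.

PARTIAL_WEIGHT = 2 / 3

def weighted_welfare(welfare, exists_now, p_created=1.0):
    """Expected weight-adjusted welfare of one (possible) person.

    welfare:    lifetime welfare if they come to exist
    exists_now: whether they already exist at evaluation time
    p_created:  probability they come into existence (for the 50-50 variant)
    """
    weight = 1.0 if exists_now else PARTIAL_WEIGHT
    return p_created * weight * welfare

# One of the million Antarctica people, with welfare 100 if supplies arrive.
# Evaluated the week before their creation: discounted at 2/3.
before = weighted_welfare(100, exists_now=False)
# Evaluated the instant after creation: full weight, a discontinuous jump.
after = weighted_welfare(100, exists_now=True)
# The 50-50 variant: the expectation is halved on top of the partial weight.
uncertain = weighted_welfare(100, exists_now=False, p_created=0.5)
```

The jump from `before` (about 66.7) to `after` (100) at the instant of creation is exactly the discontinuity the question above is pointing at.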
Or are you instead making a distinction between people we causally or intentionally create and people that will be created independently of what we do? That seems a very hard distinction to maintain, but (in any case) will become really weird for reasons I could elaborate.
* If you wanna protest that's so unrealistic as to make the experiment irrelevant, we can discuss that further.
Yeah, I expect the best version of the view will need to be more complicated than the very rough sketch in the OP might suggest.
One option, as you say, is to move to the contingent vs guaranteed (to exist independently of our actions) distinction. But even there, we probably need to do more to avoid dynamic inconsistency. For example, we don't want to justify choosing to bring someone into existence at low well-being with small benefits to existing people, when we could instead have brought that same person into existence with far greater well-being (albeit lesser benefits to existing people, but not so reduced as to make this an impartially worse option).
So it may be exclusively *existential benefits* - the benefits of existing rather than not existing - that it makes sense to "discount" slightly, in order to secure the result that creating a new life isn't adequate compensation for watching a child drown.
"For example, ..." Interesting example!
I think the guaranteed vs. contingent thing gets pretty hard to maintain due to both the non-identity problem and the butterfly effect. Arrhenius called this "necessitarianism". And (as Caspar Hare mentioned), if one of your options is to destroy the world, now no future people are guaranteed.
Re "So it may be exclusively *existential benefits* - the benefits of existing rather than not existing - that it makes sense to 'discount' slightly".
That sounds like what I'd call (partial-weight) "actualism". A distinction between who will "actually" exist and who is merely possible. [Edit: Am I understanding correctly?] I'd argue that once you account for uncertainty, actualism will end up collapsing to just the regular total view because you don't know who is going to be actual.
There are two kinds of uncertainty. "Causal uncertainty" is uncertainty resulting from the fact that you haven't made up your mind about whether you will create a person. "Non-causal uncertainty" results from uncertainty about who will be created by forces independent of you. The former gets into really weird stuff in causal/evidential decision theory (and the notion of "ratifiability"). But I'd argue a close examination of these issues supports the total view.
https://web.mit.edu/~casparh/www/Papers/HareHeddenSelfReinforcing.pdf
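For what it's worth, the collapse claim can be sketched numerically (my framing; I'm assuming "actualism" here means full weight for whoever turns out to be actual). Conditional on any given outcome, everyone who exists in it is actual and so gets full weight, so the expectation over outcomes just reproduces the total view's expected value:

```python
# Sketch of the collapse argument. Assumption (mine): "actualism" gives full
# weight to the people who will actually exist and reduced weight to merely
# possible people. Under uncertainty we take expectations over outcomes --
# but within each outcome, everyone who exists there IS actual, so they all
# get full weight, and the reduced weight never applies to anyone.

def expected_value_actualist(outcomes):
    """outcomes: list of (probability, [welfares of people who exist there])."""
    total = 0.0
    for p, welfares in outcomes:
        # Conditional on this outcome, these are the actual people,
        # so actualism assigns each of them full weight.
        total += p * sum(welfares)
    return total

def expected_value_total_view(outcomes):
    # The total view: expected sum of everyone's welfare, no weighting.
    return sum(p * sum(welfares) for p, welfares in outcomes)

# The south-pole / north-pole variant: 50% one group of three exists,
# 50% a "different" group of three exists instead (invented welfare numbers).
outcomes = [(0.5, [100] * 3), (0.5, [90] * 3)]
```

On this reading, the partial weight for "merely possible" people never gets applied once you condition on who exists, which is the sense in which the view collapses into the regular total view under uncertainty.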
I wrote a thesis about this stuff years ago. Maybe I'll make my own substack and try to summarize it at some point. I'm not sure anyone will read it tho.
Fwiw, I'd be very interested in reading your thoughts on this. If you do ever post about it, and if Richard doesn't mind, I'd appreciate you making some sort of announcement in the comments here.