[#4 in my series of excerpts from Questioning Beneficence: Four Philosophers on Effective Altruism and Doing Good.]1
How much should we care about future people? Total utilitarians answer, “Equally to our concern for presently-existing people.” Narrow person-affecting theorists answer, “Not at all”—at least in a disturbingly wide range of cases.2 I think the most plausible answer is something in-between.
Person-Directed and Impersonal Reasons
Total utilitarianism is the view that we should promote the sum total of well-being in the universe. In principle, this sum could be increased either by improving people’s lives or by adding more positive lives into the mix (without making others worse off). I agree that both of these options are good, but it seems misguided to regard them as equally good. If you see a child drowning, resolving to have an extra child yourself is not (contra total utilitarianism) an adequate substitute for saving the existing child. In general, we’re apt to think, we have stronger reasons to make people happy than to make happy people.
On the other hand, the narrow person-affecting view can seem disturbing and implausibly extreme in its own way. Since it regards happy future lives as a matter of moral indifference, it implies that—if it would make us happier—it’d be worth preventing a future utopia by sterilizing everyone alive today and burning through all the planet’s resources before the last of us dies off. Utopia is no better than a barren rock, on this view, so if faced with a choice between the two, we’ve no moral reason to sacrifice our own interests to bring about the former.
Our own value—and that of our children—is seen as merely conditional: given that we exist, it’s better to make us better-off, just as, if you make a promise, you had better keep it. But there’s no reason to make promises just in order to keep them: kept promises are not in themselves or unconditionally good. And narrow person-affecting theorists think the same of individual persons. Bluntly put: we are no better than nothing at all, on this bleak view.
Fortunately, we do not have to choose between total utilitarianism and the narrow person-affecting view. We can instead combine the life-affirming aspects of total utilitarianism with extra weight for those who exist antecedently. On a commonsense hybrid approach, we have both (1) strong person-directed reasons to care especially about the well-being of antecedently existing individuals, and (2) weaker impersonal reasons to improve the world by bringing additional good lives into existence. When the amount of value at stake is sufficiently large, even reasons of the intrinsically weaker kind may add up to be very significant indeed. This can explain why avoiding human extinction should be a very high priority on a wide range of reasonable, life-affirming views, without depending on anything as extreme as total utilitarianism.
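One way to make this hybrid picture concrete is with a purely illustrative sketch (not a formula the text itself commits to): let $u_i$ be person $i$’s well-being, $E$ the set of antecedently existing people, and $N$ the set of additional people an outcome would create. Then the outcome’s value might be assessed as

$$
% illustrative notation only, not from the text
V \;=\; \sum_{i \in E} u_i \;+\; k \sum_{j \in N} u_j, \qquad 0 < k < 1.
$$

Setting $k = 1$ recovers total utilitarianism and $k = 0$ the narrow person-affecting view; any intermediate $k$ captures the idea that impersonal reasons to add good lives are genuine but weaker than person-directed reasons, while still allowing them to add up to a great deal when astronomically many good lives are at stake.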
In Defense of Good Lives
There are three other common reasons why people are tempted to deny value to future lives, and they’re all terrible. First, some worry that we could otherwise be saddled with implausible procreative obligations. Second, some think that denying such value allows them to avoid the paradoxes of population ethics. And, third, some are metaphysically confused about how non-existent beings could generate reasons. Let’s address these concerns in turn.
Imagine thinking that the only way to reject forced organ donation was to deny value to the lives of individuals suffering from organ failure. That would be daft. Commonsense morality grants us strong rights to bodily integrity and autonomy. However useful my second kidney may be to others, it is my body, and it would be supererogatory—above and beyond the call of duty—for me to give up any part of it for the greater good of others.
Now, what holds of kidneys surely holds with even greater stringency of uteruses, as being coerced into an unwanted pregnancy would seem an even graver violation of one’s bodily integrity than having a kidney forcibly removed. So recognizing the value of future people does not saddle us with procreative obligations, any more than recognizing the value of dialysis patients saddles us with obligations to donate our internal organs. Placing our organs in service to the greater good is above and beyond the call of duty. This basic commitment to bodily autonomy can survive whatever particular judgments we might make about which lives contribute to the overall good. It does not give us any reason to deny value to others’ lives, including future lives.3
The second bad argument begins by noting the paradoxes of population ethics, such as Parfit’s “Mere Addition Paradox,” which threatens to force us into the “Repugnant Conclusion” that any finite utopian population A can be surpassed in value by a sufficiently larger population Z of lives that are barely worth living. Without getting into the details, the mere addition paradox can be blocked by denying that good lives are absolutely good at all, and instead regarding different-sized populations as incomparable in value.
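To see the sheer-numbers mechanism at work, here is an illustration with figures chosen purely for concreteness (nothing turns on the particular values): on a simple totalist accounting, a utopian world $A$ of ten billion people at welfare level 100 is outranked by a world $Z$ of a hundred trillion people at welfare level 1, since

$$
% illustrative figures only
V(A) = 10^{10} \times 100 = 10^{12} \quad < \quad V(Z) = 10^{14} \times 1 = 10^{14}.
$$

The incomparability move blocks this ranking not by disputing the arithmetic, but by denying that $A$ and $Z$ can be compared in value at all.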
But this move ultimately avails us little, for two reasons: (1) it cannot secure the intuitively desirable result that the utopian world A is better than the repugnant world Z; and (2) all the same puzzles about quantity-quality tradeoffs can re-emerge within a single life, where it is not remotely plausible to deny that “mere additions” of future time can be of value or increase the welfare value of one’s life. Since we’re all committed to addressing quantity-quality tradeoffs within a life, we might as well extend whatever solution we ultimately settle upon to the population level too. So there’s really no philosophical gain to temporarily dodging the issue by denying the value of future lives.
The third argument rests on a simple confusion between absolute and comparative disvalue. Consider Torres:
[T]here can’t be anything bad about Being Extinct because there wouldn’t be anyone around to experience this badness. And if there isn’t anyone around to suffer the loss of future happiness and progress, then Being Extinct doesn’t actually harm anyone.
I call this the ‘Epicurean fallacy,’ as it mirrors the notorious reasoning that death cannot harm you because once you’re dead there’s no longer anyone there to be harmed. Of course, death is not an absolutely bad state to be in (it’s not a state that you are ever in at all, since to be in a state you must exist at that time). Rather, death, though intrinsically neutral, makes you worse off in comparison to the alternative of continued positive existence. And so it goes at the population level: humanity’s extinction, while absolutely neutral, would be awful compared to the alternative of a flourishing future containing immensely positive lives (and thus value). If you appreciate that death can be bad—even tragic—then you should have no difficulty appreciating the metaphysical possibility that extinction could be even more so. (Though we can imagine worse things than extinction, just as we can imagine worse fates than death.)
An Agnostic Case for Longtermism in Practice
William MacAskill defines Longtermism as “the idea that positively influencing the longterm future is a key moral priority of our time.” After all, the future is vast. If all goes well, it could contain an astronomical number of wonderful lives. If it goes poorly, it might soon contain no lives at all—or worse, overwhelmingly miserable, oppressed lives. Because the stakes are so high, we have extremely strong moral reasons to prefer better long-term outcomes.
That in-principle verdict strikes me as difficult to deny. The practical question of what to do about it is much less clear, because it may not be obvious what we can do to improve long-term outcomes. But longtermists suggest that there is at least one clear-cut option available, namely: research the matter further. Longtermist investigation is relatively cheap, and the potential upside is immense. So it seems clearly worthwhile to look more into the matter.
MacAskill himself suggests two broad avenues for securing positive longterm impact: (1) contributing to economic, scientific, and (especially) moral progress—such as by building a morally exploratory world that can continue to improve over time; and (2) working to mitigate existential risks—such as from nuclear war, super-pandemics, or misaligned artificial intelligence—to ensure that we have a future at all.
This all seems very sensible to me. I personally doubt that misaligned AI will take over the world—that sure doesn’t seem the most likely outcome. But a bad outcome doesn’t have to be the “most likely” one in order for it to be prudent to guard against it. I don’t think any given nuclear reactor is likely to suffer a catastrophic failure, either, but I still think society should invest (some) in nuclear safety engineering, just to be safe.4 Currently, the amount that our society invests in reducing global catastrophic risks is negligible (as a proportion of global GDP). I could imagine overdoing it—e.g., in a hypothetical neurotic society that invested the majority of its resources into such precautionary measures—but, in reality, we’re surely erring in the direction of under-investment.
So, while I don’t know precisely what the optimal balance would be between “longtermist” and “neartermist” moral ends, it’s worth noting that we don’t need to answer that difficult question in order to at least have a directional sense of where we should go from here. We should not entirely disregard the long-term future: it truly is immensely important. But we (especially non-EAs) currently do almost entirely disregard the long-term future. So it would seem wise to remedy this.
In the subsequent discussion, Arnold and Brennan press me on whether tiny chances of averting extinction could really be worth more than saving many lives for certain. I argue that this result is basically undeniable, given the right kind of (objective) probabilities.
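The structure of that argument can be conveyed with a toy expected-value calculation, using illustrative figures rather than anything from the discussion itself: suppose an intervention has a one-in-a-million chance of preventing the deaths of eight billion presently existing people. Then

$$
% illustrative figures only
10^{-6} \times (8 \times 10^{9}) = 8{,}000 \ \text{expected lives saved} \;>\; 1{,}000 \ \text{lives saved for certain},
$$

and that is before counting any of the future lives whose existence extinction would foreclose.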
1. Note that I haven’t bothered to add in most of the footnotes, and I’ve added links that weren’t in the printed text.
2. They allow that we shouldn’t want future individuals to suffer. And they allow that we should prefer any given future individual to be better off rather than existing in a worse-off state. But they think we have no non-instrumental reason to want the happy future individual to exist at all. And also [at least on most such views] no non-instrumental reason to prefer a happier individual to exist in place of a less well-off alternative future person. For a general introduction to population ethics, see “Population Ethics,” in Chappell, Meissner, and MacAskill 2023.
3. This basic argument is further developed in Chappell 2017.
4. Of course, that’s not to endorse pathological regulation that results in effectively promoting coal power over nuclear, or other perverse incentives.
I’m not convinced by the supposed reasons to discount people who don’t exist yet. It seems to me that “resolving to have another child” fails to be an “adequate replacement” for saving the currently drowning child for two reasons. First, it’s unlikely to work nearly as well: a currently existing child likely already has a loving family and broader support network that would be glad to make their life good, and all you need to do to activate that is a few minutes of work right now to save the child, whereas “resolving” to have an additional child is committing to a long future course of action that you will probably only carry out if you were likely to do so independently of the resolution, so that it’s not really an addition at all. Second, there are all the general reasons we don’t think that doing one good thing “adequately replaces” doing another good thing you could equally have done: if I regularly walk past a lake that frequently has drowning children, it wouldn’t be appropriate for me to say, “eh, I don’t feel like saving this child - I’ll come back tomorrow and save the next child instead.”
"We can instead combine the life-affirming aspects of total utilitarianism with extra weight for those who exist antecedently." I don't like that approach.
Sounds like you're advocating what I call partial weight presentism -- essentially, the welfare of future people gets partial weight while the welfare of existing people gets full weight? [If that's not your position, please skip to the last full paragraph, "Or ...".] Here's a thought experiment that I think makes partial weight presentism look bad.
Suppose you know that next week 1 million new people will be instantaneously created in Antarctica (out of thin air). To get rid of issues about fetuses/infants, assume they will begin life with the maturity of a 5-year-old.* Assume that if very expensive supplies are not brought to Antarctica in advance of their creation, they will die a painful death.
Do you continue to give them a fixed partial weight level (e.g. 2/3 weight) up until the exact instant they are created, at which point you discontinuously switch to giving them full weight? If so, that seems like a very strange type of dynamic inconsistency. But isn't that what partial weight presentism does? Or does it change the weight gradually as their creation time approaches?
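In symbols, the first option I have in mind is a weight schedule like this (with 2/3 as a stand-in for whatever partial weight you'd pick):

$$
% stand-in figure only
w(t) = \begin{cases} 2/3, & t < t_{\text{creation}} \\ 1, & t \ge t_{\text{creation}} \end{cases}
$$

and it's the discontinuous jump at $t_{\text{creation}}$ that strikes me as dynamically inconsistent.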
Now, to incorporate uncertainty, change the thought experiment so that there's only a 50-50 chance that they will be created (i.e., a 50% chance that none will be created). How much weight do you give them before they (might) exist?
Another version: a 50% chance that such people will be created at the South Pole, and a 50% chance that a "different" group of people (but a similar number) will be created at the North Pole. Either one or the other.
Or are you instead making a distinction between people we causally or intentionally create and people who will be created independently of what we do? That seems a very hard distinction to maintain, but (in any case) it will become really weird for reasons I could elaborate.
* If you want to protest that this is so unrealistic as to make the experiment irrelevant, we can discuss that further.