One topic that came up a few times at Peter Singer’s farewell conference was that of replaceability: whether we should be OK with an individual (person or animal) dying so long as a new replacement individual is created to take their place, with a future at least as good as the original otherwise would have had.
It’s a fascinating question, but often discussed a bit carelessly. In particular, I worry that people often conflate being replaceable (in the sense that the replacement is overall acceptable) with being fungible (or replaceable without regret). The latter is more clearly objectionable, but also is easily avoidable—even by utilitarians.
Replaceability without regret?
I think it would be clearly objectionable to treat people as fungible, like money. If you have to pay $5 to receive $10, there’s no real loss here. You’d prefer to have all $15, of course, but to move from having just the five to just the ten is a pure improvement. There is no normative “separateness of dollars”: you do not separately care about each dollar bill as a metaphysically unique individual. They are all perfect substitutes for each other. You merely care about the aggregate.
Lives aren’t like this. Killing five people to save ten may be worth doing (depending on the details), but involves a very real loss. It’s not just that you would prefer to have all fifteen lives safely secured (though you certainly should also have that preference). You should value each life separately, in a way that leads you to regard the shift from the five to a (different) ten as a costly improvement, not a pure improvement. (There is, as some ethicists put it, a “moral remainder”.)
As I argue in ‘Value Receptacles’, utilitarians can respect the separateness of persons in this way:
There is not just one thing, the global happiness, that is good. Instead, there is my happiness, your happiness, Bob’s, and Sally’s, which are all equally weighty but nonetheless distinct intrinsic goods. What this means is that the morally fitting agent should have a corresponding plurality of non-instrumental desires: for my welfare, yours, Bob’s, and Sally’s. Tradeoffs between us may be made, but they are acknowledged as genuine tradeoffs: though a benefit to one may outweigh a smaller harm to another, this does not cancel it. The harm remains regrettable, for that person’s sake, even if we ultimately have most reason to accept it for the sake of more greatly benefiting another.
A decade after that paper’s publication, one still hears philosophers repeat the (refuted) claim that utilitarianism “neglects the separateness of persons.” Too few have learned that we can mark significance with attitudes. But I think it’s a vitally important lesson for ethical theory.
Overall Replaceability
Okay, so let’s grant that “replaceability without regret” is a straw man. Even utilitarianism can avoid fungibility. But what about overall replaceability?
The first thing to note is that we clearly shouldn’t be absolutists in our opposition to replacement. For example, it’s plausibly better to have generational turnover rather than one permanent generation of immortals.[1] Even more obviously, an infertile couple might reasonably prefer to have children rather than to extend their elderly parents’ lives by an extra decade (if we imagine a genie offering them this forced choice). People don’t like to be forced into tradeoffs or invidious comparisons like this—it feels uncaring to say that something matters more than another person’s life. But things it would seem uncaring to say can still be true, even in ethics.[2]
So if there is a normative barrier to replaceability, it isn’t absolute. Still, we may think it’s important for there to be some (moderate) barrier here, lest we collapse the normative distinction between death and failure to create. In the linked post, I suggest that this is the strongest objection to total utilitarianism, and motivates a hybrid view on which there are person-directed as well as undirected welfarist reasons. (McMahan defends an appealing view along these lines.)
But I now wonder whether, in that previous post, I failed to sufficiently heed my own methodological advice. There is an obvious difference in attitude which distinguishes death from failure to create, since only in the former case do we have a comparative harm to lament (for an individual’s sake). We have a person-directed regret in the case of death that we lack in the case of a mere failure to create (where the opportunity cost is more impersonal or “undirected”—there’s no particular person whom we failed to create; no lonely soul stuck with the lamentable property of “non-existence”).
So there is a normative difference, even for totalists. But still, one might naturally feel, not enough. Intuitively, we should (often) more strongly prefer that existing people not die (very prematurely) than that new people be created. To accommodate this intuition, one would need to move to the hybrid view.
That seems reasonable to me. I still think the hybrid view has a lot going for it. But I no longer find it so obvious that totalism is wrong, given the availability of an attitudinal explanation of the moral difference between death and failure to create.
Attitudinal vs Hybrid views
To test this, imagine that a replacement occurs. Bob dies, lamentably, but Sally is created. Now that Sally exists, we have person-directed reasons to be glad that Sally exists (balancing out our reasons to regret Bob’s death). As in other balanced-tradeoff cases, we should feel ambivalent (pulled in both directions) rather than indifferent about this overall development. That all seems right to me so far. And now ask: should we overall regret that Bob was replaced by Sally? It seems not. (It would seem disrespectful to Sally, at this point, to say that the replacement was an outright bad thing: why should she count for less than Bob?)
The hybrid theorist might agree with this verdict. They can agree that, once Sally exists, we now have person-directed reasons to be glad that she exists, balancing out our person-directed reasons to lament Bob’s death. But they think the applicable normative standards have changed. Before Sally was determined to exist, we had less reason to want her to exist. (We had only undirected, not person-directed reasons, at that time.) Procreation changes what is morally preferable, on this view.
I wonder if this may somewhat undermine the normative force of the hybridist’s prohibition on replacement. If someone “wrongly” causes replacement, the hybridist will cease to (overall) regret this as soon as it occurs.[3] How terrible can it really be to do something that will, upon doing it, not be overall regrettable? I discuss some complications in an extended footnote.[4]
Comments welcome — maybe you can help me figure out how best to think about these cases.
[1] I think a blog commenter may have previously suggested this thought experiment to me; please share the link if you recall it!
[2] Incidentally, I think this is a big part of why philosophical ethics is so important. Most people aren’t willing to countenance moral thoughts that sound bad. A degree of philosophical hard-headedness is necessary in order to actually think about ethics.
[3] At least, thinking in terms of what is “objectively” fitting. Realistically, human emotions may not update so quickly.
[4] It may depend on why it’s not regrettable—as Liz Harman points out in “‘I’ll Be Glad I Did It’ Reasoning and the Significance of Future Desires”, our partiality towards actual people may be morally distorting. This is most clearly so in “non-identity” cases where we opt for the impartially worse outcome. Suppose less-happy Lois comes to exist when happier Harriett might have been created instead. Our attachment to Lois might make us glad we made the worse choice; but it was a bad decision nonetheless. But I think the relevant consideration here is not so much that our reasons-at-time-t1 favored choosing Harriett, but rather that timeless, impartial reasons favored this. After all, had we created Harriett instead of Lois, we would have (rightly) been even more glad of that choice. This comparative fact seems morally relevant.
The replacement scenario is different from this, in ways that make it unclear what to conclude. Timeless, impartial reasons are neutral between keeping Bob or replacing him with Sally. (We could even vary the case so that the replacement is slightly positive, impartially speaking.) If we go through with the replacement, our person-directed reasons will mirror the impartial ones. But if we hadn’t done the replacement, we would have special reason to be glad that we hadn’t. The evaluative perspective of world w1 has a strong pro-Bob bias, we might say, and so ranks w1 as strongly preferable to the replacement world w2. Should we average out the two worlds’ evaluations, and so conclude (with milder pro-Bob bias) that w1 is “objectively” preferable? Or should we disregard contingent pro-Bob bias and only consider the impartial reasons? It seems very unclear what “objectivity” calls for in such a case.
Thank you for this thoughtful discussion. I’ve been thinking a lot about this lately. I don’t have any settled views or real arguments (unfortunately) on this matter, but one thing I’ve been considering is what happens with intergenerational replacement when people in the older generation die. They are, as you say, not fungible tokens: they leave causal traces and legacies (e.g., their intentions, policies they put in place, certain final wishes). The current generation can honor those things, or alternatively, they can try to rid themselves of burdensome things the previous generation installed. Still, this downstream causation creates an asymmetry between the newly created person (say, the baby in your example) and the people already existing (the aging parents).
In cultures where we feel strong obligations to past generations (e.g., some Indigenous societies, Confucian cultures), those earlier-generation traces can become deeply entrenched and very strong. I find it a separate interesting issue how strongly we should weight those ideas, wishes, and values of earlier generations. We can get weighed down by them. On the other hand, honoring them can be valuable (and by doing so, we sort of reassure ourselves that there is a chance we will not be “erased” the moment we die).
As you say, sometimes the new life we welcome outweighs the value of keeping the old. Not because we are fungible and replaceable, but because (in my naturalistic picture, where you can move rather easily from “is” to “ought”) this is just how it is and should be. I draw a lot of comfort from walking in a park nearby and seeing dead trees lie there. They used to remove them; now they let them lie, and you can see mushrooms growing on them. Ideally, one’s legacy is like this: not erased, not fungible, but fertile ground for the future. The future deserves a place and deserves a shot, and should not feel overly weighed down by the past.
I disagree that the consideration “you will cease to regret this change once it has happened” undermines the moral force against making the change in the first place. To use a very extreme example, think of fictional monsters like the Borg or the Cybermen, who forcibly transform their victims into more of them. One common argument they use to try to get their victims to submit is that once they have made you into one of them, you will want to be one of them. I do not think this argument is persuasive. It is possible to recognize that some events can change what you value while also having meta-values about what kinds of changes to your values are desirable and when.
In “What Makes Someone’s Life Go Best”, Parfit discusses a case where someone addicts you to a drug, giving you a strong preference to keep taking it, along with a lifetime supply of the drug. Parfit argues, correctly, that doing this does not make your life go better, because you have “global preferences”: meta-preferences about your life as a whole and about what kinds of preferences you will have in the future. Because of these global preferences, you can regret being addicted to the drug, even though, now that you are addicted, it is a good thing that you have a lifetime supply. Similarly, you can have global moral meta-values about when it is good or bad to create new “person-directed reasons.” (Obviously this metaphor only goes so far. Creating Sally to replace Bob seems less analogous to addicting someone to a drug and more analogous to something like forcing someone to give up a fulfilling romantic relationship or life project and to adopt a different one.)
Because of this, I do not think that a hybridist has to commit to a time-inconsistent view where their normative standards change depending on whether it is before or after Sally’s birth. I think that they can have timeless, impartial meta-values about when it is good, bad, or regrettable to create new “person-directed reasons.” This can allow them to say, timelessly and impartially, that the world where Bob lives is the better one, overall. I do not think this is disrespectful to Sally, or that it says Bob counts for more than she does. It says that the “person-directed reasons” for valuing Bob and Sally are exactly equal, but that there are timeless, impersonal reasons to regret the addition of more “person-directed reasons.”
I think it is very important for a moral system to have some kind of “global preferences” or meta-values about which values it is acceptable to add or change. Lacking these meta-values creates all sorts of problems, like the Mere Addition Paradox, or (more extremely) being unable to explain why the Federation should resist the Borg!