The non-identity problem is often taken to show that an act can be wrong despite not wronging—or even harming—anyone in particular (because different people exist in either outcome, the worse one isn’t worse for any individual). In her thought-provoking 2004 paper, ‘Can we harm and benefit in creating?’, Elizabeth Harman disputes this conventional wisdom. She suggests it stems from a misguided conception of harm as making worse-off. With a different conception of harm, we can explain the wrongness in non-identity cases as being due to harm to individuals, even if the outcome isn’t worse for them.
I wasn’t wholly convinced. I agree that we can harm and benefit in creating, but I don’t think that this undermines the conventional wisdom on the non-identity problem.
Rival Conceptions of Harm
In Reasons and Persons, Parfit introduced a technical conception of “harming” as making someone worse-off, as he takes this counterfactual comparison to be what morally matters (even if it doesn’t always match ordinary linguistic usage).
Harman instead proposes that “causing pain, early death, bodily damage, and deformation is harming.” (p. 92) Sometimes harming in this way can be justified, as when a surgeon causes us bodily damage in order to avert yet greater harms (such as death).
I guess the most principled account of harm in this vicinity would be one on which you are said to harm someone when you cause them to be in a pro tanto intrinsically bad state.1 (On this view, death is technically not a harm, but just a deprivation of benefits. This seems accurate. But of course, death is no less normatively serious than paradigmatic harms. So to accommodate this, we should replace harm-avoidance with sufficiency-seeking in our normative principles. For example, any harm/benefit asymmetry should be replaced with a distinction between harm-avoidance and securing basic needs, on the core side, versus provision of mere “luxury benefits” on the discounted side.)
I’m happy to work with an absolute (or noncomparative) conception of harm along these lines. As McMahan argues, we need some non-comparative conception of harms and benefits in order to make sense of the moral datum that a miserable person can be harmed by existence—and that we have moral reason, for this individual’s sake, not to create them. (While we can loosely say that a bad existence is “worse than nothing”, it seems most accurate to just say that it is bad in itself—which is enough to explain why it is worth avoiding.)
On Harman’s view, which allows for fine-grained pro tanto harms, even a happy, flourishing individual may be pro tanto harmed by existence, for example if they have a congenital disability. But that doesn’t mean that their existence is overall bad, because they are even more greatly benefited by existence. (I find it a bit misleading to say that someone is “harmed by X” when the harm of X is merely pro tanto and vastly outweighed by greater benefits. We should usually be more interested in net harms and benefits. But as long as we are clear on what we mean, I can play along with this way of talking. We just need to be careful about what we infer from it.)
Explaining Non-Identity Verdicts
The non-identity problem is to explain how it can be wrong to pursue policies like depletion that result in a worse future for humanity despite no individual being worse off as a result. Harman’s central goal is to “solve the non-identity problem by relying on reasons against harm.” The basic idea: individuals may be (pro tanto) harmed even if they are neither overall badly-off nor made worse-off. In such a case, the reasons against causing harms can explain why we should create better lives instead.
But I think that last step of the argument fails, for extremely subtle reasons.2 To see why, first note that when the individuals in question have overall good lives, the reasons against “harming” them by bringing them into existence are fully compensated by the ways in which existence benefits them.
To illustrate: suppose we can either create Happy Harry (who will live a full and happy life) or Middling Moe (who has a genetic condition that will end his happy life at the early age of 30). We assume it would be wrong to create Moe over Harry. But we want to know why.
The answer clearly has something to do with the fact that Moe, but not Harry, would suffer this genetic condition leading to early death. “Because Moe suffers extra harm,” you might say. But saying that could be misleading. For it might suggest, falsely, that your reason to prefer that Harry exists is for Moe’s sake. There is no reason for Moe’s sake to prefer that Moe not exist at all. Moe’s life is great! It’s just not as great as Harry’s. So your reason to want Moe to exist, for Moe’s sake, is not as strong as your reason to want Harry to exist, for Harry’s sake.
The real explanation, in other words, is that Harry benefits more from his existence than Moe does from his. This is not, at root, an explanation based on “reasons against harm” at all; it is one based on reasons to benefit.
An alternative route to this same conclusion is to note that we do not wrong Moe when we bring him into existence, if there was no better alternative. (In such a case, it isn’t wrong at all to create Moe. It is only wrong to create Moe when we have a better alternative, such as Harry, available.) But whether we wrong Moe in creating him cannot depend upon whether the alternative involves someone else happily existing in his place—that extrinsic fact is of no essential interest to Moe. So when it is wrong for us to create Moe over Harry, this cannot be because our action wrongs Moe.
So we still have the conventional “non-identity” conclusion that non-identity choices can be wrong without wronging anyone. Accordingly, the role played by the “harm” to Moe is not that concern for Moe should lead us to oppose Moe’s creation. Again, the harm instead means that concern for Moe gives us weaker reasons to support Moe’s creation than Harry’s. But in that case, the role of the harm is not to ground “reasons against harm” in the usual (prohibitive) sense. It’s instead to undercut enticing reasons to benefit.
Respecting Moe’s Normative Perspective
My argument here appeals to a kind of deference to Moe’s normative perspective: if Moe would (rightly) be glad to exist, our concern for Moe cannot ground reasons against his existence. Liz addresses a related “No Regret” objection in section 7 of her paper. She claims:
An individual can be harmed by an action and can have legitimate complaint about the action—indeed, she can be impermissibly harmed by the action—although she does not wish (nor should she wish) that the action had not occurred, and although the action makes her better off than she would otherwise be.
She illustrates this with examples of (i) rape resulting in a loved child, and (ii) a Nazi Prisoner who attains great wisdom through his time of suffering in a concentration camp. Of course, the relevant acts are naturally understood as subjectively wrong since the malicious agents presumably had no reason to expect their harmful acts to prove beneficial to their victims. But Harman wants to insist that they are also objectively wrong, and would still be impermissible even if the agents had been acting beneficently towards their victims on the basis of a miraculous insight into the future.
I’m dubious. Suppose the Nazi Prisoner’s time-travelling future self visits the beneficent guard beforehand and says, “I know you will want to help me escape. But please: don’t do it! No matter how much I beg at the time, you must ignore my ignorant pleas. For it will truly be best for me that I remain in the camp. This will be hard for you, I know; but please do it, for me, knowing that this is what I truly want, from a position of full information.”
How could it be wrong to do as the fully-informed Nazi Prisoner would truly and reasonably want, and to do it for that very reason? Such a verdict seems entirely senseless to me. (“No! I refuse to do as you ask and as would be objectively better for you! I deny you, for your own good… uh… something…”)
But whatever you want to say about these extreme cases, it just seems very clear (as explained in the previous section) that respect for Moe in particular should not lead us to deprive Moe of his happy existence. Recalcitrant intuitions about extremely anti-paradigmatic cases involving “helpful Nazis” shouldn’t lead us astray about this much more straightforward case.
I have similar concerns about Harman’s more general proposal that we allow greater benefits to be morally outweighed by lesser harms to the same individual. There’s something very strange about preferring an outcome, for S’s sake, that diverges from what S would (rationally) prefer or choose for themselves. It seems disrespectful of S’s normative perspective: like an objectionable form of paternalism, only worse because you’re not even serving their interests. When beneficence and respect for autonomy both point in the same direction, it’s hard to see how any competing reason could stand against their combined force. Other reasons just can’t possibly be so significant.
Compensated Harms and Extra Benefits
Suppose that a policy will result in a small proportion of the future population suffering (compensated) severe harms (they are still happy to exist). So there’s some reason to prefer an alternative in which the correspondingly placed (but distinct) people avoid these harms, and so have higher welfare. But suppose that the policy will also result in significantly more people existing, and the extra people will get to have very happy lives. So the policy overall results in significantly greater total welfare, comparable (or greater) average welfare, and no outright bad lives. Seems like a good deal!
Harman suggests that endorsing such a policy would be a “devastating result.” (p. 103) To avoid this implication, she again appeals to the claim that benefits have less moral weight than comparable harms. Again, I think this claim unacceptably conflicts with respect for the normative perspectives of the individuals involved. I also don’t think anyone should feel motivated to avoid the pro-policy verdict to begin with.
Why is Harman so opposed to the welfare-promoting policy? In later work (p. 143), she appeals to the principle that severe harms generally can’t be compensated by benefits to others. But insofar as one finds that plausible, I think that’s because we’re generally imagining cases in which the harmed person is not also sufficiently compensated, and is instead wronged by the harms. But that isn’t the case here. As per above, I don’t think someone can be wronged by existence if they’re rationally happy to exist, and the harm has been sufficiently compensated by benefits to this very same individual. We’ve seen that the significance of the severe harm, in a non-identity case, is just that there’s an alternative in which a different person would exist in their place with greater well-being. But a mere loss of greater well-being is precisely the sort of thing that can easily be compensated for by providing yet greater benefits to others, on any reasonable view.
This disagreement ties in with some other interesting issues in bioethics, like whether treatment trials need to narrowly optimize for patient interests, or whether it’s instead permissible to offer a moderately-beneficial treatment (involving some risk of pro tanto harms) that one expects to lead to greater downstream social benefits. When discussing this previously, I wrote:
If you can either (i) help some people a moderate amount in hopes of subsequently helping the rest of society much more, or (ii) help the first group a bit more, but without such potential for downstream benefits, the first option is not inherently unethical. You have not wronged anyone by offering them a suboptimal benefit that you needn’t have offered them at all, when there’s a perfectly good reason (potential greater benefits for others) for not narrowly optimizing for just their interests.
At the time, I thought that this should be uncontroversial, and so I took concerns about the risk of brain damage to be a “sham objection” to Brain-Computer Interface research, whereas the real ethical dispute is just over whether the future development of the technology should be regarded as socially beneficial or not. But if I’ve understood Harman correctly, I now think that she might disagree with me on this practical issue. Given the pro tanto “severe harms” involved, even if invasive BCI offered greater social benefits whilst also being net-positive for patients, Harman’s strong anti-harm principle3 would seem to imply that it is unethical to pursue BCI over an alternative that’s safer for the immediate patients (but with less prospect for downstream social benefits).
So that’s interesting! I think it suggests further reason to reject the strong anti-harm principle, as again conflicting with the joint moral power of beneficence and autonomy. No competing reason can possibly claim greater normative authority or importance than the combined force of these two values together. Or so I’m inclined to think.
In section 5, Harman suggests that the notion of a “bad state” might be given a comparative analysis in terms of being in a state worse than “the normal healthy state of an organism of the species in question.” But species-relative standards are too arbitrary for my taste. I’m even skeptical that there really is any such thing as a “normal healthy state” for a species. (Compare: what is the “normal” human lifespan? Human lifespans have varied hugely across history, and I don’t see any basis for privileging any particular environment as the “normal” one.)
A less interesting objection, which I’ll set aside, is that Harman’s strategy doesn’t cover cases in which the worse future involves no bad states but just a paucity of good states: “muzak and potatoes”, as Parfit once put it. It’s wrong to choose a “muzak and potatoes” future over a flourishing future, though the former contains no harms on any account.
3. That is, on which even individually-compensated harms cannot be “further compensated” by additional benefits to others.
Good article!
I’m skeptical of all attempts to solve the non-identity problem by non-standard conceptual analyses of HARM. Conceptual analysis won’t solve the core problem. In a slogan: the non-identity problem cannot be solved by definition.
Suppose we accept the following constraint on solutions to the non-identity problem: a solution to the non-identity problem should be rejected if it implies anti-natalism. (This constraint shouldn’t be respected dogmatically, or anything, but it should factor heavily in our deliberations.)
Given that constraint, it seems to me that all harm-based solutions to the non-identity problem should be rejected (at least provisionally) because they must either imply anti-natalism or be saddled with ad-hoc restrictions to avoid that implication.
Why think all harm-based solutions imply anti-natalism? Consider the following case:
BLIND CHILD, NO SIBLING: Wilma wants one child at most, but knows she’d get just as much life satisfaction from pursuing a certain career—and she can’t do both. If she has a child, the child will be blind.
In this case, it doesn’t seem like Wilma would wrong her child by conceiving her. Even if a non-standard account of HARM turned out to best capture the concept HARM, and thereby entailed that Wilma had harmed her child by conceiving her, that wouldn’t be a *wrongful* harm.
But in the standard (direct/same-number) non-identity case, the only difference is that Wilma’s career is replaced by the conception of a sighted child. Yet whether some sighted child—who is not the blind child—would have been conceived if the blind child hadn’t been seems like it couldn’t transform Wilma’s (non-comparative) harming of her blind child into a wrongful harming.
One reason for thinking this is that the formal intuition just seems right. But for people who prefer case-specific intuitions, I think it can also be supported by cases. Consider, e.g.,
FERTILITY COACH: The case is the same as before, with one specification. One of the (short-term) careers Wilma would go into if she didn’t have a blind child is fertility coaching. She knows that if she goes into this profession, she’ll make possible the conception of a sighted child by a different couple of a different race six years later. (I add the “of a different race six years later” part to ward off the intuitional confusion Boonin warns about, where we allegedly have trouble holding the non-identity facts clear in our mind’s eye.)
In FERTILITY COACH, the counterfactual is (for all intents and purposes) the same as in the standard non-identity case: *if* Wilma doesn’t conceive a blind child, *then* she will bring about the conception of a sighted child by another couple. But the truth of this counterfactual doesn’t seem like it could (or does) make it the case that Wilma wrongfully harms her blind child, rather than merely harming her in a non-comparative sense.
One might object that the counterfactual is different in the two cases—“will conceive” vs “will bring about the conception of”. But that seems like an irrelevant difference, and there are compelling arguments against solutions to the non-identity problem that try to make parental duties the issue.
Most of the energy directed at refuting harm-based solutions to the non-identity problem comes in the form of giving counter-examples to this or that non-standard analysis of HARM (see Boonin’s book, Duncan Purves’ dissertation, etc.). But if this kind of blanket strategy works, then that literature was probably unnecessary.