Some things are absolutely bad: we have reason to prevent or avoid them, even at some (lesser) cost. Other things are merely comparatively bad: we have reason to want something else even more, but no reason to regard these things as intrinsically bad or worse than nothing at all.
This is an important distinction. It can explain:
1. Why avoiding wrongdoing is not the right moral goal;
2. Why it isn’t (necessarily) harmful to create lives that will be greatly harmed by premature death; and
3. Why the risk of mistakenly granting rights to (actually non-sentient) AIs doesn’t give us reason to refrain from creating AIs of dubious moral status (contra Schwitzgebel).
Economists like to say that if you (often fly but) never miss a flight, you’re spending too much time at the airport. In a similar vein, if you never make moral mistakes, you’re not exercising enough agency. But the latter argument is actually much stronger. Missing a flight can impose significant absolute costs, after all, whereas merely comparative wrongs are not (in absolute terms) morally costly at all.
Why the Distinction Matters
We should want to achieve positive results, and avoid negative ones. Comparative judgments can be helpful because we should prefer better outcomes over worse ones. But an absolute aversion to merely comparative harms could lead us badly astray.
To see this, suppose you know the following facts: (i) your friend is about to pick up a discarded lottery ticket that is, unbeknownst to them, a winning ticket worth millions of dollars; and (ii) if they do pick it up, it will be immediately stolen by a compulsive ticket-thief who is watching you both closely. The theft would greatly harm your friend, by robbing them of a future of wealth and luxury. (Further suppose the harm is merely comparative: your friend won’t feel any distress at the event itself.) You could prevent this harm from occurring by distracting your friend so that they never pick up the winning ticket in the first place. But preventing the harm in this way doesn’t actually do any good, or make your friend any better off than they would have been had the harm occurred. They equally lack a future of wealth and luxury either way. You’ve prevented this desirable future from being “taken away” from them only by ensuring that it was never theirs to begin with. And there is no point at all in doing that.
Next, let’s add some benefit from picking up the ticket—suppose that touching it will magically cure your friend’s migraines. That’s a big (say +50) benefit, but not so great, let’s suppose, as the harm of being robbed of a future of wealth and luxury (a future that would have been worth, say, +100). Someone who confuses absolute and comparative harms might miscalculate the net effect of picking up the ticket as being “+50 benefit -100 harm = net -50 harm”. But in fact picking up the ticket yields +50 net benefit in absolute terms. There is no harm at all from the subsequent theft compared to never possessing the ticket at all. (The thief does harm in comparison to letting your friend keep the ticket, leaving them with just +50 rather than +150 value, but that comparison isn’t relevant to your decision context: it gives you no reason to prefer that your friend remain at zero.)
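If it helps to make the accounting explicit, here’s a minimal sketch in Python (purely illustrative: the values are just the ones stipulated above, and the function names are placeholders for the two ways of evaluating the choice):

# Toy illustration of the ticket case, using the stipulated values from the text.
MIGRAINE_CURE = 50     # benefit of touching the ticket
WEALTHY_FUTURE = 100   # value of the future that the thief would take away

def mistaken_net_value():
    # Treats the merely comparative harm of the theft as an absolute -100 cost.
    return MIGRAINE_CURE - WEALTHY_FUTURE   # -50, so "distract your friend"

def absolute_net_value():
    # Compares the two outcomes actually available: picking up the ticket (cured,
    # then robbed) vs. distracting your friend. The wealthy future is lost either
    # way, so it drops out of the comparison.
    pick_up = MIGRAINE_CURE
    distract = 0
    return pick_up - distract   # +50, so "let them pick it up"

print(mistaken_net_value(), absolute_net_value())   # -50 vs. +50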
Being absolutely averse to the merely comparative harm of the ticket-theft could lead you to wrongly deprive your friend of a cure for their migraines. That’s (comparatively) bad! Clear thinking requires appreciating that merely comparative harms are neutral rather than bad: worse than a better alternative, but not absolutely disvaluable in a way that would warrant paying costs to replace it with some other neutral outcome. Rather, if you can’t secure the better alternative anyway, you should just accept the merely comparative harm as you would any other neutral situation. Paying costs to avoid it would just make the situation worse.
As we’ll now see, this acceptable harm principle has (at least) three important implications.
1. Against Avoiding Wrongdoing
Similar reasoning applies to merely comparative wrongdoing. In my latest paper on ‘Moral Importance’, I develop the following Deontic Leveling-Down Objection against the idea that we should have any intrinsic concern to avoid wrongdoing per se:
Suppose that by default you will, at some future time, have the choice to either (1) do immense good, (2) do a little good, or (3) do nothing. You’ve good inductive evidence to suggest that you’ll wrongly choose option 2, despite option 1 being clearly superior. But then you learn there’s a chance that, through no fault of your own, options 1 and 2 will no longer be available, forcing you to permissibly do nothing (the only option left). How would a virtuous agent regard this prospect? I say poorly. There is no morally appropriate goal that is served by losing good options. But the goal of avoiding wrongdoing is so served. So avoiding wrongdoing cannot be a morally appropriate goal.
Of course, some cases of wrongdoing—such as murder—are positively bad, or worse than doing nothing at all. Avoiding murder is a morally appropriate goal, in a way that avoiding wrongdoing isn’t, because murder is positively bad. But other cases of wrongdoing—like letting a child drown—are merely comparatively bad. It would be vastly better to save the child. But there is no reason at all to avoid this wrongdoing in a way that also fails to save the child, e.g. by becoming paralyzed. Since a paralyzed agent cannot save the child, they are not morally required to: they do not act wrongly when they remain immobile. But there is nothing positive about avoiding wrongdoing in this way.
This reasoning helps to demonstrate that (im)permissibility is overrated. We should care about securing what’s desirable (or morally positive), and avoiding what’s undesirable (or morally negative). Comparative judgments (including about wrongdoing) may help to guide our decisions, but an absolute aversion to merely comparative wrongdoing—mistaking a foregone positive for an intrinsic negative—could lead one badly astray. Thus, avoiding wrongdoing should not be the top priority of a morally decent person.
2. Creating Short-lived Lives
Suppose that policy-makers must decide whether to allow some class of beings, the Xs, to come into existence. Further suppose that the Xs would be sentient, could be expected to experience net-positive lives for as long as they exist, but would almost certainly be killed prematurely. Since early death deprives them of a valuable future, killing them at that time would qualify as harmful (and presumptively wrong).
Many people would take the above facts to settle that we ought not to allow the Xs to come into existence. My acceptable harm principle suggests that this is a mistake. At least, the mere fact that it would be (comparatively) harmful to kill the Xs doesn’t suffice to show that it is in any way preferable for them to never exist at all. Contrary to common opinion (and pending further argument to the contrary), we may well have no reason at all to prefer their non-existence.
Two interesting applications:
Humanely-farmed animals. This familiar “logic of the larder” implies that we could (depending on the empirical details)1 have most moral reason to support humanely farmed meat (where the animals have net-positive lives) as morally better than veganism. (Of course, that’s not to excuse the horror of real-world factory farms, which impose great absolute harms.)
Sentient AI. AI programs may be run, and swiftly “reset”, across hundreds of millions of instances. If some future AI were truly sentient, each such “reset” might (depending on your views about personal identity) qualify as killing that conscious subject and replacing them with a new “copy”. Repeatedly killing hundreds of millions of highly intelligent, conscious beings sounds like a moral catastrophe. But whether that’s so depends on the alternative. By the acceptable harm principle, we’ve no moral reason to prefer (for example) that they never achieve sentience, so long as whatever life they do get to experience is in itself positive rather than negative.2
The obvious objection to all this is that it would sure seem bad to farm and kill human babies (for example), no matter that the individuals in question otherwise would not get to exist at all. Plausibly, such actions would be corrupting to our moral sense, practically speaking (even if not absolutely bad in principle).3 The tricky question is whether a similar callousness towards non-human lives and deaths would be similarly corrupting, or if lack of concern for the comparative harms we end up imposing could be compartmentalized without making us broadly uncaring (in a way that leads to absolute harms).
Current animal agriculture is plausibly corrupting to a degree, but it’s also absolutely bad in a way that the above proposals are not, so whatever corruption it causes plausibly places an upper bound on the dangers of corruption here. Yet we still love our pets, and we saved the whales. So we know that people are pretty good at compartmentalization.
This makes me think that, so long as there’s nothing intrinsically bad about the above policies, it’s not clear that we should expect them to have especially harmful instrumental effects (e.g. on our moral character). But I’m a philosopher, not a social scientist, so I’m happy to leave that as an open empirical question. My main focus is instead on the question of whether it would be wrong in principle. And the acceptable harm principle suggests that it wouldn’t.
[Note that killing an existing person is not an acceptable harm, given the available alternative of their living autonomously. There are obvious reasons to expect that the best rules will prohibit killing humans. The remaining question is whether the best rules would extend this prohibition to all sentient beings—even at the cost of their very existence—or whether we can manage a carve-out for other beings, like (happy) farm animals, whose very existence depends upon our being able to kill them when it suits us. My guiding question: what system will allow the most individuals to live more positive lives, overall?]
3. Accepting Risky Rights
In ‘Don’t Create AI Systems of Disputable Moral Status’, Eric Schwitzgebel argues that creating AIs of disputable moral status would land us in a catastrophic “rights dilemma”:
We will then need to decide whether to treat such systems as genuinely deserving of our care and solicitude. Error in either direction could be morally catastrophic. If we underattribute moral standing, we risk unwittingly perpetrating great harms on our creations. If we overattribute moral standing, we risk sacrificing real human interests for AI systems without interests worth the sacrifice.
My previous section already cast doubt on the first horn of the dilemma: even if we do unwittingly perpetrate great (but merely comparative) harms on our creations, such as by wrongly killing them, this may be no worse—in prospect—than not creating them at all. (Absolute harms would be a different matter, of course!)
But the second horn is also dubious. “Sacrificing real human interests” unnecessarily is, of course, unfortunate as far as it goes. Fewer sacrifices would be preferable! So, if the AI systems don’t have “interests worth the sacrifice”, then we would be making moral mistakes in making those eventual sacrifices. Some could even be high-stakes mistakes, giving up much more than would be ideal. But once we appreciate that avoiding mistakes is not itself a worthy moral goal, we’re simply left with the question of whether we’re overall better or worse off for having these AI systems around (unwarranted sacrifices and all).
If cautiously respecting AI interests makes us worse off than having no such AI (of disputable moral status) at all, then we have clear prudential reasons to oppose the creation of such beings whether or not we’d count as morally mistaken in making our subsequent sacrifices. We’d be just as poorly off if the AIs were actually sentient. Alternatively, if the technology is overall beneficial to humanity, even counting the costs of our moral caution, then the possibility of our caution qualifying as a “moral mistake” (compared to carefree implementation) is not a reason to make the even greater mistake of being so cautious as to forego all the benefits of this technology. In neither case does the risk of “sacrificing human interests unnecessarily” constitute a morally relevant reason to oppose the creation of the AI. It can be safely disregarded in favour of the more straightforward prudential evaluation of whether developing AI of this sort (while being committed to cautiously treating it well) is in our collective interests at all.
Don’t get me wrong: it’s well worth trying our best to work out the true moral status of the “disputable” systems, and how we truly ought to treat them. We certainly should not wish for any sentient being to suffer absolute harms. So, to reduce this risk it may well be worth being “liberal” in our moral circle expansion, and default to treating disputable systems well (unless we can be very confident that they lack moral status, in which case that would be very good to establish). My point is just that if we commit to treating disputable AIs well, and it is still truly in our collective interests to create them, then we should not be put off from doing so by the mere fact that we would then also be generating more opportunities for moral mistakes (such as favouring humanity less than we ideally ought to in relation to the actually non-sentient AI).
Moral mistakes of this kind are totally fine! Yes, it would be better yet to make even better decisions (rather than moral mistakes); the mistakes are never good. But they are often best regarded as neutral, and not something we should pay costs to avoid. So this second horn of the Full Rights Dilemma also seems to fall afoul of the acceptable harm principle.
Conclusion
People widely exaggerate the significance of moral mistakes, due to conflating absolute and comparative harms or wrongs. They’re right to be averse to absolute harms and wrongs. But wrong to be (absolutely) averse to merely comparative ones. Absolute aversion to merely comparative wrongdoing generates moral hazard: it suggests that you have reason to reduce your moral capacities (so that you can never fail to exercise them as you should), just as aversion to comparative harms incentivizes precluding access to good futures (if the bulk of that future value will subsequently be taken away). But such strict aversion to merely comparative harms or moral mistakes is itself a moral mistake, and a potentially harmful one at that.
The acceptable harm principle directs us to bear this in mind. We should wish for greater agential capacities, if this will result in our securing more of what matters on net, no matter how great the (merely comparative) moral mistakes that result. For similar reasons, we should prefer for good lives to exist (all else equal), even if they are shorter than would be ideal.
The point of avoiding moral mistakes is to make things better in absolute terms.4 But some ways of avoiding mistakes actually make things worse, especially when the mistakes in question are purely comparative—“neutral” rather than “bad” in absolute terms. Wrongly failing to make things better is not intrinsically bad in a way that’s worth pre-empting at further cost; it’s just less good than we might prefer. If you pre-empt at further cost while still not achieving the desired good, you’ve just made things worse. There’s no moral reason to do this.
I think this is an important, but widely neglected, insight.5 (If I'm right about this, it could serve as a nice demonstration of the practical value of abstract ethical theorizing. Or—if you think I'm disastrously mistaken—of its risks! In the latter case, I welcome your counterarguments.)
1. Jeff Sebo has argued that utilitarians should probably support animal rights, but this turns on sufficiently subtle psychological and sociological questions (e.g. whether “humane” omnivorism is an easier sell than veg*nism, and whether it is more likely to result in moral backsliding or neglect of animal interests) that I remain pretty uncertain!
2. Of course, that’s not to excuse neglecting safety risks (including existential risks) from advanced AI, whether sentient or not. We may well have sufficient reason to oppose further development of the technology on simple instrumental grounds; I’m not in a position to assess that here. To learn more, I recommend Holden Karnofsky’s Most Important Century series.
3. But if not, it may just be a case where moral appearances are misleading. After all, it also “seems wrong” to commit obscene sexual acts with roadkill, but I take it no serious ethical theory will deliver strong non-instrumental reasons in support of that verdict. Sometimes we just find things distasteful, for no particularly principled reason (if “principled” here is taken to exclude instrumental considerations about psychological abnormality and alarm about “the kind of person who would do such a thing”). That’s OK — subjective revulsion can be a perfectly sufficient reason to avoid distasteful acts! That doesn’t change even if we’re forced to admit that there’s no principled reason to back it up. What does change is that there’s no reason to try to spread unprincipled revulsion around in a more “consistent” manner. For example, even if one could successfully argue that there was no principled difference between necrobestiality and use of ordinary inanimate objects as sex toys, that would not constitute any sort of argument against the latter. So it’s important, when evaluating consistency arguments, to be clear about whether one’s starting revulsion is independently supported or not. Cheap “seems wrong” intuitions are not sufficient for this purpose.
4. This does not assume consequentialism. I don’t mean “better” in terms of impartial value. I mean “better” in terms of whatever is truly worth caring about: attaining whatever is morally desirable, and avoiding what is absolutely undesirable (as opposed to strictly avoiding what is merely comparatively dispreferable to some specified alternative that may itself be positive, after all). As flagged in ‘Don’t Valorize the Void’, I worry that many non-consequentialists fail to sufficiently “center what positively matters” in their ethical thinking, but I don’t believe there’s anything essential to non-consequentialism that forces this error. Any who wish to avoid it can easily do so (without threatening their non-consequentialist scruples)!
5. I’m indebted to Ben Bradley’s insightful discussion of the “extrinsic” harm of death in his (2008) ‘The Worst Time to Die’. I’m not aware of any discussions of the issue at the broad level of generality and applicability that I’ve attempted here; but further references would be most welcome!
Thanks for the discussion of my post, Richard! This is an interesting argument in favor of the second horn, if we’re willing to pay the cost. I don’t think I accept the general principle the post relies on, but I do interpret my advice to avoid creating AI of disputable moral status as defeasible policy advice rather than a strict requirement, which can be outweighed in the right circumstances.
I’m inclined to think there’s an important difference between the humane farming case and the baby farm case, though. I discuss my own version of this in the “Argument from Existential Debt” section of Schwitzgebel & Garza 2015. The case: Ana and Vijay would not have a child except under the condition that they could terminate the child’s life at will. They raise the child happily for 9 years, then kill him painlessly. Is it wrong to have the child under these conditions? And is it wrong, after having had the child, to kill him — or is it like the humane meat case (as interpreted by defenders of that practice)?
I really enjoyed this post! Hilary Greaves has a paper, “Against ‘the badness of death’”, that also discusses how focusing on merely comparative harms can warp our thinking.