‘Morality’ is ambiguous. It might be used to pick out either of the following (which are often presumed to co-refer, but conceptually could come apart):
(1) A certain social practice, with norms involving certain prohibitions, prerogatives, and practices of praise and blame.
(2) The fundamental, authoritative normative truths concerning what we really ought to care about and to do.
The question “Why be moral?” gets a grip on us due to meaning (1). You can’t sensibly ask why we ought to do what we really ought to do. But you can sensibly ask whether ordinarily accepted norms are actually authoritative, or worth (non-instrumentally) caring about. Indeed, it’s very important to ask this question, rather than uncritically accepting cultural norms and practices that may or may not be justified.
To anticipate: I think most ordinary morality is actually pretty good, on instrumental grounds, though there’s certainly room for improvement in places. But I think people go wrong when they attribute non-instrumental significance to deontic constraints. We get much more plausible verdicts overall when we appreciate that those norms have purely instrumental value, and that what we should care about non-instrumentally is just well-being. Put another way, we should place beneficence at the center of ethics, and see everything else as derivative of that. Competing norms cannot plausibly claim to be more important, in principle, than people’s lives and well-being.1
In what follows, I’ll briefly introduce the traditional (selfish) amoralist before expanding upon a new challenge specifically to deontological ethics: the challenge of the beneficent amoralist.
Why be Moral: The Selfish Amoralist
Hume’s “sensible knave” has long haunted moral philosophers. It’d be nice to have an argument that would rationally compel the selfish amoralist to care about others. That’s probably not possible, but I do think it is more rationally coherent to have a broader circle of concern. After all, we generally take ourselves (and our loved ones) to matter. But we’re not unique. Whatever property makes our interests normatively considerable (most plausibly, our sentience) is a property shared by many others too. So our overall patterns of concern are more unified and coherent if expanded widely and systematized in this way, rather than making exceptions for ourselves. I think this line of argument goes a fair way towards addressing the traditional “Why be moral?” challenge in a non-question-begging way.
But we might also be comfortable enough with some question-begging answers. For example, I think it is just obviously true that other people’s interests are genuinely worth caring about. This seems self-evident—not in the colloquial sense that everyone will necessarily agree with it, but in the philosopher’s sense that it doesn’t need to be justified by reference to anything else: simply understanding the intrinsic content of the claim provides sufficient justification for believing it. Crucially, it doesn’t seem mysterious in any way that others’ interests are worth caring about. We can comfortably take this as bedrock; it doesn’t call out for further explanation.
Why be Moral: The Amoral Saint
But I think there’s a new version of the challenge that specifically afflicts deontologists. Imagine a selfless and beneficent individual, who cares deeply (perhaps even equally) about all sentient beings, but has no independent (non-instrumental) concern for other norms of “morality”. From the perspective of this saintly amoralist, deontological “morality”—like conservative sexual morality—looks like a potentially harmful practice, fetishizing arbitrary and objectively irrelevant properties to the detriment of people’s real interests. “Why do you give moral weight to things other than making people’s lives go well?” she asks you. “How do you justify giving those other features so much weight that you insist upon letting vast numbers die unnecessarily, or otherwise have people’s lives go significantly worse?”
These seem like good questions! Indeed, they seem like morally pressing questions. And they suggest that there is something mysterious about deontological distinctions. They can’t just be taken as bedrock; they really do call out for further explanation and justification.
Hume’s sensible knave is a selfish jerk, who fails to care about much of what really matters (namely: other people). But our beneficent amoralist isn’t like that at all. She cares deeply about others. So much so that she’d be willing to suffer the psychological trauma of pushing a guy in front of a trolley if that would truly help others even more. What a saint! The rest of us feel free to disregard the suffering that results from our “permissible” (in)actions; but not her. If she’d let the five die, their screams would have haunted her dreams forever, just as the death of the one she killed now will. She sees them all in their full humanity, and never turns away.
What can you say against this saint? What deficit of character or moral motivation does she display in her extreme beneficence? “She violated the rights of the one!” you say. But she just looks at you confused, like you’d started speaking in tongues. “I’m very regretful that the one was harmed at all,” she assures you, “but why aren’t you comparably concerned about the five?” Why indeed? When rights lack instrumental value, or fail to promote the overall good, they are in effect a mechanism for prioritizing some people (specifically, those with a certain kind of status quo privilege) and disregarding others (those in a less advantageous default position). Why would you endorse such an invidious social practice when it is instrumentally harmful?
Narrow vs Wide Reflective Equilibrium
The standard justification for deontological moral theory is that it meshes with “commonsense intuitions” about morality. But this implicitly draws upon our first—more sociological—sense of ‘morality’. Narrow reflective equilibrium is the project of systematizing our first-order moral intuitions: addressing how to most intuitively apply the words ‘right’ and ‘wrong’ across different cases. Deontology may be a plausible solution to the project of narrow reflective equilibrium. But this narrow project misses the central point of morality, that it is supposed to be genuinely normatively authoritative. This is a higher-order fact about morality that arguably clashes with many first-order intuitions.
Ordinary moral intuitions are often influenced by what we find disgusting or disturbing, for example. “Yuk factor” thought experiments involving incest, eating roadkilled pets, etc., describe acts that many intuitively consider to be “wrong” even in distant possible worlds where they’re stipulated to be harmless. Previous generations might have added gay and interracial relationships to the list. Clearly, we cannot just take moral intuitions at face value. We need to reflect more deeply on whether a candidate moral norm has rational support that makes its putative significance intelligible as something that’s genuinely worthy of non-instrumental concern, and not just something that systematizes our (possibly arbitrary) cultural norms. Even when a norm is worth endorsing, theorists need to understand whether this is for instrumental or non-instrumental reasons.2
The Normativity Objection
Consider the inconsistent triad:
(i) Morality generates genuinely normative reasons.
(ii) Morality enshrines deontological distinctions.
(iii) Deontological distinctions are arbitrary, and lack genuine normative significance.
Arguments for (ii) tend to undermine (i), because of (iii). Deontologists may indeed be accurately describing a coherent system of norms that people are accustomed to talking about and using to guide their behaviour. People may have strong intuitions about what’s permitted or required by that familiar system of norms. But I’m not really interested in raising a merely “internal” challenge about whether that system is better described as having utilitarian roots. I’m suggesting that we need to question the system itself. Maybe still endorse it for instrumental reasons, insofar as it happens to be conducive to overall well-being. But don’t pretend that the system itself generates authoritative normative reasons, if it rests on indefensible foundations.
Why think the deontological system rests on indefensible foundations? Well, just look at it. The Doctrine of Double Effect claims that there’s stronger (intrinsic) reason against killing people as a direct means to doing good than as “collateral damage”. Isn’t that plainly arbitrary? Nobody on the receiving end could sensibly share this concern about the precise causal means by which you kill them. (If anything, I’d prefer for my death to serve some useful purpose.) DDE may yield intuitive verdicts about what to do, but as a matter of principle, it’s an absurd thing to care about.
Or consider Thomson’s famous distinction between redirecting an existing threat vs initiating a new threat. We’re told it’s OK for a president to redirect a foreign nuke away from a big city onto a small town, but not OK to nuke that same small town while the foreign nuke is overhead so as to pulverize the incoming missile.3 Supposing there were no instrumental differences between the cases (no risk of missing the foreign nuke, etc.), how could the remaining difference possibly merit intrinsic concern?
It may be that some rules along these lines are instrumentally useful norms for fallible people to follow. If so, that’s fine. (“Don’t nuke your own towns” seems like a pretty good rule for presidents to follow in general.) But it’s at least clear that these sorts of distinctions cannot carry any non-instrumental weight, right? They’re not things that can credibly compete with people’s lives and well-being as matters of intrinsic concern.
Or return to any classic “killing one to save five” case. Some of these are thought to constitute intuitive “counterexamples” to utilitarianism, but it’s actually very obscure why anyone would endorse the deontologist’s verdict upon reflection:
However terrible it is for Chuck to die prematurely, is it not—upon reflection—equally terrible for any one of the five potential beneficiaries to die prematurely? Why do we find it so much easier to ignore their interests in this situation, and what could possibly justify such neglect? There are practical reasons why instituting rights against being killed may typically do more good than rights to have one’s life be saved, and the utilitarian’s recommended “public code” of morality may reflect this. But when we consider a specific case, there’s no obvious reason why the one right should be more important (let alone five times more important) than the other, as a matter of principle. So attending more to the moral claims of the five who will otherwise die may serve to weaken our initial intuition that what matters most is just that Chuck not be killed…
If you asked all six people from behind the veil of ignorance whether you should kill one of them to save the other five, they’d all agree that you should. A 5/6 chance of survival is far better than 1/6, after all. And it’s morally arbitrary that the one happens to have healthy organs while the other five do not. There’s no moral reason to privilege this antecedent state of affairs, just because it’s the status quo. Yet that’s just what it is to grant the one a right not to be killed while refusing the five any rights to be saved. It is to arbitrarily uphold the status quo distribution of health and well-being as morally privileged, no matter that we could improve upon it (as established by the impartial mechanism of the veil of ignorance).
Conclusion
There are good reasons to be wary of naïve utilitarian decision procedures. Following robust norms against harming others plausibly has higher expected value than blindly following naïve calculations, in which case following those more reliably-good norms is precisely what prudent utilitarianism entails.
Endorsing such norms does not require embracing deontology as a moral theory. (I think the theory gains a lot of unearned credibility from this conflation.) Deontologists theorize that those norms have non-instrumental significance. But this is very implausible, when you examine them more closely. It’s far more substantively plausible (i) that we should ultimately care more about people’s well-being than about subtle causal distinctions, and (ii) that we should ultimately prefer what everyone affected would prefer from behind a veil of ignorance rather than arbitrarily privileging status quo beneficiaries. Insofar as we have reason to embrace deontic constraints (despite their intrinsic absurdity), this must be for extrinsic, purely instrumental reasons: that doing so will ultimately help us to better achieve what really matters, namely, saving and improving lives.
I always want to ask deontologists, “Do you really think this is more important than people’s lives and well-being?” But few seem willing to give a straight answer. (My sense is that it isn’t a question they’re used to even considering. I hope to change that!)
I actually think most apparently deontological norms, including anti-incest ones, are best understood in this purely instrumental, utilitarian-compatible way. I suspect many people become deontologists by mistakenly imbuing instrumentally-good rules with intrinsic significance. I argue against this intrinsic significance. But I often enough agree with their rules, just on purely instrumental/utilitarian grounds. It’s a difference in interpretation, not practice (for the most part).
Thomson (1976), p. 208.
I strongly agree with part of this, and I strongly disagree with part. I agree with your criticism of deontology for respecting norms that are not really important. It doesn't really matter--neither to the victim nor the beneficiary--whether you redirect a threat or cause a new one to prevent some other one. I agree that our intuitions about deontology reflect shallow intuitions about lots of cases, rather than deep intuitions about what really matters.
Now on to the parts I disagree with. For one, I disagree with the claim that common-sense morality is pretty good. I think common-sense morality is pretty good in constraint cases, but is very bad in other cases. The failure of common-sense morality to endorse a general norm of beneficence has plausibly caused millions of deaths: imagine if everyone on earth were an effective altruist, or even if just 10% of people gave 10% of their incomes.
Second, I disagree that the beneficent amoralist is a challenge. Suppose utilitarianism is the correct morality. We can imagine a nice-seeming amoralist deontologist, but this doesn't challenge the utilitarian. It seems that, in the dialogue, the moralist's reply should be (if deontology is true): "you shouldn't violate rights, because it's really wrong, even if it's for the greater good."
I had a shower thought, Richard, probably inspired by reading your recent posts here. I'll just leave it here while it's on my mind; it's not well thought out enough to be worth pursuing in any real way.
Non-secular ethical traditions include a lot of very substantive norms and prohibitions that adherents consider moral ones, many of which secular people, let alone ethicists, strongly reject: prohibitions on homosexuality, wearing extravagant clothes, showing the bottom of your shoe in public, and so on. There are strong utilitarian reasons to think that adherence to these kinds of norms is instrumentally valuable within societies where they are widely adopted, where laws and punishments enforce them, and where violations lead to violence and exile. If we were to give the best explanation of their instrumental value, we would cite highly contingent, changeable features of cultural practices and of people's beliefs (reasonable and unreasonable alike), and explain why some kind of conservative maintenance of these practices in those cultures is better for overall well-being than disrespecting them.
Now consider the case of deontological prohibitions, distinctions, vocabulary, and so forth. You've been conceding a lot, for a utilitarian, about their instrumental value. Do you think the best explanation of their instrumental value is similarly some set of highly contingent, changeable features of cultural practices and beliefs, reasonable and unreasonable? My shower thought is this: deontologists will never concede this. They'll think that the reason deontology is of instrumental value for a consequentialist has nothing to do with the mere fact that people have quasi-religious beliefs about their weight, and that societies have built them into their cultural milieu like prohibitions on women in combat or marrying someone from another religion. The best explanation is more like some kind of Leibnizian pre-established harmony principle. Adherence to deontological constraints is constitutive of what it takes for things to have value; it's the people who adhere to them whose well-being is worth promoting. The reason deontology is instrumentally valuable is that it is part of the explanation of what a good consequence is. (Something like that; it might be formulated differently. Maybe it's just that the best explanation of its instrumental value lies precisely in its being true, and that the world is structured so that consequences line up with deontic moral truths.)
I don't endorse this; I don't think I'm a deontologist. But I keep seeing this over and over in, for instance, advocates of retributive punishment. They start by just assuming that whatever is deontically just will have the best overall consequences, and when that's not true, they consider the deontologically just outcome to be the best overall consequence.