Ethical Theory and Practice
The difference between hypothetical judgments and real-world guidance
Many seem to take the FTX collapse to cast doubt on utilitarianism as a moral theory. This response betrays a misunderstanding of the relation between ethical theory and practice. Ethical theories tell us what fundamentally matters, and actual events have no greater significance than purely hypothetical ones so far as such in-principle judgments are concerned. Empirical observations (e.g. of disastrous decisions) are more relevant to the practical, instrumental question of how to think and act in light of what ultimately matters. But that’s a further question beyond the core one that ethical theories begin with.
It’s important to note the existence of a gap here between moral theory and practice. For example, my previous post explained the distinction between naïve vs prudent utilitarianism, and why it’s a mistake to assume that utilitarianism justifies crude “ends justify the means” reasoning in real life. In this post, I’ll expand upon when and how our choice of fundamental moral theory should make a difference to our practical ethics—and why it often shouldn’t.
Ethical Theory and What Fundamentally Matters
I take ethical theories to address the question of what fundamentally matters.
Our value theory, or axiology, tells us how to evaluate outcomes, while bracketing any distinctive questions that arise from the moral assessment of actions. Utilitarians endorse a welfarist theory of value, on which the well-being of sentient beings is what makes an outcome good. Others may make minor axiological tweaks, but any decent view should be at least approximately welfarist.
Our normative ethical theory tells us how or whether moral agency upsets the basic picture given to us by our axiology.1 Consequentialism is the simple view that agency doesn’t inherently change what matters. Roughly speaking: if it would be good for an outcome to be brought about by natural causes, then it would be good (in principle) to choose to bring it about through human agency. Non-consequentialists deny this. Deontologists, for example, hold that there are weighty non-instrumental reasons to avoid certain actions, such as killing or harming innocent individuals as a means (without their consent), even if this would bring about a better outcome. Moderate deontologists allow that, if the stakes are high enough, these deontic constraints can be overridden. Absolutists deny this.
We might further ask whether deontic constraints have some deeper, unifying explanation. Rule Consequentialists say yes: deontic constraints are grounded in the fact that widespread internalization of those rules would promote value. Some deontologists (e.g. Kantians) seek to offer a different—non-consequentialist—grounding for constraints, while others (e.g. Rossian pluralists) are happy to take the constraints themselves as fundamental.
Distinguishing the Theories
It can be difficult to distinguish utilitarians and moderate deontologists in practice. After all, as we saw previously, prudent act utilitarians agree that there are extremely weighty reasons to avoid violating generally beneficial rules or “rights” (even when it seems to the unreliable agent that the violation would be worth it). It’s just that these reasons are instrumental rather than non-instrumental. Confused thinkers may implicitly conflate “instrumental” with “non-weighty”, but that’s plainly a mistake. And both theories agree that these reasons could be outweighed by sufficiently strong evidence of sufficiently high stakes favouring the violation, so that doesn’t distinguish them either.
That is, utilitarians and moderate deontologists alike agree that (i) you shouldn’t go around carving people up for their organs, and (ii) there are conceivable exceptions to this rule. There’s no surface-level practical difference in this respect. The difference is not in whether it’s wrong to kill, but why.
One way to distinguish the views is via hypothetical thought experiments (the more alien and artificial, the better). Or perhaps through retrospective judgments of a peculiar sort. E.g.:
You discover crumbling records indicating that, long ago, a doctor secretly murdered someone for their organs, thereby saving five other lives, without ever getting caught (or going on to commit other misdeeds).
Now, it’s important to distinguish different questions we can ask ourselves about this situation. Here’s one:
(Q1) Are you glad this happened (rather than the five dying)?
Utilitarians must surely answer ‘yes’: all else equal, it’s preferable for more people to avoid premature deaths, and there’s nothing in the story to suggest that things aren’t equal. Deontologists may instead answer ‘no’. Though I think this reveals bad ultimate values on their part, I’d expect them to prefer that the doctor keep his hands clean and let the five die. (Otherwise, it’s hard to see sufficient substantive disagreement with utilitarians.)
(Q2) Knowing what you do about the outcome, would you (or God) advise the doctor to act as he did?
Again, an obvious ‘yes’ for utilitarians, since it’s stipulated that we know the action really did turn out for the best. Deontologists must surely answer ‘no’ to this one.
(Q3) Does the doctor’s action reveal good moral judgment and good character?
Here I think the answer is ‘no’, even for utilitarians. I’d certainly feel horrified by the doctor’s action. Absent the advice of an omniscient time-travelling philosopher, it seems wildly reckless in expectation. And—even though we’re told he committed no other egregious misdeeds—given my understanding of human psychology, I imagine it would take a disturbingly callous character to be capable of such cold-blooded murder, even for the greater good. I would not expect this doctor to be a generally good or virtuous person, in utilitarian terms, on the basis of this action.
I could imagine a virtuous alien finding the needs of the five so salient that they’re driven to override their immense moral respect for the importance of the one. But for a human, it’s surely far more likely that he just didn’t have much moral respect for the one he killed. And that’s intrinsically criticizable, even on utilitarianism, as a gross moral failing: exemplifying maleficence rather than beneficence.
[For those who doubt the ability of utilitarian theory to account for disrespecting individuals in this way, see my (2021) paper, ‘The Right Wrong-Makers’.]
(Q4) Would you generally want or advise doctors to murder people for their organs?
Obviously not, on any view. In utilitarian terms: we should expect most such acts to do far more harm than good (once secondary consequences are taken into account), and overall bring about a much worse world—the very opposite of what we want.
(Q5) Was the doctor’s action justified?
This question is unclear. Philosophers often invoke an ex post notion of ‘ought’ or justification that harks back to Q1 or Q2, which could lead utilitarians to answer ‘yes, in a sense’. But I think this is pretty misleading to ordinary language users, who I expect read Q5 as more connected to ex ante assessments like Q3 and Q4—in which case, again, utilitarians can very comfortably answer ‘Definitely not!’, right along with deontologists.
Critics of utilitarianism often assume that anyone who aims at the criterion of rightness must automatically count as “well intentioned” according to that moral theory. (“At least they were aiming at the right thing; any flaws were purely in the execution…”) But this is a mistake, as explained in my response to (Q3) above (and further in ‘The Right Wrong-Makers’). A truly good-willed person must care sufficiently about each person’s interests, and it’s very unlikely that a human violating deontic constraints truly does.
Assessing the Answers
I think the utilitarian answers I’ve given to all these questions are intuitively plausible, more so than the competing answers offered by deontological theories. (As even some deontologists claim, it would be “monstrously narcissistic” to care more about “following the dictates of reason” than about saving lives.) So when critics claim that utilitarianism too easily “justifies” instrumental harm, in a sense that is meant to be intuitively abhorrent, they must be imagining that utilitarianism entails positive answers to Q3 and/or Q4, encouraging instrumental harm in practice. But it doesn’t. That’s simply a misconception.
One general difficulty here is that many people—even philosophers!—don’t clearly distinguish these questions. This can motivate them to dodge questions like Q1 and Q2, for fear that a ‘yes’ there would be mistaken for endorsing naive utilitarian answers to Q3 and Q4.
I’m skeptical that such slippage is behind real-life harm (moral theory can’t really protect against the kind of motivated reasoning that’s plausibly at work when powerful people act egregiously wrongly, as Peter McLaughlin explains here), but insofar as one has such worries, the best response is surely to be really clear on the philosophical gap between theory and practice.
An ironic example of this is esotericism, or lying about the true morality.2 In theory, it’s easy to see how esotericism could be justified, so it’s obviously not an objection to a theory that it allows for this possibility. But in practice, I again think that dishonesty is very obviously inferior to clarity and co-operative inquiry.
It would be a bad sign about someone if they needed to be argued into appreciating the value of honest, truth-oriented inquiry. But if they do: one simple reason is that, given the evident justifiability of utilitarian principles, many people are going to be drawn to utilitarianism regardless. It’s presumably better that such proto-utilitarians properly understand the distinction between theory and practice than that they don’t. So it’s better to clearly communicate this distinction, and end up with a (perhaps marginally larger) group of prudent utilitarians, than to conflate the two and end up with a (perhaps marginally smaller) group of utilitarians, more of whom are confused and naive about how to put the theory into practice.
(But again, an epistemically virtuous agent wouldn’t need to hear that argument in the first place.)
Distinguishing the Practices
As we’ve seen, utilitarianism and moderate deontology may have pretty similar surface-level practical implications in terms of avoiding rights violations, etc. But that doesn’t mean there are no practical differences. For utilitarianism plainly implies beneficentrism: it’s really important to positively do good (when you can do so without violating rights)—and the more good, the better.
I think the best non-consequentialist views would also endorse this claim. The best consequentialist and non-consequentialist views may then end up being almost indistinguishable in practice. (Note that, just as nothing about deontic constraints suggests that it’s any less important to promote the overall good when you can do so permissibly, so the heuristic reasons for prudent utilitarians to rule rights-violations out of consideration are not reasons to settle for less effective charities, to be speciesist, or to automatically neglect geographically or temporally distant interests: Utilitarians “should still try to do the most good they can, but only while respecting these commonsense moral rules and virtues.”)
Unfortunately, most real-life non-consequentialists do not seem to endorse beneficentrism.3 So that seems to be the main practical difference between utilitarianism and the most common forms of non-consequentialism. But I’d certainly welcome more non-consequentialists adopting the best version of their view here, and eliminating this residual source of practical divergence.
Conclusion
Moral theories tell us about what matters in principle. It’s a further question how to put this into practice. There will be significant practical overlap between different moral theories, as the constraints deontologists take to matter non-instrumentally also have obvious instrumental value (given inescapable human foibles, from cognitive limitations to motivated reasoning).
Theories differ in the verdicts they yield about hypothetical cases (and certain kinds of “ex post” retrospective judgments). But it would be a mistake to take these as carrying over straightforwardly to real-life cases—or even to various “ex ante” judgments, including judgments of the quality of the agent’s intentions, character, or decision-making. Utilitarians can say much more commonsensical things about these sorts of judgments than most people realize.
Finally, the utilitarian reasons to embrace commonsense virtues and deontic constraints as heuristics don’t undermine the case for optimizing between the remaining (non-vicious) options. Indeed, even full-blown deontology doesn’t do that. We should all want to do more good rather than less, all else equal.
1. Most ethicists would say that normative ethical theories address the ‘deontic’ question of what you ought to do. But I think this unhelpfully obscures the distinction between theory and practice that animates this post.
2. The irony is that, in current discourse, it’s overwhelmingly putative non-consequentialists who seem to be touting the alleged “harmfulness” of utilitarianism as a reason not to believe it (even though they must realize that such practical reasons are not truth-indicative).
3. Some go so far as to mock the very idea of effective altruism, and respond with apparent glee, rather than disappointment, when the EA movement falls short of its ideals.
I think all the stuff you're saying is important and mostly correct, but it has to be admitted that there are non-fantastical scenarios in which non-naive act utilitarians ought, by their own lights, to do things that significantly violate conventional morality. Stuff about maxims and meta-courses-of-action complicates things, but ultimately, if you live a utilitarian life, there is a non-trivial possibility that you're gonna have to do something pretty 'sketchy' at some point.
How often this happens is a complex empirical question relevant to the art of utilitarian living at the present moment in human history, but it does happen sometimes.
And, modulo stuff about population, everyone should be glad of this in some sense, since it increases their ex ante welfare.
The claim that "our normative ethical theory tells us how or whether moral agency upsets the basic picture given to us by our axiology" can be read as implying that axiological facts are prior to, and specifiable independently of, normative facts. Is this what you mean? Because if it is, many (most?) non-consequentialists will of course disagree: they will say facts about which outcomes are good are grounded in metaphysically prior facts about what agents have reason to want.