The standard caricature portrays utilitarians as “cold and calculating” moral robots, motivated solely by extremely abstract considerations like simplicity, who insist that we should maximize happiness (perhaps by throwing people into experience machines against their will) since at least that’s an end that we can quantify and measure.
Sounds pretty awful! It’s also nothing remotely like how I think about ethics, despite the fact that I self-identify as a utilitarian(-ish) philosopher. I think there’s a striking disconnect between how people commonly think of utilitarianism and what (the best version of) the view actually looks like.
To help remedy these common misconceptions, here’s a rough summary of my preferred brand of (utilitarian-flavoured) consequentialism.1
Starting Points and Moral Methodology
Ethical theory is about what fundamentally matters, or is worth caring about.2 Upon reflection, I think we should care about the well-being of all sentient beings, and—at least in many contexts—we should give equal weight to everyone’s interests, and so prefer what’s overall best or maximizes objective well-being.
(I’m not strictly utilitarian: I’m sympathetic to the idea that we can reasonably care more about our nearest and dearest, and independently existing people, and possibly make desert-adjustments to count bad people for less. Maybe there are even some minor non-welfarist values, I’m not sure. But these are slight departures, and make little practical difference. Utilitarianism is at least approximately right, especially in high-stakes cases. I don’t think it’s reasonable to take anything else to have much weight in comparison to the raw importance of everyone’s well-being.)
Everything else falls out of this basic starting commitment, of wanting the world to be a better place. Action is instrumental to worthwhile ends, and so we find ourselves with (mostly) utilitarian reasons for action. To avoid this, you would either need to (i) reject instrumental rationality (such as by positing incoherence between what you should want and what you should do), or (ii) care about the wrong things.
The Crux of Consequentialist Concern
My route to consequentialism starts with beneficentrism—the idea that helping others is centrally important—and asks whether anything else plausibly matters more. It strikes me as implausible (and rather indecent) to claim that other things matter more, in principle, than saving and improving lives. So I’m swiftly led to welfarist consequentialism, and something at least very close to utilitarianism.
Deontologists, by contrast, end up having to care about deeply weird things. For example, one standard “solution” to the Trolley Problem3 invokes the idea that it’s deeply important whether one is “introducing” a new causal threat into the situation, or merely “redirecting” an existing threat. The famed Doctrine of Double Effect tells us that it matters immensely whether one is killed as a means to some further end, or simply as collateral damage—a foreseen but not explicitly intended side-effect of the approved action. It strikes me as bizarre to care about such distinctions.
As a potential victim, I care a lot about whether I end up dead, and very little about the causal details of precisely how I end up dead.4 Moral agents should take others’ interests and preferences into account. To prefer that five of us end up dead, rather than just one dead via a special causal chain, is implicitly to treat the special causal chain as more significant than four people’s lives. That’s pretty awful, IMO, and disrespectful of our value as persons.
It’s not as though the one cares vastly more about not being killed in this way than the five each care about being rescued, after all. So when deontologists prioritize the former over the latter, they are acting in a way that cannot be justified by reference to the interests or preferences of the affected parties. They’re introducing a novel (moralized) preference of their own into the situation, and treating it as more important than what the affected parties care about (their very lives).5
In sum: welfarist consequentialism is just what you get if you’re guided by direct concern for all affected parties. Deontology departs from this, giving immense weight to special causal chains.6 But people’s lives and interests matter more than special causal chains, and it’s disrespectful to claim otherwise. So consequentialism is actually the only decent and respectful ethical theory (properly reflecting sympathetic concern for what the affected parties themselves have reason to care about).
Consequentialism in Practice
We should care about others (more than probably any of us actually manage to do), and perform the actions that make sense given that concern—actions that are likely to help others more rather than less. That mostly doesn’t mean engaging in constant calculation, though maintaining some awareness of likely consequences is generally a good idea (and explicit cost-benefit analyses may be useful—though still not necessarily decisive—in special circumstances such as public policy and evaluating charities). As a rough first pass at a reasonable utilitarian decision procedure, I’d suggest:
Pursue any “low-hanging fruit” for effectively helping others while avoiding harm,
Inculcate virtues for real-world utilitarians (including respect for tried and true norms of social co-operation), and
In a calm moment, reflect on how we could better prioritize and allocate our moral efforts, including by seeking out expert cost-benefit analyses and other evidence to better inform our overall judgments of expected value.
As I explain in Theory-Driven Applied Ethics, sensibly “beneficentric” mid-level principles could (in principle) be shared by many other views besides utilitarianism. But given the striking rarity of efforts at beneficent prioritization, I do think that any remotely decent view will end up looking very different from most ordinary ethical and political thought (which routinely prioritizes fuzzy feelings and tribalism over actually helping people, neglects scale, and especially neglects the interests of the less salient—such as the global poor, non-human animals, and future generations—in ways that are utterly indefensible).
Objections
There are dozens of objections to consequentialism, and they’re basically all terrible.7 Most rest on simple misunderstandings. Some don’t speak to the truth of the view at all. And the remainder are basically just critiquing the surface vibes—“Sounds bad, man”—in a way that doesn’t really cast doubt on the foundations at all. (It’s perfectly explicable that moral truths could sound bad, especially to biased, scope-insensitive humans who find some harms more salient than others.) I’ve written about the most prominent objections in detail at utilitarianism.net, so won’t repeat those here; but feel free to follow up in the comments if you think there’s a powerful objection that I’m overlooking!
As an illustrative example (that I haven’t yet written about elsewhere), consider the self-sacrifice objection: “According to utilitarianism, it’s wrong to help someone a little at greater cost to yourself. But that doesn’t seem wrong! So utilitarianism must be false.”
This objection, like many others, fundamentally misunderstands what utilitarianism is about. In my view, utilitarianism is not really a theory of ‘right’ and ‘wrong’ in the ordinary senses of these words; it’s more conceptually revisionary than that (see note 2). Utilitarianism is a theory of what’s worthwhile, and we can rejigger familiar moral vocabulary to this end, but that’s something of a linguistic hack, and we certainly shouldn’t expect the results to seem linguistically natural in all cases. But that’s fine, because words don’t matter. The only relevant question is whether utilitarianism gives the correct account of what’s worthwhile, and most “objections” (hung up as they are on the traditional vocabulary of “rightness”) do not speak to this at all.
One could try to reformulate the objection: “According to utilitarianism, it isn’t worth helping someone a little at greater cost to yourself. But it is [??]. So utilitarianism must be false.” But this reformulated objection is a non-starter, since it surely isn’t worth helping someone a little at greater cost to yourself. (Such excessive self-sacrifice would reveal a problematic lack of self-respect.)
More generally, as noted in my Master Argument, it’s trivial for consequentialists to accommodate superficial verbal objections (about the intuitive application of ‘right’ and ‘wrong’) by appeal to deontic fictionalism or two-level consequentialism. And the most common objections strike me as superficial in this way, and hence completely lacking in rational force (or philosophical interest), given my understanding of moral methodology. It’s much rarer for anyone to actually argue that utilitarianism gives the wrong account of what’s really worth caring about, but that’s the real crux of the matter. So I’d encourage critics to refocus their attention on this more fundamental normative question.
Conclusion
My understanding of consequentialism may be distinctive in two ways.
Firstly, I see it as rooted in a perfectly ordinary sort of beneficence. I think this helps to bring out how deeply misguided are the critics who see utilitarianism as inherently dehumanizing—obviously beneficence does not have this feature, so they’re simply misunderstanding the view. (I make this argument more carefully in my (2021) ‘The Right Wrong-Makers’. Probably my best paper.)
Secondly, I think the crux of the debate is methodological. My strongest philosophical commitment in ethics is not so much to any particular first-order view as to the methodological dictum that ethical theory should focus more on what’s worth caring about rather than the downstream matter of delineating right from wrong. (There’s a strong case to be made that the latter project is actually unimportant,8 whereas the significance of the former is undeniable.) If you accept this dictum, I think it will be very difficult to resist consequentialism.
One upshot of all this is that I think that consequentialists and non-consequentialists are often talking past each other. Consequentialists are talking about what’s worth caring about, and non-consequentialists… typically aren’t. I don’t have a good sense of what they are talking about—I suspect I may not even have the concept of impermissibility that they are theorizing about—since the best I can come up with is that they’re engaged in the quasi-sociological project of explicating the “morality system” propagated by our society. But obviously if that’s all they’re doing then there’s no essential conflict between us. Conflict only arises if they argue that the morality system reflects what is objectively worth caring about, and no number of intuitions about which acts are wrong gives us any reason to think that the thing being intuited—the property of wrongness, or whatever—is something that’s actually worth caring about. This leads me to think that much existing moral philosophy is beside the point.
But to return to the dialectical point: Insofar as consequentialists are willing to talk about ‘right’ and ‘wrong’ at all, we’re typically just repurposing these words to point to what’s worth caring about (there are at least a couple of different ways of doing this). Non-consequentialists, as far as I can tell, instead use ‘wrong’ in a primitive, undefinable, mustn’t-be-done sense that I don’t understand. Because we’re using the same words, it looks like we disagree about which things are right and wrong. But I think the disagreement is actually much less clear than that. We obviously disagree about what should be done (at least in certain fully-specified thought experiments). But I think this is not because we take our moral concepts to have different extensions; rather, I think we’re employing entirely different moral concepts.
This makes debates between rival theorists especially intractable, because each view is apt to look incredibly alien (and implausible) from within the other’s conceptual framework. For example, if you presuppose that society’s “morality system” uses apt normative concepts and simply needs modest refinements for maximal coherence, then utilitarianism is apt to look absurdly revisionary. Maximizing act utilitarianism is simply not a credible candidate for capturing what ordinary people mean by ‘right’. I agree with that critique; except that I don’t view it as a critique, because that isn’t my project. I don’t suppose that what ordinary people mean by ‘right’ has any real normative significance, so I’m not even trying to capture it. I’m instead trying to work out what really matters, and there’s little reason to think that our social norms (and associated language) inherently reflect that.
If I’m right about this, then the only real way to make progress is to find more fundamental, actually-shared normative concepts that we can use to formulate the precise disagreements between the rival theories. I think the concept of what’s worth caring about can do this work (again, see note 8). So, in my vision, the future of productive debate in ethical theory rests on getting clearer about what matters, and why.
I’m in the early stages of developing a book project on Bleeding Heart Consequentialism, but it may yet take a couple more years to bring it to fruition.
On a more conventional view, ethical theory is instead about systematizing our verdicts about “right” and “wrong”. I reject this conception because it’s far from clear that we have any reason to care about “rightness” per se. The conventional conception presupposes a deontological answer to the question of what fundamentally matters. Since I reject the presupposition, I think conventional ethical theory addresses the wrong question. Rather than presenting consequentialism as a rival theory of “morality”, as conventionally understood, I think we do better to view it as a paradigm shift to a completely different way of thinking about practical normativity. (See the concluding section of the post for more on this methodological disagreement.)
PSA: A common mistake is to take ‘trolley problem’ to simply name the thought experiment. But in fact ‘the trolley problem’, as Thomson introduced the term in 1976, is the problem of explaining why it is sometimes acceptable to kill one to save five, and other times not. (“[I]t's a lovely, nasty difficulty: why is it that Edward may turn that trolley to save his five, but David may not cut up his healthy specimen to save his five? I like to call this the trolley problem, in honor of Mrs. Foot's example.”)
If anything, I might slightly prefer my death to be useful rather than to be mere collateral damage. (Similar thoughts are expressed by Parfit (On What Matters, p. 365), and Caspar Hare (‘Should we wish well to all?’, p. 463).) But holding fixed the eventual harms and benefits, I don’t see much further reason to care about the causal relation between the two.
And yet somehow it’s consequentialism that has the reputation for being “unsympathetic”!?
Of course, I understand that deontologists will dispute this characterization, claiming instead that they’re “showing respect for the value of humanity”, or some such. And I grant that that is what they’re intending to do, but I dispute that it’s an accurate characterization of what they’re actually doing. We recognize in other cases that this sort of interest-independent moralizing, however sincerely meant, isn’t really respectful of other persons, and I see no principled difference between deontology and conservative sexual ethics—both involve prioritizing notions of moral purity over people’s real interests.
In the sense that most don’t even identify a pro tanto theoretical cost, or any reason to reject utilitarianism, in my view. (But that’s compatible with their serving to explain why someone else might reasonably find the view unappealing. Someone with deontological intuitions about Martian Harvest could reasonably point to the Rights objection as why they personally reject utilitarianism, for example, without this providing utilitarians with any reason to doubt their view.) The exceptions are things like reasonable partiality, where as noted I actually reject strict utilitarianism in favour of a looser form of welfarist consequentialism.
By contrast, I think that many objections to deontology (e.g. my new paradox, the hope objection, and ex ante Pareto violations) are undeniably costs to that view, even if they’re costs that the defenders are ultimately willing to accept. My sense is that consequentialism has no such theoretical costs (incredible though this may sound). There are puzzles that arise for it, of course, but they are puzzles for everyone, and provide no reason to reject consequentialism per se. The things people usually regard as objections, by contrast, could only possibly appear that way from the outside, i.e. while steeped in a foreign conceptual framework.
Since there doesn't seem to be anything about it on the utilitarianism.net website either, I think it would be good to say something about how your debunking arguments work for virtue ethics as well as deontology. In this case, I think it's because utilitarianism can just swallow virtue ethics whole.
The main point is that utilitarianism focuses on completely different moral concepts than virtue ethics. It's not about right character, but about what's overall worth promoting, or worthwhile. The general argument from virtue ethicists goes that the other theories are too narrowly focused on actions while virtue ethics is more broadly focused on character—but in fact utilitarianism is much broader than virtue ethics, as you've explained. If this category of thing—what's overall worthwhile—exists at all, then there's no real conflict with virtue ethics.
The real debate isn't about what the right actions or character are, but about whether there's something higher-level that justifies them—whether characters, dispositions, and actions are instrumentally right because they promote some further good, or not.
I'm pretty sure that's the actual difference between utilitarianism and virtue ethics, and it's super obvious. I don't get why most intro philosophy stuff misses this and says utilitarianism is a theory of right action like deontology but virtue ethics is broader.
On the virtue ethics view, some dispositions are just good in themselves. But to a utilitarian, this idea is a straightforward mistake: taking things that are instrumental goods and treating them as fundamental. It's not even clear what it means for a disposition to be good in itself—you're not saying it should be promoted, or that its greatest expression is best. But that's because virtue ethics doesn't believe this higher-level category of thing even exists.
So, the consequentialist has an easy way of taking in virtue ethics, right? They can just accept all the bits about practical wisdom, not getting too hung up on objective morality when deciding what to do, and focusing on cultivating virtues instead of running utilitarian calculations all the time. Consequentialists are cool with these ideas, treating them as useful fictions that help them promote the overall objective good. It fits in pretty well with their own way of thinking.
But virtue ethicists aren't having it. They say the parts of virtue ethics that consequentialists treat as fictional are actually way more real and closer to how we think and live. Virtue ethicists want these ideas to guide our actions, since they're more down-to-earth and relatable than big, abstract moral principles. I think the virtue ethicist would probably draw on some kind of appeal to intuition or debunking argument here, saying that there's just way less of a track record for the concept of what's overall worthwhile than for the concept of virtue. So we should treat virtue as the most real thing and derive our normative concepts from it. But then, I'm not a virtue ethicist.
I suspect some people feel that the options are either 100% utilitarianism or something radically different; your "utilitarian-ish" position falls outside this. They think something like: once you concede a bit of ground around the edges to deontology, viewpoint-relativity, or value pluralism, you will quickly be forced to slip into a more standard common-sense morality. Why do people think this? I find that sort of thinking somewhat intuitive, but I don't see a good reason for it.