Critics sometimes imagine that utilitarianism directs us to act disreputably whenever it appears (however fleetingly) that the act would have good consequences. Or whenever crudely calculating the most salient first-order consequences (in isolation) yields a positive number. This “naïve utilitarian” decision procedure is clearly daft, and not something that any sound utilitarian actually advocates. On the other hand, critics sometimes mistake this point for the claim that utilitarianism itself is plainly counterproductive, and necessarily advocates against its own acceptance. While that’s always a conceptual possibility, I don’t think it has any empirical credibility. Most who think otherwise are still making the mistake of conflating naïve utilitarianism with utilitarianism proper. The latter is a much more prudent view, as I’ll now explain.
Adjusting for Bias
Imagine an archer trying to hit a target on a windy day. A naïve archer might ignore the wind, aim directly at the target, and (predictably) miss as their arrow is blown off-course. A more sophisticated archer will deliberately re-calibrate, superficially seeming to aim “off-target” but in a way that makes them more likely to hit. Finally, a master archer will adjust automatically, doing what seems to her the obvious way to hit the target, though to a naïve observer it might look like she was aiming awry.
Is the best way to be a successful archer on a windy day to stop even trying to hit the target? Surely not. (It’s conceivable that an evil demon might interfere in such a way as to make this so — i.e., so that only people genuinely trying to miss would end up hitting the target — but that’s a much weirder case than what we’re talking about.) The point is just that naïve targeting is likely to miss. Making appropriate adjustments to one’s aim (overriding naïve judgments of how to achieve the goal) is not at all the same thing as abandoning the goal altogether.
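To see how modest the adjustment is, here’s a minimal numerical sketch (the target and drift values are arbitrary, purely for illustration): the target never moves; only the aim does.

```python
# Minimal sketch of the archer's bias adjustment.
# The numbers are arbitrary, purely for illustration.
target = 0.0           # where the archer wants the arrow to land
wind_drift = 2.0       # known systematic drift the wind adds to every shot

naive_aim = target                  # ignore the wind entirely...
adjusted_aim = target - wind_drift  # ...or deliberately aim "off-target"

print(naive_aim + wind_drift)     # 2.0: the naive archer predictably misses
print(adjusted_aim + wind_drift)  # 0.0: the adjusted shot hits the target
```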
And so it goes in ethics. Crudely calculating the expected utility of (e.g.) murdering your rivals and harvesting their vital organs, and naively acting upon such first-pass calculations, would be predictably disastrous. This doesn’t mean that you should abandon the goal of doing good. It just means that you should pursue it in a prudent rather than naive manner.
Metacoherence prohibits naïve utilitarianism
“But doesn’t utilitarianism direct us to maximize expected value?” you may ask. Only in the same way that norms of archery direct our archer to hit the target. There’s nothing in either norm that requires (or even permits) it to be pursued naively, without obviously-called-for bias adjustments.
This is something that has been stressed by utilitarian theorists from Mill and Sidgwick through to R.M. Hare, Pettit, and Railton—to name but a few. Here’s a pithy listing from J.L. Mackie of six reasons why utilitarians oppose naïve calculation as a decision procedure:
(1) Shortage of time and energy will in general preclude such calculations.
(2) Even if time and energy are available, the relevant information commonly is not.
(3) An agent's judgment on particular issues is likely to be distorted by his own interests and special affections.
(4) Even if he were intellectually able to determine the right choice, weakness of will would be likely to impair his putting of it into effect.
(5) Even decisions that are right in themselves and actions based on them are liable to be misused as precedents, so that they will encourage and seem to legitimate wrong actions that are superficially similar to them.
(6) And, human nature being what it is, a practical working morality must not be too demanding: it is worse than useless to set standards so high that there is no real chance that actions will even approximate to them.
For all these reasons and more (e.g. the risk of reputational harm to utilitarian ethics),[1] violating people's rights is practically guaranteed to have negative expected value. You should expect that most people who believe themselves to be the rare exception are mistaken in this belief. First-pass calculations that call for rights violations are thus known to be typically erroneous. Generally-beneficial rules are “generally beneficial” for a reason. Knowing this, it would be egregiously irrational to violate rights (or other generally-beneficial rules) on the basis of unreliable rough calculations suggesting that doing so has positive “expected value”. Unreliable calculations don’t reveal the true expected value of an action. Once you take into account the known unreliability of such crude calculations, and the far greater reliability of the opposing rule, the only reasonable conclusion is that the all-things-considered “expected value” of violating the rule is in fact extremely negative.
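To make the structure of this adjustment concrete, here’s a toy calculation (every number is invented for illustration, not an empirical estimate): even when a first-pass calculation claims a large gain, factoring in the known unreliability of such calculations leaves the all-things-considered expected value deeply negative.

```python
# Toy illustration of adjusting a naive expected-value estimate for
# known unreliability. All numbers are invented for illustration.
p_correct = 0.01       # chance the first-pass calculation is actually right,
                       # given such calculations are "typically erroneous"
naive_ev = 100.0       # gain the crude calculation claims the violation yields
ev_if_wrong = -1000.0  # typical outcome when the calculation misleads us
                       # (i.e., the generally-beneficial rule was right)

# All-things-considered expected value of violating the rule:
adjusted_ev = p_correct * naive_ev + (1 - p_correct) * ev_if_wrong
print(adjusted_ev)  # -989.0: deeply negative, despite the positive naive EV
```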
Indeed, as I argued way back in my PhD dissertation, this is typically so clear-cut that it generally shouldn’t even occur to prudent utilitarians to violate rights in pursuit of some nebulous “greater good”—any more than it occurs to a prudent driver that they could swerve into oncoming traffic. In this way, utilitarianism can even accommodate the thought that egregious violations should typically be unthinkable. (Of course one can imagine hypothetical exceptions—ticking time bomb scenarios, and such—but utilitarianism is no different from moderate deontology in that respect. I don’t take such wild hypotheticals to be relevant to real-life practical ethics.)
Prudent Utilitarians are Trustworthy
In light of all this, I think (prudent, rational) utilitarians will be much more trustworthy than is typically assumed. It’s easy to see how one might worry about being around naïve utilitarians—who knows what crazy things might seem positive-EV to them in any fleeting moment? But prudent utilitarians abide by the same co-operative norms as everyone else (just with heightened beneficence and related virtues), as Stefan Schubert & Lucius Caviola explain in ‘Virtues for Real-World Utilitarians’:
While it may seem that utilitarians should engage in norm-breaking instrumental harm, a closer analysis reveals that it often carries large costs. It would lead to people taking precautions to safeguard against these kinds of harms, which would be costly for society. And it could harm utilitarians’ reputation, which in turn could impair their ability to do good. In light of such considerations, many utilitarians have argued that it is better to respect common sense norms. Utilitarians should adopt ordinary virtues like honesty, trustworthiness, and kindness. There is a convergence with common sense morality… [except that] Utilitarians can massively increase their impact through cultivating some key virtues that are not sufficiently emphasized by common sense morality…
This isn’t Rule Utilitarianism
I’ve argued that prudent utilitarians will follow reliable rules as a means to performing better actions—doing more good—than they would through naively following unreliable, first-pass calculations. When higher-order evidence is taken into account, prudent actions are the ones that actually maximize expected value. It’s a straightforwardly act-utilitarian view. Like the master archer, the prudent utilitarian’s target hasn’t changed from that of their naïve counterpart. They’re just pursuing the goal more competently, taking naïve unreliability into account, and making the necessary adjustments for greater accuracy in light of known biases.
There are a range of possible alternatives to naïve utilitarianism that aren’t always clearly distinguished. Here’s how I break them down:
(1) Prudent (“multi-level”) utilitarian: endorses act-utilitarianism in theory, is motivated by utilitarian goals, takes into account higher-order evidence of unreliability and bias, and so uses good rules as a means to more reliably maximize (true) expected value.
(2) Railton’s “sophisticated” utilitarian: endorses act-utilitarianism in theory, but has whatever (potentially non-utilitarian) motivations and reasoning they expect to be for the best.
(3) Self-effacing utilitarian: an ex-utilitarian who gave up the view on the grounds that doing so would be for the best.
(4) Rule utilitarian: not really consequentialist; moral goal is not to do good, but just to act in conformity with rules that would do good in some specified — possibly distant — possible world. (Subject to serious objections.)
See also:
Utilitarianism.net section on ‘Multi-level utilitarianism’—and how it differs from Rule Utilitarianism
Schubert & Caviola, ‘Virtues for Real-World Utilitarians’
R.M. Hare (1981) Moral Thinking
Pettit & Brennan (1986) ‘Restrictive Consequentialism’
Gibbard (1984) ‘Utilitarianism and Human Rights’
Amanda Askell’s EA Forum summary of the ‘criterion of rightness’ vs ‘decision procedure’ distinction
[1] As we stress on utilitarianism.net: “This reputational harm is far from trivial. Each individual who is committed to (competently) acting on utilitarianism could be expected to save many lives. So to do things that risk deterring many others in society (at a population-wide level) from following utilitarian ethics is to risk immense harm.”
"So to do things that risk deterring many others in society (at a population-wide level) from following utilitarian ethics is to risk immense harm".
For a while, I was thinking about trying to write an economic model of this sort of thing with repeated games, collective reputation, and evolutionary dynamics based on the fraction of utilitarians in society (and/or the frequency of their 'transgressive' behavior) vis-à-vis other 'types'. It didn't seem worth pursuing at the time, but it might be interesting at some point; see the toy sketch after the links below.
Some relevant stuff I looked at, in case anyone cares:
- Jonathan Levin on Collective Reputation https://siepr.stanford.edu/publications/working-paper/dynamics-collective-reputation
- Ingela Alger "Evolution and Kantian morality" https://www.sciencedirect.com/science/article/pii/S0899825616300410
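And here’s the kind of toy sketch I had in mind: replicator dynamics in which utilitarians’ payoff depends on a collective reputation that degrades with their transgression frequency. Every functional form and parameter below is a hypothetical placeholder, not a calibrated model.

```python
# Toy replicator-dynamics sketch of the collective-reputation idea above.
# All functional forms and parameters are hypothetical placeholders.

def step(x, q, dt=0.01):
    """One Euler step of the replicator dynamic.
    x: fraction of utilitarians in the population
    q: frequency of 'transgressive' behavior among utilitarians
    """
    reputation = 1.0 - q  # collective reputation degrades with transgressions
    base, trust_bonus = 1.0, 0.5
    f_util = base + trust_bonus * reputation  # utilitarians' payoff tracks trust
    f_other = base + trust_bonus * 0.8        # other 'types': fixed trust level
    f_avg = x * f_util + (1 - x) * f_other
    return x + dt * x * (f_util - f_avg)      # types grow with relative payoff

x = 0.1  # initial fraction of utilitarians
for _ in range(10_000):
    x = step(x, q=0.05)  # low transgression frequency
print(round(x, 3))  # approaches 1.0: trustworthy utilitarians spread
```

Under these made-up numbers the dynamic reverses once q exceeds 0.2 (the point where utilitarians’ effective trust drops below the fixed 0.8 level assumed for everyone else), which is just the reputational argument from the main post in model form.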
Under a utilitarian framework, how would “rights” be determined? For instance, if there were evidence that imposing the death penalty prevented future crime, would prudent utilitarianism still prioritize the criminal’s rights over the potential welfare lost? And if so, what would be the basis for these rights outside of welfare?