Some of the deepest puzzles in ethics concern how to coherently extend ordinary beneficence and decision theory to extreme cases. The notorious puzzles of population ethics, for example, ask us how to trade off quantity and quality of life, and how we should value future generations. Beckstead & Thomas discuss a paradox for tiny probabilities and enormous values, asking how we should take risk and uncertainty into account. Infinite ethics raises problems for both axiology and decision theory: it may be unclear how to rank different infinite outcomes, and it’s hard to avoid the “fanatical” result that the tiniest chance of infinite value swamps all finite considerations (unless one embraces alternative commitments that may be even more counterintuitive).
Puzzles galore! But these puzzles share a strange feature, namely, that people often mistakenly believe them to be problems specifically for utilitarianism.
Their error, of course, is to overlook that beneficence and decision theory are essential components of any complete moral theory. (As even Rawls acknowledged, “All ethical doctrines worth our attention take consequences into account in judging rightness. One which did not would simply be irrational, crazy.” Rossian pluralism explicitly acknowledges a prima facie duty of beneficence that must be weighed against our other—more distinctively deontological—prima facie duties, and that duty determines what ought to be done when the others don’t apply to the situation at hand. And obviously any account relevant to fallible human beings needs to address how we should respond to uncertainty about our empirical circumstances and future prospects.)
Why, then, would anyone ever think that these puzzles were limited to utilitarianism? One hypothesis is that only utilitarianism is sufficiently clear and systematic to actually attempt an answer to these questions. Other theories too often remain silent and non-committal. Being incomplete in this way is surely not an advantage of those theories, unless there’s reason to think that a better answer will eventually be fleshed out. But what makes these questions such deep puzzles is precisely that we know that no wholly satisfying answer is possible. It’s a “pick your poison” situation. And there’s nothing clever about mocking utilitarians for endorsing a poisonous implication when it’s provably the case that every remaining non-utilitarian option is similarly poisonous!
When all views have costs, you cannot refute a view just by pointing to one of its costs. You need to actually gesture towards a better alternative, and do the difficult work of determining which view is the least bad. Below I’ll briefly step through some basic considerations that bring out how difficult this task can be.
Population Ethics
In ‘The New Moral Mathematics’ (reviewing WWOTF), Kieran Setiya sets up a false choice between total utilitarianism and “the intuition of neutrality” which denies positive value to creating happy lives. (Note that MacAskill’s longtermism is in fact much weaker than total utilitarianism.) He swiftly dismisses the total view for implying the repugnant conclusion. But he doesn’t mention any costs to neutralism, which may give some readers the misleading impression that this is a cost-free, common-sense solution. It isn’t. Far from it.
Neutrality implies that utopia is (in prospect) no better than a barren, lifeless rock. It implies that the total extinction of all future value-bearers could be more than compensated for by throwing a good enough party for those who already exist. These implications strike me as far more repugnant than the repugnant conclusion. (If you think the big party doesn’t sound so bad, given that you’re already invited, instead imagine Cleopatra making the decision millennia ago.) Moreover, neutrality doesn’t even fully avoid the original problem! It still doesn’t imply that future utopia A is better than the repugnant world Z; just that they are “on a par”. (This is a result that totalists can just as well secure through a more limited critical range that still allows awesome lives to qualify as positive additions to the world.)
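To make that parenthetical concrete, here is a rough sketch of how a critical-range view can be set up (a generic textbook-style rendering of my own for illustration, not a formula drawn from Setiya or anyone else; the two welfare thresholds are placeholders):

```latex
% Illustrative critical-range axiology: each life at welfare level w_i contributes
% g(w_i) to the value of the world; lives inside the range [u^-, u^+] contribute
% only an interval of values, i.e. they are "on a par" with never existing.
\[
V(\text{world}) = \sum_i g(w_i), \qquad
g(w) =
\begin{cases}
  w - u^+ & \text{if } w > u^+ \quad \text{(awesome lives: positive additions)} \\
  \left[\, w - u^+ ,\; w - u^- \,\right] & \text{if } u^- \le w \le u^+ \quad \text{(drab lives: on a par)} \\
  w - u^- & \text{if } w < u^- \quad \text{(wretched lives: negative additions)}
\end{cases}
\]
```

On this sketch, utopia A (lives far above the upper threshold) comes out strictly better than a barren rock, while the barely-positive lives of world Z fall inside the range, so Z’s total value is a vast interval overlapping A’s and the two are merely on a par, which is exactly the limited result just noted.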
To fully avoid repugnance, we want a population axiology that can at least deliver both of the following verdicts:
(i) utopia (world A) is better than Parfit’s world Z, and
(ii) utopia is better than a barren rock.
The total view can’t secure (i), but at least it’s got (ii) covered. Neutrality gets us neither! (The only hope for both, I think, is some kind of variable value view, or possibly perfectionism, both of which allow that we have strong moral reasons to want more awesome, excellent lives to come into existence.)
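And for concreteness, here is a bare-bones sketch of the variable value idea (again just an illustrative rendering under simplifying assumptions; the weighting function f is a placeholder):

```latex
% Illustrative variable value axiology: average quality \bar{w} weighted by a
% bounded, strictly increasing function f of population size n, so extra good
% lives always add value but sheer quantity cannot swamp quality without limit.
\[
V(n, \bar{w}) = f(n)\cdot\bar{w},
\qquad f \text{ strictly increasing},
\qquad \lim_{n \to \infty} f(n) = L < \infty .
\]
```

A drab world Z with tiny average welfare is then worth at most L times that tiny amount, so a sufficiently good utopia A beats it, securing (i); and A’s positive value trivially exceeds the barren rock’s zero, securing (ii).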
To bring out just how little is gained by neutrality, note that all the same puzzles re-emerge when trading off quantity and quality within a single life, where neutrality is clearly not an option. (The intrapersonal “neutral” view would hold that early death is harmless, and adding extra good time to your life—however wonderful that time might be—is strictly “on a par” with never having that time at all. Assuming that you’d prefer experiencing bliss to instant death, you already reject the “intuition of neutrality” in this domain!)
Consider the intrapersonal repugnant conclusion: a life containing zillions of barely-positive drab moments is allegedly better for you than a century in utopia. Seems wrong! So how are you going to avoid it? Not by appealing to neutrality, for the reasons we’ve just seen. An intrapersonal analogue of variable value or critical range views is surely more promising, though these views have their own significant costs and limitations (follow the links for details). Still, if you settle on a view that works to avoid the intrapersonal repugnant conclusion, why not carry it over to the inter-personal (population) case, if you’re also concerned to avoid the repugnant conclusion there?
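The arithmetic driving the intrapersonal version is worth seeing on the page (the numbers below are made up purely for illustration):

```latex
% Purely illustrative numbers, on a simple additive view of lifetime welfare:
\[
\underbrace{100 \text{ years} \times 1000 \text{ per year}}_{\text{century in utopia: } 10^{5}}
\;<\;
\underbrace{10^{9} \text{ years} \times 0.001 \text{ per year}}_{\text{drab near-immortality: } 10^{6}}
\]
% If welfare simply adds up, enough barely-positive time outweighs a century of
% bliss; avoiding that verdict requires some discount or cap on sheer quantity,
% which is what the views just mentioned try to supply.
```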
Once you acknowledge that (i) the intrapersonal repugnant conclusion is just as counterintuitive as the inter-personal one, and yet (ii) unrestricted “neutrality” about creating new moments of immense value is not a feasible option, it becomes clear that neutrality about creating happy lives is no panacea for the puzzles of population ethics. Either we make our peace with some form of the repugnant conclusion, or we settle on an alternative account that’s nonetheless compatible with ascribing value to creating new loci of value (times or lives) at least when they are sufficiently good. Folks who think neutrality offers an acceptable general solution here are deluding themselves.
Decision Theory
In an especially striking example of conflating utilitarianism with anything remotely approaching systematic thinking, popular substacker Erik Hoel recently characterized the Beckstead & Thomas paper on decision-theoretic paradoxes as addressing “how poorly utilitarianism does in extreme scenarios of low probability but high impact payoffs.” Compare this with the very first sentence of the paper’s abstract: “We show that every theory of the value of uncertain prospects must have one of three unpalatable properties.” Not utilitarianism. Every theory.
(Alas, when I tried to point this out in the comments section, after a brief back-and-forth in which Erik initially doubled down on the conflation, he abruptly decided to instead delete my comments explaining his mistake.)
Just to briefly indicate the horns of the paradox: in order to avoid the “recklessness” of orthodox (risk-neutral) expected utility in the face of tiny chances of enormous payoffs, you must either endorse timidity or reject transitivity. Timidity “permit[s] passing up arbitrarily great gains to prevent a tiny increase in risk.” (Relatedly: risk-averse views may imply that we should prefer to destroy the world rather than risk a 1 in 10 million chance of a dystopian future, even on the assumption that a correspondingly wonderful utopia is vastly more likely to otherwise eventuate.) Doesn’t sound great! And rejecting transitivity strikes me as basically just giving up on the project of coherently systematizing how we should respond to uncertain prospects; I don’t view that as an acceptable option at all.
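To see how orthodox theory earns the “recklessness” label, a toy expected value calculation suffices (the numbers are mine, not Beckstead & Thomas’s):

```latex
% Toy numbers (not from Beckstead & Thomas): a risk-neutral expected value comparison.
\[
\mathbb{E}[\text{long shot}] = 10^{-9} \times 10^{15} = 10^{6}
\;>\;
\mathbb{E}[\text{sure thing}] = 1 \times 10^{5}
\]
% Orthodox expected utility thus ranks a one-in-a-billion chance of an astronomically
% good outcome above a guaranteed, merely very good one ("recklessness"). Timidity
% reverses such rankings, but only by permitting us to pass up arbitrarily great
% gains to avoid a tiny increase in risk; and dropping transitivity gives up
% coherent ranking altogether.
```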
Conclusion
It’s really not easy for anyone to avoid uncomfortable verdicts in these puzzle cases. However bad the “utilitarian” verdict looks at first blush, a closer examination suggests that many alternatives are likely to be significantly worse. (When discussing related issues in ‘Double or Nothing Existence Gambles’, I suggest that a moderate mix of partiality and diminishing marginal value of intrinsic goods might help in at least some cases. But it’s really far from obvious how best to deal with these problems!)
Most of those who are most confident that the orthodox utilitarian answers are absurd haven’t actually thought through any sort of systematic alternative, so their confidence seems severely misplaced. Personally, I remain hopeful that both the Repugnant Conclusion and (at least some) reckless ‘Double or Nothing’ existence gambles can be avoided with appropriate tweaks to our axiology. But I’m far from confident: these puzzles are really tricky, and the options all have severe costs! Non-consequentialists may superficially look better by refusing to even talk about the problems, so—like skilled politicians—they cannot so easily be pinned down. But gaps in a theory shouldn’t be mistaken for solutions. It’s important to appreciate that any coherent completion of their view will likely end up looking just as bad—or worse.
As a result, I think many people who (like Erik Hoel) think they are opposed to utilitarianism are really reacting against a broader phenomenon, namely, systematic theorizing. The only way to entirely avoid the problems they deem so sneer-worthy is to stop thinking. Personally, I just can’t shake the feeling that that would be the most repugnant response of all.
I loved this post - this is a big pet peeve of mine as well and I think you nailed it.
However, I think a lot of times when I see similar arguments 'in the wild', even if they are initially framed narrowly as critiques of utilitarianism, they are in fact motivated by a broader feeling that there are limits to moral reasoning. Something like: we shouldn't expect our theories to have universal domain, and we don't get much leverage by trying to extend our theories far beyond the intuitions that initially motivated them.
The main example I have in mind is Tyler Cowen's recent conversation with Will. Tyler raises a number of objections to utilitarianism. At times I found this frustrating, because viewed through the lens of figuring out the best moral theory, he is making isolated demands for rigor. But I think Tyler's point is instead something more like the above: that we shouldn't rely too much on our theories outside of everyday contexts.
You do touch on this in the post, but only briefly. I'd be interested to hear more about your thoughts on this issue.
Caveat: I'm not a philosopher, but rather an economist.
I think many of these paradoxes (Quinn's Self-Torturer, Parfit's "mere addition," etc.) have the following form:
> Start from state S. Operation O(S) is locally preferable (i.e., it produces a preferred state S'). But if we iterate ad infinitum, we end up with a state S* that's not preferable to S.
The conclusion is usually either that S* actually _is_ preferable (i.e., our preferences are "rational" and therefore transitive), or that our preferences are seriously suspect. To the point where "maximizing" them is a hopelessly muddled concept.
I think there's another way to approach this. Behavioral economics deals with such problems ("time-inconsistent preferences") routinely. Consider a would-be smoker. He doesn't smoke his first cigarette, because he knows that his preferences display habit formation --- his first cigarette leads to the second, and so on.
In other words, the time 0 self has a genuinely different axiology than the time _t_ self. (Equivalently, preferences are state-dependent.) It would definitely be _cleaner_ if our rankings of future worlds were invariant to where we are today, but if the choice is between axiomatic hygiene and uncomfortable paradoxes, I'll take the mess.
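For concreteness, here is roughly the kind of setup I have in mind, written as a generic habit-formation sketch (the functional form and parameters are placeholders rather than any particular paper's model):

```latex
% Generic habit-formation sketch (placeholders, not a specific model from the literature):
% period utility depends on consumption c_t and a habit stock s_t built up by past consumption.
\[
U_0 = \sum_{t \ge 0} \delta^{t}\, u(c_t, s_t), \qquad
s_{t+1} = (1-\rho)\, s_t + c_t, \qquad
\frac{\partial u}{\partial s} < 0, \qquad
\frac{\partial^2 u}{\partial c \,\partial s} > 0 .
\]
% Each cigarette is locally attractive given today's habit stock (the positive
% cross-partial makes smoking more tempting the more you already smoke), yet the
% time-0 self, evaluating the whole path from s_0 = 0, may rationally refuse the
% first one. How you rank future paths depends on the state you evaluate them from.
```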
(I think this also has something to say about, e.g., the demandingness objection. It's always locally preferable to save one more child, but the agent is justifiably wary of committing to a sequence of operations which turns him into a child-rescuing drone.)