Three Arguments for Consequentialism
Including a Master Argument to rule them all, find them, bind them, etc.
This post will set out three arguments for consequentialism (or against deontic constraints), in order of increasing abstraction. You should probably accept all three.
Utilitarian Pre-commitment vs Deontological Defection
From behind a veil of ignorance, everyone in the situation would rationally endorse killing one to save five, since that markedly increases their chances of survival. This means that each person has decisive reason to pre-commit their conditional consent to be killed, in the event that they are in the position of the one, on condition that the others do likewise. So now consider the following argument:[1]
1. If everyone affected consensually pre-commits to this plan, then one ought to kill one to save five.
2. If people lack the opportunity to communicate in advance, but have decisive reason to pre-commit to a co-operative agreement (e.g. of mutual conditional consent to the above plan), then one ought to enforce this rationally-mandated pre-commitment in an unbiased fashion.[2]
3. Before opening our eyes for the first time, we all have decisive reason to pre-commit to endorsing utilitarian tradeoffs (conditional on others doing likewise), as this maximizes our expected welfare.
So: one ought to enforce utilitarian tradeoffs (in an unbiased fashion).
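To make the expected-welfare claim in premise 3 concrete, here is the arithmetic for the simplest version of the case, assuming six affected parties (the one plus the five), each equally likely to occupy any position:

$$P(\text{survive} \mid \text{no intervention}) = \frac{1}{6} \approx 0.17, \qquad P(\text{survive} \mid \text{kill one to save five}) = \frac{5}{6} \approx 0.83.$$

Pre-committing raises each party's ex ante survival probability fivefold, which is why the agreement is rationally mandated before anyone knows which position they will occupy.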
Deontology is defection. Once you know you’re rich, you no longer want to give to the poor. Once you know you’re on top of the footbridge, you don’t want to save the five on the tracks. But if the shoe were on the other foot, you’d think differently. And the rest of us have no reason to enshrine your status quo privilege. We should stick to what we would all have agreed to before we knew our positions in life. *Push*
The Teleological Argument
1. Our reasons for action are given by applying instrumental rationality to the correct moral goals.
2. No competing candidate moral goals are more important, in principle, than saving and improving lives.

So: we should do whatever would best save and improve lives.[3]
The “Master Argument”
1. Moral philosophy progresses via reflective equilibrium: weighing the plausibility of a theory’s fundamental principles against that of its verdicts about cases.
2. Consequentialism has vastly more plausible fundamental principles.
3. Verdicts about cases don’t clearly favor literal deontology over deontic fictionalism or two-level consequentialism.
So consequentialism wins out: it has very strong reasons in support, and no clear reasons against.
Objections?
1. For related arguments, see Kacper Kowalczyk’s (2022) “People in Suitcases”, and Bentham’s Bulldog’s discussion here.
2. The latter clause is intended to rule out discretionary enforcement, e.g. killing the one if and only if it turns out to be Bob. Such a system would obviously no longer be in Bob’s ex ante interests.
3. At least to a first approximation. Other reasonable goals might make a difference on the margin.
Do you only endorse agreements behind the veil of ignorance that "markedly" improve people's prospects? Under veil-of-ignorance reasoning, shouldn't you also endorse killing one when there is, e.g., a 25% chance of saving five, since this would improve everyone's chances of survival ex ante (though not "markedly")?
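Spelling out the arithmetic of this case (on the same six-party assumption as above, and assuming the five die for certain if no one acts):

$$P(\text{survive} \mid \text{no action}) = \frac{1}{6} \approx 0.167, \qquad P(\text{survive} \mid \text{kill one}) = \frac{5}{6} \times 0.25 = \frac{5}{24} \approx 0.208.$$

The gain is roughly four percentage points: a genuine ex ante improvement, though hardly a "marked" one.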
I find veil-of-ignorance arguments against deontology problematic. First, when you say "From behind a veil of ignorance, everyone in the situation would rationally endorse killing one to save five", I'm going to assume that by "rational" you just mean what there is most self-interested reason to do. In that case, it's not clear why our moral obligations are determined by our self-interested reasons in this way. But more importantly, I think this kind of veil-of-ignorance reasoning implies a strict kind of utilitarianism, which you've objected to in the past.
For example, from behind the veil of ignorance, a self-interested agent would only want to maximize future prospects for well-being. He wouldn't care about whether the well-being was deserved or not. So from behind the veil of ignorance, self-interested parties would not select principles that give any _intrinsic_ weight to desert (of course, they might give some _instrumental_ weight to desert). But you've previously argued in favor of incorporating facts about desert in our moral reasoning (https://www.philosophyetc.net/2021/03/three-dogmas-of-utilitarianism.html). E.g., you say that the interests of the non-innocent should be liable to be discounted. Why would purely self-interested parties care about desert from behind the veil of ignorance?
One answer might be that fully rational agents are also fully moral, and fully moral agents would care about desert because desert is morally relevant. In that case, it's not clear why a deontologist wouldn't also say that fully rational/moral agents would care about rights because rights are morally relevant.
For another example, I don't see why self-interested parties would distinguish between principles that permit killing persons and principles that permit failing to create persons. From the perspective of the agent behind the veil of ignorance, failing to be created is just as much of a loss as being killed. Thus, I would imagine that the self-interested parties would be indifferent between the following two worlds:
* world A: N people live long enough to acquire X utility
* world B: N people live long enough to acquire X/2 utility before they are killed and replaced with another N people who live long enough to acquire X/2 utility.
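If X is read as each person's lifetime utility (the comment doesn't specify, so take this as a simplifying assumption), the totals come out identical:

$$U(A) = N \cdot X, \qquad U(B) = N \cdot \frac{X}{2} + N \cdot \frac{X}{2} = N \cdot X.$$

So a purely self-interested chooser behind the veil, caring only about expected utility, would have no basis for preferring world A to world B.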
You've argued elsewhere that the strongest objection to total utilitarianism is that it risks collapsing the distinction between killing and failing to create life. But why would self-interested parties from behind the veil of ignorance care about this distinction?
So while it is plausible that fully rational agents behind the veil of ignorance would not care about rights, it is equally plausible that they would not care about desert, the distinction between killing and failing to create, the distinction between person-directed and undirected reasons, special obligations, etc. So it seems that veil-of-ignorance reasoning leads to strict total utilitarianism.