I think the biggest challenge for deontology is to answer the question, “How should bystanders feel about optimific rights violations? For or against?”
If they answer (as seems most natural), “Against!”, then they run straight into my New Paradox of Deontology. In short: they are then unable to display adequate respect and concern for five rescuable victims (after another has already been wrongly killed as a means). Once the wrong is done, they care too little about whether the potential innocent beneficiaries are actually saved or not. (It’s a surprising result, but you can follow the link for the proof.)
If they answer, “For!”, then they sound too much like utilitarians. (What kind of deontologist chants, “Push! Push! Push!”, as they watch the Trolley Footbridge case unfold from afar?) To prefer “wrong” actions to be performed would seem to rob deontic constraints of their normative authority. If you grant that we should actually want people’s rights to be violated on the occasions that this turns out to be socially optimal, that just seems to be a way of saying that “rights” don’t really matter all that much, on your view.
Maybe rights have a purely agent-relative significance: you don’t want to get your hands dirty, but you’re perfectly happy for others to do the dirty work. This egoistic deontology is a distinct and coherent view, but not a very morally appealing one. It seems pretty clear that a decent form of deontology should instead appeal to the kinds of patient-centered considerations (such as the inviolability of human life, or some such) that could equally speak to bystanders as to agents.
So neither answer seems remotely tolerable. This suggests that deontologists need to come up with a third option: some form of, “It’s complicated!” The best such attempt I’ve come across so far is (what I’ll call) bystander preference permissivism.
Bystander Preference Permissivism
According to this view, the answer is: “It’s up to them!” Bystanders aren’t required to prefer that constraints be respected, so they could get off my train of argument at the first stop; but they do have the option, and so rights do have some agent-neutral significance or ability to “speak” to bystanders.
It’s a neat suggestion! I could see it appealing to many deontologists. But it doesn’t really resolve the underlying dilemma so much as sweep it under the rug.
The Dilemma Returns
In a way, permissivism is stuck with both problems noted above. For, in allowing that you may prefer other agents to violate rights whenever it’s optimific for them to do so, the view seems to forfeit any claim to taking rights especially seriously. In general, if a morally ideal observer could reasonably not care whether you do X, then it sure seems to follow that it doesn’t greatly matter whether you do X. If X = “optimifically violate human rights”, then this seems like a really awkward view for a putative deontologist.
Worse, my New Paradox still applies in full force. For permissivism claims that bystanders may prefer that constraints be respected, and hence may prefer the world of Five Killings over that of One Killing to Prevent Five. So suppose they do. Take any bystander who has that preference. Now run through my original argument with that agent, starting with this preference in premise (3). My argument shows that any such bystander cannot sufficiently strongly prefer One Killing to Prevent Five over Failed Prevention.
What my argument shows is that moral decency is incompatible with bystanders’ preferring Five Killings over One Killing to Prevent Five. That starting preference inevitably leads to moral indecency. So the starting “deontological” preference is not permissible for bystanders. So permissivism, like every other theory that permits this preference, must be false.
Straw man versions of both utilitarianism and deontology are easy to criticize. A reasonable position will have elements of both.
Even our host here admits that social systems are limited by human psychology, so that at the very least we benefit from a system of laws consisting of reasonably clear and simple rules, and perhaps also a rule-based shared moral system in addition to that. And reasonable deontologists won’t deny the need for exceptions to general rules. Bernard Gert discusses this.
The question boils down to: how do we know what rules to follow, and when to make exceptions? The simple insight I have never seen discussed in this context is that victims of exceptions to rules deserve compensation. So in the extreme thought experiment, a quasi-deontologist can steal a dime to save the planet, but also take responsibility by accepting the legal and social penalty for breaking the rule. This resembles the use of a strategic foul in sports, e.g. fouling someone to stop the clock in basketball.
Extreme deontologists will reject this, but it seems more workable than either extreme. It allows the agent to respect what deontologists seek without leaving consequences completely out of the picture. The actual victims can then decide whether they believe they deserve compensation, and grateful bystanders or beneficiaries of the agent’s violation can help the agent make compensation, if needed. A fair arbitrator can decide what maximum compensation is due, and the victim and violator could negotiate over the form it will take. If everyone except the victims agrees that “utility was enhanced,” they can help the agent compensate the victims; if the actual value of the violation is low or controversial, this is less likely to happen. Hence there is a social process of learning that helps society adapt. If the result appears too unjust, laws can be amended or moral attitudes modified in response. This gives more of an explanation of how to deal with error.
The strategic foul is an interesting analogy, or perhaps just a very concrete example of this sort of hybrid between deontology and utilitarianism. Sports can’t proceed without rules, yet fouls can sometimes be used for strategic advantage. In some cases, obviously intentional fouls receive an additional penalty, which balances two aims: keeping penalties proportionate when fouls are unintended, while reducing their value as strategic tools.
The only objection I have thought of is that we can imagine a billionaire using this cynically to act irresponsibly and pay off the victims. But that objection assumes that compensation always involves only money. Perhaps “consequence” would be a more appropriate term. If non-monetary penalties are included among the possible consequences, even very wealthy persons could be deterred from abusing this framework (like the extra penalty for intentional fouls).
Phrasing it as "egoistic deontology" is very clever/sneaky, but of course that's not an accurate description of the view - what's going on is the exact opposite of egoism! When I refuse to push someone in front of the train, my aim is not "not to get my hands dirty" (which would indeed be egoistic), but rather I recognise that I owe it to that person not to do it. The fact that I don't get my hands dirty is a *byproduct* of my action, not the *motivation* for it - those are two completely different things.
(Btw, being positively delighted about someone killing another person would show a highly defective moral attitude on any view we pick - even the utilitarian should agree. Most utilitarians would grant that you should probably not be super happy while torturing someone, even if you knew it would maximise overall wellbeing.)