Straw man versions of both utilitarianism and deontology are easy to criticize. A reasonable position will have elements of both.
Even our host here admits that social systems are limited by human psychology, so that at the very least we benefit from a system of laws consisting of reasonably clear and simple rules, and perhaps also a shared, rule-based moral system on top of that. And reasonable deontologists won’t deny the need for exceptions to general rules; Bernard Gert discusses this.
The question boils down to: how do we know which rules to follow, and when to make exceptions? The simple insight I have never seen discussed in this context is that victims of exceptions to rules deserve compensation. So in the extreme thought experiment, a quasi-deontologist can steal a dime to save the planet, but must also take responsibility by accepting the legal and social penalty for breaking the rule. This resembles the strategic foul in sports, e.g. fouling someone to stop the clock in basketball.
Extreme deontologists will reject this, but it seems more workable than either extreme. It allows the agent to respect what deontologists seek without leaving consequences out of the picture entirely. The actual victims can then decide whether they believe they deserve compensation, and grateful bystanders or beneficiaries of the agent’s violation can help the agent provide it, if needed. A fair arbitrator can decide the maximum compensation due, and the victim and violator can negotiate over the form it will take. If everyone except the victims agrees that “utility was enhanced,” they can help the agent compensate the victims; if the actual value of the violation is low or controversial, this is less likely to happen. Hence there is a social learning process that helps society adapt: if the result appears too unjust, laws can be amended or moral attitudes modified in response. This gives a fuller account of how to deal with error.
The strategic foul is an interesting analogy, or perhaps just a very concrete example of this sort of hybrid between deontology and utilitarianism. Sports can’t proceed without rules, yet fouls can sometimes be used for strategic advantage. In some cases, obviously intentional fouls receive an additional penalty, which balances keeping penalties proportionate when fouls are unintentional against reducing their value as a strategic tool.
The only objection I have thought of is that we can imagine a billionaire using this cynically: acting irresponsibly and simply paying off the victims. But that objection assumes that compensation always involves only money. Perhaps “consequence” would be a more appropriate word. If non-monetary penalties are included among the possible consequences, even very wealthy persons could be deterred from abusing this framework (like the extra penalty for intentional fouls).
Cf. 'The Abusability Objection' at utilitarianism.net/objections-to-utilitarianism/abusability/
"Consider a “ticking time bomb” scenario, where one supposedly can only prevent a nuclear detonation by illegally torturing a suspect. If millions of lives are on the line, the argument goes, we should accept that torture could be justified. But given the risk of abuse, we might also want anyone who commits torture to suffer strict legal sanctions. If millions of lives are really on the line, the agent should be willing to go to jail. If someone wants to torture others, but isn’t willing to go to jail for it, this raises serious questions about their moral integrity—and the likely consequences of letting them run loose. Accordingly, there’s no inconsistency in utilitarians holding both that (i) violating human rights could be justified in the most extreme circumstances, and yet (ii) anyone who violates human rights should be strictly held to account."
This isn't what's in dispute between consequentialists and deontologists.
There is no inconsistency, but I rarely see a utilitarian argument made in a way that incorporates this sort of accountability. Guess I need to keep reading! Thanks for the link.