21 Comments

Straw man versions of both utilitarianism and deontology are easy to criticize. A reasonable position will have elements of both.

Even our host here admits that social systems are limited by human psychology, so that at the very least we benefit from a system of laws consisting of reasonably clear and simple rules, and perhaps also a rule-based shared moral system in addition to that. And reasonable deontologists won’t deny the need for exceptions to general rules. Bernard Gert discusses this.

The question boils down to: how do we know what rules to follow, and when to make exceptions? The simple insight I have never seen discussed in this context is that victims of exceptions to rules deserve compensation. So in the extreme thought experiment, a quasi-deontologist can steal a dime to save the planet, but also take responsibility by accepting the legal and social penalty for breaking the rule. This resembles the use of a strategic foul in sports, e.g. fouling someone to stop the clock in basketball.

Extreme deontologists will reject this, but it seems more workable than either extreme. It allows the agent to respect what deontologists seek without leaving consequences completely out of the picture. The actual victims can then decide whether they think they deserve compensation, and grateful bystanders or beneficiaries of the agent's violation can help the agent make compensation, if needed. A fair arbitrator can decide what maximum compensation is due, and the victim and violator could negotiate over the form it will take. If everyone except the victims agrees that "utility was enhanced," they are able to help the agent compensate the victims. If the actual value of the violation is low or controversial, this is less likely to happen. Hence there is a social process of learning that helps society adapt. If the result appears too unjust, laws can be amended or moral attitudes modified in response. This gives more of an explanation of how to deal with error.

The strategic foul is an interesting analogy, or perhaps just a very concrete example of this sort of hybrid between deontology and utilitarianism. Sports can't proceed without rules. Fouls can sometimes be used for strategic advantage. In some cases, obvious intentional fouls receive an additional penalty, which keeps penalties proportionate when fouls are unintended while reducing their value as a strategic tool.

The only objection I have thought of is that we can imagine a billionaire using this cynically to act irresponsibly and pay off the victims. But that objection assumes that compensation always involves only money. Perhaps "consequence" would be more appropriate. If non-monetary penalties get included among possible consequences, even very wealthy persons could be disincentivized from abusing this framework (like the extra penalty for intentional fouls).

author

Cf. 'The Abusability Objection' at utilitarianism.net/objections-to-utilitarianism/abusability/

"Consider a “ticking time bomb” scenario, where one supposedly can only prevent a nuclear detonation by illegally torturing a suspect. If millions of lives are on the line, the argument goes, we should accept that torture could be justified. But given the risk of abuse, we might also want anyone who commits torture to suffer strict legal sanctions. If millions of lives are really on the line, the agent should be willing to go to jail. If someone wants to torture others, but isn’t willing to go to jail for it, this raises serious questions about their moral integrity—and the likely consequences of letting them run loose. Accordingly, there’s no inconsistency in utilitarians holding both that (i) violating human rights could be justified in the most extreme circumstances, and yet (ii) anyone who violates human rights should be strictly held to account."

This isn't what's in dispute between consequentialists and deontologists.

Jul 31, 2023 · Liked by Richard Y Chappell

There is no inconsistency, but I rarely see a utilitarian argument made in a way that incorporates this sort of accountability. Guess I need to keep reading! Thanks for the link.

Jul 31, 2023·edited Jul 31, 2023

Phrasing it as "egoistic deontology" is very clever/sneaky, but of course that's not an accurate description of the view - what's going on is the exact opposite of egoism! When I refuse to push someone in front of the train, my aim is not "not to get my hands dirty" (which would indeed be egotistical); rather, I recognise that I owe it to that person not to do it. The fact that I don't get my hands dirty is a *byproduct* of my action, but not the *motivation* for it - those are two completely different things.

(Btw, being positively delighted about someone killing another person would show a highly defective moral attitude, whatever view we pick - even the utilitarian should agree with this. Even most utilitarians would agree that you should probably not be super happy while torturing someone, even if you knew that it will maximise overall wellbeing)

author

The decent version of deontology is one on which *nobody* (agent or bystanders alike) should want the agent to perform a rights-violating action. If rights violations should only bother the agent themselves, and everyone else should celebrate (whenever the violation is optimific), then I think that constitutes an objectionably egoistic -- self-centered -- form of deontology, and not one that really takes rights seriously as having immense normative authority. It treats human rights as akin to special obligations or promises: of special concern to the agent themselves, perhaps, but of no great concern to others. That's a possible view, but it strikes me as far from the most appealing form of deontology.

Jul 31, 2023·edited Jul 31, 2023

I disagree that it robs rights of their normative authority. The deontological view of morality is that morality limits, for each agent individually, what they are allowed to do in pursuit of their goals... It limits, for me, what I am allowed to do. And it limits, for Dr. Chappell, what he is allowed to do. And so on. I don't see how any of this makes rights less authoritative - it's still true, for any conceivable rational agent x, that x ought not do y (where y is any arbitrary rights violation). Seems pretty authoritative to me!

By the way, in describing this as "egoistic" and "self-centred" it seems like you are making a very strong *psychological* claim about deontologists - do you really think that e.g. many neo-Kantian philosophers act out of self-interest and not out of a genuine sense of altruistically motivated duty? Obviously you think they are *mistaken* in their normative judgements, but by describing them as egotistical it seems like you are saying something much stronger.

author
Jul 31, 2023·edited Jul 31, 2023

I'm describing the *view*, not the psychology of its *adherents*. But I guess I do think there's something morally objectionable about being primarily guided by agent-relative reasons, neglecting what strikes me as the more legitimately "moral" (impartial / agent-neutral) point of view. (But I wouldn't use the term "pretty bad" to describe someone merely for being morally flawed in this way.)

> "I don't see how any of this makes rights less authoritative"

That might be because you didn't include the distinguishing features of (what I'm calling) egoistic deontology. After all, agent-neutral deontology also limits, for each agent individually, what they are allowed to do in pursuit of their goals. And it further specifies that *we all have decisive moral reason to want people to respect these moral limits, in each instance*. The distinguishing feature of egoistic deontology, by contrast, is that it adds that *everyone else might reasonably hope that agents violate their deontic constraints* (whenever it would maximize welfare for them to do so). It is THIS further claim that I take to undermine the normative authority of deontic constraints.

"It's really important that I do X, but nobody else has any reason to hope/want that I do X" seems incoherent. In this way, egoistic deontology seems incompatible with regarding deontic constraints as important. Part of normative authority, I take it, is not just determining what an agent ought to do, but being such that the rest of us *should actually care* that it be done.

Do you really not see anything strange about a deontologist bystander quietly chanting "Push! Push! Push!" under their breath as they observe the trolley footbridge scenario unfold? (I don't think this is the actual view of most deontologists, neo-Kantian or otherwise. I think most would accept agent-neutral deontology, once the distinction is brought to their attention.)

Jul 31, 2023·edited Jul 31, 2023

I don't see how thinking that the world would be better if the fat man is pushed would rob rights of their authority either - *it would still be true, for the person who pushed the fat man, that he ought not have done it*. Whether I think the world contains more value either way doesn't affect that.

Now does "Tim made the world better by pushing the fat man, but he ought not have done it" sound strange? Maybe a little bit. But that's mainly because usually when we make the world better we also have good deontic reason to do so. But not always, in fact we often hear people in e.g. crime documentaries saying something like "The world is better without him, but it was still wrong to kill him".

"Do you really not see anything strange about a deontologist bystander quietly chanting "Push! Push! Push!" under their breath as they observe the trolley footbridge scenario unfold?"

Sorry I think it might be that you didn't read the edit to my first comment, I will quote it here:

"(Btw, being positively delighted about someone killing another person would show a highly defective moral attitude, whatever view we pick - even the utilitarian should agree with this. Even most utilitarians would agree that you should probably not be super happy while torturing someone, even if you knew that it will maximise overall wellbeing)"

author

Who said anything about evaluative beliefs? I'm talking about *preferences* (and preferability). The agent-neutral deontologist can agree that pushing the fat man increases welfare value; they just deny that this is what *matters*. It's more important, for consistent deontologists, that the agent not act so as to violate the one's rights. And so they do not *want* the agent to act so as to violate the one's rights.

The weird thing about the egoistic deontologist is that they lack these distinctively "deontological" preferences. Instead, they share the utilitarian's preferences: they want the fat guy to be pushed off the bridge! Not in a gleeful, "gee it makes me so happy to see people go soaring through the air" kind of way, of course. But as an all-things-considered preference: they would be *more disappointed* if the agent respected the one's rights, and *more relieved* if they kill the one as a means. Those are weird attitudes for a putative deontologist to have! But if you agree with me that they're the *right* attitudes, then I feel like you've gone a long way towards agreeing that consequentialism is really the right view after all. You've at least accepted *consequentialism for bystanders*. That seems like a surprising thing for deontologists to grant!

It seems like, for example, you should probably want fewer people to believe deontology. You want them to act like utilitarians instead. They'd be "wrong" to do so. But you don't want other people to act rightly, when their doing so is suboptimal. So we should all join together to try to promote a utilitarian moral code in society. I'm on board with that if you are!

Jul 31, 2023·edited Jul 31, 2023

"But if you agree with me that they're the *right* attitudes, then I feel like you've gone a long way towards agreeing that consequentialism is really the right view after all. You've at least accepted *consequentialism for bystanders*. That seems like a surprising thing for deontologists to grant!"

For the record I don't think I ever did agree with that; as far as I can see I haven't really taken a stance on which of the views you mentioned is ultimately correct. But if I had to pick one it might indeed be what you misleadingly call egoistic deontology, and several other deontologists have defended something like it (agent-relative deontology is certainly more popular in the moral literature than the agent-neutral variety).

"So we should all join together to try to promote a utilitarian moral code in society"

That of course doesn't follow even if I were to believe it would be good for there to be more utilitarians, because I believe pretty strongly that deontology is correct and I obviously think there are pretty strong deontic constraints on lying or intentionally misleading.

Aug 1, 2023 · Liked by Richard Y Chappell

Does your owing it to that person that you don't do it really matter more than the deaths of five people?

It seems more like it just matters more to you (it's in this sense that it seems more egotistical), as then you don't have to break your supposed obligation not to be the one who pushes them - but it would be better if the wind did just push them down (or maybe you don't agree with that?).

I then don’t think Richard’s point is that deontologists are being selfish (I assume he agrees you can reasonably and virtuously be a deontologist) but *what i’ve highlighted above* seems more egoistical (on a theoretical level) then how morality should be (ofc your intuitions might differ).


Glad to see you're back! I agree that Bystander Preference Permissivism has huge problems. Unrelated: you've described great sympathy for the view that morality is about what really matters. But that seems to imply that there aren't special obligations either to present generations or family members--after all, your family members don't matter more than other people's family members.

author

"Mattering" may be biasing in inviting an agent-neutral interpretation. Really my view is that morality is about *what we should care about*, which leaves room for agent-relative concerns like special obligations to family members. It doesn't seem crazy to think that we should especially care about our families, after all. But I do think there are limits to the normative authority of agent-relative obligations. If someone prioritizes the impartial good over their special obligations, it's not clear that they're really making a serious "mistake". Maybe their special obligator could reasonably complain or resent them. But the rest of us probably shouldn't be too bothered, and maybe should even prefer that they do more impartial good.


But here you are suggesting that it's not just about what you should hope but what perfectly moral third parties should hope. But they should want you to violate your special obligations--after all, they have no special obligations to your family member. But it seems weird to think both that you should save a loved one over a stranger and also that God should be sitting in heaven chanting "save the strangers," over and over again.

author

Yeah, interesting. That may partly be because we imagine that God wouldn't *order* you to do something unless you really have to. But maybe it seems strange enough to think you should prioritize your loved one even while God just (quietly) *wishes* you would do otherwise. Perhaps we think that ideal agents should respect our moral reasons. So although God would generally prefer that the strangers be saved (via natural causes, say), he does not prefer that *you* (wrongly) save the strangers.

Aug 2, 2023·edited Aug 2, 2023 · Liked by Richard Y Chappell

Here's a plausible principle: if some ideal agent wants you to take action A rather than B, they wouldn't want you to try to do A but accidentally do B. But this is incompatible with special obligations. In addition, if God's desires should respect our moral reasons, then, because special obligations are collectively self-defeating, God will sometimes want people to take actions which leave everyone worse off.

In addition, we can create a paradox of special obligations similar to your paradox of deontology. Consider three states of affairs.

w1) you save your loved one, but this prevents two other people from saving theirs.

w2) you save your loved one but a random unanticipated side effect is that two other people can't save theirs.

w3) the other two each save their loved one.

Here ">" represents preferability from the standpoint of a third party.

w3>w2≥w1. Therefore, w1<w3, but w3 is just w1 where you don't save your loved one and the other two do instead.

Ultimately, I think that deontic constraints rise or fall together.


author

Yeah, that's a puzzle -- I'll have to think more on it!

On collective self-defeat, I think Parfit was right that special obligations need a carve-out for those situations. You should help the strangers if others would then better help your own loved ones.

Comment deleted
author

On my understanding of welfarism, it's the view that (i) welfare is the only good, and (ii) welfare matters *because* sentient beings (welfare-bearers) matter.

There are two different senses of "mattering" in play here. There's the question of what objects or entities "matter" in the sense of being the proper *focus* of our moral concern. The answer: sentient beings are the *things* that ultimately matter.

Then there's the question of what *features* or *changes* we should want to see in the world: what is *good*, or worth promoting. This is the sense in which welfare "matters": it's good, or worth promoting. We might add: promoting welfare is desirable precisely because that's what would be good *for* the important entities: the sentient beings (or welfare-bearers) that we should ultimately care about. We promote welfare for the sake of the welfare-bearers.

(Bringing good new lives into existence may be an exception; that's a tricky case. It does turn out to be good for the eventual person. But it's not clear that the initial act can coherently be done for the sake of the resulting individual who doesn't yet exist, and wouldn't exist if one chose otherwise. Maybe the moral reason in that case is more impersonal.)
