29 Comments

I agree that this highlights a problem with deontology. It seems bizarre to think that, while it would be great if you pushed the person off the bridge in the footbridge case by accident or while sleepwalking, you nevertheless shouldn't do it deliberately. It shouldn't be bad to let perfectly moral people choose what to do.

I don't, however, agree with the idea that it makes agency strange. Deontologists would presumably agree that it would be good (axiologically) if you pushed the person in the footbridge case. Thus, their assessment is quite universal--they think it's good when good things happen. They just think that it's sometimes wrong to promote the good.


"I agree that this highlights a problem with deontology. It seems bizarre to think that, while it would be great if you pushed the person off the bridge in footbridge by accident or while sleepwalking, nevertheless you shouldn't do it."

The idea that *as a rule*, you shouldn't kill people makes sense. The idea that moral rules don't apply at all to natural or unintentional events also makes sense, and explains the "inconsistency". Since consequentialism and deontology are different things, they can have different concepts of good/right. If you do something you should not do, and it has desirable consequences, then it is bad from one POV and good from the other. But deontology wins when it comes to defining what you should do, because getting good consequences out of bad actions is sheer luck.

(I'm assuming a form of deontology where the rules generate desirable consequences on average).


Deontologist here. Studied philosophy at Calvin University.

I agree that deontology is foundationally reliant upon the idea of human life being sacred. I'm ok with that as well; ethics is generally downstream of bigger questions.

But in general: these types of questions are way overdetermined. Practically: most ethical theories will give you the same answer for your daily-life questions (don't kill your neighbor, etc.).


The edge cases pop up regularly though. Like abortion.

Abortion isn't an everyday dilemma, but every day it's a dilemma for many.


"Deontological verdicts involve further seeing agency as transformative—changing what is desirable—in a way that seems entirely unmotivated to me."

Indeed. There's an explanation in this fascinating paper I just read by Henne, Niemi, Pinillos, De Brigard, and Knobe:

https://pubmed.ncbi.nlm.nih.gov/31082750/

Basically, they think that we regard doing harm as worse than allowing harm because we regard doing as "more causal" than allowing is. But why? Well, their favoured explanation of why we think it's "more causal" is that we ignore counterfactuals in which no harm occurs when thinking about allowing, but take some such counterfactuals into account when thinking about doings. So yeah -- doesn't look good for the doing/allowing distinction. Indeed, even if the hypothesis that they reject -- the "force transfer" hypothesis -- is correct, that still doesn't look good for deontologists.


Fascinating. I'm not sure to what extent I'm tainted by deontology but this post is helpful.

I believe consequentialism is CORRECT but my intuitions, which are clearly emotionally driven (= highly evolutionarily derived) often go against it and it's usually in the agency area.

So, I don't think human life is actually sacred, and while I see a reason for extremely strong prohibitions on killing (much atrocity has resulted from the assumption that killing, including mass killing, was justified by a greater good), I have no moral objections to suicide at all, and that includes compassionately helping people get past their primitive/visceral fear of death in cases when their life is unbearable and suicide is a better option.

But the idea of "agentic pollution" feels intuitively correct. If doing "more good" would involve causing me personally great distress (distress of an emotional nature and thus not amenable to rational argument; I keep reaching for "visceral" but I really mean "limbic"), then this would affect my intuitions. There's something to do with responsibility and whether a thing FEELS "on me" or not, regardless of whether it really is, something about whether I'm a part of the "original situation" or not. So, as the single Martian I absolutely should sacrifice myself to save the others, but as an additional Earthling -- it would be very hard to press that button and face a lifetime of nightmares afterwards.

We're really in the "Sophie's Choice" territory here!!!


I stumbled on this article by accident and really liked it. “Preserve life above all other considerations” never made sense to me. And I’m pretty sure that, if offered a choice between living 40 more years in a maximum-security prison and 20 more years in freedom, most people would choose the latter.


I don't really hold people as sacred, so much as I hold my own qualia sacred. Cogito ergo sum is still the best jumping-off point I know about, even after all these years.

But other people are, by most intuitive metrics, very much like me, and so I strongly suspect they have their own qualia as well. But I can't be 100% sure about this, so that already disqualifies me from being a consequentialist - if I'm only 99% sure someone is experiencing qualia like mine, then I can only give them at most 0.99 * my own sacred value.

It's intuitively obvious to me that other beings with qualia very different from my own should be viewed as very much lesser for it, though. This would be as true for a galactic supercomputer as it would be for a honeybee. Alas, I arrived at these conclusions too late (or maybe too early) for them to affect my own decision-making much; instead of trying to tessellate the universe with copies of me, I just plan on having kids and having fun.


I’ve always found resistance to the trolley problem to be a matter of uncertainty.

In reality, you cannot generally know for certain that five people will die from the trolley; some other action or circumstance may yet save them. But you do know that your course of action involves deliberately killing someone.

It is not that life is sacred; it’s that your reasoning in a moment like that is uncertain, and the certain death of one is likely worse than the possible deaths of five.

We should very much want a social norm of preferring possible to definite harm. If we were all perfect calculators, perhaps a different norm would apply, but to protect against humans that reason poorly (all of them) we prefer inaction.


Yeah, the thought experiments are very artificial, but you can't really distinguish the rival theories if you aren't willing to play along with the intended stipulations.


I think any theory that ignores the social equilibrium it creates isn’t worth much.


Not sure what you mean. Any consequentialist theory automatically takes into account social consequences. The methodological issue here is just whether you're willing to consider hypotheticals that diverge from real-life cases. (Your answer to this question doesn't have any bearing on what moral theory is correct, except that you won't be in a position to form justified beliefs about the latter if you aren't willing to consider such hypotheticals.) For more on the distinction, see: https://rychappell.substack.com/p/ethical-theory-and-practice

Though something I suspect (and maybe this is what you're getting at) is that many people are drawn to deontology because they mistakenly conflate the question of what real-life norms to follow with the theoretical question of what justifies them. I discuss this more here: https://rychappell.substack.com/p/deontic-fictionalism


There are many neuropsychological experiments using more sophisticated and realistic scenarios of moral choice. See Sapolsky (Behave), Zaki (The War for Kindness), Preston (The Altruistic Urge), and many other empirical studies. Three-month-old babies care for others (see Carolyn Zahn-Waxler and others). Also see Atran (In Gods We Trust), Wrangham (The Goodness Paradox), and Boehm (Moral Origins)... The consequentialism-deontology dichotomy can be useful but misleading.


You refreshingly admit what I typically suspect about deontologists: they're not actually addressing the thought experiments (even the ones they themselves propose), but instead addressing variations where they smuggle in all sorts of real-world harms that would likely attach to the better-consequence choice that they reject, and/or add epistemic uncertainty to the worse-consequence choice that they accept.

Except for the rare case like Kant, who followed the logic rigorously and thereby accepted insane moral judgements like not lying to a murderous person about the location of their intended victim.


I’m not a deontologist, but thank you.

Consequentially, you can’t get around the fact that all kinds of people have to morally reason, every day, in conditions where we can’t calculate expected value, even if the individual has the intelligence, time, and inclination to do so.

What is the simple moral rule that leads to the best outcomes when executed by the intelligent, idiots, and everyone in between? That is the only question with any consequence.

Thought experiments to ground any particular rule in “logic” serve no purpose; the optimal rule is justified by its outcomes, not by some abstract reasoning.


"you can’t get around the fact that all kinds of people have to morally reason, every day, in conditions where we’re can’t calculate expected value"

Yes, and that's why pretty much all consequentialists acknowledge some concept like "deontic fiction". Rules like negative rights are often very useful heuristics, but what defines their borders -- and what makes them either valuable or dubious or crazy -- is ultimately their relation to expected positive and negative consequences.

"when executed by the intelligent, idiots, and everyone in between"

I have no idea why you would think that a good heuristic would need to be equally reasonable for everyone. The kinds of heuristics useful for playing better chess, flirting with strangers at a bar, writing a decent poem, or running a business vary considerably with individuals' aptitudes and experience. Why wouldn't deontic fictional heuristics work in a similar way? For a severely mentally disabled person prone to anger, reinforcing a heuristic like "NEVER punch anyone" might be a great idea. Not so great a heuristic for someone with a stronger mind and sounder temper who's in a position where they might need to physically defend themselves and others.


We don’t get one set of social norms for the high IQ and another for the average. Or one for good circumstances and one for poor.

Too many consequentialists see themselves as smarter than they are, if they believe they can determine the expected value of pushing someone off a trolley bridge. I’ve yet to meet a person I would trust to make that calculation.


Sorry, but that makes no sense. A doctor's reasonable heuristics about what sort of intervention they should or shouldn't administer to an injured stranger on the street are going to be very different from those that are reasonable for me, even with both of us having the same underlying consequentialist goal. I specifically mentioned "aptitudes and experience", which will vary widely with the type of situation, not the blanket elitism you've somehow read into it.

Anyhow, there's no basis for accusing specifically consequentialists of hubris, because deontologists equally make a choice about trolley bridges. They just pretend that their choice to have the people on the tracks die would be a non-choice. I may not trust anybody perfectly, but I trust that latter sort of person considerably less.


My original comment and follow-ups have been about broad social norms in the context of literally killing a person to save 5 others, which is not a real or generalisable experience.

The trolley problem does not specify expertise or any specific circumstances.

Of course doctors have different heuristics for treatment; however, even doctors are not in a position to determine that an injured stranger on the street should be summarily euthanised and put on ice so their organs may save 5 other people.

The point being that abstract problems like this are more misleading than helpful: they do not lead to moral intuitions that are good for society, nor do they provide a chain of moral reasoning that will be generally helpful to people in real situations. Because of the real and insurmountable uncertainty of real life, no one should be deciding that one person who is not currently a threat to others should die in order that others might live. More errors than benefits will certainly accrue.


"Deontological verdicts involve further seeing agency as transformative—changing what is desirable—in a way that seems entirely unmotivated to me."

If the thing that you are trying to do with morality is assign praise and blame, then the difference between agentive actions and natural occurrences is important, and the difference between intentional actions and accidents is also important. If you are trying to do something else, such as simply obtain desirable outcomes, they might be less important. Different ethical systems have different conceptions of the purpose of ethics, which means they are not entirely rivalrous. Consequentialism is the system that's tuned towards desirability.

"...changing what is desirable..."

No, agency doesn't change what's *desirable*: it changes what's punishable/blameable.

Do you want to level up or level down? Put sleepwalkers, non-human animals and machinery in jail? Or put no one in jail, since all agentive acts are just natural occurrences?


I have a perfectly coherent account of praise- and blame-worthiness that fits with a utilitarian account of our reasons for action (including marking the distinction between intentional actions vs accidents, etc.). Nothing about this motivates *deontology*.


My opinion of deontology is very dismissive.

I believe deontologists are just optimizing for having beliefs that simplify their lives and give them good vibes.


Oh, but you're too kind.

Especially among activists, deontologists often seem to be optimizing for virtue signaling and crapping all over people who are honest consequentialists from the outset... even though they themselves switch right into consequentialist mode when tough challenges to their positions are raised.


Some people are governed by reason, and some are governed by other things (e.g. sacredness).


I'll give you my take on "dignity".

Dignity = elevation (often in kind) of an object above another object along some particular dimension of philosophical significance, or in some philosophically significant respect.

Uses of "dignity" in moral philosophy of course refer to *morally* significant elevations. Typically, though not only, in terms of *moral standing*.

Hence, human dignity, or specifically human dignity, can be defined in terms of the elevation of human beings (=morally rational beings, beings capable of morality) with respect to non-human beings in general, in terms of moral standing.

Now, this special status of human beings has to be described precisely, to avoid confusion with the thought that human beings are "the only beings with moral standing". The claim, I think, should instead be read as saying that human (morally rational) beings have a *special type of moral standing*. Which, again, needs to be described precisely.

Hence, we can also speak of the dignity of sentient beings in general, as there seems to be a specific, morally significant, sense in which sentient beings are raised above non-sentient beings in general *in terms of moral standing*. Namely, there's a specific type of moral standing that sentient beings in general have (including human beings, of course, now regarded in their capacity for happiness as such).

"Dignity" is also used in relation to realised states of the more basic dignities, e.g. the moral stature of a human being (how good of a human being they are, as opposed to the dignity they have by merely being capable of morality) or decent living conditions for a sentient being.

I hope this helps!!


Perhaps there are more than just the two options, consequential and sacred. Maybe the sacred is ordinary (Eliade) yet consequential. A person's life being sacred does not, for me, imply opposition to euthanasia if the consequences are living in pain and suffering.


Sure, I don't mean to imply that every deontologist accepts *every* example of "sacredness" discussed in the post.
