8 Comments

I strongly agree with part of this, and I strongly disagree with part. I agree with your criticism of deontology for respecting norms that are not really important. It doesn't really matter--neither to the victim nor the beneficiary--whether you redirect a threat or cause a new one to prevent some other one. I agree that our intuitions about deontology reflect shallow intuitions about lots of cases, rather than deep intuitions about what really matters.

Now on to the parts I disagree with. For one, I disagree with the claim that common-sense morality is pretty good. I think common-sense morality is pretty good in constraint cases, but is very bad in other cases. The failure of common-sense morality to endorse a general norm of beneficence has plausibly caused millions of deaths--imagine if everyone on earth were an effective altruist, or even if just 10% of people gave 10% of their incomes.

Second, I disagree that the beneficent amoralist is a challenge. Imagine that utilitarianism is the correct morality. We can imagine a nice-seeming amoralist deontologist, but this doesn't challenge the utilitarian. In the dialogue, it seems the moralist's reply, if deontology is true, should be: "you shouldn't violate rights, because it's really wrong, even if it's for the greater good."

Author · Feb 14, 2023 (edited)

I guess I think commonsense morality does endorse beneficence? But maybe doesn't emphasize it enough, or something. It's a bit unclear. Certainly most *people* should be more beneficent than they are, so I don't think we really disagree on that point.

On the challenge of amoralism: I'm not sure exactly what you have in mind by "a nice-seeming amoralist deontologist", but as I'm now imagining him, he doesn't seem to care about what's really important. Like, he doesn't believe in morality, he just has a brute preference that people die as collateral damage rather than as a means to saving lives. That's weird! Maybe he's equally baffled by why I just want more lives to be saved. But this isn't a "challenge" to me, because it's self-evident that we have reason to want more lives to be saved. My whole point is that it isn't self-evident that there's any reason to care about deontological distinctions. If the deontologist thinks their distinctions *are* self-evidently worth caring about, then they could reply the same way. But my guess is that they don't really believe that, because that would be a weird thing to believe.

Note that it's not enough for them to say "because it's really wrong", because it isn't self-evident that "wrongness", in their sense, is worth caring about. Maybe being "really wrong" is just an internal matter of being prohibited by the deontological system. The question is whether there's external reason to care about that.

Edited to add: imagine someone in an honor society insisting that they had to burn the widow alive because Honor demanded it. Maybe true! I don't know what Honor is, maybe it demands weird stuff. But screw Honor, in that case. We clearly shouldn't listen to that jerk. Maybe Wrongness is similar.

Feb 17, 2023 (edited) · Liked by Richard Y Chappell

I agree that commonsense morality generally endorses beneficence, and suspect that the disagreement hinges on whether one strongly distinguishes normative statements from statements about one's own psychological dispositions to act. It seems trivially easy to think of many ways in which I could be morally better tomorrow but probably won't be, because although I probably value the aggregate well-being of the world a lot more than the average person does, I value other things as well, and will continue to sometimes sacrifice some amount of moral goodness for them.

I don't think I've ever seen someone respond to finding out that someone else donates 10% with a "so what?" It's typically along the lines of: "That's amazing! I don't think I'd ever be able to do something like that!" We don't have anything like BB's world, not because commonsense morality rejects strong beneficence, but because the overwhelming majority cares much more about things other than morality (not only greed, but also social conformity, laziness, etc.).


//I guess I think commonsense morality does endorse beneficence?//

But it doesn't see donating, say, ten percent as the moral baseline. If it did, lots of lives would be saved.

//My whole point is that it isn't self-evident that there's any reason to care about deontological distinctions. If the deontologist thinks their distinctions *are* self-evidently worth caring about, then they could reply the same way. But my guess is that they don't really believe that, because that would be a weird thing to believe.//

Maybe they don't think that it's self-evidently worth caring about on its face, but it is self-evident that you shouldn't harvest people's organs, and they think deontology is the only way to account for that. Wrongness seems by definition worth caring about. In the honor society, it seems that it wouldn't be genuinely wrong to do dishonorable things--but if it were, then you shouldn't do dishonorable things. Same with the deontological cases.

Feb 16, 2023 · Liked by Richard Y Chappell

I had a shower thought, Richard, probably inspired by reading your recent posts here. I'll just leave it here while it's on my mind; it's not well thought out enough to be worth pursuing in any real way.

Nonsecular ethics have a lot of very substantive norms and prohibitions that adherents consider moral ones, many of which secular people, let alone ethicists, strongly reject, like prohibitions on homosexuality, wearing extravagant clothes, showing the bottom of your shoe in public, and so on. There are strong utilitarian reasons to think adherence to these kinds of norms is instrumentally valuable within societies in which they are widely adopted, with laws and punishments in place to enforce them and with violations leading to violence and exile. If we were to give the best explanation of their instrumental value, we would cite highly contingent, changeable features of cultural practices and of people's beliefs, reasonable and unreasonable, and explain why some kind of conservative maintenance of these practices in those cultures is better for overall well-being than disrespecting them.

Now consider the case of deontological prohibitions and distinctions and vocabulary and so forth. You've been conceding a lot, for a utilitarian, about their instrumental value. Do you think the best explanation for their instrumental value is similarly some set of highly contingent, changeable features of cultural practices and beliefs, reasonable and unreasonable? My shower thought is this: deontologists will never concede this. They'll think that the reason deontology is of instrumental value for a consequentialist has nothing to do with the mere fact that people have quasi-religious beliefs about their weight, and that societies have built them into their cultural milieu like prohibitions on women in combat or marrying someone from another religion. The best explanation is more like some kind of Leibnizian pre-established harmony principle. Adherence to deontological constraints is constitutive of what it takes for things to have value; it's the people who adhere to them whose well-being is worthy of promoting. The reason deontology is instrumentally valuable is that it is part of the explanation of what a good consequence is. (Something like that; it might be formulated differently. Maybe it's just that the best explanation of its instrumental value lies precisely in its being true, and that the world is structured so that consequences line up with deontic moral truths.)

I don't endorse this; I don't think I'm a deontologist. But I keep seeing this over and over in, for instance, advocates of retributive punishment. They start by just assuming that whatever is deontically just will have the best overall consequences, and when that's not true, they consider the deontologically just outcome to be the best overall consequence.

Author

I'd be inclined towards an intermediate explanation of the instrumental value of constraints. It's not highly contingent or easily changeable: I'd expect similar constraints to be genuinely good across widely diverse cultures and societies. But it's also not *constitutive* of the good: we can certainly imagine exceptions. Rather, they're *robustly* (but not *necessarily*) good responses to human nature, cognitive biases and limitations, etc.


I got into this with Bentham, but we can morally separate acceptable risks and unforeseen side effects from unacceptable risks and treating people as means.

Say 1% of prisoners are innocent people wrongly imprisoned, but unknowingly so. That’s morally different from knowingly sending innocent people to prison to increase aggregate welfare (and assume welfare is the same in both cases). The former case imposes a reasonably acceptable risk, where imprisoning innocents is a side effect of maintaining a prison system.

The risk is acceptable given the high burden of proof to imprison someone (90-95% certainty).

Yet if the burden of proof is reduced and the risk is increased, we might be imposing too high a risk on innocent parties, to the point that it can be reasonably rejected and is therefore immoral to impose. You aren’t respecting people’s freedom in that case, which is the real heart of morality.

If you want your death to serve a useful purpose, that’s your personal choice. However, morality doesn’t demand that from people, as discussed below:

https://open.substack.com/pub/neonomos/p/what-is-morality?r=1pded0&utm_medium=ios&utm_campaign=post


“You can’t sensibly ask why we ought to do what we really ought to do.”

That's a confusing way to put it. I would have said you think there are reasons why we ought to do what we ought to do. And that is what you would mention if I asked you why we ought to do what we really ought to do, isn’t it? Isn’t it about maximizing happiness in some sense? And you are always giving reasons and arguments for that conclusion. Have I missed some subtle point?

Is the problem knowing what we ought to do, or knowing why we should feel motivated by the reasons to act on it? The trolley problem and various other thought experiments I’ve seen you use usually seem to proceed as if knowing what we ought to do is not too difficult.
