21 Comments
Dec 28, 2022 · edited Dec 28, 2022 · Liked by Richard Y Chappell

"[T]he idea that utilitarianism is “counterintuitive” rests on interpreting it as addressing a primitive, indefinable sense of ‘wrongness’."

That might be one thing that makes utilitarianism look unintuitive, but there are lots of others. For example, its lack of sensitivity to how "close" someone is to you (e.g. your mom) is unintuitive in some ways, especially in particular cases (if not at the level of principles).

Lack of sensitivity to lots of other ethical ideas (see next sentence) also makes it conflict strongly with a lot of intuitions we have in our lives. Such ideas include desert, reciprocity, loyalty, non-betrayal, local egalitarianism, responsibility, honesty, sadistic pleasures, human supremacy, the importance of pre-existing value, authenticity, and integrity. These ideas may sometimes seem unintuitive at the level of principles (debatable), but I think insensitivity to them is often highly unintuitive in real cases.

You can argue that these intuitions are outweighed by other ones, explained away, or sacrificed in the quest for a fully specified and parsimonious theory, but they do exist. And I don't see why a theory that accounted for these things couldn't adopt scalarism or whatever alternative to binary-deontic fundamentalism you suggest.

author

Yes, true, I should have written "consequentialism" there -- as there are certainly specific aspects of the utilitarian conception of the good that are intuitively questionable.

Dec 28, 2022 · edited Dec 28, 2022

Well, things like the doing-allowing distinction and other themes from non-consequentialism (or, perhaps, agent relativity) are also very intuitive in cases. Often it's when you try to generalize them up into principles that they seem unintuitive.

author

Very intuitive when assessing wrongness/permissibility in cases, or also when assessing what you've most reason to want (and to do)? I was thinking just the former. But always interesting to learn if others have different intuitions.

Dec 29, 2022 · edited Dec 29, 2022 · Liked by Richard Y Chappell

I'll have to reflect further on that, but I think I see what you're saying better now. I think some people will object that you are sneaking a concept of agent-neutrality into the concept of "importance", whereas they view importance as potentially agent-relative.

I think such an objection is made in James Dreier's "The Structure of Normative Theories" (reproduced as the remainder of this comment):

"

We want to understand why, according to Sidgwick, the egoist is safe when hugging the “ought” judgment, but unstable when striking out to the “Good” judgment. I think we can find some guidance in the metaphor of objectivity. Thus, an “ought” judgment does not objectify. Saying “I ought to pursue my own happiness” keeps my reasons securely inside me. But saying

“My own happiness is Good” does objectify the value. It seems to place the value of, in this case, the Egoist’s happiness, outside himself and in the happiness. Objectification suggests that the value is public, that it ought to be appreciable by anyone.

But, if this is in fact what Sidgwick was thinking (and I must emphasize that my reconstruction of his thought is highly speculative), then he was eliding two different kinds of objectivity. I used the term “objective” in the first place because I believe the kind I defined is sometimes confused with another kind. A value is objective in my sense, if it outreaches its own existence. But in the sense that Sidgwick would need, “objective” must mean something very like “agent neutral.” It must mean something like, “appreciable to anyone as a reason.”

"

author

Nice passage! I actually mean to leave open that importance may be agent-relative. (You could reasonably judge it more important to save your own child than to save two strangers, for example -- that's a perfectly reasonable-seeming pattern of concern.) But it would seem self-indulgent, IMO, to care more about maintaining "clean hands" than about saving lives. Cf. Nye, Plunkett & Ku in 'Non-Consequentialism Demystified' on how this would seem "monstrously narcissistic": https://www.philosophyetc.net/2015/02/thoughts-on-non-consequentialism.html

Dec 28, 2022 · Liked by Richard Y Chappell

I don't follow this: "Imagine that a killer asteroid is heading straight for Earth. With sufficient effort and ingenuity, humanity could work to deflect it. But no-one bothers. Everybody dies. This is clearly not a great outcome, even if no-one has done anything morally wrong (since no-one has done anything at all). This scenario poses a challenge to the adequacy of traditional morality, with its focus on moral prohibitions, or “thou shalt nots”. "

What is the challenge? Plainly it's impermissible to ignore the child drowning in the pond (/to do nothing while the child drowns in the pond—if you're reifying the idea of "doing nothing"). And plainly it's also impermissible to ignore an asteroid flying towards the planet if you're in a position to stop the collision. (One reason to be sceptical of giving weight to this idea of "doing nothing" is that it's very hard to cash out in a sensible way. Does napping count? Standing still? Ignoring someone? Trying hard to ignore someone? Etc. etc.) Unless I'm missing something, I don't see why this would be a challenge for anyone—deontologists included.

author

I think there's at least a tendency for deontological ethics to give less attention to the "allowing" side of the doing/allowing distinction. So there will be cases where it's clearly worth investing in x-risk prevention, in expectational terms, yet it won't qualify as an "obligation" on most versions of commonsense deontology. Probably the easiest way to establish this is to just ramp up the uncertainty: make it a 10% chance of stopping the collision, instead of 100%. Or 1%, or 0.1%, or... until it's not obligatory.

Alternatively, you could approach it from the epistemic side. Deontology generally doesn't establish obligations to gather information that are as stringent as would be socially optimal. So suppose we don't know yet whether there are any asteroids heading our way. Are we obliged to invest in satellite arrays and early warning systems? Suppose we're not (though it would be positive expected value to gather such info). And then we all die as a result of people not bothering. Again: seems bad!

One can imagine a version of deontology that avoids these problems by explicitly building in an obligation to *positively pursue what's important* (whenever it doesn't violate rights etc.). That would fix it. But what's really doing the work then is the focus on what's important, not just on avoiding wrongdoing. The latter only helps insofar as it entails the former. And, again, real-life deontology tends not to take this superior form, at least in my experience. Compare the mystery of why more non-consequentialists don't embrace beneficentrism: https://rychappell.substack.com/p/beneficentrism


Maybe we have in mind different sorts of deontology (or are just using the term differently). I have in mind the "common-sense deontology" of Foot, Thomson, Kamm, etc. And while I agree they have questions to answer vis-à-vis uncertainty (ones they've spent time addressing), I don't see why their distinguishing between doing and allowing gives rise to those questions.

As for beneficentrism... I would think most commonsense deontologists accept an obligation to help those we can (when it's not too hard for us to do so). They just think it has limits (certainly temporal limits). After all, it's commonsense that we have such obligations (and that they have limits), and the project is to build a theory around that data.


Is something still impermissible if no one cares whether you did it or not? Who is issuing the permission?

Dec 28, 2022 · Liked by Richard Y Chappell

I’ve struggled in the past to say why I find permissibility an unintuitive framing, so thanks for addressing this subject! The asteroid is a good example of a situation where there’s no way to set up permissibility rules that feel intuitive.

Seems to me your argument in “Importance is more important (authoritative)” will miss the point for most people. If I'm understanding right, your argument would be persuasive to a demographic with the intuitions that (1) the reasons to save five are more important than the reasons to avoid wrong, but (2) we should prioritize wrongness-type reasons, even when they’re less important than other reasons. It’s hard for me to imagine people holding the second intuition. Surely most disagreements happen over the first intuition (which reasons are more important)?

author

fwiw, I think many deontologists have neglected the question of what they should most want/hope to happen in trolley cases and the like. So my main hope for progress here is just to get them addressing that question at all. Once it's considered squarely, my hope is that at least some of them will then agree that "the reasons to save five are more important than the reasons to avoid wrong". Of course, others will no doubt continue to disagree. But I think it makes sense to begin by addressing the most persuadable.


Clear-cut trolley cases, or realistic trolley cases?

author

Clear-cut thought experiments, where all else is held equal. In more realistic cases, the instrumental effects of the wrongdoing are likely to give even utilitarians strong reasons to oppose it: https://rychappell.substack.com/p/ethical-theory-and-practice


To what extent do your arguments here depend on some form of metaethical realism/objectivism regarding "importance"/"what matters"/"what is ultimately worth caring about"?

If there is no (knowable) fact of the matter concerning "what is ultimately worth caring about," but only facts such as "what I do care about" and "what I can reasonably expect others to care about," then certain non-consequentialist ideas (e.g. agent-relativity, supererogation) become much more plausible.

In particular, I would regard impermissibility and supererogation, which you here treat with suspicion as signs of moral laxity, as intuitively familiar and potentially useful concepts for mapping the treacherous moral terrain between the regions of what I care about, what I can reasonably expect others to care about, and what others can reasonably expect me to care about.

author

I don't think the realism/anti-realism distinction makes much difference to my view here. (Aside: note that you're sneaking in the normative term "reasonably" -- are you a realist about that one?)

I agree that impermissibility and supererogation are "familiar and potentially useful concepts" -- see 'Deontic Pluralism': https://rychappell.substack.com/p/deontic-pluralism -- I just don't think they should be our *central* moral concern. I'd still hold this view if I were an expressivist rather than a robust realist.


>"I just don't think they should be our *central* moral concern."

Fair enough on the last point. I fully agree that impermissibility and supererogation are better off living within a diverse ecosystem of moral concepts rather than propagated as monocultures.

>"I don't think the realism/anti-realism distinction makes much difference to my view here."

Really? Perhaps not in relation to the critique of deontology, but the following claims, for example, seem to lose much of their force if there are no objective facts about "importance":

- "Ideally, our actions should be guided by what’s truly important."

- "In a conflict between what’s important and anything else (e.g. deontic status), the former clearly wins out."

(Incidentally, I think the way you have written your final section somewhat undermines your claims about the theory-neutrality of "importance" by using an example that assumes utilitarianism gives the correct account of what is important. Shouldn't you at least include the flipped deontologist's version for comparison? ["Perhaps when considering the Trolley Footbridge case, it seems bad to allow five to die, but it seems more important not to kill the one. In a conflict between what’s important and anything else (e.g. saving lives), the former clearly wins out."])

>sneaking in the normative term "reasonably"

I don't think my talking about reasonableness here implies any particular realist commitments. I have expectations about what people care about based on my experience of interacting with them (e.g. trying to persuade them to care about certain things, and having them try to persuade me to care about others). In this context, a "reasonable expectation" is just one that accurately models the psychology and social norms of an imagined community of moral discourse.


"Imagine that a killer asteroid is heading straight for Earth. With sufficient effort and ingenuity, humanity could work to deflect it. But no-one bothers. Everybody dies."

Big-picture stuff like wars and catastrophes typically isn't among the obligations of ordinary citizens... but it is for governments and the like. Consider the aftermath of 9/11, when security agencies were criticised for failing, but ordinary people on the ground were praised to the skies for their help.

author

Yes, though if there are things we can do to reduce x-risks, that's surely worth doing even if not an "obligation" in the ordinary sense.


It's a case of obligation-for-whom. I can't do anything about an asteroid, but NASA might.


1. That's very easy to fix... you just add praise/rewards for supererogatory goodness to punishments for underperformance. Of course, everything works like that already.

2. Needing elements of deontology isn't exclusive of needing elements of consequentialism, etc. Rule consequentialism is an example of a combined approach. Likewise, arguments for consequentialism aren't per se arguments against deontology. The arguments for deontology stand up in their own right. Deontology ensures that the bare minimum actually happens, enhances coordination, and creates clarity about when punishments will descend.

3. People seem to have intuitions to some extent in favour of all of C, D, and V, so it is likely that all one-legged approaches are counterintuitive.
