8 Comments

Dear Richard,

First of all: hi! I've been reading your (excellent) blog for many years. I also think we overlapped at Princeton way back in the day, and may even have met, but there is no reason why you should remember me.

Regarding this post:

1. I agree strongly with the central thesis, that preferability is an important and somewhat neglected concept in ethics -- neglected especially by 'deontologists,' particularly those of Kantian inspiration. As you say, much of the suspicion derives from the fear that talk of preferability is smuggling in a tendentious form of consequentialism. Maybe it's worth noting, though, that many deontologists are also suspicious of the idea that the maximally thin notion of a preference used by many philosophers, decision theorists, etc., has any central role to play in our philosophical psychology. I'm not sure which suspicion is more fundamental. I have two thoughts here about ways of disarming some of the knee-jerk skepticism (some) deontologists feel towards preferability-talk.

First, it might be better to talk about desirability, and, especially, about what it is fitting to *want*. It's less stilted and jargony (to my ear), and so more clearly has a basis in our commonsense ethical evaluations.

Second, in my experience some deontologists can be warmed up to the relevant notion of preferability (or desirability) by being reminded that it is an open theoretical possibility that facts about which preferences over states of affairs would be fitting are at least sometimes explained by facts about other fitting attitudes. So, the fact that I should (/it would be fitting to) prefer not to cut up the one to save the five could be explained by the fact that it is fitting to respect the one as a person with a sacred-magical-dignity-aura (or some such).

2. Relatedly, I think it might strengthen your case here if you were a bit more ecumenical about the relation between preferability and preference (or desirability and desire). It's just overwhelmingly plausible that there is some true and interesting biconditional of the form: A is preferable to B iff it is G to prefer A to B, where G is some good normative or evaluative status. Maybe G is *fitting* -- I know that's your view (or your preferred bit of ideology). But it could be *virtuous* or *rational* or any number of other things (or, indeed, several at once), and the general point you are making will still stand. Or am I missing something? This isn't really meant as a criticism -- it's perfectly fine to use your preferred ideology on your blog! Just a thought about how to pitch the general point to the widest possible audience.
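(Schematically -- my notation, and nothing hangs on it -- the claim is that for any states of affairs A and B and some favored normative status G:

$$A \succ B \;\leftrightarrow\; G(\text{prefer } A \text{ to } B)$$

where '$\succ$' abbreviates 'is preferable to'. Substituting *fitting*, *virtuous*, or *rational* for G just gives different instances of the same biconditional.)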

3. I also agree (I think) with the spirit behind the claim that "'how should bystanders feel about optimific rights violations?' is an extremely deep and puzzling challenge for deontologists to grapple with." But I'm sympathetic to some kind of agent-neutral deontology, so I think the question, as posed here, carries a false presupposition. That is, I think the bystander, with no special connections to any of the victims/potential victims, shouldn't prefer that we cut up the one to save the five (that it wouldn't be virtuous or fitting for him to so prefer, etc.). And I think something is impartially better iff it is impartially preferable. So I'm inclined to think that there are no optimific rights violations, at least on the reading of "rights violations" where such violations are necessarily subjectively impermissible. But the general point -- that thinking through how bystanders should feel about, say, someone else cutting up the one to save the five is a productive and challenging line of inquiry for deontologists -- seems clearly right to me. I'm not trying to convince you of the truth of agent-neutral deontology, or of the associated verdicts about preferability and permissibility. I realize that you have sophisticated views about these matters which I haven't engaged with at all.

Anyway, thanks for your interesting post.

Author

Thanks Jake! I'm very sympathetic to all this.

(1) My main hesitancy about shifting to talk of what we should *want* is that it might seem to imply a kind of inappropriate glee when picking between bad options. (You don't want any of them! Still, you can *prefer* the lesser evil, just because you'd want the greater evil *even less*.) So which way of talking goes over best will probably vary across different listeners. But I'm very open to using whatever term people turn out to be most receptive to. I don't think anything of substance hangs on it.

I very much agree that it's an open theoretical possibility that fitting preference can be explained by other fitting attitudes. I often try to stress this by highlighting the possibility that deontic reasons for action (that the act would be wrong, disrespectful, whatever) could explain why we shouldn't want it to be done. But I like your suggestion of also highlighting the attitude of respect as another possible normative ground here.

(2) I think your suggested terms are all plausibly equivalent! The main thing I'd want to rule out is interpreting G as *good* (or value-promoting), in global consequentialist fashion. That would be a very different normative concept from preferability. So I mean to use 'fitting' as the ecumenical but unambiguous alternative to global consequentialist evaluation. (Some attempt global consequentialist analyses of virtue, which I think are mistaken, and which make virtue talk less clear. And some analytically associate 'rational' with narrow self-interest or desire-based reasons, which I'd also want to avoid having built into the concept. But I agree that the best understandings of virtue and rationality will mesh with what I say using the term 'fitting', so I certainly don't mean for my terminology here to exclude them.) I take fitting-attitude talk to be a conceptual placeholder for objects being "X-able" (credible, preferable, desirable, etc.). If others are interpreting it less ecumenically, then it could certainly be helpful for me to clarify that I don't intend it to be so.

(3) Fair enough! You anticipate how to reword the substance of my point without relying on 'optimific'. (I find it clearer to use evaluative terms like 'better' to pick out just what's *pre-morally* preferable, excluding distinctively deontological reasons for preference. That's just because it's helpful to have a simple way of expressing the sense in which even an agent-neutral deontologist who prefers that nobody carve up the one to save five surely grants that there is *something* worse about the outcome where more innocent people die. But it's ultimately terminological. You can use different words for this concept if you prefer. "Letting the five die is the overall better outcome, but worse with respect to non-moral goods such as well-being," for example.)


Hi Richard! Mark here. I always enjoy your work on Ethical Theory and EA/Longtermism, so thanks for the thought-provoking essay! I'm mostly in agreement, and I admire your intricate, multi-category, non-simplistic approach to Ethical Theory as compared with what is perhaps the vast majority of folks in Ethics, who flail in oversimplification due to their limited categories, notions, or yardsticks. But here are some responses, some offered to play devil's advocate.

(A side comment & question first: I love the fact that you always link to your other Essays. It makes things nice and tidy and makes me fantasise about a hierarchy or structured Web of "Chappell's Ethical System". So I'm curious: do you think there is ONE paper (or one small set of papers) which forms the "foundations" of your Ethical thought, from which all your other papers fall out more or less as logical corollaries? And -- if this isn't too bold! -- could you create a mindmap of how all your papers fit together, logically speaking? I think fellow admirers of your work would greatly appreciate seeing how this whole labyrinth fits together! I sincerely think you have an admirably intricate take on Ethics and "how it hangs together" (to borrow that Sellarsian phrase), and I think it would be a shame if other philosophers weren't given the opportunity to take it all in (to grasp the forest over the trees) on 1 page or 1 jpg. I think more philosophers would be more inclined to follow your intricate approach if they could appreciate it easily/quickly first before delving into the details.)

1) What is the significance of talking/thinking in terms of preferability ("X is preferable", "X is preferable over Y") vs talking/thinking in terms of normative reasons? ("There is normative reason to prefer X" or "There is normative reason for Bob to prefer X" or "There is more normative reason for X to exist")

2) Given that, at the end of the day, what we want our Ethical Theory to do, at a minimum, is to tell us how we ought *to live* (i.e. to answer the Socratic question) or what we have most reason *to do*, couldn't we actually dispense (in our foundations, not in practice) with preferability and value and treat them as mere heuristics or guides to talking about reasons for action? After all, it seems that people's preferences or emotions or desires are simply not within their direct, voluntary control. Only their actions. And given the ought implies can principle (or a suitable generalisation thereof), how could normativity/reasons attach to something outside the sphere of choice? (to borrow a phrase from Epictetus)
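(To spell out the principle I'm leaning on -- this is just the standard deontic-logic gloss of ought implies can, not anything from your post:

$$O\varphi \rightarrow \Diamond\varphi$$

where $O\varphi$ reads 'it ought to be that $\varphi$' and $\Diamond$ is understood as agential ability rather than bare logical possibility. The generalisation I have in mind keeps the schema but lets $\varphi$ range over attitudes as well as actions -- e.g. 'Bob prefers X' in place of 'Bob does A'.)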

And personally, I would like to minimise the number of posits or ontological commitments I make -- simplicity/parsimony or Occam's Razor being, I think, the only principled way of not having to consider a whooole bunch of (perhaps infinite) crazy, wild philosophical theses. So it is with metaethics. I want to make the smallest number (or smallest variety) of posits about what fundamental normative facts/properties/truths there are, and I think normative reasons for action might be enough as far as Ethical Theory goes. Why countenance *irreducible* normative facts about preferability or value or permissibility when I can get all the plausible verdicts from reasons for action alone? Sure, perhaps I can't describe, simpliciter, the normative difference between a possible world in which a child is lightning-struck and an otherwise identical world in which he isn't. But (1) this isn't even action-guiding, and so not really normative, or not helping to answer the Socratic question; and (2) I can still approximate that desirable/plausible verdict: I can still say "IF it were within my power to cause a child to be lightning-struck, I have normative reason not to do so (or far less reason to do so than not to)."

3) Suppose it were given that it is impossible for Bob to prefer X. (E.g. perhaps because there simply does not exist any chain of cause-and-effect (or continuous four-dimensional spacetime worm) leading from the Big Bang to the [nonexistent] event (the mental state) of Bob's preferring X -- plus the Kripkean fact that Bob has his origins essentially/necessarily: from the Big Bang.)

Would Bob still have reason to prefer X? (Seems not, given the Ought Implies Can principle, or a suitable generalisation thereof.) But then again, our default/commonsense intuitions beckon us to answer Yes (e.g. if we take X = "the child's not being lightning-struck").

It seems to me, as someone with sympathies for Necessitarianism, that I would answer Yes only when thinking of an utterly abstract (and so, nonexistent) agent. But when thinking about that very person, Bob himself, I must consider what is possible and what is impossible for his mental states to do. When we do Ross-style comparisons (or thought experiments) of pairs of possible worlds in which only one independent variable is modified, one is tantalised to think in the hypothetical abstract rather than in messy, real, concrete Reality, where sometimes it's just the case that a person couldn't have preferred X no matter what. His mental/neural states were necessitated to be thus-and-so with ironclad necessity by cause-and-effect or the laws of Nature or a continuous four-dimensional spacetime worm that had always already been in such-and-such condition.

Plus, it seems to me that the rational (or ideally rational) person would just accept Reality as It [eternally or four-dimensionally] is (*), rather than prefer a nonexistent, hypothetical world over It. A "Reality" which is just like Reality but with 1 variable magically changed is just as fictitious as Heaven/Nirvana/Paradise. This (*) statement is of course apt to be misunderstood! It is by no means equivalent to saying "the rational person would just do nothing, or think the present time-slice is superior in value to all other time-slices".

Author

Thanks Mark! Until my book is finished, the best existing overview of my moral philosophy is probably the forthcoming Oxford Handbook chapter, 'Consequentialism: Core and Expansion': https://www.dropbox.com/scl/fi/1mzpns3k5iuqtlpv9f707/Chappell-CoreConsequentialism.pdf?rlkey=nvi5nc9sgpoicnnq2wm2z18uh&e=2&dl=0

But yeah, maybe I should try to put together a diagram sometime to map it all out...

(1) I think they're plausibly the same, but some people use 'reasons' to include "wrong kind" reasons, e.g. if an evil demon threatens to blow up the world unless you desire that puppies suffer, some will say that gives you a reason to have that desire. But it doesn't make the suffering of puppies desirable, or *meriting* of desire. (We don't need the actual suffering in order to save the world; just the perverse desire.) Others will say the evil demon's incentive merely gives you reason to want (and hence try to acquire) the perverse desire, which is not the same as saying that the perverse desire is *itself* reasonable or supported by reasons. To avoid this terminological debate, I prefer to just talk about preferability, or fitting ("right-kind") reasons to prefer one thing over another.

(2) You can't generalize "ought implies can" beyond actions. There can be unreasonable beliefs, things you shouldn't believe based on your evidence, even though belief is not voluntary. We often want to assess whether one's involuntary rational capacities are functioning properly. For more on why I think fittingness is important and illuminating to ethical theory (not worth cutting away with Occam's razor), see my paper 'Fittingness: the sole normative primitive': https://philpapers.org/rec/CHAFAF

Methodologically, I don't think we should ever want to deprive ourselves of the ability to say true and important things. So I would apply Occam's Razor very differently, and far more cautiously, than you seem to have in mind.

(3) Yes, as above, I don't think it matters whether the agent literally can be rational (given physical determinism or whatever); we should have rational attitudes, and if something causally prevents us, then that simply explains why we in fact fail to have the attitudes that we rationally ought to have.


I don’t mean to overemphasize a tangential point, so let me preface this by saying that the main discussion of preferences and how they relate to deontological decision making was interesting and persuasive to me. But of course I want to talk about the thing I disagree with.

Why is someone who prefers that a child be struck by lightning ‘in the grip of a theory’? Can’t people be misanthropes or sadists? Couldn’t someone be glad because the child was from the other tribe? Couldn’t someone believe that the child must have angered God and therefore self-evidently deserved their fate?

I find appeals to the alleged ‘datum’ of our preferences completely unconvincing. It mistakes the preferences of twenty-first-century philosophy professors for universal law.

Author

Oh, sure, one could have special reasons for the opposite preference (though I think they would generally be bad reasons: I happen to think I'm pretty good at picking up on universal laws! If you're opposed to all such moral clarity, you'll end up in the land of "Hitler just had different tastes from me" relativism).

I really mean to be talking about someone who thinks that the *concept of a preference* rules out having preferences about things that we can't ourselves affect.


Well, as a moral relativist, I do think Hitler had different values from me, but I get that you had a less practical objection in mind.

Do you have a substack piece that more directly addresses relativist concerns? I took a quick scroll through your stuff but nothing jumped out to me as relevant.

Author

Nothing recent that I recall. If your concerns are mostly epistemic, I discuss my views on the epistemology of philosophy (incl. ethics) in some depth here (following a fun YouTube discussion with Dustin Crummett):

https://www.goodthoughts.blog/p/recent-media-appearances

Otherwise, there's a post on my old blog (from way back when I was an undergrad) setting out what I find most troubling about naive relativism, and how a more sophisticated version might avoid this:

https://www.philosophyetc.net/2006/06/why-we-need-to-idealize-ethics.html

But I no longer endorse the claim that "If the convergence claim is false, and even fully informed and ideally rational agents could disagree morally, then there would seem to be no basis for universal moral truths."

I later came to conclude that this conditional claim is self-defeating (since a robust realist could coherently deny it): https://www.philosophyetc.net/2015/11/self-undermining-skepticisms.html
