25 Comments
Mar 15 (edited)

I'm curious: are you specifically interested in human morality and human normative concepts here, or do you think this stuff is more abstract?

For example, Kantians claim their theory (whatever that is) applies to "all rational beings". Does yours?

[Admittedly I haven't waded into the details here, so maybe my question is not specific enough.]

author

I tend to think of it as having universal application, like Kantians do. Though I don't think all rational beings would have to have the full range of emotional responses that we do, and there may be alien attitudes that some rational beings could have (with associated fittingness conditions) that we can't even imagine.

Whoa, you believe these feelings have some kind of rational basis? So like you doubt that you could induce different ones with similar moral 'feel' with appropriate brain interventions?

Like there is some necessary connection between qualia and moral sentiments?

author

That's a wild non-sequitur. It's always possible to induce irrational thoughts and responses with brain interventions. "Irrational" does not mean "impossible" -- far from it!

Mar 15

Never mind... I was a bit confused because of the assumption that it had to be narrowly moral, not just more broadly normative.

Fair, but I guess I'm trying to understand why you can't then separate the experiences and the reasoning.

I suspect you would agree that some hypothetical agent who didn't have any feelings or intuitions of this kind wouldn't in fact come to these conclusions. Or are you suggesting that you can, a la Kant, derive them from pure logic alone?

Basically I'm a bit confused about how you understand the evidential/epistemic status of moral sentiments as an addition to pure logical consequence. Or is your notion of rationality much broader than that?

author

Much broader notion of rationality! Logically coherent agents include those with Future-Tuesday indifference, counter-inductivists, and ideally coherent Caligulas who value torture for its own sake. I think they're all objectively misguided ("irrational"). Mere coherence is no guarantee of substantive rationality.

More on my moral epistemology here: https://philpapers.org/rec/CHAKWM

Mar 15

Ok, thanks, that helps a lot.

I would love to see your response to Miles Tucker's value-first view - https://philpapers.org/rec/TUCCAO

Neil Sinhababu also endorses a value-first view.

author

I like Tucker's paper! Two main points of disagreement:

(1) His objection to my view, in section 2, rests on an appeal to tracking counterfactuals: "Counterfactual reasoning reveals that maximizing the good is the proportional explanation."

This mistakenly assumes that all right acts must be right for the same (normative) reason. We shouldn't assume that. Suppose that saving either of two people (when you can't save both) would equally well maximize the good. So either act is equally permissible. Still, it seems open to the consequentialist to claim that the acts are importantly distinct: right for different reasons, and each admitting of pro tanto reasons to regret not choosing the other.

So I don't think he has identified any reason to reject my account of right-making features.

(2) I found the argument of section 3 (Against Fittingness) hard to follow. The challenge for Tucker is how to accommodate the intuition that there's something morally mistaken about *desiring that others suffer* even if the desire has (purely) good effects. I say that the desire's *content* fails to fit with the facts about what is truly good & desirable. Tucker seems to agree: "We get it wrong when we have this desire, because we think something deeply false about morality." But then he adds, confusingly, "This thought is not unfitting, it is merely false."

But that's just what it is for an attitude to be unfitting: for its implicit normative claims to be false. Compare: https://www.philosophyetc.net/2008/05/claims-of-emotion.html

I suspect that most utilitarians are utilitarians either in an anti-realist sense (it's what I emotionally do feel compelled to care about, and your feelings don't change that one jot) or are realists who take the access problem relatively seriously, and thus place a lot more value on theoretical simplicity than on any match with human sentiments.

I mean, in my realist moods I reject purportedly moral sentiments entirely, and my motivation is mostly just theoretical simplicity plus the fact that pleasure is pleasant. Sure, I guess there is some role being played at a really deep level, but it's as small as possible.

Huh? I guess I just don't understand how the fact that people have certain feelings is a strong argument for the claim that those feelings correspond to any objective (i.e. realist in the traditional sense) moral feature of reality.

I read this and don't really see how it differs from the objection: but I feel like it's bad not to prefer my friends, to spend more to save our children, etc., or even "but that theory implies I should give away all my money".

I mean, in what sense are you not just giving names to these feelings? (I get they are different feelings, but the form seems the same.)

author

I'm not sure whether you're objecting to (i) the *concept* of fitting attitudes, i.e. the idea that feelings can be, or fail to be, objectively warranted, or (ii) specific *verdicts* about fitting attitudes, e.g. that it is objectively unwarranted to want people to suffer.

My answer would differ depending on which of these you're concerned about.

I'm objecting to the suggestion that fitting attitudes are moral. I mean maybe they are facts but what makes them moral facts?

author
Mar 15 (edited)

I'm not sure what that means. Fittingness is plainly a normative concept, not a descriptive one. But I don't know what you have in mind by denying them the status of "moral" facts. Whether or not someone is genuinely blameworthy seems like a paradigmatically moral question, for example. But fitting beliefs are a matter of epistemic rather than moral normativity.

So maybe most of my objections here are really a verbal dispute. I'm not sure I have an issue with these as normative facts but I just don't understand normative facts that aren't moral facts to be within the purview of utilitarianism. But that's purely a matter of definition.

author

I think there are connections between them. For example, utilitarianism makes claims about value. But value claims are conceptually related to claims about *desirability*, or what it would be fitting to want.

Other times, people *mistakenly* believe that utilitarian verdicts have implications (e.g. for fitting deliberation or psychological dispositions) that they don't. But we need to talk about fittingness, and its connections to other moral concepts, in order to successfully counter those objections.

I'm not sure I'm convinced by the first paragraph. Why must it be fitting to want utilitarian ends? I mean, couldn't you just give the same answer as to the aliens who torture people, if you are a utilitarian? (Though obviously not about consequences; I just mean shrug and say it's not a theory about what you should want at all, just a true claim about a certain kind of moral fact about which worlds are preferable.)

However, I can endorse the second paragraph.

Ahh, I see the confusion. I took the discussion to have the following form:

1) The utilitarian tradition is stunted.

2) My assumption (but maybe you disagree): the utilitarian tradition is a tradition of analyzing morality/moral facts.

3) The utilitarian tradition can't be stunted by failing to include X facts, unless X is moral, any more than it's stunted by not incorporating GR.

Thanks for the post! I won't pretend to have followed everything in it, but I think you may find the alternate utilitarian framework I propose here to be interesting: https://www.lesswrong.com/posts/wwioAJHTeaGqBvtjd/update-on-developing-an-ethics-calculator-to-align-an-agi-to.

I expect the "Unique (I Think) Contributions" section would be most interesting to you. I believe my framework, with value based on "positive experiences" as influenced by self-esteem, and with explicit incorporation of the effects of rights on value building, may be something fun to debate with your "conceptually stunted" colleagues. Thanks! Sean
