25 Comments

I'm curious: are you specifically interested in human morality and human normative concepts here, or do you think this stuff is more abstract?

For example, Kantians claim their theory (whatever that is) applies to "all rational beings". Does yours?

[Admittedly I haven't waded into the details here, so maybe my question is not specific enough.]

I tend to think of it as having universal application, like Kantians do. Though I don't think all rational beings would have to have the full range of emotional responses that we do, and there may be alien attitudes that some rational beings could have (with associated fittingness conditions) that we can't even imagine.

Whoa, you believe these feelings have some kind of rational basis? So, like, you doubt that you could induce different ones with a similar moral 'feel' via appropriate brain interventions?

Like there is some necessary connection between qualia and moral sentiments?

That's a wild non-sequitur. It's always possible to induce irrational thoughts and responses with brain interventions. "Irrational" does not mean "impossible" -- far from it!

Never mind... I was a bit confused because of the assumption that it had to be narrowly moral, not just more broadly normative.

Fair, but I guess I'm trying to understand why you can't then separate the experiences and the reasoning.

I suspect you would agree that some hypothetical agent who didn't have any feelings or intuitions of this kind wouldn't in fact come to these conclusions. Or are you suggesting that you can, a la Kant, derive them from pure logic alone?

Basically I'm a bit confused about how you understand the evidential/epistemic status of moral sentiments as an addition to pure logical consequence. Or is your notion of rationality much broader than that?

Much broader notion of rationality! Logically coherent agents include those with Future-Tuesday indifference, counter-inductivists, and ideally coherent Caligulas who value torture for its own sake. I think they're all objectively misguided ("irrational"). Mere coherence is no guarantee of substantive rationality.

More on my moral epistemology here: https://philpapers.org/rec/CHAKWM

Ok, thanks, that helps a lot.

I would love to see your response to Miles Tucker's value-first view - https://philpapers.org/rec/TUCCAO

Neil Sinhababu also endorses a value-first view.

I like Tucker's paper! Two main points of disagreement:

(1) His objection to my view, in section 2, rests on an appeal to tracking counterfactuals: "Counterfactual reasoning reveals that maximizing the good is the proportional explanation."

This mistakenly assumes that all right acts must be right for the same (normative) reason. We shouldn't assume that. Suppose that saving either of two people (when you can't save both) would equally well maximize the good. So either act is equally permissible. Still, it seems open to the consequentialist to claim that the acts are importantly distinct: right for different reasons, and each admitting of pro tanto reasons to regret not choosing the other.

So I don't think he has identified any reason to reject my account of right-making features.

(2) I found the argument of section 3 (Against Fittingness) hard to follow. The challenge for Tucker is how to accommodate the intuition that there's something morally mistaken about *desiring that others suffer* even if the desire has (purely) good effects. I say that the desire's *content* fails to fit with the facts about what is truly good & desirable. Tucker seems to agree: "We get it wrong when we have this desire, because we think something deeply false about morality." But then he adds, confusingly, "This thought is not unfitting, it is merely false."

But that's just what it is for an attitude to be unfitting: for its implicit normative claims to be false. Compare: https://www.philosophyetc.net/2008/05/claims-of-emotion.html

I suspect that most utilitarians are utilitarians either in an anti-realist sense (it's what I emotionally feel compelled to care about, and your feelings don't change that one jot) or are realists who take the access problem relatively seriously and thus place a lot more value on theoretical simplicity than on any match with human sentiments.

I mean, in my realist moods I reject purportedly moral sentiments entirely, and my motivation is mostly just theoretical simplicity plus the fact that pleasure is pleasant. Sure, I guess there is some role being played at a really deep level, but it's as small as possible.

Huh? I guess I just don't understand how the fact that people have certain feelings is a strong argument for the claim that those feelings correspond to any objective (i.e., realist in the trad sense) moral feature of reality.

I read this and don't really see how it differs from the objection "but I feel like it's bad not to prefer my friends, spend more to save US children, etc." or even "but that theory implies I should give away all my money."

I mean, in what sense are you not just giving names to these feelings? (I get that they are different feelings, but the form seems the same.)

I'm not sure whether you're objecting to (i) the *concept* of fitting attitudes, i.e. the idea that feelings can be, or fail to be, objectively warranted, or (ii) specific *verdicts* about fitting attitudes, e.g. that it is objectively unwarranted to want people to suffer.

My answer would differ depending on which of these you're concerned about.

I'm objecting to the suggestion that fitting attitudes are moral. I mean, maybe they are facts, but what makes them moral facts?

I'm not sure what that means. Fittingness is plainly a normative concept, not a descriptive one. But I don't know what you have in mind by denying them the status of "moral" facts. Whether or not someone is genuinely blameworthy seems like a paradigmatically moral question, for example. But fitting beliefs are a matter of epistemic rather than moral normativity.

So maybe most of my objections here are really a verbal dispute. I'm not sure I have an issue with these as normative facts, but I just don't understand normative facts that aren't moral facts to be within the purview of utilitarianism. But that's purely a matter of definition.

I think there are connections between them. For example, utilitarianism makes claims about value. But value claims are conceptually related to claims about *desirability*, or what it would be fitting to want.

Other times, people *mistakenly* believe that utilitarian verdicts have implications (e.g. for fitting deliberation or psychological dispositions) that they don't. But we need to talk about fittingness, and its connections to other moral concepts, in order to successfully counter those objections.

I'm not sure I'm convinced by the first paragraph. Why must it be fitting to want utilitarian ends? I mean, couldn't you just give the same answer as to the aliens who torture people, if you are utilitarian? (Though obviously not about consequences; I just mean shrug and say it's not a theory about what you should want at all, just a true claim about a certain kind of moral fact about which worlds are preferable.)

However, I can endorse the second paragraph.

"preferable" literally means "ought to be preferred", so I'm not sure how to make sense of your comment!

Then I'm using the wrong word. It seems to me that one can believe that certain possible worlds are morally better than others (just as a kind of foundational moral fact) without this having any necessary connection to claims about what attitudes anyone should take towards them.

I understand your claims about fittingness or desirability to be claims about relations between something like mental states and states of affairs, and I'm suggesting that the claim that world A is morally better than world B need not imply anything about that.

If "better" does not entail meriting preference, or any other similar pro-attitude, what does it even mean? I worry that you've stripped the word of all possible normative content.

(I think we distinguish "better" and "worse" in terms of the different attitudes -- pro and con, respectively -- that they call for. It seems your account has no such basis for distinguishing them.)

There is a general problem explaining why any objective fact about the world could be the kind of thing that gives rise to some kind of moral fact (not to mention the access problem). That's why on even days I'm a moral anti-realist.

But on odd days I literally have the opposite intuition to you. How could something that can't be grounded out without mention of reasons to act/attitudes/agent behavior be the sort of thing that makes one choice morally better than another?

I mean, if there is some objectively special notion of 'should', as you'd need there to be to be a moral realist, then there must be facts out there about the world -- facts that, on pain of vicious circularity, don't mention our reasons, intentions, etc. -- that justify some asymmetry making one claim about this special moral shouldness true and another false.

(Of course, you could always just build things into your definition of moral reasons, but a moral realist needs to be able to say that the alien who has schmoral reasons, defined differently but with the same action-guiding role, is somehow wrong about something.)

--

I mean, you could of course say there are just irreducible facts about what kinds of things are appropriately motivating or produce this special normative force. That is, that your notion of morality is fundamentally about the relation of actors to their motives/intentions/etc. and doesn't depend on any independent objective facts about this world having more value than that (though I don't mean to assume consequentialism, just a notion of value not grounded in claims about reasons to act or the like).

All I can say to that is I'm just not interested in that notion of morality. If that's how you understand morality, then as an empirical matter I don't find any pressure to do the kinds of things that theory would say I should.

Ahh I see the confusion. I took the discussion to have the following form.

1) The utilitarian tradition is stunted.

2) My assumption (but maybe you disagree): the utilitarian tradition is a tradition of analyzing morality/moral facts.

3) The utilitarian tradition can't be stunted by failing to include X facts unless X is moral, any more than it's stunted because it doesn't incorporate GR.

Thanks for the post! I won't pretend to have followed everything in it, but I think you may find the alternative utilitarian framework I propose here interesting: https://www.lesswrong.com/posts/wwioAJHTeaGqBvtjd/update-on-developing-an-ethics-calculator-to-align-an-agi-to.

I expect the "Unique (I Think) Contributions" section would be most interesting to you. I believe my framework, with value based on "positive experiences" as influenced by self-esteem, and with explicit incorporation of the effects of rights on value building, may be something fun to debate with your "conceptually stunted" colleagues. Thanks! Sean
