I think there are connections between them. For example, utilitarianism makes claims about value. But value claims are conceptually related to claims about *desirability*, or what it would be fitting to want.
Other times, people *mistakenly* believe that utilitarian verdicts have implications (e.g. for fitting deliberation or psychological dispositions) that they don't. But we need to talk about fittingness, and its connections to other moral concepts, in order to successfully counter those objections.
I'm not sure I'm convinced by the first paragraph. Why must it be fitting to want utilitarian ends? Couldn't you, as a utilitarian, just give the same answer as to the aliens who torture people? (Obviously not about the consequences -- I just mean shrug and say it's not a theory about what you should want at all, just a true claim about a certain kind of moral fact about which worlds are preferable.)
However, I can endorse the second paragraph.
"preferable" literally means "ought to be preferred", so I'm not sure how to make sense of your comment!
Then I'm using the wrong word. It seems to me that one can believe that certain possible worlds are morally better than others (just as a kind of foundational moral fact), and for this to have no necessary connection to any claim about what attitudes anyone should take towards those worlds.
I understand your claims about fittingness or desirability to be claims about relations between something like mental states and states of affairs, and I'm suggesting that the claim that world A is morally better than world B need not imply anything about that.
If "better" does not entail meriting preference, or any other similar pro-attitude, what does it even mean? I worry that you've stripped the word of all possible normative content.
(I think we distinguish "better" and "worse" in terms of the different attitudes -- pro and con, respectively -- that they call for. It seems your account has no such basis for distinguishing them.)
There is a general problem explaining why any objective fact about the world could be the kind of thing that gives rise to some kind of moral fact (not to mention the access problem). That's why on even days I'm a moral anti-realist.
But on odd days I literally have the opposite intuition to you. How could something that can't be grounded without mention of reasons to act, attitudes, or agent behavior be the sort of thing that makes one choice morally better than another?
I mean, if there is some objectively special notion of 'should', as you'd need to be a moral realist, then there must be facts out there about the world that -- on pain of vicious circularity, without mentioning our reasons, intentions, etc. -- justify some asymmetry that makes one claim about this special moral shouldness true and another false.
(Of course you could always just build things into your definition of moral reasons, but a moral realist needs to be able to say that the alien who has schmoral reasons, defined differently but with the same action-guiding role, is somehow wrong about something.)
--
I mean, you could of course say there are just irreducible facts about what kinds of things are appropriately motivating or produce this special normative force. That is, your notion of morality is fundamentally about the relation of actors to their motives/intentions/etc., and doesn't depend on any independent objective facts about this world having more value than that one (though I don't mean to assume consequentialism, just a notion of value not grounded in claims about reasons to act or the like).
All I can say to that is that I'm just not interested in that notion of morality. If that's how you understand morality, then as an empirical matter I don't find any pressure to do the kinds of things that theory would say I should.
I think there's some conceptual confusion here about where "mind-independence" enters the picture for the realist. Realism simply claims that *whether or not X is a reason to [...]* is a fact that holds independently of our agency, desires, etc. Realism does *not* have to claim that the prescribed [...] (an act, attitude, or whatever) is unrelated to our agency -- that would be absurd. The whole point of reasons is to provide us with normative direction. So they have to specify *what direction they are giving us*, i.e. something about our actions or attitudes.
For example, *the phenomenal feeling of pain* is such as to give us (every rational being) reason to wish that no innocent being suffer it (all else equal). As a moral realist, I think this reason-claim holds objectively: it would be just as true whether or not I personally believed it, or happened to have pro-torture desires, or whatever. None of those contingent details matter to the badness, i.e. the objective *undesirability*, of pain. The normative fact directs us to adopt anti-pain attitudes, and tells us that we are *in error* insofar as we fail to do so (and doubly so if we adopt the opposite, e.g. pro-torture attitudes).