It’s helpful when you make clear which of your views I disagree with!
My strategy is to take the subjectivist view: any actual desire creates reasons, at least agent-relative ones, and those are all the reasons there are.
However, I then note that the boundaries of agents aren’t clear: the desires of a person are in some sense constituted by their (often somewhat conflicting) sub-personal desires; the desires of a group are in some sense constituted by their (often somewhat conflicting) individual desires; and so on. I then take morality to be about hypothetical normativity relative to the desires of the biggest group, of which we are all a part.
Everyone’s desires get counted as part of this, even the person who desires to torture children. But the children’s desires count too, and will surely overwhelm those of the would-be torturer.
Won't that undermine the reasons for *individual agents* to act morally, though? Even if the boundaries aren't always entirely precise, it's pretty clear that the individual is distinct from the group, and the vast majority of the group's desires are not shared by the individual (especially if that individual is a fanatic, torturer, etc.).
Assuming that reasons are the mark of normative authority, it seems like you'll end up having to say things like, "Although it would be immoral to do it, that isn't what really matters: as an individual agent, you really ought to just torture as many kids as you can. That's what you have most normative reason to do." I think normative nihilism sounds more plausible than that!
I think it makes sense to say that my 7 am self, qua member of my extended self, has reason to get out of bed, even though he qua time slice prefers to stay sleeping. Similarly, I want to say that I, qua rational being, have reason to do that which is best overall for the greatest number, even though I qua individual have reason to do something selfish. This actually feels right to me as an account of the general phenomenon that I have some reason to be moral, but that it’s often hard to get me to respond to these reasons.
Do you think it depends upon the extent to which the local agent "identifies with" the larger entity (as most of our momentary selves do with our extended selves)? Or is it, in some sense, rationally *mandatory* to so identify? I like the sound of the latter view, but it sounds hard to reconcile with subjectivism.
I’m not sure how to think about this. What I think I’m most attracted to is the idea that *every* level has some sense of normativity, and that identifying is only relevant for getting motivated. The only special thing about the highest level is that it is the only one we all have some connection to.
If I am itchy, does that give me a reason to scratch? Is this post using “reason” as a term of art that I am misapplying? Why would wanting something not be a reason to do it? Or is it just “motive” which is different from a reason somehow?
As flagged in the conclusion, we *typically* have reason to fulfill our desires, since our desires are typically reasonable. But suppose you find yourself with a weird compulsive desire to poke yourself in the eye. Doing so wouldn't relieve any other unpleasant sensation (the way scratching relieves an itch); it would just be painful. Still, you desire to do it. The mere desire doesn't give you any reason to poke yourself in the eye. It's not, in itself, a consideration that makes the act in any way *worth doing*.
Seems to me it is still a reason to do it, but a defeasible one. I have stronger reasons not to poke myself in the eye. Wanting is not a decisive reason, not determinative, but it is still a reason.
So then the obvious next question is: are the stronger reasons also based on what I want (conflicting wants are possible), or are they based on something else that makes them clearly and importantly distinct? If they are also based on prudence and my dislike of pain, there is no point to the eye-poking example (oops, accidental pun?). Similarly, if I do things because I care about specific other persons, that is still hypothetical. Do I have to care about persons in the abstract (and care about them all equally?) to get to categorical reasons? I am sure I have something wrong there.
Suppose you don't, at this moment, care at all about your future pain (or other future interests), so there are no countervailing reasons -- at least based on your present desires -- to weigh against the "reason" to poke yourself in the eye. It seems like you're making a rational mistake, and *ought* to care more about the harm this would cause to yourself. So it seems that desires are rationally evaluable. (This is a version of Parfit's agony argument.)
It's an open question how much you ought to care about others. I personally find it plausible that you'd be making a mistake to not care *at all* about the interests of any other innocent being. But you could disagree on this point, and still believe in categorical reasons. (You could even be a rational egoist, and insist that it is categorically mistaken to ever care about anyone *other* than yourself.)
>Suppose you don't, at this moment, care at all about your future pain
And this is basically what Parfit objects to, right? He's already going to say this is irrational. And it doesn’t matter how unlikely it is for someone to arrive at a point where they have this attitude; it is the principle that matters. It is a counterexample to the general claim that wanting something is a reason to obtain it.
But Hume might ask: where is the error in logic, or the mistaken factual claim? If not those, what sort of error do we find? There has to be a hidden premise somewhere, perhaps “if you want pain for itself, you are irrational.” While it is difficult to imagine a circumstance that would produce a person who loves pain for itself, do we have a basis in rationality for criticizing that odd impulse? It is not a logical error or factually mistaken. Can we have erroneous desires? They are not claims to truth, so they can’t be false. They are not instrumental, so lack of efficacy doesn’t matter. Criticizing them requires a larger picture of rationality as an obligatory constraint, beyond just avoiding logical and factual errors. Maybe Parfit provides this, but I don’t know what it is. Maybe Hume is applying strictly instrumental rationality, and Parfit has an embellished account?
Yes, the thought is that the pain example shows that there is more to rationality than just instrumental rationality: we can (and do) also assess ends themselves as warranted or unwarranted. We intuitively appreciate that the feeling of agony inherently *merits aversion*, and so anyone who *fails* to be averse to agony is failing to respond appropriately (or as they have reason to respond).
Of course, we already know from the problem of induction that rationality requires more than just "avoiding logical and factual errors". Contra Hume, it would be deeply irrational to fail to expect the Sun to rise tomorrow. (Even if the Sun actually goes supernova overnight, it remains true that we *ought* to have expected otherwise.) So it's a quite general point that there's a lot more to rationality than Hume's narrow conception allows.
That presupposes a standard by which ends are evaluated. We can examine means to see that they are effective, or arguments to see if they are logical, or factual claims to see if they correspond to the world. What do we compare ends to? I guess the consequentialist would say, does it increase flourishing compared to alternatives? That is intractable in most nontrivial cases.
Those who think their desires are moral just because they can use the word 'should' are only shoulding themselves in the foot.
Cool article. I reject hypothetical imperatives because I think they retain too many of the elements that I find unacceptable in categorical imperatives, including the failure to eliminate normative reasons. In that respect, one might say I’m some sort of normative nihilist. However, my concern with this term is that it suggests that nonreductive conceptions of normative reasons are the only viable conceptions, and that if one rejects them, one rejects normativity outright. I’m not sure that I, or others whom people may think of as “normative nihilists,” should grant this. If we conceive of normative reasons in reductive terms, where they are understood as descriptive facts about the relation between means and ends, we can conceive of normativity as a type of descriptivity without thereby eliminating it. Reduction need not be nihilism.
I’m also unsure what to make of remarks like this: “A view on which there are only hypothetical imperatives is thus a form of normative nihilism—no more productive than an irrigation system without any liquid to flow through it.”
An irrigation system without liquid would be useless. However, identifying the relation between means and ends doesn’t strike me as unproductive. Knowing what my ends are doesn’t entail knowing what means would be most conducive to achieving those ends. If I’m motivated to act in accordance with my goals, all I need to get the water flowing is to settle on which means would be conducive to those ends. Maybe I’m not understanding what you mean here.
All argument is hypothetical: if the premises are true, and the axioms of logic are true, and the argument contains no logical errors, then the conclusion is true. So a categorical argument is also hypothetical; it just begins with different assumptions. I guess the difference is that one is allowed to deny the premises of a hypothetical at will, whereas the premises of a categorical argument are supposedly undeniable, or at least independent of the opinions of agents.
This presupposes that we can identify a perfectly impartial point of view from which to evaluate things, and that we should or must do so. Maybe we can let in a bit of wiggle room and say we are only obligated to do the best we can to identify this view from nowhere. But why would we want to adopt it? Do we need some meta-argument to reach that conclusion? And meta-norms that give the reasons?
Stipulate that such a view from nowhere exists, and we are motivated to find and apply it. How would the choices we face differ from those of persons who disbelieve all that, but are committed to interacting socially and cooperatively for mostly selfish reasons (including within “selfishness” a regard for at least some others)?
Whatever standard gets adopted, someone will violate it. Arguments about what label to apply to that (wrong, bad, mistaken, evil) seem to go only halfway to the bottom line, which is: what happens as a consequence of the violation?