It seems like there are useful analogies to the blameworthiness/blamability distinction in several nonmoral domains. For instance, a human move in a strategy game can simultaneously have a clear theoretical status as a "bad move" and be the move an omniscient observer aligned with the agent would recommend, knowing the subsequent game-losing error it would cause the opponent to make. Relative to the same goal (winning the game -- or maximizing winning chances), these are two highly related but separable notions of instrumental goodness.
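To make the game analogy concrete, here's a minimal sketch (the two candidate moves, the payoffs, and the opponent's error model are all invented for illustration, not taken from any real game or engine): minimaxing against an optimal opponent captures a move's theoretical status, while expected value against a model of *this* opponent's actual tendencies captures what the aligned omniscient observer would recommend. The two evaluations can rank the same moves differently relative to the same goal of maximizing winning chances.

```python
# Toy illustration of two notions of "good move" relative to the same goal.
# Everything here is hypothetical: the moves, payoffs (our winning chances),
# and the opponent's error probabilities are made up for the example.

# Payoff after each of our moves, for each possible opponent reply.
GAME_TREE = {
    "solid_move": {"best_reply": 0.55, "blunder": 0.60},
    "risky_move": {"best_reply": 0.30, "blunder": 0.95},
}

def theoretical_value(move):
    """The move's 'theoretical status': assume the opponent replies
    optimally, i.e. picks the reply minimizing our winning chances."""
    return min(GAME_TREE[move].values())

def observer_value(move, opponent_model):
    """What the omniscient observer tracks: our expected winning chances
    given how this particular opponent will actually reply."""
    return sum(prob * GAME_TREE[move][reply]
               for reply, prob in opponent_model[move].items())

# An opponent who replies well to the solid move, but who (the observer
# knows) blunders 90% of the time when confronted with the risky move.
OPPONENT_MODEL = {
    "solid_move": {"best_reply": 0.9, "blunder": 0.1},
    "risky_move": {"best_reply": 0.1, "blunder": 0.9},
}

theoretically_best = max(GAME_TREE, key=theoretical_value)
observer_pick = max(GAME_TREE, key=lambda m: observer_value(m, OPPONENT_MODEL))

print(theoretically_best)  # solid_move: "correct" under optimal play
print(observer_pick)       # risky_move: what the informed observer recommends
```

The "bad move" verdict and the observer's recommendation are both answers to "what best serves winning?", just computed against different assumptions about the opponent, which is why they can come apart without contradiction.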
Fascinating discussion. I think I'm still a bit puzzled about how a utilitarian can adopt a "fittingness" account of blameworthiness. You rightly see (and accept) that this implies that someone who kills the innocent person with the right utilitarian motive in the trolley case is not blameworthy (though possibly "blame-able"). The other direction may be more troubling: Won't every act that I perform with a motive that is out of harmony with utilitarianism--for example, acts that benefit my friends and loved ones, done from a motive of caring about them more--be blameworthy (though perhaps not blame-able) on this view?
If all such acts from non-utilitarian motives are blameworthy, how will a virtuous utilitarian agent relate to this fact? The agent, on the one hand, will see that she is blameworthy, say, when she acts out of love for her child; but she may also see that it is morally right for her to cultivate in herself the tendency to act exactly that way, from exactly that motive, in that situation. So she doesn't feel any moral guilt, nor does she think that she should; but she still sees that she is blameworthy? I see what this "blameworthiness" points to--namely, the way in which her partial motive is inaccurate or incorrect in its basic relation to the impartial good--but I'm not sure it's the right term to capture that incongruity.
I'm inclined towards the "leveling-up" view of utilitarian impartiality, according to which imbalances are generally caused by *too little* concern for strangers rather than *too much* concern for loved ones:
https://rychappell.substack.com/p/level-up-impartiality
So we typically act on good (as far as they go) motives. We just don't care about strangers as much as would be ideal. Even so, not every "falling short of ideal" qualifies as blameworthy -- see my 'Willpower Satisficing':
https://philpapers.org/rec/CHASBE-4
I'm a bit unclear on why we need the internal and/or intrinsic conceptions of rationality and goodness. It does seem clear that we have an intuitive conception of these sorts of things, and I can see why it might be consequentially good for us to have such intuitions. But I'm not sure why the theory needs to have a role for this kind of "fittingness" (I'm generally suspicious of any truth to "fittingness" judgments of any kind, though I suspect these judgments are often the result of dispositions that are good ones to have).
At a minimum, I think it's important to be clear on the concepts to avoid miscommunication, e.g. with people who use "blameworthiness" in the ordinary (fittingness) sense.
It remains open to pure consequentialists to then offer an error theory of fittingness (and rationality?), arguing that -- despite our intuitive grasp of the concept -- nothing actually satisfies it. But I guess I'd want to hear more about what we gain, theoretically, from such a revisionary stance. Most philosophers seem happy to admit epistemic reasons for belief, for example, and other fittingness claims are just more of the same. So it doesn't strike me as a great theoretical cost to admit fittingness into our accounts. (By contrast, it would seem potentially difficult and costly to try to banish it entirely.)
Yeah, I suspect that my main opposition to fittingness comes from my work in epistemology, where I strongly disagree with people's evidentialist intuitions and favor a stricter consequentialism instead. I'm open to drawing a distinction between pragmatic value and the value of true belief, with properly epistemic value having to derive from consequences in terms of true belief, but I'm skeptical that there is anything intrinsic to a conception of evidence that belief could even try to fit.
According to love consequentialism, we should not want Bob to "make the mistake" in this heinous scenario you've concocted (which has really dark implications for the real world); rather, we should *mourn the loss of the puppies*, for that plays a role in their continued existence or, if you don't believe in that, shows love for the puppies and so warms the hearts of their loved ones. To "want Bob to make the mistake" is to be engaged in the scenario as a dark spectator at a demonic ball game: "Do it, Bob! For the Puppies!" What *human beings* do in such a scenario is *look away*, because when they are powerless to help, they do not want their memories of their loved ones to be fucked with by horrific images; and then -- again -- they *mourn the loss of innocent life*, in part by seeking to enact just retribution on behalf of the puppies during their lifetime, with the end goal of eradicating in totality perverse demons, bobs, and their utterly *perturbing* spectators. See how love by itself brings about the best consequences?
This is all a bit outside my expertise, but I thought virtues were dispositions to be moved to do what you have reasons to do. If so, doesn't consequentialism entail that the only virtue is the disposition to maximize welfare (or whatever)? Then there's no conflict between acting rightly and being virtuous, just a revisionism about virtue.
I like that reasons-based conception of virtue! (Though, as I argue in 'The Right Wrong-Makers' - https://philpapers.org/rec/CHATRW-3 - consequentialism yields separate reasons to promote each good, not just one reason to maximize overall good. So it doesn't have the entailment you suppose.) I very much agree that there's no conflict between acting rightly and being virtuous. But there may be a conflict between being virtuous and having the (instrumentally) best dispositions of character. And one thing it might be right (and virtuous) for us to do, in such a case, would be to make ourselves more vicious (and so less apt to act rightly, or for the right reasons, in future).
On a more practical note, I think this essay on 'Virtues for Real-World Utilitarians' is excellent (but maybe implicitly conceives of virtues as "character traits/dispositions it would be robustly good to have", or something more along those lines): https://www.utilitarianism.net/guest-essays/virtues-for-real-world-utilitarians
This reads to me like an "ends can justify un-virtuous means" argument? So although it's blameworthy to torture the terrorist who has the code to stop the bomb that will kill thousands from going off, it's still the correct thing to do on the other dimension?
Almost. While it is possible to assess actions in different ways (e.g. whether it's the correct thing to do vs whether it was blameworthy, or done for the right or wrong reasons), any correct action -- even torture! -- would be blameless if done for the right reasons. I don't think there's any such thing as "un-virtuous means". Whether an action is virtuous or not depends on how the agent was motivated, the quality of the reasoning that went into their choice, etc. I think only a non-consequentialist can really hold that certain means are inherently "un-virtuous".
(A consequentialist can of course agree that torture is bad *all else equal*, and so a virtuous agent who needed to torture a terrorist to prevent disaster would feel some reluctance to do it, even while affirming that it's ultimately more important to save thousands; they would grit their teeth and force themselves to do what's necessary. An agent who didn't care enough about the thousands to overcome their reluctance would not be a truly virtuous person. Whereas a vicious agent would feel no reluctance to begin with, positively reveling in the torture -- or in doing nothing and letting thousands die.)
So the two dimensions I'm talking about in the OP are more applicable to attitudes than to actions. Applied to blaming attitudes: it might be good (expedient for society) to blame the torturer-hero, just to help reinforce general anti-torture norms, even if the hero is not truly blameworthy at all (meaning that one's blaming attitudes are inaccurate, and so in one sense unjustified, insofar as they falsely imply that the torturer-hero was wrongly motivated or otherwise made a moral mistake).
So what does blameworthy mean if you reject the notion that it means expedient to blame?
*Merits* blame (i.e., negative reactive attitudes would be *fitting*), in the same way that:
* 'credible' means merits belief, not expedient to believe,
* 'desirable' means merits desire, not expedient to desire,
etc.
Note that moral responsibility skeptics hold that no-one is ever truly blameworthy. They certainly aren't claiming that it would never be expedient to blame anyone.
Glad to see you biting the bullet, and look forward to analyzing the teeth marks!
The goal is to stop bad people from using good theory to justify bad actions. I don’t know how you could do it with consequentialism, but if anyone can, it’s you!
I don't think that's the goal of ethical theory. My goal as a philosopher is just to work out what's true. It's always possible that bad people might misleadingly cite truths (in combination with lies) to "justify" bad actions. The solution to that, in my view, is to point out the lies that are going into the mix and causing normative garbage to come out the other end. It's not to replace the normative truth with noble lies that are less combustible when combined with other lies. (Granted, it's possible that propagating noble lies would end up doing more good. But I'm a philosopher, not a propagandist, so that's not the business I'm in.)
Yes, you’re primarily responsible for the “good theory” part. The rest of us have to worry about the “rest”.
But it would be great if you considered the “rest”, rather than trying to avoid the risk, especially when it wouldn’t take much for you to absorb some of it, thus fairly sharing the burden.