In ‘Consequentialism: Core and Expansion’ (forthcoming in The Oxford Handbook of Normative Ethics), I explore how we should think about consequentialism: what’s core to the view, and how it might best be expanded beyond that core. Consequentialist assessment of actions was already addressed in my previous post on Deontic Pluralism.
So in this post, I want to focus instead on what consequentialists should say about attitudes, decision procedures, and other (non-act) focal points for moral evaluation. My view is that we must distinguish two dimensions of moral assessment. It’s important for a moral theory to have plausible things to say about both. Consequentialists have traditionally only discussed the first. My aim is to get other consequentialists to be more comfortable also discussing the second.
Disambiguating “Good Intentions”
The natural form of moral assessment for consequentialists is evaluation, or assessment in terms of value. Just as some acts produce more or less value than their alternatives, so we can evaluate whether it would be better or worse (in terms of value-promotion) for someone to hold different attitudes, dispositions, or character traits.
Let’s focus on intentions for a moment. Suppose that an evil demon likes torturing puppies, but he likes even more to do the opposite of what Bob wants. So in fact some nearby puppies will be spared if and only if Bob wants the demon to torture them. Now suppose that Bob notices the puppies and cruelly intends to bring them to the demon’s attention, hoping that the demon will torture them. (He doesn’t realize that this will actually have the opposite effect.)
Now let’s ask: are Bob’s intentions “good”? There are two very different ideas we need to distinguish here. Bob’s intentions are certainly expedient—they’re the intentions an informed benevolent observer would want him to have, since they will serve to save the puppies, and thereby promote the good. But, equally obviously, Bob’s malicious intentions are not “good” in the sense we normally have in mind when speaking of “good intentions”: Bob does not mean well. He does not intend or aim at anything good (quite the opposite). So these malicious intentions—however fortunate they may turn out to be—do not reflect well on Bob. Since what Bob intends is bad rather than good, there is an obvious sense in which he has “bad intentions”, and is messed up as a moral agent. This is to assess his intentions in terms of their intrinsic warrant or fit with the moral truth, in contrast to our earlier evaluation of them in terms of promoting value.
I argue that consequentialists should feel comfortable making both of these obviously-correct assessments. Bob’s intentions are instrumentally valuable (or fortunate), but vicious (morally unfitting). There is no conflict between these two judgments, because they concern distinct dimensions of moral assessment, with different theoretical roles. The value dimension is what matters, and provides the standard of correctness for our choices and preferences about the evaluated attitude. The fittingness dimension concerns whether the attitude in question is itself correct/warranted, or whether its possessor is making a moral mistake. (This may also have relevance to third parties by influencing what reactive attitudes are warranted in response. For example, we would be justified in thinking poorly of Bob.)
In our evil demon scenario, we should want Bob to make a moral mistake, because that has better results. But we should not lose sight of the fact that Bob is indeed mistaken in his malicious desires. To make both of these claims, we need both dimensions of assessment.
Virtue and Value Agree: What Matters is Value, not Virtue
Other philosophers have a tendency to get really confused at this point, and think I’m claiming that we should encourage virtue (or fitting attitudes) even when this makes for an overall worse outcome. But that’s not my view at all. Whenever virtue conflicts with overall value, it’s overall value that matters more.
Just as it can be rational to make yourself irrational, so it can be virtuous to make yourself vicious. If circumstances call for it, the utilitarian thing to do may be to make oneself a non-utilitarian. So it’s important to be clear that when we (ethical theorists) set out to characterize a distinctively utilitarian psychology, we are not describing a goal-state. We’re characterizing what it looks like to pursue utilitarian goals in a systematically rational way. If one ceases to do this—to have utilitarian goals, or to pursue them rationally—then one ceases to be a “fitting utilitarian agent”. But that’s fine! After all, what the utilitarian ultimately wants is that the good be promoted, not that they valiantly pursue its promotion.
And I think this generalizes. Virtue is loving the good. On some views, virtue itself may be numbered among the goods, but on any sane view it can be outweighed by other goods. If the fate of the world depends upon your taking a “vice pill”, then virtue requires you to take it. (It would be viciously narcissistic—and incompatible with truly loving the good—to prioritize one’s own moral purity over all else in the world.)
Intrinsic vs Extrinsic Moral Properties
Even though virtue is not the goal (or what matters practically), it is still important to theorize about accurately, just as rationality is. It would be a mistake to insist that we can only talk about expedient attitudes and not rational/virtuous ones, or to insist that rationality/virtue is reducible to expediency. As I stress in my paper, expediency is an extrinsic form of evaluation, contingent on one’s external circumstances. Change the circumstances, and you change what’s expedient. (Even outright malice can be expedient in extreme circumstances, as we’ve seen.) But rationality and virtue (or fittingness more generally) are intrinsic features of a psychology. Any intrinsic duplicate of a rational or virtuous agent is themselves (equally) rational or virtuous, even if they have the misfortune of ending up in strange circumstances where rationality and virtue are punished.
So consequentialists need to develop suitably “intrinsic” conceptions of virtue, rational decision procedures, etc. It’s not enough to merely recommend whatever decision procedure would have the best consequences—one should of course do that too, but that merely answers the question of what decision procedure one should wish for in the circumstances. It doesn’t answer the deeper (less practical, but more theoretically interesting) question of what a rational consequentialist agent would look like.
One of my central research goals is to make progress on this neglected question. In the full paper, I briefly indicate my preferred approach (and explain how it differs from orthodox “sophisticated consequentialism”). But in this post, I’ll settle for making the case that the question (i) needs to be answered, but (ii) cannot be answered in “global consequentialist” fashion (i.e. just by evaluating everything in terms of expediency). A second dimension of moral assessment is called for.
Blameworthiness and Expedient Blame
Orthodox consequentialism’s collapse of the two moral dimensions is perhaps most egregious when it comes to talk of “blameworthiness”. Luminaries from Sidgwick (1874) to Norcross (2020) have insisted that “blameworthy” can only mean expedient to blame. I object:
But of course that isn’t what it means. We know from other judgment-sensitive attitudes that rational warrant can be distinguished from expediency. A belief is credible when it is well-supported by epistemic reasons, not practical ones. Similarly for our reactive attitudes: we can distinguish the expediency of blaming someone from the question of whether they truly merit negative reactive attitudes (perhaps for demonstrating ill will—counting some for less than one, in violation of utilitarian principle).
As such, I’m inclined to push back against the common assumption that there is anything vicious, disrespectful, or ill-willed about pushing someone in front of a trolley, if the act was done from entirely beneficent motivations (to save five other, equally irreplaceable individuals).
If it would turn out to do more good to blame the agent anyway, utilitarians might be tempted to call this an act of “blameworthy right-doing”. But this would be misleading. It might be blame-able right-doing: a right act of a type we have practical reason to discourage through social sanctions such as blame. But the agent, being (ex hypothesi) pure of heart, does not truly merit negative reactive attitudes. (Don’t let that stop you though, if expressing unwarranted attitudes would save innocent lives or otherwise do more good!)
Conclusion
I hope that, in future, more consequentialists will join me in distinguishing these two moral dimensions, and enjoy the greater conceptual clarity that results. (It’s also crucial for successfully addressing, rather than just talking past, character-based or “fittingness” objections to the theory.)
For more than you ever wanted to know about conceptualizing consequentialism, check out my full paper, ‘Consequentialism: Core and Expansion’ (recommended for grad students and above). Or, for a more concise introduction, better suited to non-specialists, check out Chapter 2: Elements and Types of Utilitarianism, at utilitarianism.net.
Comments
It seems like there are useful analogies to the blameworthiness/blame-ability distinction in several nonmoral domains. For instance, a human move in a strategy game can simultaneously have a clear theoretical status as a "bad move", and also be the move an omniscient observer aligned with the agent would recommend, knowing the subsequent game-losing error it would cause the opponent to make. Relative to the same goal (winning the game, or maximizing winning chances), these are two highly related but separable notions of instrumental goodness.
Fascinating discussion. I think I'm still a bit puzzled about how a utilitarian can adopt a "fittingness" account of blameworthiness. You rightly see (and accept) that this implies that someone who kills the innocent person with the right utilitarian motive in the trolley case is not blameworthy (though possibly "blame-able"). The other direction may be more troubling: Won't every act that I perform with a motive that is out of harmony with utilitarianism--for example, acts that benefit my friends and loved ones, done from a motive of caring about them more--be blameworthy (though perhaps not blame-able) on this view?
If all such acts from non-utilitarian motives are blameworthy, how will a virtuous utilitarian agent relate to this fact? The agent, on the one hand, will see that she is blameworthy, say, when she acts out of love for her child; but she may also see that it is morally right for her to cultivate in herself the tendency to act exactly that way, from exactly that motive, in that situation. So she doesn't feel any moral guilt, nor does she think that she should; but she still sees that she is blameworthy? I see what this "blameworthiness" points to--namely, the way in which her partial motive is inaccurate or incorrect in its basic relation to the impartial good--but I'm not sure it's the right term to capture that incongruity.