The Utilitarian Tradition is Conceptually Stunted
There are normative concepts besides goodness
It’s no secret that I like (something close enough to) utilitarianism as a moral theory. But there’s one area where I think the utilitarian tradition really falls short, which is that it tends not to acknowledge the full range of normative concepts.
As we’ll see, I don’t think there’s any good reason for this: it’s just an oversight, and one that’s easily corrected. Of course, utilitarians are especially interested in goodness (for obvious reasons). And they’ve (mistakenly) thought that they need to invoke deontic rightness to explain their disagreement with any deontologists who are willing to grant their evaluative verdicts (about which outcomes are better than others).1 But the stubborn refusal of most utilitarians to so much as think about fitting attitudes is a really extraordinary blindspot.2 (Note: it’s possible to think about something without implying that it ought to take practical priority over impartial value.)3
[Warning: this is very much an “in the weeds” post for other moral philosophers, about how to do moral philosophy. General audiences may prefer to skip it entirely, as it’s unlikely to directly connect up to anything you care about.]
Background
My old post Consequentialism Beyond Action summarizes some key lessons from my forthcoming Oxford Handbook chapter on the topic:
We must distinguish two dimensions of moral assessment [roughly: value/desirability and warrant]. It’s important for a moral theory to have plausible things to say about both. Consequentialists have traditionally only discussed the first. My aim is to get other consequentialists to be more comfortable also discussing the second.
Read the whole post if you’re not already clear on the distinction.
Expressive Power as a Theoretical Virtue
As a general rule of philosophical methodology, we should aim to be capable of expressing every coherently thinkable thought. As I put it in Words Don’t Matter:
Words don’t matter; ideas do. Philosophical questions and concepts should be assessed for their intrinsic interest, not for how or whether they correspond to natural language terms. We should generally want to expand our expressive powers, so as to be able to consider a wide range of candidate views.
Following this methodology, my (2012) Fittingness paper argues that moral theorists should prefer a fittingness framework over the value primitivism that consequentialists have traditionally espoused. The argument is simple: any claim about value can easily be analyzed in terms of desirability (fittingness to desire), whereas many claims about fittingness can’t be analyzed in terms of value. So value talk fails to capture the full spectrum of normative claims.
In the remainder of this post, I’ll highlight two different costs of this conceptual stunting: (1) conceptual confusion, as exemplified by “global consequentialism”, and (2) failure to understand character-based objections.
Confused Debates about ‘Ought’
While this problem is by no means unique to utilitarians, moral theorists have an unfortunate tendency to get caught up in verbal disputes as a result of using ‘ought’ as a generic signifier of approval (or positive normative assessment) without sufficient care or attention to the precise nature of the assessment being offered.
There are many different kinds of positive normative assessment, which different philosophers signify by using the same word ‘ought’. When people don’t realize this, they end up in pseudo-debates where one side correctly asserts, in effect, “X is meritorious in a respect,” and another retorts, “No! Y is meritorious in a respect!” and they think they’re disagreeing with each other because they each expressed these thoughts using the shared word ‘ought’ instead of the distinct more specific normative ideas they actually have in mind. (A paradigmatic example of this may be the debate over ‘objective’ vs ‘subjective’/evidence-relative ‘oughts’. Extending ‘ought’ to non-maximal options is another. To be clear: lots of philosophers I respect engage in these debates! But I think they’re mistaken to do so, because I don’t see anything substantive for the debates to be about.)
Global Consequentialism
My 2012 paper demonstrates how the fittingness framework can help to clear up one such confused debate. In particular, it argues that “global consequentialism” is a non-starter: there’s just no logical space for consequentialists to make further true claims besides those that are already contained within act consequentialism.4 (This will take some explaining: feel free to skip if you’re not interested.)
The scope of normative theorizing is provided by the range of ‘rational outputs’ or ways agents can respond to reasons (that is, by forming judgment-sensitive attitudes and by performing actions). We assess an object as ‘good’ when it is fitting for an agent to desire it. We call an action ‘right’ when it’s fitting to (intend, choose, or) do it. And we can further assess what propositions warrant belief, what behavior warrants gratitude or resentment, what objects warrant aesthetic appreciation, and so on. Each of these domains raises substantive normative questions, and so invites normative theorizing.
By contrast, it makes no sense to offer a normative assessment that in no way calls for any sort of (even attitudinal) response. A putatively normative claim that made no sort of claim on us as agents would be normatively inert or empty. That’s why it’s nonsensical for global consequentialists to talk about the “right” eye-color (if this is supposed to mean anything more than good, i.e. desirable). Eye colors don’t respond to reasons: maybe you could act so as to change your eye color, or more generally want a change of this sort; but to assess that, we should look to your reasons for action and desire. Nothing but confusion would be gained by calling reasons for eye-color-related actions and desires “reasons for eye colors” (understood as a distinct class of reasons, besides reasons for belief, action, desire, etc.).
Similar observations can be made about belief (and other judgment-sensitive attitudes). One important difference: there are (epistemic) reasons for belief: considerations that will directly produce rational beliefs in a rational (reasons-responsive) agent. But as with eye colors, there are also indirect belief-related reasons for action and desire—e.g. to undergo brainwashing if doing so would have sufficiently good effects. Nothing but confusion is gained by calling reasons for (belief-related) actions and desires “(practical) reasons for belief”. The reason does not call upon our rational belief-forming capacities, but upon our rational capacity to want and to act to pursue good things. If you fail to follow the reason, you don’t thereby have irrational beliefs, but just undesirable ones: you may have irrational desires (if you fail to want the value-promoting beliefs) or you may have made irrational choices (if you passed up an opportunity to achieve this goal). But a merely unfortunate belief is no more rationally criticizable in itself than is an unfortunate eye color.
If the global consequentialist concurs with my verdicts about reasons and rationality, then they haven’t said anything that goes beyond act consequentialism: after starting with a general axiology (specifying what all moral agents have reason to desire, i.e. what is good),5 the only further normative claims for a direct consequentialist theory to add are ones about our reasons for action. (Note that there’s no gain to expressive power from simply repeating an old claim using new words, e.g. calling anything ‘right’ when you mean nothing more than that it is good.)
Alternatively, if the global consequentialist disputes these verdicts, and claims that useful false beliefs are rationally warranted (or fitting) in addition to being desirable, then they are making claims that are obviously false. (See above: a merely unfortunate belief is in no way rationally criticizable per se.)6
So global consequentialism is either a verbal variant of act consequentialism, or it is false. Most consequentialists don’t yet realize this. But I think the conclusion is completely inescapable once you pin down suitably precise normative concepts.
Many talk about “evaluative focal points” as though our axiology left it an open question what it could be applied to. That just seems confused to me: axiologies are inherently global, since there’s no constraint on the possible contents of desires—and, indeed, much of what global consequentialists say about evaluation is true (but trivial).7 The real question of normative structure is rather about the range of rational outputs—beliefs, desires, actions, etc.—and the associated classes of reasons. This is a normative structure that is occluded by global consequentialist rhetoric (flattening the normative differences between acts and eye colors). But it is accurately revealed by the fittingness view, which makes clear that consequentialism can be nothing but an account of our reasons for action.8
Neglected Objections
The second major cost of this conceptual stunting is that it leads utilitarians to misunderstand what their critics are objecting to. As I argue in my 2019 paper, Fittingness Objections to Consequentialism, all the classic character-based objections to consequentialism are most charitably understood as targeting the view’s implications regarding the fitting moral psychology, not the recommended one. But as far as I can tell, every other consequentialist philosopher discussing character has exclusively addressed the latter. As a result, instead of addressing the objections, they’ve changed the subject.
Consider Stocker’s alienation objection: utilitarianism (he supposes) implies that the right reason to visit your friend in hospital is that it maximizes utility. But that’s far too cold and heartless a motivation for a genuinely good person (at least as their sole motivation). So utilitarians must embrace “moral schizophrenia” or disharmony between their motivations and the normative reasons posited by their theory. He might have added: since true normative reasons constitute fitting/virtuous motivations, utilitarianism (seemingly) implies that it would be fitting/virtuous to visit your friend in hospital solely from the motivation to maximize utility. But that’s clearly false: such an abstract motivation is not ideally virtuous or fitting.
This is a powerful objection, and one that should prompt us to reflect more carefully on the reasons we take our moral theories to bestow. It’s an objection I tackle head-on in The Right Wrong-Makers (2020), by arguing that Stocker was wrong about the reasons yielded by modern ethical theories like utilitarianism. This is in stark contrast to the “canonical” utilitarian response of embracing disharmony and then just ignoring the fact that this leaves them with a completely implausible view about fitting motivations. (A problem that Stratton-Lake aptly exploits in his brilliant 2011 paper, Recalcitrant Pluralism, to further press the “motive objection” against consequentialism—though again, my 2020 paper argues that he gets the normative details wrong in a way that ultimately undermines the objection.)
Conclusion: Don’t be an ideologue
I think the utilitarian tradition has been unnecessarily weakened by its failure to sufficiently grapple with the full range of normative concepts (particularly, those relating to fitting attitudes). But it’s easily remedied, and a minor enough problem in the grand scheme of things (especially compared to the conceptual failures of other traditions, which we’ll get into in a future post).
Something I really want to stress is that there is, as far as I can tell, literally nothing to be said for resisting the improvements I’ve suggested here. Often other philosophers say things to me like, “Well, that’s no longer really consequentialism then,” which I think is both silly and irrelevant. It’s irrelevant because labels don’t matter. And it’s silly because none of the arguments for consequentialism motivate adding “and also there are no such things as epistemic reasons” to the verdicts about what acts ought to be done. You’re actually making a substantive philosophical mistake if you think the reasons to be a consequentialist (about action) give you any reason at all to resist the further claims I think we should make about fittingness, virtue, epistemic reasons, etc. (Note that the further claims are not in conflict with what I identify as ‘core consequentialism’.)
I think what’s really going on here is that (many) philosophers have fallen victim to a pernicious ideology—an unargued-for metaphysical picture something like Scanlon’s imagined “philosophical” utilitarianism. According to this familiar ideology of utilitarianism, the value facts exhaust the normative facts, and everything else that other theorists want to talk about is literal non-sense. But this ideology seems absurd: not only is it subject to clear-cut counterexamples, but again, I’m not aware of any countervailing reason to take it seriously (since none of the standard arguments for utilitarianism as a moral theory support this conceptually stunted ideology—they’re two very different things!). So I implore my colleagues to stop taking this ideology seriously and instead just pay attention to the various arguments for adopting some conclusions and not others.
There’s a lot to be said for the view that we always have most reason to do what would produce the best outcome. And it seems both accurate and convenient to label this view ‘consequentialism’. But to further insist that anyone sympathetic to this view ought (why!?) to reject all other forms of normativity is, I think, substantively mistaken—confused, even. Pending any arguments to the contrary, we may dismiss it as ideological overreach.
Though the maximizing concept most utilitarians invoke bears little resemblance to what anyone else means by ‘right’ (or ‘permissible’). Kudos to Railton for highlighting this in his brilliant 1988 paper, How Thinking about Character and Utilitarianism Might Lead to Rethinking the Character of Utilitarianism.
The repeated conflation of blameworthiness and expediency to blame, by almost every major utilitarian since Sidgwick, is especially embarrassing.
I’m constantly baffled when others fail to understand this. As explained here: “Other philosophers have a tendency to get really confused at this point, and think I’m claiming that we should encourage virtue (or fitting attitudes) even when this makes for an overall worse outcome. But that’s not my view at all. Whenever virtue conflicts with overall value, it’s overall value that matters more.”
McElwee’s (2020) The Ambitions of Consequentialism is also very good on this point.
And, we might add, the general consequentialist principle that there are no deontological reasons for desire that override the general reasons for desire specified in our axiology.
Could an indirect consequentialist account of fitting attitudes be more plausible? You might claim, for example, that people truly warrant blame just when the consequentially best rules for blaming would tell us to blame them. But this seems just as prone to obvious counterexamples as the direct account. (We could easily make optimal actions “blameworthy”, for example.) Moreover, I don’t see any positive motivation or argument for adopting a consequentialist account of fitting attitudes. There’s plenty to be said for act consequentialism. But what’s the case for consequentialism about fitting attitudes? The view just seems transparently silly.
Except, of course, for their “meta” claims, such as that “act consequentialists only assess actions.” The truth is rather that act consequentialists only add to their global axiology a further claim about reasons for action—and aptly so, because action is the only rational output that is successfully rationalized by its own value. (Compare: a transparently valuable action is thereby a rational action, whereas a transparently valuable belief is not thereby a rational belief.)
If your only normative concept is that of value, by contrast, you can’t add any further claims at all. “The value-maximizing X is the best X” is not a substantive normative claim but a mere tautology, given that ‘best’ in this context means nothing but ‘value-maximizing’. And merely possessing an axiology is already sufficient for evaluating anything at all: no further normative claims are needed, or even helpful.
Together with, again, the negative claim about reasons for desire that there are no other (e.g. deontological) reasons to override those specified in our axiology.