Naïve Instrumentalism vs Principled Proceduralism
Not your standard consequentialism-deontology distinction
Naïve Instrumentalists are practically unconstrained in pursuit of their moral or political goals. If it seems to them, just based on the immediately legible evidence, that violence or deception would advance their goals, they won’t hesitate to act accordingly.
Principled Proceduralists, by contrast, allow their instrumental pursuits to be practically constrained by rules, principles, or procedures that promote co-operation and limit downside risk (including the risk of escalating conflict) in a way that can be appreciated somewhat independently of their particular beliefs or commitments.
Now, if all you know about a person is that they’re either a naïve instrumentalist or a principled proceduralist, which option would you expect to be better for the world? Which do you take to be recommended by consequentialism? There’s a funny tradition of objecting to consequentialism by offering different answers to these two questions, which seems pretty incoherent.
Many people seem to associate consequentialism with naïve instrumentalism. I’ve always found this ironic, because consequentialist philosophers more than anyone else have written at length about why naïve instrumentalism is bad and irrational. (In short: it neglects higher-order evidence. Given familiar human biases, we have strong higher-order evidence that our first-order judgments on certain topics are less reliable than just sticking to “tried and true” moral rules. That is, following the rules actually has higher expected value, all things considered.)
Moreover, as Scott Alexander points out in ‘Less Utilitarian Than Thou’, non-utilitarians often seem far more open in practice to common forms of naïve instrumentalism, i.e. doing bad things for a putative “greater good” (typically, advancing their political ideology). Arch-liberal J.S. Mill was no aberration: a principled concern for the impartial good fits very naturally with liberal proceduralist commitments.
So I think the standard narrative here is quite badly confused. It may help to separately step through how we should think about (i) being principled; and (ii) what consequentialism really claims. There seem to be confusions in common ways of thinking about both.
Two Conceptions of Principle
What does it take to be a principled defender of, say, free speech? Distinguish two very different answers:
(1) To robustly support practical norms of free speech—that is, without pausing to assess, in any given case, whether you personally approve of what is being said.
(2) To hold the deontological theoretical belief that free speech norms have a non-instrumental justification.
These are importantly different, because you could robustly support free speech and inquiry on the (Millian) instrumental grounds that these norms seem more conducive to moral progress and overall well-being than any realistic alternative.1
I’d say the first answer—having a robust practical commitment to free speech—gets at what is practically important. We can always ask further, secondary questions about the basis of one’s principled commitment: whether it’s ultimately instrumental or non-instrumental, for example. But there’s little reason for non-theorists to care about this further, purely theoretical matter. Illustrating this, it would be very strange to deny that J.S. Mill was a principled defender of free speech, as the second (excessively theoretical) conception does.
I always think of this when people complain that utilitarians have “no principled objection” to slavery. Do they not think that slavery is robustly detrimental to human well-being? Do they not think that there’s anything principled about robustly opposing practices that are so harmful? Perhaps one can imagine an absurd scenario involving “happy slaves” to which the usual utilitarian objection would no longer apply.2 But it’s awfully misleading to infer from this that we have “no principled objection” to real-world slavery. You might as well claim that commonsense morality, in allowing a hypothetical surgery technique involving nanobot bullets, has no principled objection to shooting people. There is a true thought somewhere in this vicinity,3 but unless it is very carefully explained, that probably isn’t the thought that will actually get communicated to the typical reader. It certainly shouldn’t be our default way of talking about “principled” objections and commitments in applied ethics.
Two Conceptions of Consequentialism
When non-consequentialists think about consequentialism, they focus on its putative account of right action (“an act is right if and only if it maximizes (expected) value”). Many then implicitly assume naïve instrumentalism, and so infer that a rational consequentialist agent would go about blindly following their first-pass expected-value calculations.
This is really daft. But to fully grasp the error here, it helps to get clearer on some fundamentals of ethical meta-theory (i.e., theorizing about ethical theory).
As I explain in ‘Ethical Theory and Practice’, ethical theories are in the business of telling us what fundamentally matters. (The consequentialist answer is various good things—presumably including well-being—and that’s all: no special moralizing or treating life or agency as sacred.)
To get practical advice, the account of what matters (i.e. the morally correct goals or concerns) needs to be combined with an account of instrumental rationality (i.e. how an agent should seek to achieve the correct goals).
This latter point is broadly under-theorized. Decision theory provides a kind of ideal theory of instrumental rationality, applicable to cognitively unlimited and unbiased angels, perhaps. But I trust that nobody really thinks it is instrumentally rational for humans to go around constantly calculating expected utilities. (That is to say: we all recognize that naïve instrumentalism is irrational.) Humans are non-ideal agents, and accordingly require a non-ideal theory of instrumental rationality—a theory that’s fit for human-sized minds. I develop a rough picture of what I think this would look like in section 5 of my 2019 paper, ‘Fittingness Objections to Consequentialism’ (drawing especially on Pettit & Brennan’s brilliant 1986 ‘Restrictive Consequentialism’, along with general insights from the heuristics and biases literature). A simplified version is offered in the practical ethics chapter of utilitarianism.net.
But the main thing I want to emphasize for now is where naïve instrumentalism would enter the picture. It isn’t part of the core consequentialist moral theory, specifying what matters. Rather, naïve instrumentalism is a false theory of instrumental rationality that critics ignorantly associate with consequentialism.
Remember this the next time you see someone reference “naïve utilitarianism”. Remember, especially, that the “naïveté” is entirely orthogonal to the “utilitarianism”. Naïve instrumentalism is a false theory of instrumental rationality. Utilitarianism specifies that the moral goal is to maximize well-being. It’s possible to combine these two entirely separate views, and the result will be bad. But there’s no particular philosophical impetus to combine them. It’s not an especially “natural” combination of views, except in the brute psychological sense that many misguided individuals happen to believe that they go together.
If more people read this post, hopefully that brute error can be further reduced.
1. Indeed, this seems like the best justification for them. Naïve radicals are surely right that it would be objectionable to prioritize merely procedural justice over substantive justice. The only truly reasonable basis for proceduralism is faith that this is the most effective means to securing substantive justice in the long run.
2. Though a welfare objectivist might still think that a lack of autonomy makes one worse off. Conversely, does anyone really think that there is no conceivable situation in which the usual moral valence could be flipped?
3. For many bad things, there is a deeper explanation of why they are bad. Whenever something’s badness admits of such a deeper explanation, you might say that it is not itself among the “fundamental” bads, and so it should be possible to imagine a weird case in which a thing of this kind lacks the deeper features that usually make it bad, and so is not bad at all. Low decouplers confuse this theoretical claim with the practical claim that you’re not robustly opposed to actual things of this kind, or that you don’t really regard them as (even derivatively) bad at all.
Richard, thanks for this very insightful post. I'm thinking about working on related topics for part of my dissertation, so I had some thoughts I wanted to run by you.
To begin with, rather than being false, isn't naive instrumentalism in fact the true theory of instrumental rationality if anything is? That is, naive instrumentalism describes how an ideally rational agent reasons. Not only that, but this fact is fairly easy to appreciate. What is hard to appreciate is that humans ought not to follow the true theory of instrumental rationality because humans are not ideally rational. What I find interesting is that philosophers in the consequentialist tradition have (as you observe) been the ones to appreciate this most clearly, whereas non-consequentialists often assume that knowing the ideal moral goals is sufficient to enable a good-willed, naively-instrumental person to be moral. However, as you say, this in fact has nothing to do with core moral differences between consequentialism and non-consequentialism, but is in fact a dispute about practical rationality - so what explains this difference?
One possible explanation is that as a substantive matter, non-consequentialists tend to believe that the ideal moral goals are ones which a good-willed, morally knowledgeable human being will do best at following by being naively instrumental. For example, the Rossian prima facie duties seem to be like this - or at least Ross and his followers seem to believe that they are.
Another possibility is that non-consequentialists believe (perhaps implicitly) that lowering the standards of morality or rationality in response to human imperfection is morally unjustified and/or tends to make us worse people because it removes a source of (internal and external) pressure for self-improvement. I don't care much for the in-principle objection, but I do think that the second point has been neglected in the consequentialist tradition, whereas (e.g.) virtue ethicists have always been impressed by it. It happens to be true about human beings, though it isn't true in the abstract, that we are habit-forming agents, and the choices we make now shape the way we make future choices. To this extent, there's pressure on the non-ideal theory of rationality to strike a balance between accommodating and correcting human imperfection.
What I am now thinking about is the prospects for an ecumenical (i.e. theory-neutral) synthesis of these ideas. Since these questions about rationality are properly independent from disputes over what you call the core content of morality, this seems reasonably achievable. But I may have overlooked some reasons for pessimism here. For example, one reason to be pessimistic is that it's really true that for some non-consequentialist theories, naively instrumental pursuit of the ideal moral goals is as good as anything else human beings could manage.
(Let's use Rossian deontology as a case study. The duty of beneficence seems to be an obvious candidate for one that naively instrumental humans would fail to successfully discharge; however, given the relatively lax demands of the duty of beneficence as Ross conceives of it this may not in fact be true. (After all, the types of actions commonsensically associated with beneficence, like giving money to the homeless and volunteering at soup kitchens, are genuinely beneficent! They just often fail to be effectively beneficent.) Insofar as the duties of non-maleficence, fidelity, reparations, and gratitude are limited to face-to-face human relations, I think naively-instrumental, good-willed human beings probably do as well as any alternative. The question is whether it is plausible, by Rossian lights, to have such a restrictive understanding of the scope of these duties. If we understood, for instance, the question of what to do about various forms of social injustice as falling within the scope of the duty of reparations, then I think there's a strong case to be made that naively-instrumental, good-willed human beings often go wrong and would do better by being principled proceduralists.)
I think this distinction sounds like it's missing the point.
The implicit ethics question is "how should people reason when it comes to moral questions?" So if you say that you are a utilitarian but you don't reason in a utilitarian way, then you seem to have changed the target of the conversation.