I’m struck by how often two theoretical mistakes manage to (mostly) cancel each other out. For example, I think that common sense ethical norms tend to do a pretty good job in practice (albeit with significant room for improvement), while resting upon significant theoretical falsehoods. These falsehoods may be part of a “local maximum”: if you corrected them, without making further corrections elsewhere, you could well end up with morally worse beliefs and practices.
This observation forms the kernel of truth in the claim that utilitarianism is self-effacing. Utilitarianism is not strictly self-effacing: I still expect that the global maximum may be achieved by having entirely true moral beliefs (or a close enough approximation).1 But most people are stubbornly irrational in various ways, which may make it better for them to have false beliefs of a sort that limits the damage done by their other irrationality. These paired mistakes then constitute a protective equilibrium that stops such people from veering off into severe practical error (such as naive utilitarianism).
It’s important to note that these paired mistakes are not the only protective equilibria available. The corresponding paired truths also work! But a little knowledge is a dangerous thing: you don’t want people to end up in the situation of knowing enough to see through the illusory guardrails, but not enough to navigate successfully without the illusion.
In this post, I’ll suggest a few examples of such “paired mistakes”:
Using “collectivist” reasoning as a fudge to compensate for irrational views about individual efficacy.
Using neartermism as a fudge to compensate for irrational cluelessness about the long term.
Ignoring small probabilities as a fudge against Pascalian gullibility.
Using deontology as a fudge to compensate for irrational naive instrumentalism.
Tabooing inegalitarian empirical beliefs as a fudge for irrational (and unethical) essentializing of social groups.
Viewing all procreative decisions as equally good, as a fudge against unethical coercive interference.2
Further suggestions welcome!
1. Inefficacy and Anti-individualism
Many people have false views about individual efficacy and expected value (see my Five Fallacies of Collective Harm) that lead them to underestimate the strength of our individualistic moral reasons to contribute to collective goods (like voting for the better candidate) and to reduce our contributions to collective bads (like pollution and environmental damage—or voting for the worse candidate, for that matter).
If you make this mistake, it would be good to also make the paired mistake of believing that you have collectivistic moral reasons based on group contributions. There are no such (non-negligible) reasons, as I prove in ‘Valuing Unnecessary Causal Contributions’. But the false belief that there are such reasons can help motivate you to do as you ought, when you’re too confused about inefficacy to be able to get the practical verdicts right for the right reasons.
Conversely: if you correctly understand why collectivist reasons are such a silly idea, it’s very important that you also appreciate why there often are sufficient individualistic moral reasons to contribute to good things even when the chance of your act making a difference is very small. (Remember that All Probabilities Matter!)
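To illustrate the expected-value point with arithmetic, here is a minimal sketch (in Python); the figures for the probability of being decisive and the value at stake are purely hypothetical, chosen only to show the structure of the calculation:

```python
# Hypothetical illustration: even a tiny chance of being decisive can yield a
# non-trivial expected value, because the payoff scales with everyone affected.

p_decisive = 1e-7          # assumed chance that your vote decides the election
value_if_decisive = 5e7    # assumed net benefit (in arbitrary "value units") of the
                           # better candidate winning, summed over everyone affected

expected_value = p_decisive * value_if_decisive
print(expected_value)      # 5.0 -- a small probability, but far from negligible in expectation
```

The particular figures are invented; the structural point is that when the payoff scales with the number of people affected, a tiny probability of making a difference need not make the expected value negligible.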
2. Cluelessness and Anti-longtermism
Some people falsely believe that we cannot justifiably regard anything (even preventing nuclear war!) as having long-term positive expected value. I’ve previously argued that such cluelessness is less than perfectly rational, though it may itself be a useful protection against some forms of “naive instrumentalist” irrationality (see #4 below).
Still, if you make this mistake, it would be good to pair it with anti-longtermism, so you avoid decision paralysis and continue to do some good things—like trying to prevent nuclear war—albeit in partial ignorance of just how good these things are.
3. Pascalian Gullibility and Probability Neglect
Another form of misguided prior involves “Pascalian gullibility”: giving greater-than-infinitesimal credence to claims that unbounded value depends upon your satisfying another’s whims (e.g. their demand for your wallet)—yielding a high “expected value” to blind compliance.
If you are disposed to make this mistake, it would be good to pair it with another—namely, the disposition to simply ignore any sufficiently small probabilities, effectively rounding the Pascalian mugger’s threat down to the “zero” it really ought to have been all along. But this latter disposition is itself a kind of mistake (at least when dealing with better-grounded probabilities), as explained in my recent post: All Probabilities Matter. So it might be especially important to correct this pair.
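To make the contrast vivid, here is a minimal sketch (in Python) of a blanket “ignore sufficiently small probabilities” rule; the threshold and the example numbers are purely hypothetical assumptions of mine, not anything from the post:

```python
# A crude decision rule that rounds any sufficiently small probability down to zero.
THRESHOLD = 1e-6  # assumed cutoff below which probabilities are simply ignored

def expected_value(p, value, threshold=THRESHOLD):
    """Expected value, except that probabilities below the threshold are treated as zero."""
    return 0.0 if p < threshold else p * value

# Pascal's mugger: astronomically tiny credence, astronomically large promised payoff.
print(expected_value(1e-20, 1e15))   # 0.0 -- the mugging is neutralized by the cutoff

# Better-grounded small probabilities (invented numbers):
print(expected_value(1e-4, 1e6))     # 100.0 -- above the cutoff, handled normally
print(expected_value(1e-7, 1e9))     # 0.0 -- but here the rule wrongly discards real expected value
```

The last case is the worry: the same cutoff that correctly neutralizes the mugger also zeroes out a better-grounded small probability that carries real expected value, which is why probability neglect remains a mistake worth eventually correcting (alongside its Pascalian pair).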
4. Naive Instrumentalism and Anti-consequentialism
Many people (from academic censors to those who think that utilitarianism would actually justify Sam Bankman-Fried’s crimes)3 seem drawn to naive instrumentalism: the assumption that one’s moral goals are apt to be better achieved via Machiavellian means than by pursuing them with honesty and integrity, constraining one’s behaviour by tried-and-tested norms and virtues. Like most (all?) historical utilitarians, I reject naive instrumentalism as hubristic and incompatible with all we know of human fallibility and biased cognition. (See here for more on what sort of decision procedure I take to be rationally superior.)
Still, if you are—abhorrently—a naive instrumentalist, you’d best pair it with non-consequentialism to at least limit the damage your irrationality might otherwise cause!
5. Social Essentialism and Tabooed Empirical Inquiry
Most people are terrible at statistical thinking. As Sarah-Jane Leslie explains in ‘The Original Sin of Cognition: Fear, Prejudice, and Generalization’, people are natural “essentialists”, prone to generalize “striking [i.e. threatening] properties” to entire groups based on even a tiny proportion of actual threats. (She compares the generics “Muslims are terrorists” with “mosquitos carry the West Nile virus”.)
If you’re bad at thinking about statistical differences, and prone to draw unwarranted (and harmful) inferences about individuals on this basis, then it might be best for you to also believe that any sort of inquiry into group differences is taboo and morally suspect. You should just take it on faith that all groups are inherently equal, if anything more nuanced would corrupt you.4
But of course there’s no reason that any empirical possibility should prove morally corrupting to a clear thinker (rare though the latter may be). As I noted previously: “Just as opposition to homophobia shouldn’t be contingent on the (rhetorically useful but morally irrelevant) empirical claim that sexual orientation is innate, so our opposition to racial discrimination shouldn’t be contingent on empirical assumptions about genetics, IQ, or anything else.”5 Group-level statistics just aren’t that relevant to how we should treat individuals, about whom we can easily obtain much more reliable evidence by directly assessing them on their own merits.
6. Illiberalism and Procreative Neutrality
Naive instrumentalists assume that illiberal coercion is often the best way to achieve moral goals. As a result, they imagine that pro-natalist longtermism must be a threat to reproductive rights (and to procreative liberty more generally).
I think this is silly because illiberalism is so obviously suboptimal. There’s just no excuse to resort to coercion when incentives work better (by allowing individuals to take distinctive features of their situation into account).
But for all the illiberal naive instrumentalists out there, perhaps it is best if they also mistakenly believe in procreative neutrality—i.e., the claim that there are no reasons of beneficence to bring more good lives into existence.
Should we lie?
Probably depends on your audience! I’m certainly not going to, because I’m committed to intellectual honesty, and I trust that my readers aren’t stupid. Plus, it’s dangerous for the lies to be too widespread: plenty of smart people are going to recognize the in-principle shortcomings of collectivism, neartermism, probability neglect, deontology, moralizing empirical inquiry, and procreative neutrality. We shouldn’t want such people to think that this commits them in practice to free riding, decision paralysis, Pascalian gullibility, naive instrumentalism, social essentialism, or procreative illiberalism. That would be both harmful and illogical.
So I think it’s worth making clear (i) that these pairs are (plausibly) mistakes, but (ii) it could be even worse to correct only one member of a pair, since together they form a protective equilibrium. To avoid bad outcomes, you should try to move straight from one protective equilibrium to another, avoiding the shortcomings of just “a little knowledge”.
We should typically expect the accurate protective equilibrium to be practically superior to the thoroughly false one, since accurate beliefs do tend to be useful (with rare exceptions that one would need to make a case for). But if you don’t think you can manage to make it all the way to the correct pairing, maybe best to stick with the old fudge for now! I’ll continue to try to clear the path,6 so it’s easier to see things right in future.
1. E.g., although I’m (like everyone) probably wrong about some things, I’m confident enough about the broad contours of my moral theory. And I’m not aware of any reason to think that any alternative broad moral outlook would be more beneficial in practice than the sort of view I defend. The only real danger I see is if people only go part way towards my view, miss out on the protective equilibrium that the full view offers, and instead end up in a “local minimum” for practicality. That would be bad. And maybe it would be difficult for some to make it all the way to my view, in which case it could be bad for them to attempt it. But that’s very different from saying that the view itself is bad.
2. I added this one after initial posting, thanks to Dan G.’s helpful comment on the public Facebook thread suggesting a general schema for paired mistakes involving (i) openness to wrongful coercion and (ii) mistakenly judging all options to be on a par.
3. I think it’s interesting, and probably not a coincidence, that people with naive instrumentalist empirical beliefs are overwhelmingly not consequentialists. (A possible explanation: commitment to actually do what’s expectably best creates stronger incentives to think carefully and actually get the answer right, compared to critics whose main motivation may just be to make the view in question look bad. Alternatively, the difference may partly lie in selection effects: consequentialism may look more plausible to those who share my empirical belief that it typically prohibits intuitively “vicious” actions. Though it’s striking that the censors actually endorse their short-sighted censorship. I’m not really sure how to explain why their empirical beliefs differ so systematically from those of free-speech-loving consequentialists.)
4. I should stress that the “mistake” I’m attributing here is the taboo itself, not the resulting egalitarian beliefs. Due to the taboo, I have no idea what the first-order truth of the matter is. Maybe progressive dogma is 100% correct; it’s just that, for standard Millian reasons, we cannot really trust this in the absence of free and open inquiry into the matter. Still, if you would be corrupted by any result other than progressive orthodoxy, then it would also seem best to just take that on faith and not inquire any further. But the central error here, I want to suggest, is the susceptibility to corruption in the first place. That just seems really stupid.
5. I always worry about people who think there’s such a thing as an inherently “racist” (empirical) belief. Like, suppose we’re unpleasantly surprised, and the empirical claims in question turn out to be true. (Philosophers have imagined stranger things.) Are you suddenly going to turn into a racist? I’d hope not! But then you shouldn’t think that any mere empirical contingency of this sort entails racism. Obviously we should be morally decent, and treat individuals as individuals, no matter what turns out to be the case as far as mere group statistics are concerned. The latter simply don’t matter to how we ought to treat people, and everyone ought to appreciate this.
Of course, conventionally “racist beliefs” may be (defeasible) evidence of racism, in the sense that the belief in question isn’t evidentially supported, but appeals to racists. After all, if the only reason to believe something is wishful thinking, and it wouldn’t be worth wishing for unless you were racist, then the belief is evidence of racism. But this reasoning doesn’t apply to more agnostic attitudes, because taboos prevent us from knowing what is actually evidentially supported: we know that people would say the same thing, for well-intentioned ideological reasons, no matter what the truth of the matter was. (Naive instrumentalism strikes again.)
6. At least on topics 1–4 and 6, all of which I’ve worked on a fair bit. The fifth I’ll probably continue to mostly avoid, since the prospects for widespread clear thinking on such heated topics seem less hopeful; I just mentioned it above as an example, for completeness.
Regarding 4, I suspect that the concentration of consequentialists amongst people who reject the claim that “consequentialism can reasonably imply naive instrumentalism” is evidence that most people are more committed to non-viciousness than to any underlying moral theory. I think it’s highly likely that (most) people who think consequentialism implies naive instrumentalism are less likely to be consequentialists as a direct result, for example. (Perhaps not all, however. I can imagine a person becoming enamoured of a story about themselves in which they have “the guts to make the hard choices” and being attracted to the consequentialism-plus-Machiavellianism pair accordingly.)
It may be related to this that atheists are considerably less likely than theists to believe that belief in God is necessary to provide a basis for morality. People who believe that atheism would remove their motivation for moral behaviour are often strongly motivated to hold onto their belief in God. One fascinating thing about this is that it is possible that some of these people are right about themselves! Ross Douthat, for example, once remarked that “If you dislike the religious right, wait till you meet the post-religious right.” Damon Linker discusses that comment here and thinks that it’s on to something: https://damonlinker.substack.com/p/how-the-religious-right-lost-while
Do you think you have any moral commitments that are prior to consequentialism, for you? I’m intrigued by your comment that you are “committed to intellectual honesty,” for example. It’s easy to make consequentialist arguments in favour of intellectual honesty, so it’s not that this would necessarily contradict your consequentialist views, but you talk about it like it’s foundational. If it is, I approve; I have similar feelings!
If people had unlimited computational power and speed, the accurate beliefs would surely be better. But many of these seem like the kinds of heuristics that lead us closer to the truth in individual cases even though they are generally false. (For instance, I think ignoring small probabilities likely protects us against many errors, not just Pascalian gullibility, because the only cases where small probabilities matter are ones in which small errors caused by anything can easily change the sign of the result.)