9 Comments

Regarding 4, I suspect that the concentration of consequentialists amongst people who reject “consequentialism can reasonably imply naive instrumentalism” is evidence that most people are more committed to non-viciousness than to any underlying moral theory. I think it’s highly likely that (most) people who think consequentialism implies naive instrumentalism are less likely to be consequentialists as a direct result of this, for example. (Perhaps not all, however. I can imagine a person becoming enamoured of a story about themselves in which they have “the guts to make the hard choices” and being attracted to the consequentialism-plus-Machiavellianism pair accordingly.)

It may be related to this that atheists are considerably less likely than theists to believe that belief in God is necessary to provide a basis for morality. People who believe that atheism would remove their motivation for moral behaviour are often strongly motivated to hold onto their belief in God. One fascinating thing about this is that it is possible that some of these people are right about themselves! Ross Douthat, for example, once remarked that “If you dislike the religious right, wait till you meet the post-religious right.” Damon Linker discusses that comment here and thinks that it’s on to something: https://damonlinker.substack.com/p/how-the-religious-right-lost-while

Do you think you have any moral commitments that are prior to consequentialism, for you? I’m intrigued by your comment that you are “committed to intellectual honesty,” for example. It’s easy to make consequentialist arguments in favour of intellectual honesty, so it’s not that this would necessarily contradict your consequentialist views, but you talk about it like it’s foundational. If it is, I approve; I have similar feelings!


Yeah, that seems plausible.

And yes, I do feel more personally pulled towards truth-seeking and intellectual honesty than general value-promotion. So I guess you could say that that's more *psychologically* foundational for me. I wouldn't necessarily endorse it as philosophically more foundational though: it seems easy enough to imagine cases in which truth-seeking and honesty would be harmful, and not worth prioritizing, however difficult I'd personally find it to be in such a situation.


If people had unlimited computational power and speed, accurate beliefs would surely be better. But many of these seem like the kinds of heuristics that lead us closer to the truth in individual cases even though they are, as general principles, false. (For instance, I think ignoring small probabilities likely protects us against many errors, not just Pascalian gullibility, because the only cases where small probabilities matter are ones in which small errors caused by anything can easily change the sign of the result.)


It seems better to combine accurate theoretical beliefs with full appreciation of the heuristics, since there could be special cases when we can recognize that the heuristics would mislead us.


Funniest example of this that I've heard: a friend's teacher explained, in elementary school, that the gambler's fallacy was a fallacy. She explained that though a person who has been losing a lot is due for a win, a person who has been winning a lot is on a hot streak, so those effects cancel out. :).
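The teacher's "cancellation" story is easy to check directly: for a fair coin, the chance of winning the next flip is the same whether or not you've just lost several in a row. A minimal simulation sketch (the function name and parameters are illustrative, not from the original):

```python
import random

def win_rate_after_losing_streak(trials=200_000, streak_len=3, seed=0):
    """Estimate P(win on the next flip) conditioned on having just
    lost `streak_len` or more fair coin flips in a row."""
    rng = random.Random(seed)
    wins_after_streak = 0
    observations = 0
    losses_in_a_row = 0
    for _ in range(trials):
        win = rng.random() < 0.5  # fair coin
        if losses_in_a_row >= streak_len:
            observations += 1
            if win:
                wins_after_streak += 1
        losses_in_a_row = 0 if win else losses_in_a_row + 1
    return wins_after_streak / observations

print(win_rate_after_losing_streak())  # ~0.5: a losing streak doesn't change the odds
```

The estimate comes out near 0.5: neither being "due for a win" nor the "hot streak" effect exists, so there is nothing to cancel.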


Really excellent! I’ve long believed something very similar, especially “pair 4”. For people without a strong econ background and an adequate knowledge of (especially) 20th-century history, it’s just so tempting, upon deciding consequentialism is correct, to immediately assume that the best way forward is straight-up communism. (This described me when I was a teenager.) I don’t believe any sort of non-aggression principle is literally *the* moral truth, but how many countless lives could have been saved if ruling elites in places like Russia, China, and Germany (or those who came to be the ruling elites, that is) had been brainwashed to believe from an early age in such a principle? 😅


I'm not knowledgeable about the details of SBF, but (as I've said before), I think it's very naive to think that, once you take into account human biases, it becomes immediately clear that utilitarianism doesn't recommend things like stealing from the rich to give to the poor, etc.

At least for big decisions, I think utilitarians must decide such things on a case-by-case basis. Biases should definitely be accounted for, but still, utilitarians must remain ever vigilant, ready to strike ruthlessly in exceptional (but by no means fantastical) situations where other considerations are overriding.


It surely just depends on whether such "readiness to strike ruthlessly" is a disposition that itself has positive or negative expected value. There's probably an optimal level of "readiness" to break generally-reliable rules which is (i) lower than most people naively expect, and yet (ii) non-zero. If I'm right about (i), then it seems wise to encourage most people to lower their level of naive instrumentalism.


Ok. However colorful, I shouldn't have used that phrase.

This gets into meta-dispositions, meta-meta-dispositions ... Russell's paradox ... and type theory. I guess my point is just that these things are complicated.
