Utilitarianism
Bernard Williams notoriously predicted that “utilitarianism's fate is to usher itself from the scene.” The dubious claim that utilitarianism is self-effacing (i.e., that it recommends against its own acceptance and promotion) is often thought to follow from the more credible claim that constantly engaging in crude calculation would be counterproductive. But of course self-effacingness doesn’t follow from this, because belief in utilitarianism opposes, rather than requires, engaging in counterproductive behaviours like constant calculation.
Notably, prominent utilitarians across the generations (from Bentham and Mill to Peter Singer to Toby Ord and Will MacAskill) have, I think pretty obviously, done immense good. So I think there’s basically zero chance that utilitarianism is strictly self-effacing. On the contrary, I think it would be obviously good if more people followed in the footsteps of these wonderful individuals.
Realistically, no human being is going to look much like the Platonic form of an impartial utilitarian. But belief in the theory can at least serve to nudge us in the right direction. In reality, the practical effect of belief in utilitarianism is not neglecting one’s family or pushing fat men in front of trolleys (seriously, who does that?), but just stuff like giving more to especially effective charities and otherwise seeking to improve the world with one’s marginal uses of time and money. (As I’ve previously argued, decent non-utilitarians should of course agree with utilitarians on this — we should all embrace beneficentrism — but for unknown reasons, proportionately fewer non-utilitarians seem to actually prioritize beneficence in this way.)
Anti-utilitarianism
Now consider: on what moral view would you not want others to do more good? (You might not want to make altruistic sacrifices yourself, but isn’t it plainly good, from your perspective, for others to do so?) Given that beneficentrism is so obviously desirable, and that beneficentrism closely correlates with utilitarianism in practice, it seems that everyone ought to want utilitarianism to be more widely accepted.
This strikes me as a pretty curious result. Considerations of self-effacement don’t speak to the question of which moral theory is true, of course. So I don’t take this to be any sort of argument against non-utilitarian views. But it may be an argument against vocally advocating for less beneficent views in public spaces, or (say) blasting out anti-utilitarian screeds on Twitter. (Note that I’m definitely not recommending engaging in deceptive teaching or research. Indeed, I’m not recommending deception at all. But there’s no obligation to publicly broadcast everything you believe, especially in cases where you’ve reason to expect that broadcasting a belief would be harmful. So it at least seems a legitimate question whether broadcasting anti-beneficent messaging is really a good idea.)
Philosophers’ attitudes
I know many academic philosophers, in particular, have a weirdly negative view of utilitarianism. (Like, some hate it with a passion.) I’m not really sure why this is, but I’d like to encourage them to reconsider. I think some of it, at least, stems from misunderstanding the view, which is why a lot of my research focuses on trying to address those misunderstandings and present the view in a more appropriately sympathetic light. Some people may have a quasi-aesthetic aversion to (their conception of) the utilitarian perspective. They may believe that utilitarianism neglects some important normative insights, and may find this aggravating. (Philosophers are easily aggravated by what they believe to be philosophical mistakes.)
On other days, I would try to convince anti-utilitarian philosophers that they’re wrong on the merits. But today, let’s grant them the truth of their own view, for sake of argument. Still, however aesthetically aggravating it might seem for others to not adequately appreciate the significance of the personal perspective (or whatever), don’t you agree that it’s objectively more important to save more innocent lives? And if so, shouldn’t that maybe temper your frustration with views that encourage others to do more of this more important thing?
All in all, I think there’s a surprisingly strong case to be made that it’s non-utilitarian views that ought to “usher themselves from the scene”—or at least from the public sphere. Perhaps “government house deontology” can continue to be debated in philosophy seminar rooms. But if it’s really true that utilitarianism, as a public philosophy, would do more good (without actually violating rights etc.), then shouldn’t even deontologists prefer to see it reign supreme? I don’t mean that they should lie in order to promote this desirable result—obviously they could still regard lying as wrong. But even to acknowledge widespread acceptance of utilitarianism as a desirable result would, I think, mark a striking change from how most currently think about it. (And it at least raises tricky questions about the moral advisability of anti-utilitarian public philosophy.)
<blockquote>objectively more important to save more innocent lives?</blockquote>
Whether X saves more lives than Y is an objective question. Whether those lives are innocent is a moral question. Whether something is important is a subjective question, or arguably an intersubjective one. So this statement is not obvious.
Edit:
<blockquote>acknowledge widespread acceptance of utilitarianism as a desirable result</blockquote>
The post has made a good case that acceptance of utilitarianism is not obviously objectionable, since some who accept it have done good things, perhaps as a result of that acceptance. It has not established that widespread acceptance of utilitarianism is desirable, nor that those persons would stop doing good things if they altered their views. I take it the claim is not that utilitarianism is necessary for doing good things, but that it tends to increase the likelihood of doing them. But that case was not made.
<blockquote>beneficentrism [sic] closely correlates with utilitarianism in practice,</blockquote>
Cite? You have provided some cherry-picked anecdotal examples, not data analysis.
A moral theory needs something more than a standard in order to be true or false. A moral theory includes a standard, which evaluates things (actions, circumstances, intentions, whatever) as conforming to it or violating it. “X violates standard Y” can be true or false; “standard Y” on its own can’t be true or false without implicitly adding premises of the form “everyone ought to adopt standard Y.” This is Hume’s point: no “ought” from “is” alone. “I accept standard Y” is much easier to derive than “everyone must accept standard Y by logical necessity or empirical inference.”
So it seems better to ask why standard Y is superior to standard Z than to speak of standard Y, or a moral theory, being true. But then, by what standard should we judge that standard Y is superior to standard Z? Do we need a meta-standard to judge standards, and then a standard to judge meta-standards? Or should we expect Y and Z each to contain ideas about how to judge standards? If they agree on which of them is better, that seems like a win, but in the typical case each will pick itself as superior.
If we grant that our understanding of morality is less than perfect, a moral theory should include principles regarding how that understanding might be improved. At the individual level, this is difficult: a person’s moral intuitions derive from generalizations of their experience, filtered through our evolved psychology. Intuition may be the elephant, and theory the rider. At the social level, the various actions and evaluations of individuals combine into an intersubjective whole, in which everyone influences everyone else’s attitudes and beliefs to a greater or lesser degree. This social process seems able to adjust and improve: ideally, it criticizes itself and leaves space for alternative hypotheses to receive attention and be rejected or incorporated. But it isn’t foolproof; it produced Stalin, Mao, and Hitler.
I’m not sure what we should conclude, except that the post considers these issues only obliquely and rests on implicit but unexamined assumptions. That might be unavoidable. Perhaps finding and examining those assumptions will help the discussion move forward; or perhaps, if we really all accept them, they can be taken for granted and left unstated.