Beneficentric Virtue Ethics
On the good intentions behind utilitarianism and effective altruism
My previous post suggested a constraint on warranted hostility: the target must be ill-willed and/or unreasonable. This is why I’m so baffled by common hostility towards both utilitarianism and effective altruism. I could see someone reasonably disagreeing with the former view, and at least abstaining from the latter project, but I don’t think either could reasonably be regarded as inherently ill-willed or unreasonable.
Perhaps the easiest way to see this is to just imagine a beneficentric virtue ethicist who takes scope-sensitive impartial benevolence to be the central (or even only) virtue. Their imagined virtuous agent seems neither ill-willed nor unreasonable. But the agent thus imagined would presumably be committed to the principles of effective altruism. On the stronger version, where benevolence is the sole virtue, the view described is just utilitarianism by another name.1
The Good-Willed Utilitarian
A lot of my research is essentially about why an ideally virtuous person would be a utilitarian or something close to it. (Equivalently: why benevolence plausibly trumps other virtues in importance.) Many philosophers make false assumptions about utilitarianism that unfairly malign the view and its proponents. For a series of important correctives, see, e.g., Bleeding-Heart Consequentialism, Level-up Impartiality, Theses on Mattering, How Intention Matters, and Naïve Instrumentalism vs Principled Proceduralism. (These posts should be required reading for anyone who wants to criticize utilitarianism.)
Conversely, one of my central objections to non-consequentialist views is precisely that they seem to entail severe disrespect or inadequate concern for agents arbitrarily disadvantaged under the status quo. My new paradox of deontology and pre-commitment arguments both offer different ways of developing this underlying worry. As a result, I actually find it quite mysterious that more virtue ethicists aren’t utilitarians. (Note that the demandingness objection to utilitarianism is effectively pleading to let us be less than ideally virtuous.)
At its heart, I see utilitarianism as the combination of (exclusively) beneficentric moral goals + instrumental rationality. Beneficentric goals are clearly good, and plausibly warrant higher priority than any competing goals. (“Do you really think that X is more important than saving and improving lives?” seems like a pretty compelling objection for any non-utilitarian value X.) And instrumental rationality, like “competence”, is an executive virtue: good to have in good people, bad to have in bad people. It doesn’t turn good into bad. So it’s very puzzling that so many seem to find utilitarianism “deeply appalling”. To vindicate such a claim, you really need to trace the objectionability back to one of the two core components of the view: exclusively beneficentric goals, or instrumental rationality. Neither seems particularly “appalling”.2
Effective Altruism and Good Will
Utilitarianism remains controversial. I get that. What’s even more baffling is that hostility extends to effective altruism: the most transparently well-motivated moral view one could possibly imagine. If anyone really thinks that the ideally virtuous agent would be opposed to either altruism or effectiveness, I’d love to hear their reasoning! (I think this is probably the most clear-cut no-brainer in all of philosophy.)
A year ago, philosopher Mary Townsend took a stab, writing that:
any morality that prioritizes the distant, whether the distant poor or the distant future, is a theoretical-fanaticism, one that cares more about the coherence of its own ultimate intellectual triumph—and not getting its hands dirty—than about the fate of human beings…
This is so transparently false that I cannot imagine what coherent thought she was trying to express. Is she really not aware that distant people are people too? Combine concern for all human beings with the empirical facts that we can often do more for the distant poor (and perhaps the distant future, in expectation), and Townsend’s rhetoric immediately collapses into nonsense. Any morality that cares about “the fate of human beings” without restriction could very easily end up “prioritizing the distant” for obviously good and virtuous reasons.
Prioritizing the homeless person before your eyes over distant children dying of malaria is not virtuous. As I argue in Overriding Virtue, it rather reflects a failure of empathy: you feel for those you see (good so far!), but not those you don’t (which is obviously less than morally ideal). To make up for the latter failing, a more virtuous agent will use their abstract benevolence to compensate, ensuring that the distant needy aren’t unjustly neglected as a result of their own emotional shortcomings. Put another way: to prioritize the lesser nearby need, simply because it’s more salient to you, is a form of moral self-indulgence—prioritizing your own feelings over the fate of real human beings. Nobody should consider such emotional self-indulgence to be ideally virtuous.
So the virtuous agent would obviously be an effective altruist. But a second mistake I want to address from Townsend’s essay is her dismissal of moral interest in quality of will:
Being fair and just to MacAskill and the still-grant-dispersing EA community doesn’t mean we have to search out a yet-uncut thread of quixotic moral exemplariness in them. The assumption that there must remain something praiseworthy in EA flips us into a bizzarro-Kantianism wherein we long for a holy and foolish person who fails in everything consequential yet whose goodwill, as Kant put it, shines like a jewel. In fact, the desire to admire EA despite its flaws indulges a quixotic longing to admire an ineffective altruist. Do not be deceived.
I think it’s very hard to deny that effective altruism is good in expectation, for the reasons set out in What “Effective Altruism” Means to Me. It also seems clear that the actual positive impact of the EA movement to date dwarfs even the harm done by SBF’s massive fraud (which is not to excuse or downplay the latter, but just to emphasize the immensity of the former).
But suppose that weren’t the case. Suppose that, despite his best efforts and all the evidence to the contrary, MacAskill turned out to be an “ineffective altruist” for some unpredictable reason—imagine SBF later breaks out of jail and somehow nukes New York City, and none of it would have happened if it weren’t for WM’s original encouragement to consider “earning to give”. You might then say (speaking very loosely) that WM “failed in everything consequential”.3 Even then, would it follow that he’s a bad person? Obviously not! To think otherwise is just an abject failure to distinguish the two dimensions of moral evaluation.
You don’t have to be a “bizarro-Kantian” to think that quality of will is importantly distinguishable from actual outcomes. Any minimally competent ethicist should appreciate this basic point. What sort of virtue ethicist would deny that there is “something praiseworthy” in having virtuous motivations, even when the agent’s best efforts turn out unfortunately (through no fault of their own)?
Townsend here sounds like the crudest of crude utilitarians. The alternative to her unmitigated hostility to effective altruism—even if you believed it to have turned out unfortunately—is not “bizarro-Kantianism”, but universal common sense. Of course an individual’s intentions are relevant to our moral assessment of them. And of course there is something deeply admirable about altruistic concern, especially when melded with concern to be effective in one’s altruism. These are literally the intrinsically best motivations any moral agent could possibly have.4 What sort of person would deny this?
Or, equivalently, a form of Rossian deontology on which beneficence and non-maleficence are equally weighted and together exhaust the prima facie duties.
This breakdown is also helpful for bringing out why some common “objections”, e.g. cluelessness and abusability, are really nothing of the sort. Nobody should think either one speaks to the truth of the view, since neither casts doubt on the appropriateness of either beneficentric goals or instrumental rationality. They’re more like expressions of wishful thinking: “our task would be easier (in some sense) if utilitarianism were false.” But so what?
This has to be speaking loosely, because lives aren’t fungible. So the lives saved are still consequential, even if outweighed!
Of course, that’s not to say that actually-existing effective altruists are themselves so virtuous. You could imagine more cynical motivations in many cases. Realistically, I think all human beings (including both EAs and their critics) are apt to have mixed motivations. But I generally prefer to give folks the benefit of the doubt; I don’t know that much is gained by defaulting to cynical interpretations when a more charitable alternative is available. It’s very obvious what charitable interpretation is available for making sense of effective altruists. It’s much harder to charitably interpret the anti-EAs, given their apparent indifference to the obvious harms they risk in discouraging effective philanthropy.
If you want to construct “beneficentric virtue ethics,” don’t just slap a virtue ethical label on utilitarianism. Take it further. Suppose an actual virtue ethicist: someone who cares deeply about personal development and who centres their morality around the idea that understanding the Good is a complicated process that is intertwined with that personal development, because good things are understood to be good by contributing to them, not just by sitting in an armchair theorising.
They’re beneficentric, which per your definition means that “promoting the general welfare” is high on their list of priorities. They are not necessarily utilitarian, however, which means that they may not believe that “the general welfare” is best understood mathematically. Subjectively apprehending both suffering and wellbeing might be more their style. (Note, by the way, that naive emotionalism about morality plays a similar role with respect to virtue ethics as naive instrumentalism does to utilitarianism.)
“The general welfare” is impossible to subjectively apprehend in full, but they are nevertheless devoted to it as a concept. They might try to approach it simultaneously from the general and the particular, using very rough and lightly held mathematics on the one hand while also trying to see and contribute to the good of others near them. This would allow them to develop their understanding locally, where deeper information is easier to come by, whilst also remaining engaged with the broader context that is the main goal.
They might split their charity money, with the lion’s share on international projects but with enough stake in local things to care about how they turn out and use that to inform their global decisions. Or, they might decide that the local is best kept interpersonal, since that will give them the deepest insight. Perhaps they’d volunteer locally and give money exclusively to global charities.
I think, if they were wise, they’d still sometimes give money to beggars directly. There are some things you can only learn that way. Moreover, treating people like utility machines is not the right way to deal with interpersonal contexts, and immediately jumping to a utility calculation about the small sums of money involved will interrupt your ability to be kind in a potentially very detrimental way.
They would probably also be invested in preserving good things, both in terms of their own character and in terms of societal structures. They might prefer the proverbial forgoing of coffee as a way to increase their global charitable giving, rather than taking from their contributions to existing charities that are connected to parts of their character that they want to maintain. They might also have concerns about issues that are more critical in developed countries, such as fostering community connections and diminishing loneliness. Preserving endangered community structures can rank highly if you think of these things as difficult to properly build from scratch. Societal development, like personal development, is not just a matter of “add resources, number goes up.”
There’s a decent probability that they would indeed be an Effective Altruist, or else be trying to engage with and learn from the movement. They would probably have significant differences in method and emphasis when compared to some utilitarian Effective Altruists, however.
Perhaps worth thinking about virtue ethics’s relation to justice and political philosophy.
Below I reproduce a paragraph from the intro to Hursthouse's On Virtue Ethics (2001) where she says this is underexplored. [She says a bit more and points to the literature in the paragraphs after what I copied.] Also relevant is the Effective Justice paper by Pummer and Crisp. https://philpapers.org/archive/CRIEJ-2.pdf
QUOTE:
> An obvious gap is the topic of justice, both as a personal virtue and as the central topic in political philosophy, and I should say straight out that this book makes no attempt at all to fill that gap. In common with nearly all other existing virtue ethics literature, I take it as obvious that justice is a personal virtue, and am happy to use it as an occasional illustration, but I usually find any of the other virtues more hospitable to the detailed elaboration of points. But, in a book of this length, I do not regard this as a fault. I am writing about normative ethics, not political philosophy, and even when regarded solely as a personal virtue (if it can be), justice is so contested and (I would say) corrupted a topic that it would need a book on its own.