17 Comments
May 17 · Liked by Richard Y Chappell

If you want to construct “beneficentric virtue ethics,” don’t just slap a virtue-ethical label on utilitarianism. Take it further. Imagine an actual virtue ethicist: someone who cares deeply about personal development, and who centres their morality on the idea that understanding the Good is a complicated process intertwined with that personal development, because good things are understood to be good by contributing to them, not just by sitting in an armchair theorising.

They’re beneficentric, which per your definition means that “promoting the general welfare” is high on their list of priorities. They are not necessarily utilitarian, however, which means they may not believe that “the general welfare” is best understood mathematically. Subjectively apprehending both suffering and wellbeing might be more their style. (Note, by the way, that naive emotionalism about morality plays a similar role with respect to virtue ethics as naive instrumentalism does with respect to utilitarianism.)

“The general welfare” is impossible to subjectively apprehend in full, but they are nevertheless devoted to it as a concept. They might try to approach it simultaneously from the general and the particular, using very rough and lightly held mathematics on the one hand while also trying to see and contribute to the good of others near them. This would allow them to develop their understanding locally, where deeper information is easier to come by, whilst also remaining engaged with the broader context that is the main goal.

They might split their charity money, with the lion’s share going to international projects but with enough of a stake in local efforts to care about how they turn out, using that to inform their global decisions. Or they might decide that the local is best kept interpersonal, since that will give them the deepest insight. Perhaps they’d volunteer locally and give money exclusively to global charities.

I think, if they were wise, they’d still sometimes give money to beggars directly. There are some things you can only learn that way. Moreover, treating people like utility machines is not the right way to deal with interpersonal contexts, and immediately jumping to a utility calculation about the small sums of money involved will interrupt your ability to be kind in a potentially very detrimental way.

They would probably also be invested in preserving good things, both in terms of their own character and in terms of societal structures. They might prefer the proverbial forgoing of coffee as a way to increase their global charitable giving, rather than taking from their contributions to existing charities that are connected to parts of their character that they want to maintain. They might also have concerns about issues that are more critical in developed countries, such as fostering community connections and diminishing loneliness. Preserving endangered community structures can rank highly if you think of these things as difficult to properly build from scratch. Societal development, like personal development, is not just a matter of “add resources, number goes up.”

There’s a decent probability that they would indeed be an Effective Altruist, or else be trying to engage with and learn from the movement. They would probably have significant differences in method and emphasis when compared to some utilitarian Effective Altruists, however.

Author

Thanks, that all sounds very reasonable to me!

May 16 · edited May 16 · Liked by Richard Y Chappell

Perhaps worth thinking about virtue ethics’ relation to justice and political philosophy.

Below I reproduce a paragraph from the intro to Hursthouse’s On Virtue Ethics (2001), where she says this is underexplored. [She says a bit more and points to the literature in the paragraphs after the one I copied.] Also relevant is the “Effective Justice” paper by Crisp and Pummer: https://philpapers.org/archive/CRIEJ-2.pdf

> An obvious gap is the topic of justice, both as a personal virtue and as the central topic in political philosophy, and I should say straight out that this book makes no attempt at all to fill that gap. In common with nearly all other existing virtue ethics literature, I take it as obvious that justice is a personal virtue, and am happy to use it as an occasional illustration, but I usually find any of the other virtues more hospitable to the detailed elaboration of points. But, in a book of this length, I do not regard this as a fault. I am writing about normative ethics, not political philosophy, and even when regarded solely as a personal virtue (if it can be), justice is so contested and (I would say) corrupted a topic that it would need a book on its own.

May 16 · edited May 16 · Liked by Richard Y Chappell

I'm sympathetic to utilitarianism; on some days I even wonder whether it follows analytically from a plausible definition of "Ethics".

Yet I can't quite wholly reduce to it; there's an irritant in the System I can't expunge. That I made a promise is some reason to keep it, it seems to me, distinct from the good consequences that follow from the practice of promise-keeping. That my mother sacrificed for me is an intrinsic reason to help her, to favor her somewhat.

It's just an irritant! I'm willing to concede that the deontic reasons are often, maybe usually - maybe always! - rightly trumped by consequentialist reasons! Yet isn't the fact that the deontic reasons have *some* weight incompatible with Utilitarianism?

Maybe that's a simplistic, overly reductive U I'm working with. I haven't read the corrective posts yet, so forgive me if this is redundant.

Author

I think partiality is a very reasonable basis for questioning utilitarianism, and to perhaps judge some slightly different (agent-relative) form of welfarist consequentialism to be more likely true. In this post, I'm mostly just wanting to argue that utilitarianism shouldn't elicit as much *hostility* as it (sometimes) does.

re: promises, the rejection of naive instrumentalism might help a bit with making sense of those commitments. But I do think the ultimate justification has to come down to the benefits of the practice (and find it a bit mysterious to think otherwise). YMMV!

May 16 · Liked by Richard Y Chappell

Welfarist consequentialism with a bit of agent-relative warping: maybe that's my Bag! Thanks.

May 17 · Liked by Richard Y Chappell

me against my brother; me and my brother against my cousin...

One reason some people are hostile to utilitarianism is not that it is so different or wrong, but that it is close to them and a little bit different (i.e., the brother).

May 17 · Liked by Richard Y Chappell

The unfortunate political polarization of the United States, which has carried over into much of the rest of the Western world, has resulted in increased tribalism, which runs contrary to caring about the well-being of the entire population.

Most Americans have chosen a political tribe to belong to, and they are not merely loyal to that tribe but actively hate and loathe members of other tribes.

I think that when such people encounter utilitarianism, it kind of pricks their conscience in a deeply uncomfortable way. At some level, they realize this political tribalism has gone too far; utilitarianism reminds them of that, but they can't see a way back. So they feel a flash of momentary guilt, and then irritation, and take it out on the source of that guilt/irritation.

Just speculation.

Personally, I mostly like utilitarianism. I think it's normal/healthy for people to care more about close family members than other humans in their day-to-day lives... but that in the workplace, and especially in our governments and institutions, utilitarianism should be a guiding principle.


Ineffective altruism is self-indulgent and harmless, but effective altruism courts disaster. It’s how you get quagmires (e.g. Vietnam, Ukraine). It’s how you end up having to destroy the village in order to save it. There’s always a way to make things better, the cumulative effects of which would make things worse (see: the repugnant conclusion).

Author

Any form of significant impact involves risk. But doing nothing is often even riskier. (I do not think it is "harmless" to do nothing while children drown. Allowing preventable death and suffering is extremely harmful!)


Suppose you pass by a pond. Suppose you have bread in your backpack. Should you feed the ducks? Feeding any particular duck on any particular occasion will make things better. But if you keep at it, you’ll eventually have to destroy the pond in order to save it. Every particular intervention is “effective,” but the aggregate leaves things worse than where you started (https://zworld.substack.com/p/could-i-make-you-eat-shit).

Author

Your description is incoherent. Let's grant that the non-obvious harms outweigh the obvious benefits of (repeatedly) doing X in aggregate. Then, on average, the harms of X outweigh the benefits.

Let's consider two more precise possibilities. Either the marginal effects change depending on how much X has already been done, or they don't. If they don't, then the marginal effect = the average effect, at every point, and doing X on any occasion is a bad idea: it is *never* an effective means for promoting the overall good. If the marginal effect varies, then some increments are above average and some are below average. This at least raises the *possibility* that some of the above-average increments are positively good and worth doing. But this could only be so *in expectation* if one can identify the threshold where marginal impact becomes negative, and take care to avoid crossing it. (If one believes the marginal impact varies but has no idea *how*, then indifference principles would recommend again assigning each increment average value in expectation, which again was stipulated to be negative.)

So, whichever coherent precisification of the case we imagine, correctly applying expectational reasoning will not recommend "keep[ing] at it" when doing so is counterproductive. See also: https://www.dropbox.com/s/lh0fn7qj4kuaxid/Chappell-CollectiveHarm.pdf?dl=0
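To make the marginal/average point concrete, here is a toy numerical sketch in Python; the payoff function (a fixed per-act benefit plus a convex cumulative harm) is an invented assumption, purely for illustration, not anything claimed in the duck case itself:

```python
# Toy model of the duck-feeding case: each feeding gives a fixed benefit,
# but cumulative harm to the pond grows faster than linearly.
# The payoff function and numbers are invented, purely for illustration.

def net_value(n: int) -> float:
    """Total value of the pond system after n feedings."""
    benefit = 1.0 * n        # each feeding helps one duck by one unit
    harm = 0.05 * n ** 2     # crowding/pollution compounds with repetition
    return benefit - harm

# Marginal effect of the nth feeding (the value added by that single act):
marginals = [net_value(n) - net_value(n - 1) for n in range(1, 31)]

# Expectational reasoning: keep feeding only while the marginal effect is
# positive, and stop at the threshold where it turns non-positive.
stop_at = next(n for n, m in enumerate(marginals, start=1) if m <= 0)

print(f"Marginal value first turns non-positive at feeding #{stop_at}")
print(f"Total value if you stop just before that: {net_value(stop_at - 1):+.2f}")
print(f"Total value if you 'keep at it' to 30:    {net_value(30):+.2f}")
```

In this sketch the marginal reasoner stops after feeding #10 with a net gain of +5, while "keeping at it" through feeding #30 leaves the pond at −15: applying per-act effectiveness without tracking the threshold is exactly what the expectational reasoning above rules out.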


The incoherence is in part-whole relations and your reductionist approach and not in my description. The marginal benefit of moving from A to B could be the same as the marginal benefit of moving from B to C, C to D, and so on. Thus B is better than A, C is better than B, etc. But A could nonetheless be better than Z. That’s the repugnant conclusion. And that’s why effective altruism is so dangerous. If you’re at A, it incentivizes you to find a B. And then a C. And so on until you’re in Z, like with the ducks. The incoherence you detect is a generic feature of analysis: the parts (B is better than A, C is better than B, etc.) don’t add up to the whole (A is better than Z).
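For concreteness, the stepwise sequence being invoked here can be given a minimal worked form under simple total-welfare accounting; the populations and welfare levels below are invented for illustration:

```python
# Minimal sketch of the stepwise (mere-addition) sequence behind the
# repugnant conclusion, under simple total-welfare accounting.
# Populations and average welfare levels are invented for illustration.

worlds = {
    "A": (10, 100),    # small population, very high average welfare
    "B": (25, 45),     # more people, lower average, higher total
    "C": (60, 20),
    "Z": (10_000, 1),  # vast population, lives barely worth living
}

for name, (population, average_welfare) in worlds.items():
    print(f"World {name}: total welfare = {population * average_welfare}")
```

On this accounting every step raises the total (1,000 → 1,125 → 1,200 → 10,000), so Z comes out ahead of A; the disagreement below is over whether to accept that verdict or to deny that some step in the chain is a genuine improvement.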

Author · May 18 · edited May 18

I don't think that's a coherent analysis of what's going on in the repugnant conclusion. It could be that B is actually worse than A, just for non-obvious reasons (perhaps to do with perfectionist values or the priority of quality over quantity); or -- as Huemer and others argue -- it could be that Z is not so "repugnant" after all, and our initial intuition to the contrary is not to be trusted. It's a notoriously tricky case. But one thing I do find obvious is that we should reject any analysis that has your "generic feature" of incoherence between evaluations of parts and wholes.

But this is all very abstract philosophy, and not clearly relevant to real-world EA recommendations. Are you suggesting that we should refrain from helping the global poor, and also factory farmed animals, and also refrain from pandemic prevention and other global catastrophic risk mitigation efforts, all because of your unsubstantiated fear that these efforts (any or all of them?) might somehow "add up" to doing more harm than good?

If your fears were really warranted, the obvious "effectively altruistic" thing to do would be to look at the big picture and work out how to achieve good results *in aggregate* without undue risk of countervailing harms. Are you suggesting that this is impossible?


Well then I’m afraid you’ll have to reject all analyses but the trivial ones. And as for the analysis in question, it’s both very abstract and very relevant. It’s why horrible outcomes so often come from good intentions, why humanitarian interventions so often end in disaster, why colleges have climbing walls and lazy rivers and obscenely high tuition (you keep adding amenities and ever so slightly increasing tuition, each time making things better relative to how they were before). As to the cases you mention, I haven’t a clue. Maybe you’re damned if you do and damned if you don’t. My advice would be to handle problems as they come but don’t go looking for problems to fix. If you happen to see a drowning child, rescue him. But don’t go looking for children to save.


I think it's funny that a lot of EAs just seem to think that virtue ethics is the idea that you should be virtuous and that's an important bit of morality.
