
I agree in principle, yet because I'm poor, taking the beneficentrist route would require me to fundamentally restructure my life to a degree that my (admittedly selfish, in a "kin preference is hard-wired" way) concessions to partiality, and my own psychological integrity, don't allow for. There's a massive disjoint between the call to "give what I can" and the pledge of 10% combined with the "47,000 USD" number. Adopting a beneficentrist/utilitarian position would require something akin to what would have to follow if I converted to Christianity. Alternatively, I could donate all the money I "give away" in any form (I'm actually including Substack subscriptions here) to GiveWell or a direct cash-transfer charity, but the amount is so paltry compared to "10% of 47k+ USD" that my ego defences make me figuratively run away sobbing. The problem with the attractiveness of EA to "normal" people (let's say the global top 10%) is that EA is targeted at the global 1%, and maybe that's enough.


Do you have an argument for beneficentrism, or do you just take it to be self-evident or intuitive? FWIW, I would be inclined to accept some version of it, but that's based on various background moral commitments that motivate beneficence as a central moral principle. (E.g., I'm sympathetic to overlapping consensus as a moral methodology, and it aligns with my Christian faith, pace TonyZa's gloss.)


Mostly self-evident (as I say, it "strikes me as impossible to deny while retaining basic moral decency"). But it can additionally be supported by all the arguments given here:

https://www.utilitarianism.net/arguments-for-utilitarianism#arguments-for-utilitarianism

i.e., (i) reflecting on what fundamentally matters, (ii) the Golden Rule, the Veil of Ignorance, and the Ideal Observer, and (iii) learning from historical moral atrocities to motivate moral circle expansion.

[On the linked page, (i) and (ii) are presented as supporting the stronger conclusion of full-blown utilitarianism; but any argument for utilitarianism is ipso facto *at least as good* an argument for the logically weaker claims of beneficentrism.]


It always scares me how shallow the moral grounds are that we base our decisions on.


"Beneficentrism strikes me as impossible to deny while retaining basic moral decency. Does anyone disagree? Devil's advocates are welcome to comment."

That depends on how you define basic moral decency, or on whether you even believe there is such a thing as a universal moral tenet.

But the truth is that most moral/religious/ideological systems in history didn't place much value on utilitarianism/beneficentrism. The ancient Greeks, Romans, Vikings, and Mongols were not big on utilitarianism. Even Christianity, which puts a lot of weight on Works, is first concerned with Faith. Marxists might look utilitarian, but they see class struggle, the Revolution, and the accompanying bloodshed as taking priority, with utopia arriving only in the last phase. Nietzsche saw utilitarianism as the defining characteristic of a slave morality.


Two devil's advocate points:

1. general wellbeing is hard, if not impossible, to quantify

2. historically, appeals to doing what's best for the general good have been used to justify countless violations of individual rights.


You don't need me to remind you that there's a fascinating disconnect between what we think (and say) we should do and what we actually do.

I believe understanding the actual reasons behind that paradox is the necessary first step. "The Righteous Mind" by Haidt is one of my favorite books.

A personal theory of why we will never see eye to eye on everything is the completely logical and obvious one: we all have different perspectives, quite literally speaking.

I believe that if people were forced to secretly face a trolley problem from the perspective of being the one guy or gal tied to the track, almost everyone would choose to let the train run over X strangers instead, where X is a depravedly high number.

Many people probably truly feel their lives are more valuable than the entire universe, because what good is the universe to them from the perspective of them being dead, right? They'll never admit it, obviously.

People's true perspectives are demonstrated by their actions every single day, e.g. when they go on vacation instead of donating to malaria nets, or silently refrain from joining the pledges you mentioned.


The problem with EA is precisely (one of) the issues you took with deontic minimalism (didn't murder anyone today, hooray!).

EA maximizes the impact of the least possible effort. We may be great EAs, and we can pat ourselves on the back for spending time to find the best place to donate our 10%, but that is still only 10%! 100% effectiveness of a grape is nothing compared to 10% of a watermelon.

Perhaps EAs could recruit all the watermelons, BUT this IS the problem with EA: in order to be a watermelon (have enough money that your impact is actually an impact) you need to do some rather seedy things.

Instead of trying to figure out how to squeeze the juice equivalent of 10% of a watermelon out of a grape, EAs should spend their efforts (working and charitable) on designing and implementing systems that achieve those ends without the sacrifice. And this is possible. It only requires a few watermelons to accept slightly more risk than they are used to, RATHER than all watermelons becoming EAs (fat chance).

If we switch perspective from “cash on hand and what to do with it” to “how much more risk can I reasonably take on”, then we will naturally develop solutions to the same challenges EAs have rightly determined we should address (I won’t enumerate them).

And to understand fair risk distribution we have to use deontic structures. Utilitarians do not have metrics for risk, only results.

Glad to see you on SS!! Enjoy ;)


Your claim that only watermelons have a big impact is false: the average person can, if they donate 10% to effective charities, save hundreds of lives over the course of their life.

I have no idea what specific things you would propose for designing and implementing systems that would achieve those ends without the sacrifice.

I also don't know what risk you're talking about.


I know it’s nice to say the average person can make an impact, and in a very limited, local sense… sure. But not enough to have a “real”, structural impact.

I have quite a few things, in fact! It’s a blog-post response; apologies for not giving you a dissertation.

I will note that reframing the “sacrifice” from dollars spent to risks assumed would be a good starting point for a discussion.


MacAskill has a nice line about how "it's the size of the drop that matters, not the size of the bucket." Saving several lives each year is, in my view, plenty "real" and a big deal. Even if it's true that certain structural changes could be an even bigger deal.

Can you say more about what risks you have in mind?


If I wanted to give an excuse to hedge fund managers to do as little as possible, I’d say the same thing. It’s strange that he says that, knowing what he knows about finance, though much of his formulation is cynical in that vein: “welp, it’s pretty much hopeless, so let’s do what we can”. I’m happy to talk more about him, and note that I greatly appreciate his contribution; I don’t think the larger solutions I have in mind would be conceivable without reflecting on his work.

I want to clarify that I’m talking about risk distribution, not any one specific risk. Those with power do not manage as much risk as they are able to. Instead, they shuffle that risk onto people less capable of managing it. For example: the fund that manages the 401k doesn’t assume the risk of the 401k losing value; instead, its fees remain the same.

If we fairly redistributed RISK so that those capable of managing it are the ones who assume it, then we wouldn’t need effective altruism to help us send “band-aid funds” to organizations cleaning up the mess.

One more (Platonic) metaphor:

We’re on a boat. The captain is driving it and crashes, but written on my ticket is that I assume the risk of any crash and that any disagreements are settled out of court. That’s our system. And in that system we need EA. However, for EA to really work, we need all those whales to contribute, and they never will. For risk distribution to work, we only need a few whales to step up. We’re more likely to persuade a few whales to step up than to get all whales to commit to EA. So we should do that.


That's odd. If I wanted to give an excuse to hedge fund managers to do as little as possible, I’d say they don't have to do a thing. EA is all about encouraging people to do more (and more effectively) than they otherwise tend to do. The suggestion that it's all about complacency for the rich is simply bizarre.

Now, this isn't the place to argue about the most effective means. If you have a specific proposal that you think is better than what most EAs are doing, that's awesome -- go share it on the EA forums, and I expect you'll find plenty of receptivity if the idea is good. Recent posts consider suggestions ranging from buying coal mines (to shut them down) to thinking about space governance.

https://forum.effectivealtruism.org/


There’s EA’s intention, and then there’s what actually happens.

Come talk to some rich people with me (you’ll be shocked at how they abuse EA; it’s like a get-out-of-jail-free card).

Convincing another philosopher won’t help. Asking someone to chip in a few grand won’t do much. We need the whales, and we need our best explicators working them.


I feel like this doesn't need an essay or a new term.

Valuing the general wellbeing of other creatures should be the most basic assumption in basically any discussion, and if someone disagrees they simply don't deserve to be in a civilization or have their ideas taken seriously.

Sam Harris put this beautifully once (I can find the video if you want me to; he was talking with Singer and a few other people): "if someone thinks bleeding, being in pain all day and almost dying is healthy, because their definition of health is different, you simply don't invite them to the health conference" (a paraphrase).


I'd love to know if you have any reflections on what Nadia wrote, which is less a deep-philosophy take and more a practical-philosophy take on EA and EA-adjacent ideas being "ideas machines". Also, have you read the recent Larry Temkin book looking at EA thinking (though mostly with respect to global-poverty EA)? https://forum.effectivealtruism.org/posts/CbKXqpBzd6s4TTuD7/thoughts-on-nadia-asparouhova-nee-eghbal-essay-on-ea-ideas


One issue might be that a bit of a status competition/hierarchy can develop over who's being the most altruistic, and that can be kind of off-putting sometimes.


But one can donate to effective organizations without being officially part of the EA community. I also haven't found what you described to be the case, but perhaps your experience is different.
