The problem with EA is precisely (one of) the issues you took with deontic minimalism (didn’t murder anyone today, hooray!)

EA maximizes the impact of the least possible effort. We may be great EAs, and we can pat ourselves on the back for spending time to find the best place to donate our 10%, but that is still only 10%! 100% effectiveness of a grape is nothing compared to 10% of a watermelon.

Perhaps EAs could recruit all the watermelons, BUT this IS the problem with EA: in order to be a watermelon (have enough money that your impact is actually an impact) you need to do some rather seedy things.

Instead of trying to figure out how to squeeze juice equivalent to 10% of a watermelon out of a grape, EAs should direct their efforts (working and charitable) toward designing and implementing systems that achieve those ends without the sacrifice. And this is possible. It only requires a few watermelons to accept slightly more risk than they are used to, RATHER than all watermelons becoming EAs (fat chance).

If we switch perspective from “cash on hand and what to do with it” to “how much more risk can I reasonably take on”, then we will naturally develop solutions to the same challenges EAs have rightly determined we should address (I won’t enumerate).

And to understand fair risk distribution we have to use deontic structures. Utilitarians do not have metrics for risk, only results.

Glad to see you on SS!! Enjoy ;)

Your claim that only watermelons have a big impact is false--the average person who donates 10% to effective charities can save hundreds of lives over the course of their lifetime.

I have no idea what specific things you would propose about designing and implementing systems which would achieve those ends without the sacrifice.

I also don't know what risk you're talking about.

I know it’s nice to say the average person can make an impact, and in a very limited, local sense… sure. But not enough to have a “real”, structural impact.

I have quite a few things in fact!! It’s a blog post response, apologies for not giving you a dissertation.

I will note that reframing the “sacrifice” from dollars spent to risks assumed would be a good starting point for a discussion.

MacAskill has a nice line about how "it's the size of the drop that matters, not the size of the bucket." Saving several lives each year is, in my view, plenty "real" and a big deal. Even if it's true that certain structural changes could be an even bigger deal.

Can you say more about what risks you have in mind?

If I wanted to give an excuse to hedge fund managers to do as little as possible, I’d say the same thing. It’s strange that he says that knowing what he knows about finance, though much of his formulation is cynical in that vein: “welp, it’s pretty much hopeless, so let’s do what we can”. I’m happy to talk more about him, and note that I greatly appreciate his contribution; I don’t think the larger solutions I have in mind would be conceivable without reflecting on his work.

I want to clarify that I’m talking about risk distribution, not any one specific risk. Those with power do not manage as much risk as they are able to. Instead, they shuffle that risk onto people less capable of managing it. For example: the fund that manages your 401k doesn’t assume the risk of the 401k losing value; its fees stay the same either way.

If we fairly redistributed RISK where those capable of managing it are the ones who assume it, then we wouldn’t need effective altruism to help us send “band aid funds” to organizations cleaning up the mess.

One more (Platonic) metaphor:

We’re on a boat. The captain is driving it and crashes, but written on my ticket is that I assume the risk of any crash and any disagreements are settled out of court. That’s our system. And in that system we need EA. However, for EA to really work, we need all those whales to contribute, and they never will. For risk distribution to work, we only need a few whales to step up. We’re more likely to convince a few whales to step up than to get all whales to commit to EA. So we should do that.

That's odd. If I wanted to give an excuse to hedge fund managers to do as little as possible, I’d say they don't have to do a thing. EA is all about encouraging people to do more (and more effectively) than they otherwise tend to do. The suggestion that it's all about complacency for the rich is simply bizarre.

Now, this isn't the place to argue about the most effective means. If you have a specific proposal that you think is better than what most EAs are doing, that's awesome -- go share it on the EA forums, and I expect you'll find plenty of receptivity if the idea is good. Recent posts consider suggestions ranging from buying coal mines (to shut them down) to thinking about space governance.

https://forum.effectivealtruism.org/

There’s EA intention and then what actually happens.

Come talk to some rich people with me (you’ll be shocked at how they abuse EA; it’s like a get-out-of-jail-free card).

Convincing another philosopher won’t help. Asking someone to chip in a few grand won’t do much. We need the whales, and we need our best explicators working them.

Again, if you have a proposal for how the EA community could do better, the place to put it is the EA Forum.

Have you ever tried to convince an EA that what they are doing is a waste of time and there are far better things to do?

There are cults and then there’s EA…

But fair enough.

What specific things are they doing that are unwise and what should they be doing instead?

People with less than $5mil pooling cash and delivering it to charitable organizations is a waste of time.

Those same people should organize a coordinated pressure campaign to convince (maybe even threaten!) whales to absorb more risk.

EA is a cynical attempt to do something rather than nothing, which is a good intention, but it is ultimately counterproductive: the concept gives whales an excuse to do as little as possible (giving away money) and not address the core issue, risk distribution.

That’s my view. I’m happy to discuss or elaborate. I’m happy to reduce it to clear premises and explore counterexamples or address responses from EAs. I’m also happy to drop it or take it elsewhere as Richard has suggested. Though if any part of it is appropriate here, I think it would be taking seriously the cynical bent. That’s a metrics issue: just how much good does 10,000 people each tossing in $1,000 (or even $10,000) really do compared to other activities, such as convincing whales to take on more risk?
