
Here's one way to make a consequentialist critique of EA as it currently exists.

Consider the US-China status quo. The US is not attacking China in pursuit of regime change, and China is not conquering Taiwan. The risk of the former seems minute; the risk of the latter does not. What if a 5% increase in the chance that this status quo holds were more of a net positive than all non-x-risk-related EA efforts combined?
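To make that comparison concrete, here is a toy expected-value sketch; every number in it is an invented placeholder for illustration, not an estimate from this comment or any source:

```python
# Toy expected-value comparison. All numbers are illustrative assumptions.
p_war = 0.15             # assumed baseline chance the status quo breaks down
p_war_improved = 0.10    # after a hypothetical 5-point increase in status-quo odds
deaths_if_war = 2e9      # "up to several billion deaths" (see the list below)
ea_lives_saved = 2e5     # assumed lifetime total for non-x-risk EA efforts

expected_deaths_averted = (p_war - p_war_improved) * deaths_if_war
print(f"Expected deaths averted by deterrence: {expected_deaths_averted:,.0f}")
print(f"Assumed lives saved by non-x-risk EA:  {ea_lives_saved:,.0f}")
# Under these debatable inputs, the deterrence effect wins by orders of magnitude.
```

The point of the sketch is only that plausible-looking inputs can make the comparison lopsided; the real argument is over what the inputs should be.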

Here are some of the possible negative outcomes if China tries to conquer Taiwan:

- conventional and nuclear war between China and the US, and their allies, with the possibility of up to several billion deaths;

- hundreds of satellite shootdowns causing Kessler syndrome, leading to the destruction of most other satellites and leaving us with little warning of impending natural disasters such as typhoons and droughts;

- sidelining of AI safety concerns in the rush to create AGI for military purposes;

- an end to US-China biosecurity cooperation, and possible biowarfare by whichever side feels it is losing (which might be both sides at once; nuclear war would be a very confusing experience);

- wars elsewhere following the withdrawal of overburdened US forces, e.g. a Russian invasion of Eastern and Central Europe backed by the threat of nuclear attack, or an Israeli/Saudi/Emirati versus Iranian/Hezbollah war that destroys a substantial share of global oil production;

- economic catastrophe: a deep global depression, widespread blackouts, and years of major famines and fuel shortages, leading to Sri Lanka-style riots in dozens of countries at once, with little chance of multinational bailouts;

- a substantial decline in efforts to treat, reduce, or vaccinate against HIV, malaria, antibiotic-resistant infections (e.g. XDR/MDR tuberculosis), COVID-19, etc.

If your simplified approach to international relations is more realist than anything else, you probably believe that a major factor in whether war breaks out over Taiwan is the credibility of US deterrence.

How much of EA works on preserving, or else improving, the status quo between the US and China, whether through enhancing the credibility of US deterrence (the probable realist approach) or anything else? Very little. Is that due solely to calculation of risk? Is it also because the issue doesn't seem tractable? If so, that should at least be regularly acknowledged. Could the average EA's attitude to politics be playing a role?

To the extent that the US-China war risk is discussed in EA, I do not think it is done with the subtle political awareness that you find in non-EA national security circles. Compare e.g. the discussions here (https://forum.effectivealtruism.org/topics/great-power-conflict) with the writing of someone like Tanner Greer (https://scholars-stage.org/) and those he links to.

In case you are wondering, I have no strong opinion on which US political party would be better at avoiding WW3. There are arguments for both, and I continue to weigh them, probably incompetently. I do think it would be better if there were plenty of EAs in both parties.

I have no meaningful thoughts on how to decide whether unaligned AI or WW3 is a bigger threat. (Despite 30-40 hours of reading about AI in the past few months, I still understand very little.)


I've read one alternative approach that is well written and made in good faith: Bruce Wydick's book "Shrewd Samaritan" [0].

It's a Christian perspective on doing good, and it arrives at many conclusions similar to effective altruism's. The main difference is an emphasis on "flourishing" in a more holistic sense than what a narrowly focused effective charity like AMF typically aims at. Wydick relates this to the Hebrew concept of shalom: holistic peace, wellbeing, and blessing.

In practical terms, this means that Wydick recommends, more strongly than (say) GiveWell does, interventions that address more than one aspect of wellbeing. For example, child sponsorships or graduation approaches, where poor people get an asset (cash, a cow, or similar), plus the ability to save (e.g., a bank account), plus training.

I believe these approaches fare pretty well when evaluated, and indeed there are some RCTs evaluating them [1]. These bundled programs are harder to assess, however, than programs that do one thing, like distributing bednets. That said, the rationale that "cash + saving + training > cash only" is intuitive to me, so this might be an area where GiveWell/EA is a bit biased toward what is most easily measurable.
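As a toy illustration of why the bundle is harder to evaluate, consider this sketch; all effect sizes are invented placeholders, and real ones would have to come from RCTs like those in [1]:

```python
# Toy model of the "cash + saving + training > cash only" intuition.
# Effect sizes are invented placeholders, not estimates from any study.
effects = {"cash": 0.10, "savings": 0.04, "training": 0.06}  # assumed standalone gains
interaction_bonus = 0.05  # assumed complementarity when all three are combined

cash_only = effects["cash"]
bundle = sum(effects.values()) + interaction_bonus
print(f"cash only: +{cash_only:.0%}; full bundle: +{bundle:.0%}")
# The interaction term is exactly what single-component RCTs can't identify,
# which is one reason graduation-style bundles are harder to measure.
```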

[0]: https://www.goodreads.com/book/show/42772060-shrewd-samaritan

[1]: https://blog.brac.net/ultra-poor-graduation-the-strongest-case-so-far-for-why-financial-services-must-be-a-part-of-the-solution-to-extreme-poverty/


Cool pointer. I googled around a bit and found this from Wydick: http://www.acrosstwoworlds.net/why-i-cannot-fully-embrace-effective-altruism/


I very much agree with the sentiments of this article. I've been super frustrated by many critics of EA; their criticisms generally seem to involve egregiously bad reasoning. The template for many of them seems to be:

1. Criticize a thing done by EA.

2. Call EA names.

3. Give an incredibly vague prescription that sounds nice but has no details, along the lines of: "So EA is right, we should reshape our giving. But not in a way that bolsters capital or focuses on ridiculous Terminator scenarios or eliminates the heart of giving in favor of a bureaucratic, technocratic, top-down, elitist approach to giving. Instead, we should invest in a community of care, with bottom-up programs, that opts for radical reforms that help make the world better."

It feels like a campaign ad. People seem to be unaware that they can be part of EA while not giving to the specific parts of EA they find objectionable. If one is not a longtermist, one can still give to combat malaria and factory farming.

When a movement has saved hundreds of thousands of lives and improved conditions for vast numbers of animals on factory farms, criticism of some random, hyper-specific action is not sufficient as criticism of the movement as a whole, when that action is not a necessary condition of being part of the movement.


This line, "I think it’s now widely acknowledged that early EA was too narrowly focused on doing good with high certainty—as evidenced through RCTs or the like" made me think of the parallel issues with the "evidence-based medicine" movement. There's surely a lot of good that this movement has done, but there are also widely-accepted criticisms of it (pointing out that it often ignores evidence that doesn't come through RCTs, and that it focuses on statistical significance over effect size in things like the classification of carcinogens). And yet, I'm not aware of any particular competing movement.


Agreed, this is a major frustration when reading criticisms.

Would you consider the progress studies movement to be an alternative? They seem genuine in their belief that enabling conditions for scientific progress will alleviate a lot of suffering, and are going about it in a much different way from EA branded organizations.


I think they're more overlapping than "alternatives"? I'd guess a lot of the same people are enthusiastic about (or critical of) both. Similar to more explicitly EA subcommunities like effective animal advocacy.


Pragmatic goals drift from altruistic goals quickly if altruistic values are employed in means selection. The result is that, to get anything done, compromise and the acceptance of hypocrisy are commonplace in any group identifying with pragmatic values.

How does one achieve a pragmatic goal? By adjusting one's values until one can use the means. Hypocrisy now seems to be a requirement of pragmatism, unless you stop claiming values that you then ignore in order to achieve your goals.

The EA community has the problem of identifying itself with altruism. It will always be vulnerable to criticism so long as it pursues pragmatic goals.

For example, if jobs that pay well are anti-altruistic but provide money for altruistic causes through private donations, should an EA person take one of those jobs? If they do, should they walk around feeling like a hypocrite? These are, in our society, personal choices, precisely because our society allows jobs that are anti-altruistic. In that case, what role does EA play in our society? Is it enabling the system of harm that high-paying jobs sustain? Etc., etc. Sure, there's some moral compromise somewhere along the line, but giving large chunks of your earnings to effective charities has obvious altruistic intention.

Still, if you want to annoy your critics, you could always stop using p**n, stop all drinking and drug use, become a vegan, use public transit, wear a sweater indoors on cold days, and opt not to have children. The criticisms will change, from self-serving heckling about your systemic corruption to self-serving heckling about your poor quality of life. The critics will adapt, but I don't see that any strong measurement or calculation of the altruistic impact of your new choices would have been made.

EA should offer methods to quantify personal lifestyle choices such as having children, using alcohol, or eating meat. I think that would substantially change the narrative among its critics.
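A minimal sketch of what such a method could even look like; the scoring scheme is made up, and every figure is a placeholder rather than a real estimate:

```python
# Hypothetical lifestyle-impact calculator. All values are placeholders in
# arbitrary "welfare units" per year; a real version would need sourced,
# defensible estimates (and would be controversial for that very reason).
LIFESTYLE_IMPACTS = {
    "eat_meat": -30.0,
    "drink_alcohol": -2.0,
    "use_public_transit": 5.0,
    "have_child": None,  # sign is genuinely contested, so left unscored
}

def score(choices):
    """Sum the scored impacts of a set of choices, skipping unscored ones."""
    return sum(LIFESTYLE_IMPACTS[c] for c in choices
               if LIFESTYLE_IMPACTS.get(c) is not None)

print(score(["eat_meat", "use_public_transit"]))  # -25.0 under these made-up numbers
```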


I agree most criticisms of EA are bad.

But I think the majority of humanity can plausibly claim to be doing better than effective altruists, since most humans are Christian or Muslim. Effective altruists and consequentialists acknowledge the problem of infinite utility but don't really have a way to deal with it. I think anyone who believes they are following the most plausible path to infinite utility can legitimately claim to be doing a much better thing than EAs are, even if they think their religion is almost certainly wrong.

Most EAs seem to just ignore infinity or Pascal's Wager, or declare them out of bounds, but I don't think this is very principled.
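The formal problem is easy to see in a few lines; here is a minimal sketch, using IEEE floats to stand in for infinite utilities:

```python
# Why naive expected value breaks with infinite payoffs: any nonzero
# probability of an infinite outcome swamps every finite good, and two
# infinite options can't be ranked against each other at all.
inf = float("inf")

ev_religion_a = 1e-9 * inf       # tiny credence, infinite reward -> inf
ev_religion_b = 1e-12 * inf      # even tinier credence -> still inf
ev_bednets = 0.999 * 1_000_000   # large but finite

print(ev_religion_a > ev_bednets)     # True: infinity dominates any finite payoff
print(ev_religion_a > ev_religion_b)  # False: inf == inf, so EV can't break the tie
```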

" I’m dubious—it seems awfully fishy to just insist that one’s favoured form of not carefully aiming at the general good should somehow be expected to actually have the effect of best promoting the general good."

I laughed! I think this is a good point. Maybe in their defense, EAs who think or write a lot about politics seem to have ended up just giving money to some conventional causes: criminal justice reform, animal welfare, and so on. I could see an activist saying EAs spend a ton of time getting to the obvious conclusion.

"I really think the great enemy here is not competing values or approaches so much as failing to act (sufficiently) on values at all."

Agreed.


> Most EAs seem to just ignore infinity or Pascal's Wager

These are very different things. Infinite ethics is difficult, and I suspect most people are right to just ignore it (since attending to it would probably just lead them astray). That said, you'll find more work on infinite ethics, expected utility fanaticism, etc., amongst EA philosophers than anyone else.

Pascal's wager, on the other hand, is easy. It gives no reason whatsoever to endorse traditional religions, because there is no good reason to think that this is the best route to infinite utility. (A god who rewards epistemic rationality and good works strikes me as many orders of magnitude more likely than a jealous god who rewards belief in the absence of evidence.) Probably getting a bit far afield to pursue this line of debate further here, however!


I see moral complacency on the EA side.

“I’m going to spend my time doing bad things and then cover my moral being by donating cash or a weekend here and there.”

Rather than:

“I’m going to do good things”

PITHY ASIDE, PLEASE IGNORE ——-

It’s ironic that beneficiaries of the academic institution (the worst offender…yes, even worse than hedge fund managers!) would be critical, since it’s the best opportunity to offset their moral failings without having to sacrifice much.

ASIDE COMPLETE ———

The replacement is to do work that does good. Doing good means affecting how risk is distributed in society. You’re either assuming your fair share of risk, or you aren’t.


This seems rather vague. Could you give a clear example of a program you would endorse? Ideally it would be specific enough that a person could set it up with adequate time and money.


Hoovedao.xyz

Give up short-term pumps like saving a few people from malaria or hunger, and instead participate in a network that enables long-term growth.

I don’t know how to say this without sounding really dumb…but there yah have it.

This community uses tokenomics to distribute effort across projects so that members support one another. You end up helping others without having to directly engage with anything they are doing.

And please give me more credit than to wallop me with the classic crypto smash. Some of us are good people and care. I'm not asking you to buy anything or "invest"; just give me the benefit of the doubt that I'm not trying to scam you, please. You can see I'm quite raw, as I hear it all day.


I understand your proposal for what should be given up. I don't know what it means to participate in a network that enables long term growth, unless it just means do things to grow EA, which are already being done. If your claim is that EA should fund your blockchain, I'd need to know more details about that blockchain.


I respectfully asked you not to make those insinuations. I don't want your money; can you drop that criticism there?

EA relies on people doing bad things to fund good things. There's another way; Hoove is that way.

It’s in the tokenomics, I’m happy to walk you through it.


I wasn't intending to insinuate anything--I currently have no idea what you're proposing. How does EA rely on doing bad things? What is Hoove and why is it good?


The best way to do EA is to murder a bunch of people, steal their money, then distribute it to charities. You can work a normal job, make $175k/year (probably working for a pretty crappy company doing pretty suspect things) and donate 50%, but the overall impact is negligible, even if you get 1,000 people to work with you. Rather, you need to do what Bankman-Fried did and steal money from people who don't know what they are doing, so you can unilaterally decide how its impact is distributed. This is of course unsustainable, and mostly inevitable.

Hoove allows people to absorb risk so that others can do things. You working on your water project shares risk with Bob working on an animal shelter. No one is required to be the bad guy to get the money, because the collective shares the risk of any given enterprise. If your animal shelter fails, you aren't ruined financially.
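The mechanism is never spelled out in this thread, so what follows is only a guess at what "the collective shares the risk" might mean in code, sketched as a toy mutual-insurance pool; nothing below is based on Hoove's actual tokenomics:

```python
# Toy mutual risk pool: members contribute up front, and a failed project's
# founder is partially compensated from the pooled funds. This is a
# hypothetical sketch, NOT Hoove's actual design.
def settle(contribution, members, losses, coverage=0.8):
    """Pay out up to `coverage` of each reported loss, capped by pool funds."""
    pool = contribution * members
    payouts = {}
    for name, loss in losses.items():
        payout = min(loss * coverage, pool)
        pool -= payout
        payouts[name] = payout
    return payouts

# Bob's animal shelter folds; the pool absorbs most of his downside.
print(settle(contribution=1_000, members=50, losses={"bob_shelter": 20_000}))
# {'bob_shelter': 16000.0}: Bob recovers 80% of his loss instead of being ruined.
```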

If it were risk-free to do the right thing, people would do it. Instead, they bite the bullet and work in the belly of the beast because they HAVE to, and the system loves that. The system also loves EA because it gives people a moral outlet while they bite their tongues at work. But we shouldn't have to bite our tongues. Our daily work SHOULD BE good work, not something we have to pay a price for. And yes, 99% of people making over $100k SHOULD pay a price for the damage they cause, but they shouldn't have to do that work, and they wouldn't have to if they worked in an organization like Hoove.
