79 Comments
Nov 22, 2023 · Liked by Richard Y Chappell

Why does Srinivasan use the expected value of being an anticapitalist revolutionary as an example of something that is hard to quantify? Anticapitalist revolutionaries have been around for more than a century now, and they have enough of a track record to establish that their expected marginal value is massively negative. Becoming an anticapitalist revolutionary is a rational thing to do if you want to maximize death and suffering. If EA philosophy stops people from becoming anticapitalist revolutionaries, then it has already made the world a better place, even if they don't go on to do any good at all.

Nov 23, 2023 · Liked by Richard Y Chappell

Others have said similar things, but to add my two cents:

As a preliminary: I am sympathetic to, and probably count as, an EA, so I am not really the kind of person you are addressing. But I can think of a few things:

First, you really might disagree with some of the core ideas: you may be a deontologist, so that some proposed EA interventions, though positive in expectation, are still impermissible (e.g. a "charity" that harvests organs from unwilling homeless people and donates them to orphans is bad, no matter how compelling your EV calculation). Or, as Michael St. Jules points out, on longtermism you might reject any number of the supporting propositions.

Second: Agreement with the core ideas doesn't imply all that much; you say to Michael that you are only interested in defending longtermism as meaning "the far future merits being an important priority"; but this is hardly distinctive to EA! If EA just means, "we should try to think carefully about what it means to do good", then almost any program for improving the world will endorse some version of that! What makes EA distinctive isn't the versions of its claims that are most broadly acceptable!

You can agree in principle with "core" EA ideas but think there is some methodological flaw, or a particular set of analytical blinders in the EA community, such that the EA version of those ideas is hopelessly flawed. This is entangled with my third point.

Third: So, if you agree with the EA basics, and you think EA is making a big mistake in how it interprets/uses/understands those basics, why not get on board and try to improve the program? Perhaps because those misunderstandings/methodologies/viewpoints are so central to EA that it makes more sense to just start again fresh, or because EA as an actual social movement is too resistant to hearing such critiques.

Like, take the revolutionary communist example from the other end: lots of people (even many EAs) would agree to core communist principles like "Material abundance should be shared broadly", and revolutionary ideas like "We shouldn't stick to a broken status quo just because it would take violence to reach a better world"--and there is a sense in which you can start as a revolutionary communist, and ultimately talk yourself into a completely different viewpoint that still takes those ideas as fundamental but otherwise looks nothing like revolutionary communism (indeed, I think this is a journey many left-leaning teenagers go through, and it wouldn't even surprise me if some of them end up at something like EA).

But I don't think people who don't start from the point of view of communism should feel obliged to present their critiques as ways of improving the doctrine of revolutionary communism. This is for both philosophical reasons (there is too much bad philosophy in there that takes a long time to clear out, better to present your ideas as a separate system on their own merits) and social ones (the actual people who spend all their time thinking about revolutionary communism aren't the kind of people you can have productive discussions with about this sort of thing).

Obviously that's an unfair comparison to EA, but people below have pointed out that EA-the-movement is at least a little bit cult-y, and has had a few high-profile misfires of people applying its ideas. I personally think its successes more than outweigh the failures, but I think it's fair for someone to disagree.

Finally, I'd like to try to steelman the "become an anticapitalist revolutionary" point of view. Basically, the point here is that "thinking on the margin" often blinds one to coordination problems: perhaps we could get the most expected value if a sufficiently large number of people became anticapitalist revolutionaries, but below some large threshold there is no value at all. In that case the marginal benefit of any one person becoming a revolutionary is negligible, yet it may still be the case that we would wish to coordinate on that action if we could. This is (I think) what Srinivasan is getting at: the value of being a revolutionary is conditional on lots of other people being revolutionaries as well. It's not impossible to fit this sort of thinking into an EA-type framework, but I think it's a lot more convoluted and complicated. But I don't think we should rule it out as a theory of doing good, or of prioritizing how to do good, even if I don't find that particular example very compelling.
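To make the threshold point concrete, here is a toy sketch in Python; the payoff numbers are invented purely for illustration and aren't drawn from anything Srinivasan or the post actually says:

```python
# Hypothetical "threshold" cause: value only materializes once enough people
# participate, so marginal reasoning and coordination reasoning come apart.

THRESHOLD = 1_000_000      # assumed number of participants needed before anything changes
TOTAL_VALUE = 10_000_000   # assumed value realized if (and only if) the threshold is reached

def total_value(participants: int) -> int:
    """Total value produced by a given number of participants."""
    return TOTAL_VALUE if participants >= THRESHOLD else 0

def marginal_value(participants: int) -> int:
    """Value added by one more participant, given how many have already joined."""
    return total_value(participants + 1) - total_value(participants)

print(marginal_value(500))                  # 0 -- one extra joiner accomplishes nothing
print(marginal_value(THRESHOLD - 1))        # 10000000 -- unless you happen to be the pivotal joiner
print(total_value(THRESHOLD) // THRESHOLD)  # 10 -- yet the per-person value of full coordination is positive
```

On the margin, each additional joiner (almost certainly) adds nothing, while the fully coordinated action is worth something per participant, which is exactly the gap between marginal and coordinated reasoning that the critique points to.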

author

Interesting, thanks!

(1) I don't think any EA charities violate deontic constraints!

(2) I think the core ideas are more neglected than you appreciate, esp. anything in the vicinity of prioritization / optimization. Most people aren't even *trying* to optimize (even within deontic constraints), and find the whole EA perspective very weird/alien.

(3) I agree that people aren't obliged to frame their critiques as ways to improve EA. But I don't care about the label. More people should be trying to do good effectively, and they simply aren't. Again, the *core* prioritizing/optimizing goals underlying EA are massively and unjustifiably neglected.

(4) re: "It's not impossible to fit this [coordination] sort of thinking into an EA-type framework, but I think it's a lot more convoluted and complicated."

I don't think it's especially convoluted or complicated, but it is more accurate than a heuristic like "Do what would be best if everyone did it." Philosophers, especially, should be identifying moral truths, not (just) heuristics.


"(1) I don't think any EA charities violate deontic constraints."

Promoting access to birth control/"family planning" might be an exception here, depending on one's deontology. Before EA was a known term, I think the Gates Foundation got some criticism for this. I don't know if any EA charities promote abortion access, but if they do, that's a very obvious deontological barrier for many.

I agree with you that most EA charities aren't doing anything that could plausibly be labelled deontologically wrong.


EA charities don’t do that


(1) They don't, but they could! And that fact alone means that a deontologist should be skeptical of the EA movement.

(2) Agree about prioritization and optimization (uncharitably, this may actually be the most important point: people don't want to say "yes, this is a good cause, but we can't make it a priority")

(4) But EA is a social/political movement, not (just) a philosophical movement, so if a heuristic works better to achieve EA-type ends, that's an important point for the movement (and frankly, probably for the philosophers too!)

Nov 22, 2023 · Liked by Richard Y Chappell

An interesting case is that Émile Torres is among the best-known and most aggressive critics of effective altruism, and I recall them (very admirably) helping to run a fundraiser for GiveDirectly -- in fact via the GWWC website.

I really think it is worth taking seriously that the main concern is with the peculiar and sometimes troubling social scene that has sprung up around the EA idea. (And the adjacent and much more troubling rationalist social scene.)

If people let their (IMO justified) worries about the people and social dynamics bleed over a bit into their judgment of the philosophy, well, maybe that's a good heuristic if you aren't a professional philosopher.

author

Yeah, that's fine for non-philosophers. But most of the critics I'm aware of *are* (or at least claim to be) philosophers. So I think they're badly failing in their professional duties.

Nov 22, 2023 · Liked by Richard Y Chappell

I've seen a lot of bizarre criticisms of EA lately. If someone says EA fails in some way, I wonder what movement they find better.


The “changing the underlying paradigm to permanently make human and earth outcomes better” movement.

If you look at the mission statements of the charities recommended by GiveWell you will see what I mean. Or listen to people like Nora Bateson or Douglas Rushkoff on the matter. The actual manifestation of the EA movement seems to massively enable the continuation of industrial capitalism, which is itself the key cause of most of the problems that the EA charities target. This is what I understand to be the real criticism of EA, and it is easy to miss because you must place your mind completely outside of the current system to see it.


Should people stop sending money for malaria nets and vitamins? What do you want them to do to stop continuing to enable capitalism?


Listen to James Schneider talk about the three levels of crisis. EA is generally addressing only the first level: the “system normal” level. These kinds of interventions are valuable but will never actually free anyone from the underlying system that is the cause of the problems in the first place. A lot of Westerners think that developing countries are just a bit silly or backwards. People have no idea about the IMF or the petrodollar or US financial seigniorage or the emergent impoverishing effects of globalised capitalist economics. EA doesn’t tend to confront these drivers of poverty. This is also why billionaires love EA; it does not threaten the existing system that they do very well from. It is like going into a fire to save someone, but instead of putting out the fire (paradigm change) or moving them to safety (system intervention), you just treat their burns where they lie (mosquito nets and vitamins) and do nothing else.

This criticism is not unique to EA. Nor is it fatal. It is simply a call to notice the huge deficiencies in the definition of “effective”.

https://youtu.be/3dxLs-zX5xA?si=UyIYyrVvTSus9Lda


I wouldn't underweight do-gooder derogation; I think that's most of it.

That instinct isn't merely the projection of some vague internal sense of shame or guilt, turned into hostility toward whatever is making you feel it. It's often a threat reaction. Threats create fear, and the fear/threat reaction is the source of all hatred.

It always seems inexplicable and totally irrational when someone else has a hate-filled threat reaction about something you don't personally value or care much about (or might assign a negative value to). Someone who gets all angry and hostile about proposals to ban or limit access to guns is incomprehensible to someone who doesn't like guns. To get into this mindset, you have to think of something that you truly LOVE. Or are addicted to. Without which life barely seems worth living, or like it would be unbearable. And then imagine someone trying to persuade others to ban it, or at least make the social consequences of it very severe.

I could easily make a perfectly rational and statistically supported public health and safety argument for banning, for example, bicycles, or dogs, or the internet, or alcohol, or porn, or having more than one child, or experimenting with new technologies without democratically approved permission and government oversight. Easy to imagine the hostility those would generate.

A lot of people feel like life wouldn't be worth living without their money, and are extremely emotionally attached to it and to the idea that it should not be distributed in any manner other than it is now. And also to their opinion of themselves as very rational, smart, moral and ethical. And they will react with hatred to any perceived threat to those things, or to people advancing an argument that they view as a threat.

On the topic of altruism more generally, I think this often goes further in a subset of people who have a wired-in primal impulse to be repulsed by, and to hate, those they perceive as weak and excessively compassionate. It's some type of carryover from times with a much higher risk to basic survival, when one weak member of the tribe (or members being too compassionate to them or to outsiders) could threaten the survival of the whole tribe. This is a pretty useless instinct nowadays but clearly still exists. If you search yourself, you can probably think of a few examples where you have a mild reaction of contempt for someone you view as being irrationally and stupidly bleeding-heart. Take that feeling and amplify it by a hundred, and I think that's why EA gets hostility from some.

I don't put much stock in people's rationalized explanations for this stuff and think it's mostly emotional orientations with narratives layered on top.

And on that note, the overly academic, colorless, emotionless/non-vivid language and insider vocabulary that most EA proponents use strike many as a distasteful and un-self-aware status game, which turns people off.


On the question of earning to give I think there is a principled critique which doesn't rely on the edge cases regarding doing immoral work.

The critique involves what you are asking people to do: to split their life up in such a way that they go into the highest-earning job in order to then donate it all, without any further involvement on their part. Unless you're already a committed consequentialist, this is a totally unreasonable thing to demand of people. What you do and what you value are just completely divorced from one another. This seems untenable (at least intuitively) to many people.

author
Nov 22, 2023 · edited Nov 22, 2023

Who is "demanding" any such thing? EA simply notes that EtG is often a good option -- and in some cases, even the best option. Like donating a kidney, it isn't (I'd say) a requirement. But it is certainly *admirable*.

Also, I don't think anyone is advising people to pursue jobs that they would hate on a daily basis. Rather, the suggestion is that if you'd be a good fit for a high paying job (i.e. would enjoy the job itself), then you should consider earning to give as an altruistic option.

See, e.g.: https://forum.effectivealtruism.org/posts/gxppfWhx7ta2fkF3R/10-years-of-earning-to-give

"I like my work. I get to work with incredibly sharp and motivated people. I get to work on a diverse array of intellectual challenges. Most of all, I've managed to land a career that bears an uncanny resemblance to what I do with my spare time; playing games, looking for inconsistencies in others' beliefs, and exploiting that to win.

But prior to discovering EtG, I was wrestling with the fact that this natural choice just seemed very selfish....

I would encourage critics to take an honest look at what my reference class - STEM graduates from elite universities and privileged backgrounds - is otherwise doing. When I look, I see a fair amount of frivolous expenditure and minimal attention given to non-financial ways of doing good; the choice is less 'banker who donates' vs. 'doctor' and more 'banker who donates' vs. 'banker'."


I'll leave aside the part about whether EA demands it or merely congratulates (i.e. finds admirable) such an option. Where I'd push back is on the idea that there are many such jobs which would fit the parameters of having an "uncanny resemblance to what I do with my spare time". Perhaps certain jobs in tech or your own career fit the bill, but overall such jobs are few and far between. So would EA then only suggest it to those people in such a peculiar position, and hold that otherwise earning to give isn't a viable option?

One rejoinder might be that many careers are pursued for status reasons rather than for the pure love of the work (assuming financial considerations are already secondary). However, I think that status and financial interests are too bound up with one another for this to help the case much. For example, you will often be surrounded by people (e.g. in banking) who flaunt their wealth to some degree and expect you to do the same.

From what I understand, EA has moved away from suggesting earning to give in most cases, perhaps because of these very issues, though my information might be out of date.

author

I don't understand the question. Most people may not have any job options that they love. The suggestion is just that, when considering careers, you have *some moral reason* to prefer a higher-paying one if you would then donate the proceeds. In particular, if you were going to pick a high-paying career *anyway*, then it's great to note that you now have an option for turning it into an altruistic one.

So: If you can find a high-paying career that you love, great! If all your career options seem boring, well, that sucks, but at least you have some moral reason to prefer a higher-paying boring career (if you'd then donate more) over a lower-paying boring career. I don't see anything to critique in this basic claim.

When people should be willing to sacrifice some degree of intrinsic interest in order to save more lives is just part of the broader question of how much we should be willing to sacrifice to help others. You absolutely do NOT have to be a consequentialist to find this question challenging. See: https://www.utilitarianism.net/peter-singer-famine-affluence-and-morality/


I think the greater concern is that these jobs eventually change you. And it is rather ignorant of human behavior to think they don't. There is ample social-psychological research showing the negative impact of merely having more money or status than others... it literally rewires your brain, makes you blind to the suffering of others, and makes you less compassionate.

Obtaining and keeping a high-paying job virtually always means that you will have to regularly, over the years, face and experience rather disgusting moral trade-offs that erode your integrity. It means you will be surrounded by others who are obsessed with money and status games, and ruthless about maintaining and advancing their position on those measures. It means you will almost never be around anyone professionally who seems to care about values other than status, money, or reputation; people are either flagrantly and shamelessly direct about that or, if they aren't, are often only paying lip service for reputational purposes. They will denigrate and condescend to anyone who doesn't share their values. You will witness borderline evil in the actions that high-earning and rich people take to secure and maintain their assets. And because all of these types of jobs require working all the time, you will have little to no time for socializing with family or friends outside of these circles. You will witness unethical or greedy behavior so often that it will start to seem normal, and others will start to seem hopelessly naive. You will probably get to see how the sausage is made, and it will be even worse than you'd imagined in your wildest dreams.

If you think that one can live in these conditions for 90% of their waking life, for years and decades on end, and not eventually be corrupted by them, you are wrong. We are all influenced by the culture and people around us, and by the ways we spend our lives hour by hour and day by day. I don't care how principled you think you are or whether you have the discipline of a monk: after you work this type of job for 20 years, you will not be the same, and your values and the very wiring of your brain will be compromised.

After you have sacrificed all of your time, and likely your youth and your health, in service of such wretched conditions, you will start to feel like you deserve all that money, and like you're a huge sucker if you don't try to hide it all from taxation and hoard it for yourself and your own little family/group, like everyone else around you. You'll start thinking that others simply can't imagine the hard work you've put in or the sheer level of pressure and stress you've withstood, so you earned every penny. You'll see that everyone around you only looks out for themselves, and giving it away seems like a pointless drop in an ocean of selfishness and greed. Worse, you will probably have been made painfully aware of how much charitable giving is at best a tax deduction and a way to further glorify reputation, and at worst an outright scam. Once you start making gobs of money, in amounts far surpassing your earliest dreams, it will no longer seem like much. You will find yourself increasing your spending while still telling yourself you're prudent and not wasteful or indulgent, yet somehow find your budget of "necessities" ballooning. You will start to view the more progressive and charitable beliefs of others in less high-stakes and demanding positions as luxury beliefs that the holders don't even realize are entirely dependent on their own easy and secure lives. You'll start to resent their naivete and idealism, and the seeming self-righteousness that goes along with it.

Source: myself. I'm a corporate lawyer. I started off extremely compassionate and charitable, very interested in making things more equitable, and had all kinds of ideas of what I would do with my money and career. I was what a lot of other people called a socialist, though I never categorized myself that way. I used to have very low material needs and always shunned luxury items and extravagant spending (still do). But after almost 20 years doing this, even though I'm still far more charity and equity oriented than 95% of those I work with, and still far more opposed to the man, I'm not the same. In a bad way. Simply put, I'm way more of a selfish greedy jerk than I used to be. And it feels uncontrollable and inevitable. It feels sort of... right. There's a small part of me that can stand aside and objectively be critical and say "look what you've turned into". But the bulk of my impulses and feelings are that this is not controllable and I'm too far into this world to get out.

That's a long way of saying that money corrupts. That's the criticism. And everyone thinks NOT ME. But it's been proven a billion times over. So the argument against earning to give is that it doesn't work, because once you've earned you'll have become a selfish asshole. Examples abound. Even the billionaires who are lauded for their charity have almost all pledged to make their big donations only after they die. Which is just a way to get estate taxes down for their kids, and no skin off their back.

Notably, most of the exceptions to this rule, such as Bezos's ex-wife, are not the ones who actually did the work to "earn" that wealth. They didn't go through the meat grinder and become assholes to get rich. So I guess if you want to target rich people for EA giving, I'd say focus on heirs and ex-spouses... they can often still be basically kind of good people.

author

People vary, and it helps to have a supportive community who share your values. I know plenty of good people in EA who (either do or did) earn to give and aren't (weren't) corrupted by it. Jeff Kaufman is a nice example: https://forum.effectivealtruism.org/posts/L8N4uh4GixhWCoGAg/ama-earning-to-give


It looks like he's a tech worker who was able to make millions, along with his wife, working at some of the best tech companies in the world. That might work. But the ways that 99% of people can make that much income will be much less pleasant and will expose them to a lot more brutally ruthless and distasteful situations on a regular basis. Perhaps being a lawyer is particularly bad on this measure, but my clients all got rich through other avenues and, for the most part, it's what I said. The rare scientist or engineer who strikes it rich doing what they enjoy does seem to be an exception, but that is vanishingly rare. For the most part I think the risk of being corrupted is great enough that this isn't a good method to promote. Targeting the spouses, kids, heirs, and ex-spouses for EA philosophy would be much more fruitful (and these people are in fact most of the famous philanthropists, historically).


Agreed. I suppose I'm thinking of EA recommending earning to give as a way of leading your life. And this has to be a way of leading a satisfactory life to some degree (or at least enough to maintain your own motivation). And it's just much more difficult to see how that happens with most examples of earning to give in which the recommendation to go into some career for that purpose actually changes your decision. So we should consider cases in which you forgo going into a higher-impact career in order to go into a higher-paying one and donate your earnings.

You framed things by saying: "If all your career options seem boring, well, that sucks, but at least you have some moral reason to prefer a higher-paying boring career (if you'd then donate more) over a lower-paying boring career." This is, as you say, totally uncontroversial, but who would choose the boring low-paying career over the boring high-paying one in the first place? It seems like the earning-to-give idea isn't adding anything at all in such a case.

author

Anyone who finds traditionally "altruistic" careers boring/unappealing, but worries that they have moral reasons to pursue them anyway?


Ah, that actually clarifies it. I just think they're such a tiny minority that it's almost not relevant, and this ends up really diminishing the importance of "earning to give" as an idea. If you are interested in helping others, you will, with very few exceptions, find such work more fulfilling than the alternatives.

Maybe there are some people who aren't motivated to help others, but just feel compelled to act in a certain way based on abstract moral principles. But they, again, seem like a tiny minority. Perhaps EA is mostly interested in talking to those people exclusively, though?


This is so true!

Dec 1, 2023 · Liked by Richard Y Chappell

I know this is a bit late, but I wanted to say one more thing that I think maybe gets at the objection to EA in a different way.

Finance journalist Matt Levine wrote something relevant to this recently. I'll paraphrase what he says to avoid extended quotation:

1 the core of EA is that your charitable donations should focus on saving lives, not on getting your name put on buildings

2 but you can take a wider view and note that "Spending $1 million to buy mosquito nets in impoverished villages might save hundreds of lives, but spending $1 million on salaries for vaccine researchers in rich countries has a 20% chance of saving thousands of lives, so it is more valuable" (a toy expected-value sketch of this step follows the list)

But...

3 there is no obvious limit to this reasoning; paying EAs high salaries to research how to save lives might have higher expected value than bed nets OR vaccines, so

4 "Eventually the normal form of “effective altruism” will be paying other effective altruists large salaries to worry about AI in fancy buildings, and will come to resemble the put-your-name-on-an-art-museum form of charity more than the mosquito-nets form of charity."
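To make the expected-value step in point 2 concrete, here is a rough sketch; the specific numbers are assumptions of mine for illustration, not Levine's or anyone's actual cost-effectiveness estimates:

```python
# Hypothetical comparison: a near-certain intervention saving "hundreds" of lives
# vs. a 20% shot at saving "thousands".

def expected_lives_saved(probability_of_success: float, lives_if_success: float) -> float:
    """Expected number of lives saved by a gamble with a single success outcome."""
    return probability_of_success * lives_if_success

bed_nets = expected_lives_saved(1.0, 400)            # assume ~400 lives, essentially certain
vaccine_research = expected_lives_saved(0.2, 5_000)  # assume a 20% chance of ~5,000 lives

print(bed_nets)          # 400.0
print(vaccine_research)  # 1000.0 -- higher in expectation, despite an 80% chance of nothing
```

On these made-up numbers the riskier bet wins in expectation, and that is the step that, as Levine notes, has no obvious stopping point.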

He says, "You do the extended-causality thing because … you think it is better? Because it has more capacity, as a trade — it can suck up more money — than the mosquito nets thing? Because it is more convenient? Cleverer? More abstract?"

And then he goes on to compare this to carbon offsets, and I think a tidier way to express all the above, and the reason why EA and offsets and so forth go together well, is that they are examples of the financialization of charity.

When people object (as in other comments in the thread) to attaching a numerical value to love, or whatever, I think what they are really objecting to is this sense that we've financialized it... not just that there's a numerical value, but that the value is taken to perfectly summarize how we should treat love in terms of trades and trade-offs and so forth.

This kind of a response is more of a core disagreement with EA, and with utilitarianism more broadly: it can encompass deontological critiques, for example.

But even if you're OK with that, I think the cult objections, and the objections to earning to give, and so forth, can still flow from a certain critique of the finance-ness of EA.

Which is to say that even normal, regular finance, the kind that's just about money and stocks and bonds, has a tendency toward abstraction and opaqueness that has historically contributed to speculation, fraud, and other things of that nature. And I think people might feel of longtermism, or earning to give, or donating millions of dollars to deworming charities that might not accomplish anything, that they are sort of like the subprime mortgages of the charity world.

As with normal finance, efficiency is good, and only the craziest people think banks should make *no* efforts to find complex trades that they expect to pay off--but the more abstract and convoluted a trading strategy, the more divorced from the "real" economy, the more likely it is to just be Dutch tulips or bored apes.

And the fact that SBF was a prominent figure both in actual financial speculation and fraud, and in a certain kind of EA that I'm arguing is analogous, feels more like a core flaw in EA's approach than just a coincidence, or a bad apple, or whatever.

I think that's why I find myself in the middle: I'm not unsympathetic to many of the criticisms of EA the movement, just as I thought the NFT boom and the WeWork saga were examples of finance run amok, creating the illusion of value out of speculation and fraud--but I still think banks should make loans and mortgages, and I still think charities and donors should think about how to get more value from each charitable dollar they spend. The problem is, as Levine suggests, there's no bright line, no place to stop and say, "this is clearly as abstract and high level as we should be".


Great essay. My thoughts on EA hate:

1. I don’t get why humans have the drive to leave no good deed unpunished, but this is core to a lot of people’s nature. Explaining that probably explains EA hate.

2. The movement of EA can be separated from the philosophy, and there are some legitimate critiques of how the movement operates. Slippery arguments can conflate the two, convincing naive people that effectively doing good is wrong.

Nov 22, 2023 · edited Nov 22, 2023

"core EA claims on controversial topics (from “earning to give” to “longtermism”) are clearly correct"

That seems pretty disputable for longtermism, and I would say that longtermism is clearly not clearly correct. I don't mean that it's clearly false, and I definitely give some longtermist views moderate weight myself under normative uncertainty, but I can imagine someone reasonably giving it very little weight and practically ignoring it. It typically relies on many controversial claims, including about aggregation, population ethics, tractability, predictability and backfire risks, attitudes towards uncertainty/ambiguity, risk and decision theories. Bias and motivated reasoning can make people more likely to do things that are overall harmful, and this is easier to do when there's poor evidence and feedback, as is often the case for longtermist work.

(To be clear, I think other areas of EA are subject to bias and motivated reasoning, too.)

Longtermists also have some high-impact failures that should make them worry about the judgement of longtermists and the promotion of longtermism or its concerns: Sam Bankman-Fried and potentially accelerating AI risk. On the latter:

1. there are the recent events with OpenAI,

2. longtermists potentially being counterfactually responsible for the founding of OpenAI, funding of DeepMind (https://twitter.com/sama/status/1621621724507938816 and https://forum.effectivealtruism.org/posts/fmDFytmxwX9qBgcaX/why-aren-t-you-freaking-out-about-openai-at-what-point-would; see also the comments there), and

3. Open Phil supporting OpenAI's early growth (although Moskovitz disputes this https://forum.effectivealtruism.org/posts/CmZhcEpz7zBTGhksf/what-happened-to-the-openphil-openai-board-seat?commentId=weGE4nvBvXW8hXCuM) for board seats and influence it has now lost.

author

> "Longtermists also have some high-impact failures that should make them worry about the judgement of longtermists and the promotion of longtermism or its concerns"

Sure, but that doesn't have anything to do with whether the core claims are true. The paper is very clear about separating these forms of evaluation. "Those 'longtermists' are doing longtermism wrong!" is a perfectly reasonable position that I have no beef with.

Nov 22, 2023 · edited Nov 22, 2023

It supports the position that it's too hard to ensure our net impact over the far future is large and positive in expectation (after appropriately accounting for the failures and risks of future failures). These high-impact failures were partially caused or supported by longtermist leaders, who are supposed to be selected on the basis of their judgement. But if our best judgement or selection processes produce these failures, maybe human judgement just isn't up to the task.

Longtermism isn't just a philosophical claim about stakes or where much, most or almost all of the potential value is. It's a claim that influencing the far future should in fact be an important priority in practice, which depends on the particulars of the world we live in. It depends on who will be doing longtermist work and what they'll be doing. It depends on people's judgements, and how bad bias, motivated reasoning and the unilateralist curse are and will/would be. It depends on how others react to longtermism, its ideas and its work. To make a strong case for longtermism, you have to address backfire risks and our potentially deep uncertainty about them.

That being said, I expect and hope longtermists are learning from these mistakes, and my impression is that many are in fact very worried about backfire risks, including some leadership, although recent events with OpenAI aren't very encouraging. And I don't think what I've written is overwhelming evidence against longtermism.

author

> "Longtermism isn't just a philosophical claim..."

I don't want to get into a verbal dispute here, so I'll just note that there is a philosophical claim in this vicinity -- call it "longtermism*": the claim that the far future *merits* being an important priority.

That's the claim that I'm interested in defending here. Then there are a bunch of downstream empirical questions about "who will be doing longtermist work and what they'll be doing". That's important for determining one's overall verdict on whether, e.g., donating to the Long-Term Future Fund is a good idea or not. But it's not something that philosophers have any professional expertise in evaluating either way, so I make no attempt to speak to that here.

Many people get off the boat earlier, by disputing even the underlying philosophical claim (longtermism*). So it's important to be clear that that's my target here.

Nov 24, 2023 · Liked by Richard Y Chappell

I don’t think “longtermism” is a good name for that, since it is easily confused with the existing real-world movement that uses all these assumptions that you are not interested in defending. Perhaps you could call it something else. Does it need to be a single word? Let’s be super clear and call it “giving equal moral weight to all present and future people, in principle.” You would then be able to say “I understand why some people don’t support longtermism as a movement, but I don’t see why you wouldn’t give equal moral weight to all present and future people, in principle.” I think this would be less confusing.

author

Depends on the audience! For philosophers, I think it's very natural to read 'longtermism' as the temporal analogue of 'cosmopolitanism' (neither of which requires *strictly* equal weight, but also doesn't commit you to any particular empirical assumptions, or to supporting any particular real-world individuals or movements).


You say this, but of course many philosophers also want to comment on longtermism as a movement, too! This is true whether we are talking about Will MacAskill, who supports it, or Mary Townsend, who doesn’t. So the risk of confusion must surely still exist, even amongst philosophers.


You accuse Mary Townsend of motivated reasoning. Interestingly, she accuses Effective Altruists of motivated reasoning, too. She writes "The pathos of distance allows the good deed on another continent to take on a brilliant purity of simple cause and simple effect—you have no connection to the recipients of the hypothetical net beyond their receipt of your gift—and so you, back at home, can walk right past the homeless guy without having to look at him, or for that matter, smell him. You have purchased the carbon offset credits of the heart."

If you truly want to understand the viewpoint of a critic of EA, it seems to me that the obvious first attempt, akin to the philosopher's "Have you tried turning it off and on again?", would be to consider the possibility that the people you disagree with believe what they say they believe, for the reasons they say they believe it. In the case of Mary Townsend, she outlines a theory of value that is entirely in line with caring for the homeless guy in front of you over distant people to whom you send impersonal cash. She says that "almost every human good there is beyond mere accumulation of healthy days or years—for instance, the goods of justice, love, truth, and compassion—are not amenable to numbers, let alone predictable by dint of them." Accordingly, rather than attempting to calculate such things mathematically, we should presumably instead respond in a human, interpersonal way to the people around us. We cannot love someone with whom we cannot interact, so, even though it would be (hypothetically) just as virtuous to love someone who is halfway around the world, or who is not going to be born for another five hundred years, we should not try to do this because it is impossible.

I do not expect you to agree with her on this, obviously. But one might hope that you would not need to agree with it in order to consider the possibility that others might.


I've written a detailed reply to Townsend https://benthams.substack.com/p/the-bulwarks-article-on-effective.


Thanks, I’ll take a look!

author

Townsend claims that EAs and utilitarians "deserve" bad things. Simply disagreeing with someone is not generally sufficient reason to adopt such a hateful attitude, and I didn't notice any argument in her piece that would support it.

The substantive view you describe doesn't sound like anything a reasonable person could actually believe. It would imply, for example, that you have more reason to give a homeless person a hug than to solve world hunger. The suggestion that human goods like love are "not amenable to numbers" is patently absurd. If you save a million children from dying of malaria, there is no question that there will be more love in the world as a result, since most of those children are loved by their mothers, and will go on to form new loving relationships of their own. The suggestion that we should not try to save those distant children, because it is "impossible" for *us* to love them, is obscenely self-centered. The agent's love is not the only love that matters.

Still, I wouldn't say that Townsend deserves pie in the face. I just think she's wrong, and it's easy to explain why.

On the issue of "walking past the homeless guy", I discuss precisely this issue in depth in my paper "Overriding Virtue": https://philpapers.org/rec/CHAOV

(I argue that a good agent should feel torn about it, even while prioritizing higher-impact ways of helping others. This is a straightforward implication of the token-pluralistic utilitarianism that I've defended elsewhere. So Townsend is partly just mistaken about the moral theory she's attacking. And of course it's not as though non-EAs have any difficulty ignoring homeless people. By encouraging us to also ignore the distant needy, Townsend is pushing her readers in entirely the wrong moral direction.)


It’s easy to explain why Mary Townsend is wrong if you assume utilitarianism as a premise, yes! But you know perfectly well that she disagrees with that premise. When you write “she’s wrong, and it’s easy to explain why,” you sound smug as heck about your refusal to consider Mary Townsend’s perspective. I’m not sure that it’s right, exactly, for her to want to see people like you humiliated in response, but I do think it’s understandable. Moreover, I think irritation with smug narrow-mindedness from utilitarians is a much more likely explanation for what you are referring to as “viciousness,” here.

You might respond that any smug narrow-mindedness is outweighed by the good done, but that would be an extremely utilitarian argument to make. You might also respond that you are neither smug nor narrow-minded, which is possible, but if so, the perception that you are is nevertheless easy to understand.

author

Where do you think I am "assuming utilitarianism as a premise"?

You don't have to be a utilitarian to think that, if given a choice between the two, one should sooner end world hunger than give a homeless person a hug! I'd expect ~99+% of people would agree with this verdict, and it certainly is not the case that anything like 99% of people are utilitarians.

For more on this mistake, see: https://rychappell.substack.com/p/beneficence-is-not-exclusively-utilitarian

Or are you denying that saving children's lives results in more love in the world?

Or are you suggesting that the agent's love *is* the only love that matters?

I have no idea what you're trying to suggest, because instead of substantively engaging with my argument, you resort to puerile name-calling. If you want to comment here again, please actually engage with the ideas. If you can't do that, I'll simply ban you -- I have no interest in hosting vacuous nastiness.


My apologies. I wasn’t trying to be nasty for the sake of it. Part of what I am disputing is your characterisation of Mary Townsend’s likely motives. In order to do that, I have to provide a model of my best guess at the kind of internal state that might lead to her feeling negatively towards Effective Altruism. I suspect that some form of interpersonal dynamics is the most likely answer. However, I should not have accused you of smugness, because there are kinder explanations for your behaviour just as there are kinder explanations for Mary Townsend’s behaviour than the ones you choose to give. Sorry for not being more gentle.

The short answer to your question about where you assumed utilitarianism is that you responded to the idea that love is good by making a calculation: love is good, therefore more love is more good, therefore maximise the amount of love. The entire pattern of thought here assumes that “identify goods and maximise them globally” is the correct style of moral reasoning to pursue.

author

Thanks. But I wasn't assuming that we should maximize love. Rather, I was there responding to the (seemingly descriptive) claim that human goods like love are "not amenable to numbers, let alone predictable by dint of them." That seems factually inaccurate, for the reason I gave. What we should do about this fact is a further question.

My argument against the view you describe (that we should simply try to instantiate love in our own lives, and not care at all about distant people dying of malaria) was rather the seemingly absurd implication of prioritizing a hug over ending world hunger.


I think you’ve failed to understand what it means for certain moral truths not to be amenable to numbers. Just because you can find a way to describe love in terms of numbers does not mean that those numbers are sufficient for the purposes of moral analysis.

I find it very plausible that the heuristic of “try to put numbers on things before making moral decisions about them” would have the potential to lead people badly astray in many situations. Rather than reading “not amenable to numbers” as “it is impossible to come up with numbers that vaguely describe this,” I think you should read this as a claim that a numerical style of reasoning about such things is at best insufficient and at worst harmful.


Until you understand the critique, yes, your addressing it will feel strange. You don’t get it. And don’t even seem to want to. Which is so weird, indeed.

Nov 22, 2023 · Liked by Richard Y Chappell

Care to elaborate on what he is not understanding? Saying he does not want to understand the critique seems unfair given this explicit request for explanations.


Yes!

In short:

- effective altruism is a solution to poor investment strategies in a post-FOREX world (if you don’t accept that, we will have to have that discussion).

- It fails as a solution to that problem because it uses the same infrastructure that enabled those poor strategies: a dollar-qua-dollar optimizing approach (which is inherently limited by that structure)

- The result being that those who participate lose their voice in the strategic discourse by giving up their votes (aka dollars), thus further enabling those who don’t participate to accelerate their poor strategy

——

Taken on its own - without considering broader implications - EA is the right way to live, fair enough. But it won't solve the problem it sets out to, and thus, for those capable of addressing that problem, it is not a viable solution.

In a sense, as an approach to the death of the prudent person (the problem of poor investment strategy), EA fails its own test.

(Go on, laugh, ignore, chalk it up, pat yourself on the back, hardy har har: if I hadn’t made similar comments to deaf ears on Richard’s posts, I might be less cynical.)


I also don’t understand what “effective altruism is a solution to poor investment strategies in a post-FOREX world” means well enough to agree or disagree. I’m not familiar with post-FOREX as a piece of terminology, and Google’s explanation that FOREX means, essentially, currency exchange, is not sufficient for me to work it out.

It sounds to me like you’re saying that

(1) Politics (writ large, not just literal electoral politics) is heavily influenced by dollars.

(2) Most people use their influence poorly, leaving unspecified whether that is through selfishness or malice or misguidedness or whatever combination thereof.

(3) The direction society takes is ultimately determined by political influence.

(4) By spending money on altruism instead of political influence, EAs are ceding control to those who wouldn’t do such a thing. Therefore, (5) EAs are ultimately giving up social control to those who *wouldn’t* be charitable, thus

(6) EA is either net ineffective (because any good done is offset by the harms of their giving up social influence) or net harmful (because over time bad voices have greater influence on society as a whole, which both harms the world directly and may further lock us into harms via path dependency).

Does this sound like what you’re claiming?


Following up here. If you’re just busy, all good, but please do respond to my comment.

We can debate the problem EA wishes to solve

I can share my corrections if necessary (that is, if you think the view you laid out is ridiculous)

Or we can explore what would solve the problem in the form you presented with or without the corrections (again the corrections are only to help you accept that view OR stimulate further discussion on that structure and its alternatives)

OR maybe you think this isn’t worth your time…


Discussion will be easiest if I start with critiques of each line of the (informal) proof I laid out. Bear in mind that while this is a proof I think one could give, I’m not sure it’s your argument. Some of these are minor points that just give me pause; others I think are fatal.

(1) Politics is heavily influenced by dollars.

True, but politics is also influenced by lots of other things. The acclaim from doing great works of charity, for example, can buy lots of influence. EA is specifically not optimized for acclaim (at least, it isn’t supposed to be), but also this stuff is hard to predict and hard to measure.

(2) Most people use their influence poorly, leaving unspecified whether that is through selfishness or malice or misguidedness or whatever combination thereof.

True again, but if malice/selfishness are the cause, that implies an opening for EA (is this what you mean by “EA isn’t effective but it could be”?), and if misguidedness is the cause, that implies that EA or its alternatives might similarly struggle to effect change via influence.

(3) The direction society takes is ultimately determined by political influence.

Partially true. It’s also influenced by things like technological change, which depends on things like the number of scientists. More smart children surviving malaria, or not dying in an apocalypse, could be very important.

(4) By spending money on altruism instead of political influence, EAs are ceding control to those who wouldn’t do such a thing.

Those children who survived malaria might not have good politics! And they might not have political influence in the Chinese government or with the executives of FAANGs or… But also, how is influence actually accrued? Voting in the US is one source of influence that is far less controlled by others’ money than most partisans think. Influence often comes from good looks, from the right accent, from a host of other things that are not affected much by where someone puts their dollars. Sure, merely having lots of money gets you in a lot of rooms. But it does not follow from “wealthy people have disproportionate influence” that charity is impossible, even if you accept the assumption that society is controlled by a zero-sum contest of influence.

(5) EAs are ultimately giving up social control to those who *wouldn’t* be charitable.

5 follows from the previous premises, if we accept them.

(6) EA is either net ineffective (because any good done is offset by the harms of their giving up social influence) or net harmful (because over time bad voices have greater influence on society as a whole, which both harms the world directly and may further lock us into harms via path dependency).

6 also follows.

Obviously I don't think the premises survive scrutiny.

Bonus: not a philosophical counterargument, but a practical one: suppose altruists give their all to accruing social influence. If this is all zero-sum and highly competitive (the latter of which seems to be necessary for the claim that EA can’t work), then how are they actually supposed to do good with their influence later? If they spend that influence to do good, they’re not optimizing for later influence, and they end up losing anyway.


Thank you so much for your careful thoughts.

I believe my adjustments to the argument will make it much stronger in your view.

However, you have raised other issues that we would eventually arrive at, and which are more important than, and somewhat independent of, whether you accept the argument or not.

Still, in a few days or so I’ll address each point as carefully as you have, and will suggest a set of issues which I think will still be interesting whether we can align on this argument or not, meaning that we would otherwise get to keep talking!


Thanks for the engagement! I plan to reply, but tomorrow, because I think this discussion is worth engaging with in a way that I can’t do effectively on my phone. Today and yesterday I’m stuffed with family obligations.

The short version of my reply is that I think there’s a core truth to the argument I laid out, which doesn’t necessarily rebut EA (it takes the form of suggesting that a different approach might better maximize the good, not that such a thing is impossible), but does suggest that politics is underrated as an avenue.

But I don’t think the overall argument is strong, stemming largely from ambiguities in how political influence works.

Beyond that, it is not clear to me, for example, that someone buying a yacht retains more influence than someone buying bed nets, and if the bed nets also do some good, that’s an opportunity for real impact. And I don’t think that most wealthy people are avoiding all luxury purchases for the sake of maximizing influence. Therefore there’s at least some room for good to be done.


No problem, get through the day and hopefully have some fun.

I appreciate your short response, look forward to your long response, and further expanding on all these issues!


That’s the gist of it, but I’d correct a few minor points.

ALSO, I wouldn’t say that’s what I’m claiming; I’m saying that’s what effective altruism is a solution to. Now, there are many EAs who might reject that in favor of “they are doing what they can”, but I don’t think that level of cynicism rejects the broader goal of EA. Which, btw, I agree with and think we need to address. I just don’t think EA does the job, as you laid out nicely (again, minor corrections aside… and I’ll make those if you for whatever reason think the framework you laid out is flawed… I imagine my corrections will change your mind, if necessary).


Thanks! I guess I'm not sure I understand the "poor investment strategies in a post-FOREX world" piece, and the rest seems to flow downstream. Is this a reference to poor management of existing resources, such that any additional resources will likely also be mismanaged?


Yup!! I get it: what you said expresses that in other words, but it would be great to hear what you think EA solves. And if I can’t translate what you say into that view, then it would definitely make sense for me to explain it more completely, and we can debate what problem EA purports to solve.


I don’t know much about it, but from the outside, and in the wake of SBF, EA can seem like an elitist social movement that seeks to recruit people with potential and teaches them that they have an obligation to seek out positions of power and wealth (‘earn to give’) because now they belong to a group that has figured out morality. In other words, EA seems to have cult-like elements.

author

I'm not sure that those features are "cult-like". See my comments about billionaire philanthropy: this is just instrumental rationality. If you want to change the world in any way, it's helpful to have power and money.

Of course, many who pursue power and money do so for bad reasons. So you may have a general heuristic of being skeptical of anyone who pursues these. Such skepticism is probably healthy -- in retrospect, I wish EAs had been less trusting of SBF! But it is important to recognize that "this theory implies that we have instrumental reason to pursue power and money" doesn't show that the theory is false. Because any theory that hopes to improve the world will *also* have this implication. And it's very plausible that the true moral theory would tell us to try to improve the world.


Calling them "instrumental rationality" is not a defense against their being cult-like; I don't think the two things are mutually exclusive.

If one thinks that EA members do more good in powerful positions than other people, then an EA member in a hiring position in a company has a moral obligation (according to the kind of utilitarian principles they use) to hire EAs preferentially over other candidates (have to save lives!). If this kind of preferential treatment is more effective if the whole thing is kept secret, then there is a moral obligation to keep it secret. (I am not claiming this is what EA people do, I am just pointing out that these arguments are easy to make.) So now we have what looks like a secret conspiracy to take over the world.

Keep in mind that any group that claims that it is "rational" and in possession of "the true moral theory" is going to attract suspicion.

Nov 22, 2023 · edited Nov 22, 2023

Richard, considering you take the view that longtermism is commonsensical (as I do), how concerned are you about AI risk? I know that Will MacAskill is not nearly as persuaded about the likelihood of that particular disaster as others are, and I would be curious to know what you, another academic ethicist, think.

author

I'm moderately concerned, as a result of being largely agnostic: https://rychappell.substack.com/p/x-risk-agnosticism


Re: earning to give and billionaire philanthropy, impulsively disliking these things is not that unreasonable, given a certain outlook on what morality is supposed to be.

I think the people that criticise EA on these points tend to think of morality in a very meta way - for them, it's a system that assigns praise (or blame) in a society, as opposed to the object-level, prescriptive stuff analytic philosophers like to talk about. You might say that they conceive of morality as an institution, and said institution is but one of the many levers you can use to change the way things are.

With that mindset, it's easy to interpret the standard EA spiel about donating (and the implicit acknowledgement that the good you do mostly scales linearly with the amount you donate) as a claim about people's virtue, since evaluating people's virtue is the only thing morality-as-an-institution ever does. Specifically, they read it as "that billionaire who rides around in a Lamborghini happened to have a sudden spasm of conscience and donated 1% of his fortune to GiveWell, so now society ought to regard him as literally 100x more virtuous than the guy who's been volunteering in soup kitchens for ten years of his life". Obviously this is pretty wild, and is not the direction they'd like public morality to take. Hence the strong negative reaction to (what they think is) effective altruism.

Not that there's any excuse for refusing to talk about object-level morality because "it's an institution, a historically contingent phenomenon!" - it's not like saying these magic words makes the normativity inherent in your every action go away.

author

A relevant excerpt from my contribution to a forthcoming book on EA:

"A good moral agent will prioritize morally more important ends over less important ones. This suggests that we can assess how morally good a person is (to a first approximation) simply by looking at how much they (want to) do to improve global well-being.

We may split this assessment into two dimensions. How *virtuous* or well-meaning a person is depends on their desires, the expression of which depends on contingencies of their life situation. A billionaire who donates one million dollars to charity is much less generous, in disposition, than a poorer person who donates a large portion of her income and would give many millions were she as wealthy as the billionaire. But how morally *beneficial* a person is to the world simply depends on the net value of their contributions: in giving a million dollars, the stingy billionaire presumably does more good than most of the rest of us are able to.

Both dimensions of agential assessment can be approximately reduced to assessments of beneficence, just in different ways. One measures the strength of a person’s beneficent desires; the other, how much they actually contribute to promoting beneficent ends."


Unfortunately, as good as it might feel, sneering about capitalism or whatever on social media doesn’t actually help anyone very much, and the unflattering comparison to people who donate their kidneys and give money to effective charities must be very annoying.
