
I am one of those who find Effective Altruism (EA), especially in its Oxford doctrinaire version, both silly and misguided. I think you may be doing a disservice by "eliding" the concept of effective altruism (wanting to make a difference) with the orthodoxy of Effective Altruism, which is extremely specific, even to the extent of proposing mathematical approaches.

The silliness of EA is that it fails to understand what purpose it serves, and it is thereby blind to its own weaknesses. EA falls into a general category of "ways of making decisions about resources" that includes a massive range of ideas, algorithms, and historical examples. In many ways, the question of what service a society's resources (people, capital, and land) can and should provide is the underlying concern of Capitalism, Socialism, and every economic system ever proposed. How we reach decisions to select and optimize these systems is the entire study of political science.

The fact that EA arbitrarily constrains its interest to eleemosynary activities only deepens the deception. It appears "sensible" because it applies modern management theory to non-market concerns, but in effect it is worthless beyond its basic suggestion that "we should think about whether our giving matters."

If I am part of a society with resources beyond the barest individual sustenance, I am faced with the question of how to use those resources, whether individually or collectively. I could take the extra seed and feed another family, trade it with another person, or plant it to expand future yields. At a discrete level, understanding the best use of this surplus (especially with respect to future states) is the basic concern of economics.

If the resources are more than trivial, a society will inevitably have more than one valid option for its surplus, and it will need a way to figure out the optimal approach. In the example above, we could decide to create a seed bank, collectively trade the surplus, or use it to feed an army. These questions are political science. Democracy? 50%+1? Dictatorship? Philosopher king? All are forms of collective decision optimization.

Charitable giving (or investment) is a subset of both of these issues. It is economic in that it involves calculating/predicting how to optimize the use of resources toward an outcome, whether effectively or not, and it is political in that it involves making a decision among multiple options where the outcome is not precisely knowable.

We have economics and political science because we live in an ecosystem/society/economy that is so monstrously complex that it cannot be predicted or optimized without simplifying assumptions. We don't need democracy because we like voting; we need it because giving one person control tends to lead to worse predictive power over the future. We don't need capitalism because we like day trading; we need it because it is a good algorithm for resource pricing absent monopolies/monopsonies/market distortions.

What is most critical is that our best approaches to both are deeply flawed, again because the arena in which we are working is so complex that the only model for predicting the future is the system itself.

What EA tries to do is wipe away 50,000 years of thinking and just say: "Calculate which activity will benefit the most people and put your money there." If I am being charitable, this is just a sad rehashing of Utilitarianism that chooses to ignore why it failed as a political and economic approach. If I am being more cynical, it is just the latest incarnation of the divine right of kings, where the person doing the donating is uniquely qualified to assess the best outcome because they are the person who accumulated the money.

"Effective" giving, like effective corporate management, effective venture investing, effective foreign policy, hell, even effective personal dieting, is only possible rhetorically, not in reality. We cannot individually predict what will be the most effective use of capital (mostly we are wrong). The economy and society are far too complex, and the only way we can assess effectiveness is in retrospect.

In my opinion, the most effective altruism is to distribute money broadly, allowing the underlying system to allocate the money dynamically. Collecting money is always distortionary. Giving money directly is a better algorithm for finding truth than a dude at Oxford.

None of this relates to "selfishness".


Agreed. I don't think it does either the supporters or the opponents of Effective Altruism any favours to say that everyone's an Effective Altruist. (I have similar objections to the view that "all Buddhists are Engaged Buddhists".)


> I also think that most people’s explicit moral beliefs make it hard for them to deny that scope-sensitive beneficentrism is more virtuous/ideal than their unreflective moral habits. So my hope is that prompting greater reflection on this disconnect could help to shift people in a more beneficentric direction.

It could also cause people to shift their explicit moral beliefs to line up more with their behaviors and intuitions. I'm curious, would you expect both to happen?

author

Yes, I'd expect a bit of both (different people resolving the inconsistency in different ways). Seems worth it though, since changing behaviour in a good direction seems like a bigger plus than the minus of prompting some beliefs to change for the worse (without changing behaviour).


Are there any good responses to cases of non-religious martyrdom from tautological egoists?

author

It's not a *good* response, but the view relies on a global revealed preference theory of well-being, such that *whatever* you do -- even killing yourself -- thereby *automatically* qualifies as being what is personally best for you.

The two steps are to: (1) use the idea of revealed preferences to claim that, trivially, people always do what they (in this sense) most want to do. (This is true by definition, if you define what they want to do in terms of what they actually do.)

(2) Adopt a "global preference" theory of well-being, so that what is best for you is to achieve what you overall most want.

Putting the two together, you can (foolishly) claim that whatever anyone does is *thereby* what is best for them.


Ok, thanks.


What is really at stake?

Perhaps you mean that a religious martyr can console themselves that they will have a reward in the afterlife? But this isn't really altruism, is it?

Tautologies are usually unhelpful. The question in this case is whether the conventional distinction between egoism and altruism is any more useful. The conventional view distinguishes one's impulses from the more refined moral judgements one makes; the tautology just lumps them all together as parts of the self. What can one say in one case that one can't in the other?

Is it a contradiction for an egoist to care about someone/something more than about their own life? I think the distinction is sometimes made between a normative egoist, who thinks one ought to care more about one's self, and a psychological egoist, who just thinks people tend to be self-centered or something. That distinction is left out of this post.
