Against Tautological Motivations
Not everyone is selfish; not everyone cares about the impartial good
Human motivations vary widely (at least on the margins; “human nature” may provide a fairly common core). Some people are more selfish than others. Some more altruistic. Among the broadly altruistic, I think there is significant variation along at least two dimensions: (i) the breadth of one’s “moral circle” of concern, and (ii) the extent to which one’s altruism is goal-directed and guided by instrumental rationality, for example seriously considering tradeoffs and opportunity costs in search of moral optimality.
I think some kinds of altruism—some points along these two dimensions—are morally much better than others. Something I really like about effective altruism is that it highlights these important differences. Not all altruism is equal, and EA encourages us to try to develop our moral concerns in the best possible ways. That can be challenging, but I think it’s a good kind of challenge to engage with.
As I wrote in Doing Good Effectively is Unusual:
We all have various “rooted” concerns, linked to particular communities, individuals, or causes to which we have a social or emotional connection. That’s all good. Those motivations are an appropriate response to real goods in the world. But we all know there are lots of other goods in the world that we don’t so easily or naturally perceive, and that could plausibly outweigh the goods that are more personally salient to us. The really distinctive thing about effective altruism is that it seriously attempts to take all those neglected interests into account.…
Few people who give to charity make any serious effort to do the most good they can with the donation. Few people who engage in political activism are seriously trying to do the most good they can with their activism. Few people pursuing an “ethical career” are trying to do the most good they can with their career. And that’s all fine—plenty of good can still be done from more partial and less optimizing motives (and even EAs only pursue the EA project in part of their life). But the claim that the moral perspective underlying EA is “trivial” or already “shared by literally everyone” is clearly false.
So I find it annoyingly stupid when people dismiss effective altruism (or the underlying principles of beneficentrism) as “trivial”. I think it involves a similar sleight-of-hand to that of tautological egoists, who claim that everyone is “by definition” selfish (because they pursue what they most want, according to their “revealed preferences”). The tautological altruist instead claims that everyone is “by definition” an effective altruist (because they pursue what they deem best, according to their “revealed values”).
Either form of tautological attribution is obviously silly. The extent to which you are selfish depends upon the content of what you want (that is, the extent to which you care non-instrumentally about other people’s interests). Likewise, the extent to which you have scope-sensitive beneficentric concern depends upon contingent details of your values and moral psychology. Innumerate (“numbers don’t count”) moral views are commonplace, and even explicitly defended by some philosophers. Much moral behavior, like much voting, is more “expressive” than goal-directed. To urge people to be more instrumentally rational in pursuit of the impartial good is a very substantive, non-trivial ask.
I think that most people’s moral motivations are very different from the scope-sensitive beneficentrism that underlies effective altruism. (I suspect the latter is actually extremely rare, though various approximations may be more common.) I also think that most people’s explicit moral beliefs make it hard for them to deny that scope-sensitive beneficentrism is more virtuous/ideal than their unreflective moral habits. So my hope is that prompting greater reflection on this disconnect could help to shift people in a more beneficentric direction. (Some may instead double-down on explicitly endorsing worse values, alas. One can but try.)
As with the “everyone is really selfish” move, I suspect that appeals to tautological altruism tend to reflect motivated reasoning from people who don’t want to endure the cognitive dissonance of confronting the disconnect between their everyday moral reasoning and the abstract moral claims they appreciate are undeniable. I think that’s super lame, and people who are opposed to the EA conception of beneficence should stop eliding the differences, grow a spine, and actually argue against it (and for some concrete, coherent alternative).
I am one of those who find Effective Altruism (EA), especially in its Oxford doctrinaire version, both silly and misguided, and I think you may be doing a disservice by "eliding" the concept of effective altruism (wanting to make a difference) with the orthodoxy of Effective Altruism, which is extremely specific, even to the extent of proposing mathematical approaches.
The silliness of EA is that it fails to understand what purpose it serves, and is thereby blind to its own weaknesses. EA falls into a general category of "ways of making decisions about resources" that includes a massive range of ideas, algorithms, and historical examples. In many ways, how a society's resources (people, capital, and land) can and should be put to its service is the underlying concern of Capitalism, Socialism, and every economic system ever proposed. How we reach decisions to select and optimize among these systems is the entire study of political science.
The fact that EA arbitrarily constrains its interest to eleemosynary activities enhances the deception. It appears to be "sensible" because it applies modern management theory to non-market concerns, but is worthless in effect beyond its basic suggestion "we should think about whether our giving matters."
If I am part of a society with resources beyond the barest individual sustenance, I am faced with the question of how to use those resources, whether individually or collectively. I could take the extra seed and feed another family, trade it with another person, or plant it to expand future yields. At a discrete level, understanding the best use of these resources (especially with respect to future states) is the basic concern of economics.
If the resources are more than trivial, a society will inevitably have more than one valid option for its extra resources, and it will need a way to figure out the optimum approach. In our example above, we could decide to create a seed bank, collectively trade the surplus, or use it to feed an army. These questions are the stuff of political science. Democracy? 50%+1? Dictatorship? Philosopher king? All are forms of collective decision optimization.
Charitable giving (or investment) is a subset of both of these issues. It is economic in that it involves calculating/predicting the optimal use of resources towards an outcome, whether effectively or not, and it is political in that it involves making a decision among multiple options where the outcome is not precisely knowable.
We have economics and political science because we live in an ecosystem/society/economy that is so monstrously complex that it cannot be predicted or optimized without simplifying assumptions. We don't need democracy because we like voting; we need it because giving one person control tends to lead to worse predictive power over the future. We don't need capitalism because we like day trading; we need it because it is a good algorithm for resource pricing absent monopolies/monopsonies/market distortions.
What is most critical is that our best approaches to both are deeply flawed, again because the arena in which we are working is so complex that the only model for predicting the future is the system itself.
What EA tries to do is wipe away 50,000 years of thinking and just say: "Calculate which activity will benefit the most people and put your money there." If I am being charitable, this is just a sad rehashing of Utilitarianism that chooses to ignore why it failed as a political and economic approach. If I am being more cynical, it is just the latest incarnation of the divine right of kings, where the person doing the donating is uniquely qualified to assess the best outcome because they are the person who accumulated the money.
"Effective" giving, like effective corporate management, effective venture investing, effective foreign policy, hell, even effective personal dieting, is only possible rhetorically, not in reality. We cannot individually predict what will be the most effective use of capital (mostly we are wrong). The economy and society are far too complex, and the only way we can assess effectiveness is in retrospect.
In my opinion, the most effective altruism is to distribute money broadly, allowing the underlying system to allocate the money dynamically. Collecting money is always distortionary. Giving money directly is a better algorithm for finding truth than a dude at Oxford.
None of this relates to "selfishness"
Agreed. I don't think it does either the supporters or the opponents of Effective Altruism any favours to say that everyone's an Effective Altruist. (I have similar objections to the view that "all Buddhists are Engaged Buddhists".)