10 Comments
Jul 25, 2022 · Liked by Richard Y Chappell

I'm not a utilitarian, but effective altruists who are saving lives are doing something very good and they should continue to be encouraged. Even most non-utilitarians can recognize that.

I think that with things like the 10% giving pledge, people have decided that it's good to treat giving as a binary duty in some sense, whereas in reality you should give and give until you have hardly anything left. But that's unappealing, and it's probably not a good strategy to highlight this in order to get people to give more. You gave 10%? Great, now give XX% or else innocent human beings will die for trivial reasons. This is obviously true, but maybe it isn't so good for EA people to highlight it? Not sure.

author

Yeah, it's tricky. I think one thing that can help to reconcile theory and practice here is to remember that the moral "should" connotes a sense of *demand* or requirement that doesn't really have any place in utilitarian theory. It would be both more accurate and less stress-inducing to simply say that it would be *even better* to give more. I expanded upon this theme in my earlier response to Caplan: https://rychappell.substack.com/p/caplans-conscience-objection-to-utilitarianism


Thank you!

Aug 1, 2022 · edited Aug 1, 2022

“objectively more important to save more innocent lives?”

Whether X saves more lives than Y is an objective question. Whether those lives are innocent or not is a moral question. Importance is a subjective question, or you might argue that it is intersubjective. So this statement is not obvious.

Edit:

“acknowledge widespread acceptance of utilitarianism as a desirable result”

The post has made a good case that acceptance of utilitarianism is not obviously objectionable, as some who accept it have done good things, perhaps as a result of that acceptance. It has not established that acceptance of utilitarianism is desirable, or that those persons would stop doing good things if they altered their views. I don’t think the claim is that utilitarianism is necessary for doing good things, but that it tends to increase the likelihood. But that case was not made.


“beneficentrism [sic] closely correlates with utilitarianism in practice,”

Cite? You have provided some cherry-picked anecdotal examples, not data analysis.

A moral theory needs something more in order to be true or false. A moral theory includes a standard, which evaluates things (actions, circumstances, intentions, whatever) as conforming to the standard or violating it. “X violates standard Y” can be true or false; “standard Y” itself can’t be true or false without implicitly adding premises of the form “everyone ought to adopt standard Y.” This is Hume's point: no “ought” from “is” alone. “I accept standard Y” is much easier to derive than “everyone must accept standard Y by logical necessity or empirical inference.”

So it seems better to speak of why standard Y is superior to standard Z, rather than to speak of standard Y or a moral theory being true. But then, by what standard should we judge that standard Y is superior to standard Z? Do we need a meta-standard to judge standards? And then a standard to judge meta-standards? Or should we expect Y and Z each to contain ideas about how to judge standards? If they agree on which of them is better, that seems like a win, but the typical case will involve each picking itself as superior.

If we grant that our understanding of morality is less than perfect, a moral theory should include principles regarding how our understanding might be improved. When we look at individuals, this is difficult. Persons' moral intuitions derive from generalizations of their experience, using their evolved psychology. Intuition may be the elephant, and theory the rider. At the social level, the various actions and evaluations of persons combine into an intersubjective whole, where everyone influences everyone else's attitudes and beliefs to a greater or lesser degree. This social process seems able to adjust and improve. Ideally, it criticizes itself, and contains space for alternate hypotheses to receive attention and be rejected or incorporated. But it isn’t foolproof; it produced Stalin, Mao, and Hitler.

I’m not sure what we should conclude, except that the post only considers these issues obliquely, and makes implicit but unexamined assumptions. This might be necessary. Perhaps finding and examining these assumptions will help the discussion to move forward, or maybe they can be taken for granted and left unstated, if we really all accept them.


“proportionately fewer non-utilitarians seem to actually prioritize beneficence in this way.)”

Cite?

“stuff like giving more to especially effective charities and otherwise seeking to improve the world with one’s marginal uses of time and money”

This description is distinct from utilitarianism. While utilitarianism might (?) fall within it as a subset, it's not clear that this description excludes any rival to utilitarianism. And if this is the critical distinction, perhaps we should create a new ism to distinguish it from its actual rivals. Maybe this is what you are doing with “beneficentrism”? However, it seems a bit vague, and I would not be surprised if it turned out to depend very critically on undefined concepts like “effective” and “improve.”

“on what moral view would you not want others to do more good?”

Perhaps fallibilism? If persons try to do good but are bad at it, they can make things worse. Stalin, Mao, and Hitler may have thought they were trying to do more good. Most people wish they hadn’t bothered.

Sep 11, 2022 · edited Sep 11, 2022
Comment deleted
author

Wow, I think that's the first time I've heard someone count it as a "disadvantage" of a moral view that it requires us to take into account the interests of helpless infants, the severely disabled, and other "non-reciprocators". (FWIW, I take the exclusive focus on reciprocation to be a decisive objection to contractarian accounts of morality!)

I think hedonism is false, and think moral uncertainty should suffice to make even hedonists wary of hedonium-shockwave futures that score disastrously on other reasonable theories of value. See: https://rychappell.substack.com/p/the-nietzschean-challenge-to-effective#conclusion


Indeed, I don't think you need morality at all to derive the principle "help people who can reciprocate" because that follows purely from self-interest. Almost by definition, morality has to mean something beyond "help people who can reciprocate" (unless you're an egoist).

Comment deleted
author

I'm not arguing that utilitarianism is good *for us*. I'm arguing that it's *good*, simpliciter. I reject hedonism because I don't think it's the correct account of what's good.

Comment deleted
author

I think you're missing the "altruism" part of "Effective Altruism". The point is to help people, including, e.g., those who would otherwise die of malaria. If you don't see any value in that, then yes, let's leave the conversation there.
