4 Comments

Sounds pretty close to my view on these distinctions. My preferred metaphor is three guys who work out. Andy wants to be as swole as he possibly can be at all times. Bob wants to make sure he always stays above some particular threshold of measurable fitness. Carlos generally works out quite a bit, but goes through waves where he does it more or less.

Consequentialist morality is analogous to the physiological facts about diet and exercise, which are the same for all three guys. The difference between them is just a personality difference, not a difference in theory of the underlying physiology.


This makes a lot of sense to me. It also seems related to "ought implies can" and compatibilism. We shouldn’t expect people to be appropriately sensitive to reasons that aren't salient to them.

But I also wonder whether pointing out moral demands to people (like how much their donations could help, or the consequences of their diets for animals) actually raises the salience enough that ordinary acts and inaction no longer satisfice, and they become "blameworthy". Then we'd be back at a fairly demanding morality again, though still probably much less demanding than a view on which anything short of maximizing is blameworthy. To be clear, I'm pretty sympathetic to this.

Related: https://forum.effectivealtruism.org/posts/QXpxioWSQcNuNnNTy/the-copenhagen-interpretation-of-ethics

Also, maybe it gives a pass to people who are bad at recognizing moral reasons, which could include people who cause harm. This often or usually makes sense for nonhuman animals, but we might worry it's too soft on basically rational humans. Or maybe the response to them should really be similar: prevention, deterrence, and teaching/training, rather than judgments of blameworthiness.

[Comment deleted — May 7, 2023]

I like Tucker's paper! But I wasn't convinced by his argument for thinking that it's the "general features that matter" (rather than, as I argue, the specific ones) for constituting right-making reasons. His "proportionality" test assumes that we want to find the most general feature that co-varies with wrongness. But that would be a test of being (what I call) *criterial* for rightness, not for *grounding* it. It implicitly assumes that all wrong acts must be wrong *for the same reason*. But we should not assume this. We should leave room for the possibility that acts may be wrong for related-but-normatively-distinct reasons; and so we should reject Tucker's proportionality test.

But this is a subtle disagreement, and there is much else in the paper that I agree with. (And even the bits that I happen to disagree with are, I think, nonetheless well-argued, and present a very reasonable alternative to my preferred view.)

[Comment deleted — May 7, 2023]

Oh, yes, you can certainly be a hybrid utilitarian (allowing for fittingness assessments of motives / quality of will, and hence of blameworthiness) without believing in desert adjustments. See: https://www.utilitarianism.net/types-of-utilitarianism/#global-utilitarianism-and-hybrid-utilitarianism
