To continue my Parfit series, today’s topics are (i) priority and (ii) aggregation.
Equality vs Priority vs Diminishing Marginal Value
Suppose you could bring some extra happiness to either Joy (who is already pretty happy) or Misery (who is not). It seems like you ought to benefit Misery, even if you could give slightly more extra happiness to Joy. Cases like this are often taken to support the egalitarian view that equality matters intrinsically: the existence of a well-being gap between Joy and Misery makes the world worse (more unjust, perhaps), so we ought to reduce the gap if we can.
Parfit rejects this view due to the leveling-down objection: one way to reduce the gap would be to harm Joy without benefiting Misery in the slightest. But there would be nothing good about doing that. So equality per se is not good. It’s only good to reduce inequality when we do so by improving the lot of the worse off, rather than by harming the better off. Parfit instead takes this intuition to support prioritarianism: the view that “benefiting people matters more the worse off these people are.”
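To see the structure, here’s a minimal sketch of how prioritarianism is standardly formalized: overall value is the sum of a concave, strictly increasing transform of each person’s well-being. (The square-root weighting and all the numbers below are purely illustrative assumptions, not Parfit’s.) Because the weighting is strictly increasing, leveling down always makes things worse on this view:

```python
import math

def priority_value(welfares, f=math.sqrt):
    # Sum a concave, increasing transform of each person's well-being,
    # so a unit of benefit counts for more the worse off its recipient is.
    return sum(f(w) for w in welfares)

joy, misery = 100.0, 4.0
print(priority_value([joy, misery]))        # status quo: 12.0
print(priority_value([joy - 36, misery]))   # leveling down Joy: 10.0 (strictly worse)
print(priority_value([joy, misery + 5]))    # raising Misery: 13.0 (strictly better)
```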
Prioritarianism is a non-comparative view. It would be just as important to help Misery, given her low absolute level of well-being, even if Joy didn’t exist at all. It’s not the gap that matters, but the people who are badly off. This view strikes me as a clear improvement over egalitarianism.
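On the illustrative formalization above, this non-comparative character is transparent: the weighted value of helping Misery is a function of her absolute level alone.

```python
import math

f = math.sqrt  # the illustrative concave weighting from above
misery = 4.0
gain = f(misery + 5) - f(misery)  # unchanged whether or not Joy exists
print(gain)                       # 1.0 either way
```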
Utilitarians, by contrast, will often support more equal distributions of resources for purely instrumental reasons. After all, resources such as money tend to have diminishing marginal utility: the more you have, the less of a difference one more unit tends to make. A dollar is worth a lot more to a homeless person than to a millionaire.
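The point is easy to illustrate numerically. (The logarithmic curve below is a textbook stand-in for diminishing marginal utility, not a claim about the actual money-to-welfare relationship.)

```python
import math

def utility(dollars):
    # Log utility: each doubling of wealth adds the same amount of welfare.
    return math.log(dollars)

# Marginal value of one extra dollar at two very different wealth levels:
print(utility(101) - utility(100))              # ~0.00995
print(utility(1_000_001) - utility(1_000_000))  # ~0.000001: roughly 10,000x smaller
```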
The funny thing about prioritarianism is that it seems to treat utility (well-being) itself as having diminishing marginal value. To illustrate, suppose for simplicity that prioritarianism applies to momentary rather than lifetime well-being. (In my book, I show how to extend the argument without this assumption.) Now imagine that Joe has the option to provide himself with either a small benefit at a time when he is poorly off, or a greater benefit at a time when he is better off. By definition, the latter option benefits him more. But the priority view implies that the former may be “more important”. That is, considering only this person’s welfare, it might be better to do what is worse for him. Could that really be right?
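Running Joe’s choice through the same illustrative weighting makes the oddity concrete (my numbers, not Parfit’s):

```python
import math

f = math.sqrt
low, high = 1.0, 9.0  # Joe's momentary well-being at the two times

small = f(low + 3) - f(low)    # +3 units of well-being when badly off -> 1.0
large = f(high + 5) - f(high)  # +5 units of well-being when well off  -> ~0.742
print(small > large)           # True: the option that benefits Joe less "matters more"
```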
To avoid such problems, utilitarians may agree with Parfit that our intuitions support prioritarianism, but then seek to give a debunking explanation of these intuitions rather than accepting them at face value. Experimental evidence suggests that our intuitive appreciation of the diminishing marginal utility of resources overgeneralizes when we are presented with a new kind of unit (a “unit of well-being”) with which we lack intuitive familiarity.
Alternatively, I suggest a possible view on which basic goods such as happiness have diminishing marginal value: the happier you already are, the less an extra unit of happiness adds to your well-being. To ensure theoretical clarity, we must take care to distinguish the prioritarian idea that the interests of the worse off matter more, from the (utilitarian-compatible) idea that an equal amount of happiness would constitute a greater benefit for the worse off, i.e., making a greater difference to their (inherently equally important) interests or well-being. This latter view would have much the same practical implications as prioritarianism, but without the theoretical costs. For example, the view may advise Joe to prefer a smaller burst of happiness when he’s down over a larger burst when he’s already doing well; but it does so precisely on the grounds that the smaller burst actually constitutes a greater benefit (in context).
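The same numbers can model this alternative; the difference is purely in where the concavity sits. Here it lives in how happiness converts into well-being (my illustrative conversion function, again), while well-being itself aggregates linearly, just as classical utilitarianism says. So the smaller boost really is the greater benefit, and the theory never ranks an option that is better for Joe below one that is worse for him:

```python
import math

def benefit(boost, current_happiness):
    # The same happiness boost constitutes a bigger well-being benefit
    # the lower your starting point (an illustrative assumption).
    return math.sqrt(current_happiness + boost) - math.sqrt(current_happiness)

print(benefit(3, 1))  # small boost when down  -> 1.0: the greater benefit
print(benefit(5, 9))  # large boost when happy -> ~0.742: the lesser benefit
```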
If you insist on accommodating the Misery intuition, this diminishing-goods version of utilitarianism strikes me as the best of the three options. That said, I suspect that much of the temptation to regard the smaller happiness boost to the sad person as constituting a greater benefit stems from illicitly building in further instrumental effects: perhaps that little burst of happiness is all Joe needs to get out of his funk that day, yielding longer-lasting good effects. But of course, if that were the case, then even the classical utilitarian could get on board with giving an intrinsically smaller happiness boost that indirectly results in greater overall happiness.
Aggregation
Consider Scanlon’s Transmitter Room case:
Jones has suffered an accident in the transmitter room of a television station. To save Jones from an hour of severe pain, we would have to cancel part of the broadcast of a football game, which is giving pleasure to very many people.
Intuitively, it doesn’t matter how many people are watching the football game: it’s just more important to save Jones from suffering severe pain during this time. Why? One answer would be that we can’t aggregate distinct interests, so all that’s left to do is to satisfy whichever individual moral claim is strongest, namely, Jones’. But Parfit suggests an alternative explanation: perhaps we should help Jones because he is much worse off, and thus has greater moral priority.
Parfit argues that his prioritarian account is preferable to Scanlon’s anti-aggregative approach in cases where the two diverge. We can see this by imagining cases in which the many smaller benefits would go to some of the worst-off individuals. By refusing to countenance aggregation, we would end up prioritizing a single large benefit to someone already well-off, rather than (individually smaller but collectively immensely larger) benefits to a great many worse-off individuals. That seems clearly wrong. It would not, for example, be a good thing to take a dollar from each of a billion poor people in order to give a billion dollars to someone who was wealthy to begin with.
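For a rough sense of how weighted aggregation handles the dollar case (with wealth standing in, very crudely, for well-being, and every figure invented for illustration):

```python
import math

f = math.sqrt  # the same concave priority weighting as before
n_poor, poor_wealth, rich_wealth = 10**9, 100.0, 10**9

harms = n_poor * (f(poor_wealth) - f(poor_wealth - 1))  # $1 from each poor person
gain = f(rich_wealth + n_poor) - f(rich_wealth)         # $1 billion to the rich person
print(f"{harms:.3g} vs {gain:.3g}")                     # ~5.01e+07 vs ~1.31e+04
```

The aggregated weighted harms dwarf the weighted gain, so the intuitive verdict falls out without any ban on aggregation.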
So, rather than discounting smaller benefits (or refusing to aggregate them), Parfit suggests that we do better to simply weight harms and benefits in a way that gives priority to the worse off. Two appealing implications of this view are: (1) we generally should not allow huge harms to befall a single person if that would leave them much worse off than the others with competing interests; and (2) we should allow sufficiently many small benefits to the worse off to (in sum) outweigh a single large benefit to someone better off.
Since we need aggregation in order to secure verdict (2), and we can secure verdict (1) without having to reject aggregation, it looks like our intuitions are overall best served by accepting an aggregative moral theory.
(For more, see sec. 3 of Parfit’s Ethics.)