Some objections to consequentialism rely upon a strangely non-normative conception of value (disconnected from the normative question of which outcome is impartially preferable). They assume that some outcome X maximizes value, note that it seems we ought to bring about alternative Y instead, and so infer that consequentialism must be false. But they neglect the fact that outcome Y also seems better than X — meaning that they really have an argument against their value assumptions, not an argument for the non-consequentialist claim that we should bring about a worse outcome.
Methodological lesson: if faced with a putative “counterexample to consequentialism”, remember to assess the outcomes as well as the actions. (Often you can “naturalize” a case by replacing actions with purely natural causes, and ask whether or not we should prefer for a gust of wind to trigger X. If X isn’t actually preferable—i.e., better—then consequentialism trivially agrees that we shouldn’t bring about X. So the case fails in its ambition to distinguish consequentialist and non-consequentialist views.)
Three Examples
Exhibit A is the mistaken belief that objections to crude value hedonism (e.g., evil pleasures, or the experience machine) are objections to welfarist consequentialism per se. It would be bizarre to have the combined thought that we shouldn’t torture innocent people for the glee of sadists while also thinking it would be a good thing for the sadistic outcome to occur (say if the sadists’ victims were randomly struck by lightning). Clearly, the upshot of our anti-sadism intuitions is not that we should bring about worse outcomes, but rather that evil pleasure isn’t good. That is, it calls for refining our theory of value, not abandoning the idea that better outcomes are worth pursuing.
Exhibit B is population ethics. Sometimes people tell me that they're non-consequentialists because they don't think we should bring about Parfit's repugnant world Z. What I don't understand is why, then, they are assuming that world Z constitutes a better outcome. Again, that just seems like a bizarre combination of views. If you don't like the repugnant conclusion (and I can certainly respect that!), then don't assume that Totalism is the correct population axiology. Maybe there are other considerations (whether average welfare, perfectionist excellences, or whatever) that matter more to determining the overall quality (value) of an outcome.
If you insist on assuming a notoriously “repugnant” conception of value, and then reject the principle that we should promote value because you find the population-ethical implications intuitively repugnant, you should at least pause to consider whether you might have misdiagnosed the problem!
Exhibit C is Scanlon’s famous Transmitter Room case:
Jones has suffered an accident in the transmitter room of a television station. To save Jones from an hour of severe pain, we would have to cancel part of the broadcast of a football game, which is giving pleasure to very many people.[1]
When I teach about this case, nobody in the room has the intuition that it would be a better outcome were Jones to be electrocuted for an hour so that billions can enjoy watching the World Cup final live. So it cannot possibly serve as a counterexample to the consequentialist claim that we should bring about better outcomes.
For that, you instead need a case (e.g. Martian Transplant) where you think we intuitively ought to bring about a (transparently) worse outcome: that is, where the consequentialist-recommended alternative is something we should hope to happen via natural causes, but may not do ourselves.
Reflecting on Anti-Aggregative Intuitions
How should we respond to the Transmitter Room case? I see three main options:
(1) You could fully embrace the intuition via lexicalism, and hold that no amount of trivial pleasures can outweigh the disvalue of an hour of agony. (The problem with this view is that it violates transitivity, given that one can always create chains of ever-less-trivial value kinds V(n) where a sufficient number of kind V(n-1) clearly can outweigh a little of V(n), and thereby breach any supposed lexical thresholds; see the numerical sketch after this list.)
(2) You could partly accommodate the intuition via Parfit’s prioritarianism; but this just gets the intuition that Jones’ agony isn’t easily outweighed, not that literally no number of trivial goods could eventually outweigh it.
OR
(3) You could argue that our intuitions must have gone awry. Yes, it sure seems worse for the one to suffer lots. But that one person is very salient, whereas we can't really grasp the full reality of billions of tiny benefits; instead we implicitly, but mistakenly, round them down to nothing. So we should not trust our intuition that saving Jones makes for a better outcome; nor, then, should we trust our intuition that we ought to save Jones (since the latter may very well rest upon the former).
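To make the chain worry for option (1) concrete, here is a minimal numerical sketch; the 10-to-1 outweighing ratios are made up purely for illustration and not drawn from any particular lexical view.

```python
# A made-up illustration of the chain argument against lexical thresholds:
# suppose that, for each kind V(k), 10 units of the slightly lesser kind
# V(k-1) outweigh one unit of V(k). Chaining these comparisons via
# transitivity, some large but finite number of the most trivial kind V(0)
# ends up outweighing one unit of the weightiest kind V(n).

RATIO = 10  # illustrative: units of V(k-1) taken to outweigh one unit of V(k)

def v0_units_outweighing_one(n, ratio=RATIO):
    """Finite number of V(0) units that transitively outweigh one V(n) unit."""
    return ratio ** n

for n in range(1, 6):
    print(f"one unit of V({n}) is outweighed by {v0_units_outweighing_one(n):,} units of V(0)")
```

However high the lexicalist sets the threshold kind, some finite stock of the most trivial kind ends up on the winning side of the chain.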
Indeed, as Parfit goes on to show, our anti-aggregative intuitions (according to which some benefits are so small as to be strictly normatively “irrelevant”) are provably unreliable:[2]
[W]e might claim that

(1) we ought to give one person one more year of life rather than lengthening any number of other people’s lives by only one minute.

And we might claim that

(2) we ought to save one person from a whole year of pain rather than saving any number of others from one minute of the same pain.

These lesser benefits, we might say, fall below the triviality threshold.
These claims, though plausible, are false. A year contains about half a million minutes. Suppose that we are a community of just over a million people, each of whom we could benefit once in the way described by (1). Each of these acts would give one person half a million more minutes of life rather than giving one more minute to each of the million others. Since these effects would be equally distributed, these acts would be worse for everyone. If we always acted in this way, everyone would lose one year of life. Suppose next that we could benefit each person once in the way described by (2). Each of these acts would save one person from half a million minutes of pain rather than saving a million other people from one such minute. As before, these acts would be worse for everyone. If we always acted in this way, everyone would have one more year of pain.
Note that the (expected) value of each choice is clearly independent of the others—it does not matter how many others have made the same choice, or indeed whether it is repeated at all. As a result, the fact that repeating the choice of concentrated benefits across the whole population results in an overall worse outcome (than the alternative choice of greater distributed benefits) establishes that each such choice is worse.
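To make the arithmetic explicit, here is a quick check of the life-extension half of the argument, using Parfit's rounded figures (half a million minutes per year, a community of a million people); the numbers are purely illustrative.

```python
# A quick check of the life-extension arithmetic, using Parfit's rounded figures.

MINUTES_PER_YEAR = 500_000   # "A year contains about half a million minutes."
POPULATION = 1_000_000       # "a community of just over a million people"

# A single "concentrated" choice gives one person a year of extra life,
# instead of giving one extra minute to each of the million others:
minutes_given = MINUTES_PER_YEAR     # 500,000 minutes gained by one person
minutes_forgone = POPULATION * 1     # 1,000,000 minutes forgone in total
print(minutes_given - minutes_forgone)   # -500,000: each choice is worse in aggregate

# If every member is benefited once in this concentrated way, each person
# gains one year from "their" act but loses one minute from each of the
# other ~1,000,000 acts, for a net loss of roughly a year per person:
per_person_net = MINUTES_PER_YEAR - (POPULATION - 1)
print(per_person_net / MINUTES_PER_YEAR)  # about -1.0: roughly a year lost each
```

The pain version runs on exactly the same bookkeeping, with minutes of agony in place of minutes of life.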
That is, while we intuitively feel that
(1*) it is a better outcome for one person to have one more year of life than for any number of other people’s lives to be lengthened by only one minute,
Parfit’s iteration argument proves that (1*) is false, and thus our anti-aggregative intuitions are unreliable.
Given that our anti-aggregative intuitions seem to apply just as strongly to evaluative matters as to deontic ones, and yet are demonstrably mistaken about the former, there’s a real challenge for anti-aggregationists to show why their deontic intuitions should be trusted.
Conclusion
One cannot reject consequentialism on the basis of an intuition about what we ought to do that perfectly accords with our intuition about which outcome would be best. Instead, such intuitions give us reason to reconsider our prior assumptions about value (if they’re inconsistent with the newly intuited verdict).
That’s not to say that the new intuition will necessarily win out: sometimes we should reject our intuitions about cases as unreliable and confused, especially when they pit concentrated salient interests against widely-distributed, less-salient ones. It’s entirely predictable that we’ll be biased against the latter. We should try to overcome this bias, and still give full weight to the interests of those who are less easily seen.
[1] This abbreviated version of the case is taken from Parfit’s (2003) ‘Justifiability to Each Person’, p. 375.
[2] Ibid., p. 385.
I think this insight takes the force out of every objection to consequentialism. Very few people think “it would be great if the surgeon’s hand slipped and they killed the person and distributed their organs, but it would be wrong to do that knowingly.” Most objections to consequentialism seem hard to stomach if you imagine that it would be good if the wrong act happened.
Lexicalism doesn't necessarily violate transitivity. I think it must violate at least one of: weak independence, transitivity, the independence of irrelevant alternatives (IIA), completeness, or certain continuity assumptions. See https://www.researchgate.net/publication/303858540_Value_superiority
Weak independence: If an object e is at least as good as e’, then replacing e’ by e in any whole results in a whole that is at least as good.
Here are some specific transitive and complete lexical views and some of their other properties:
1. Leximin and lexical threshold negative utilitarianism satisfy weak independence and IIA, but violate even weak continuity assumptions.
2. Rank-discounted utilitarianism (e.g. Definition 1 in https://econtheory.org/ojs/index.php/te/article/viewFile/20140629/11607/345) satisfies IIA and some weak continuity assumptions, but violates weak independence.
3. Limited, partial and weak aggregation views can (I'd guess) be made to satisfy weak continuity assumptions (and transitivity), but violate IIA. I'm not sure if they can also be made to satisfy weak independence.
Your sequence argument doesn't work against rank-discounted utilitarianism, in particular, because that view violates weak independence and/or because your argument isn't specific enough about the steps in value. For example, if we interpret the aggregated utilities as individual harms (or bads) and benefits (or goods), then for any harm of degree x, there is some slightly lesser harm of degree y and a finite number N such that N harms of degree y are worse than one harm of degree x. Rank-discounted utilitarianism instead violates a uniform continuity assumption: that there is a finite difference in degree d>0 of harm such that, no matter how great a harm x<0, an individual harm of degree x-d<x<0 can be outweighed by a finite number of lesser harms of degree x. But to even state this assumption, you need to assume you can measure differences in harms; it's also much less intuitively obvious because of how abstract it is, and it's not clear why the same d>0 should work for all x. (You could generalize it with metric spaces or uniform topological spaces to avoid differences and even distances between harms, but that's even more abstract.)
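To illustrate the lexical-like behaviour without literal lexicality, here is a rough sketch assuming a simple geometric rank-weighting version of RDU; the discount factor and utility numbers are made up for illustration and not taken from the linked paper.

```python
# Rough sketch of rank-discounted utilitarianism (RDU) with geometric rank
# weights: utilities are sorted from worst-off upward and the r-th ranked
# utility gets weight beta**r. (beta and the numbers below are illustrative.)

def rdu_welfare(utilities, beta=0.9):
    """Rank-discounted sum: the worst-off get the heaviest weights."""
    ranked = sorted(utilities)  # ascending, so the worst-off come first
    return sum(beta ** r * u for r, u in enumerate(ranked, start=1))

one_big_harm = rdu_welfare([-100.0])      # a single harm of degree -100
for n in (10, 1_000, 100_000):
    many_tiny_harms = rdu_welfare([-1.0] * n)
    # The weights on the tiny harms form a convergent geometric series, so
    # their rank-discounted total never falls below -beta/(1-beta) = -9:
    print(n, round(many_tiny_harms, 3), one_big_harm)
```

However many of the tiny harms we pile up, their rank-discounted total stays above -9, while the single big harm scores -90, so it is never outweighed; yet the ordering remains transitive, complete and real-valued.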
That being said, all of the above views can be interpreted as aggregative in some sense (or possibly in different senses) anyway. Leximin and LTNU can be represented by separable utility functions, even if not real-valued ones, while RDU can be represented with a real-valued utility function. Limited, partial and weak aggregation views are aggregative by name. Truly non-aggregative views might include Scanlon's Greater Burden Principle / Regan's harm principle, according to which the greatest individual harm/potential loss is lexically prioritized above all others; this violates IIA, so it can't be represented by a utility function, although it could still be separable if we order lexicographically based on harm.
See also https://centerforreducingsuffering.org/research/lexical-views-without-abrupt-breaks/