In ‘Deontic Pluralism and the Right Amount of Good’ (published in The Oxford Handbook of Consequentialism), I argue that—contrary to widespread belief—there is (or at least should be) no dispute between maximizing, satisficing, and scalar consequentialists. This is because there is no shared concept of ‘rightness’ that all three theories address. As the abstract explains:
Consequentialist views have traditionally taken a maximizing form, requiring agents to bring about the very best outcome that they can. But this maximizing function may be questioned. Satisficing views instead allow agents to bring about any outcome that exceeds a satisfactory threshold or qualifies as “good enough”. Scalar consequentialism, by contrast, eschews moral requirements altogether, instead evaluating acts in purely comparative terms, i.e., as better or worse than their alternatives. After surveying the main considerations for and against each of these three views, I argue that the core insights of each are not (despite appearances) in conflict. Consequentialists should be deontic pluralists and accept a maximizing account of the ought of most reason, a satisficing account of obligation, and a scalar account of the weight of reasons.
Some moral theories might invoke a primitive, indefinable sense of ‘wrong’—what Parfit calls mustn’t-be-done—but such a concept fits poorly with consequentialism. After all, primitive wrongness would mark a significant point of discontinuity in the strength of one’s moral reasons to act, departing from the continuous scale of realizable value that consequentialists care about.
So I think scalar consequentialists like Norcross are right to view normative reasons (rather than “rightness”) as foundational to consequentialism. But there’s an obvious sense in which we “ought” to do whatever we have most moral reason to do, and indeed this is just what maximizers like Singer and Sidgwick tend to have in mind. Finally, I think there’s a completely separate notion of obligation that’s definable in terms of blameworthiness, which (I’ve argued) satisficing accounts may best address.
Reconciling the three views is thus reasonably straightforward. I’m most confident that there should be no dispute between maximizers and scalar consequentialists. All consequentialists should accept the central claims made by both of these theories. Satisficing is more controversial: some consequentialists may be skeptical of the notion of blameworthiness, and hence may reject the idea that there’s anything for satisficing accounts to address here. But I personally think it’s a helpful addition to our overall theory. As a general point of methodology, I find it preferable to be able to say more, and to make a wider range of interesting, substantive claims, rather than unnecessarily limiting ourselves to less.
Towards the end of the paper, I explain why I think all three accounts have a valuable role to play in our total theory:
It makes sense to aspire to do the best, while recognizing and accepting the reality that, as flawed agents, we will typically fall short. And it makes sense to have a firmer commitment to maintaining a level of at least minimal decency, rather than being willing to plummet to any moral depths without limit. Then, between these two principled standards lies a continuous scale of more-or-less demanding standards that we might choose to target. To help guide us in this choice, we can appreciate that the more good we achieve, the better. But beyond that, there is no authoritative meta-standard out there to tell us how high to aim.
Salience and Killing vs Letting Die
While my main development of satisficing consequentialism can be found elsewhere, one thing I really like about the current paper is the account I offer (in sec. 3.3) of how psychological facts about salience can explain (i) why killing is typically worse than letting die, and (ii) why it sometimes is not:
We do not generally find the millions of potential beneficiaries of charitable aid to be highly salient. Indeed, people are dying all the time without impinging upon our awareness at all. A killer, by contrast, is (in any normal case) apt to be vividly aware of their victim’s death. So killing tends to involve neglecting much more salient needs than does merely letting die. [The exceptions—e.g., watching a child drown in a shallow pond right before your eyes—are precisely the cases in which we’re inclined to judge letting die to be morally comparable to killing.]
Next, note that neglecting more salient needs reveals a greater deficit of good will. This is because any altruistic desires we may have will be more strongly activated when others’ needs are more salient. So if our resulting behavior remains non-altruistic even when others’ needs are most salient, that suggests that any altruistic desires we may have are (at best) extremely weak. Non-altruistic behavior in the face of less salient needs, by contrast, is compatible with our nonetheless possessing altruistic desires of some modest strength—and possibly sufficient strength to qualify as “adequate” moral concern.
Putting these two facts together secures the result that suboptimal killing is more apt to be blameworthy (and hence impermissible in sentimentalist terms) than comparably suboptimal instances of letting die. It’s a neat result for sentimentalist satisficers that they’re able to capture this intuitive verdict without attributing any fundamental normative significance to the distinction between killing and letting die.
So, if you’ve ever wondered whether the best form of consequentialism is scalar, maximizing, or satisficing, you may now find that the right answer is: “Yes! All of the above.”
Sounds pretty close to my view on these distinctions. My preferred metaphor is three guys who work out. Andy wants to be as swole as he possibly can be at all times. Bob wants to make sure he always stays above some particular threshold of measurable fitness. Carlos generally works out quite a bit, but goes through waves where he does it more or less.
Consequentialist morality is analogous to the physiological facts about diet and exercise, which are the same for all three guys. The difference between them is just a personality difference, not a difference in theory of the underlying physiology.
This makes a lot of sense to me. It also seems related to ‘ought implies can’ and compatibilism. We shouldn’t expect people to be appropriately sensitive to reasons that aren’t salient to them.
But I also wonder if pointing out moral demands to people (like how much their donations could help, or the consequences of their diets for animals) actually raises their salience enough that normal acts and inaction no longer satisfice, and people become “blameworthy”. Then we’d be back at a fairly demanding morality again, though still probably much less demanding than a view on which anything less than maximizing is blameworthy. To be clear, I’m pretty sympathetic to this.
Related: https://forum.effectivealtruism.org/posts/QXpxioWSQcNuNnNTy/the-copenhagen-interpretation-of-ethics
Also, maybe it gives a pass to people who are bad at recognizing moral reasons, which could include people who cause harm. That often makes sense for nonhuman animals, but we might wonder whether it’s too soft on basically rational humans. Or maybe the right response to them is similar: prevention, deterrence, and teaching/training, rather than judgements of blameworthiness.