Revisiting "Philosophical" Utilitarianism
Well-being is what matters, but needn't exhaust the moral facts
There’s an interesting passage in Scanlon’s classic (1982) ‘Contractualism and Utilitarianism’, where he tries to pin down a common motivation for utilitarianism on the basis of broader philosophical commitments:
[W]hat I will call 'philosophical utilitarianism' is a particular philosophical thesis about the subject matter of morality, namely the thesis that the only fundamental moral facts are facts about individual well-being. I believe that this thesis has a great deal of plausibility for many people, and that, while some people are utilitarians for other reasons, it is the attractiveness of philosophical utilitarianism which accounts for the widespread influence of utilitarian principles. (p. 108)
“Philosophical utilitarianism”, so defined, strikes me as a non-starter. For example, there are facts about what constitutes virtuous motivation and blameworthiness, and these are not (straightforwardly reducible to) facts about individual well-being. Perhaps even more obviously, there are moral facts about which moral theory is correct, along with more specific questions like whether we can reasonably give more weight to the interests of our loved ones than to strangers, how to resolve the paradoxes of population ethics and decision theory, and so on. None of these are answered simply by pointing to “facts about individual well-being”, yet they are fundamental questions of ethics. So the claim that “the only fundamental moral facts are facts about individual well-being” is obviously false, and I very much doubt that it explains the appeal (or “widespread influence”) of utilitarianism as a normative theory.
A more plausible claim in this vicinity is that individual well-being is the only thing that fundamentally matters, or warrants ultimate concern. And this more modest claim fits well with much of what Scanlon goes on to say in the next paragraph:
It seems evident to people that there is such a thing as individuals' being made better or worse off. Such facts have an obvious motivational force; it is quite understandable that people should be moved by them in much the way that they are supposed to be moved by moral considerations. Further, these facts are clearly relevant to morality as we now understand it. Claims about individual well-being are one class of valid starting points for moral argument. But many people find it much harder to see how there could be any other, independent starting points.
Individual well-being clearly matters; it’s much less clear how other, interest-independent facts could have such normative force. This is precisely my argument from transgression-skepticism for consequentialism. But Scanlon misconstrues the underlying concern as metaphysical rather than normative:
Substantive moral requirements independent of individual well-being strike people as intuitionist in an objectionable sense. They would represent 'moral facts' of a kind it would be difficult to explain. There is no problem about recognising it as a fact that a certain act is, say, an instance of lying or of promise breaking. And a utilitarian can acknowledge that such facts as these often have (derivative) moral significance: they are morally significant because of their consequences for individual well-being. The problems, and the charge of 'intuitionism', arise when it is claimed that such acts are wrong in a sense that is not reducible to the fact that they decrease individual well-being.
I don’t get the charge of “intuitionism”. And the imputed connection to Mackie-style skepticism about “moral facts” seems entirely confused. (The significance of individual well-being is itself an objective normative fact, which may be difficult to explain metaphysically. But we should accept it nonetheless.) The better objection to interest-independent requirements is that they represent ‘moral facts’ of a kind that we would have no reason to care about!
The Significance of Well-being
Later in the paper, Scanlon writes:
Individual well-being will be morally significant, according to contractualism, not because it is intrinsically valuable or because promoting it is self-evidently a right-making characteristic, but simply because an individual could reasonably reject a form of argument that gave his well-being no weight.…
One effect of contractualism, then, is to break down the sharp distinction, which arguments for utilitarianism appeal to, between the status of individual well-being and that of other moral notions. (p. 119)
This is indeed a key point of disagreement, and one that I think reflects well on utilitarianism. Compare my objection to the “Informal Insurance” model of emergency ethics:
[I]magine extending the logic of the informal insurance model to a society including water-phobic robots who just want to collect paperclips but occasionally drop them in puddles. In order to secure the assistance of the robots in helping to free us from getting our feet caught on railroad tracks (or other non-water-related emergencies), we might reciprocate by rescuing their lost paperclips from puddles. If the informal insurance account of emergency ethics were correct, then your moral reason to save a drowning child would be of exactly the same kind as your reason to "save" a paperclip from a puddle in the imagined scenario. But this is clearly wrong. We have moral reasons to save lives and avert great harms for the sake of the affected individuals. These moral reasons are distinct from (and more important than) our reasons to participate in mutual-benefit schemes.
Given a suitably procedural, non-question-begging conception of “reasonableness”, it seems that the robots could “reasonably” reject principles for governing our shared life in society that gave no weight to collecting paperclips. That is their overwhelming concern, after all. So on the contractualist view, it seems that the nature of our reason to save a drowning child is (once again) of the same kind as our reason to “save” a paperclip. But that’s clearly wrong. Well-being really is special, and not just because contractors seeking mutual agreement would insist on regarding it as such.