A curious fallacy I often notice (but haven’t noticed others notice) is when philosophers assume that a morally significant difference must make a difference to how we should act. I’ll step through a couple of examples, below. But first, to explain why I think this is a mistake: Our most direct response to normative reasons is via our attitudes: what we care about, and why. I take rational action to be “downstream” of our attitudes of concern. And I take attitudes to be more fine-grained than acts: different attitudes/concerns could end up motivating the choice of one and the same act.
Given this background understanding, it should be obvious why morally significant differences should be marked first and foremost by different attitudinal responses. It’s a further question whether those different attitudes call for different actions or not. If the two different attitudes happen to call for the same action (as is always possible), then the fact that you perform that same act in either case does not indicate a failure to mark the morally significant difference in the world. You may mark it through your attitudes, and yet find that it doesn’t alter what act you should perform.
Two examples: cluelessness, and the separateness of persons.
Cluelessness
Check out ‘The Cluelessness Objection’ on utilitarianism.net for background. Lenman’s third objection to the expected value response to cluelessness is that it fails to adequately mark the distinction between a known no-effect and utter cluelessness about possible effects:
It is surely a sophistry to treat a zero expected value that reflects our knowledge that an act will lack significant consequences as parallel in significance to one that reflects our total ignorance of what such consequences (although we know they will be massive) will be. (Lenman 2000, p. 357)
As I respond in footnote 6 of the utilitarianism.net article:
It’s true that this is a significant difference. But it’s a mistake to assume that anything morally significant must change how we assess acts, when often attitudes are better suited to reflect such significance. We should feel vastly more angst and ambivalence—and strongly wish that more information was available—in a high-stakes “total uncertainty” case than in a “known zero” case. That seems sufficient to capture the difference.
This diagnosis also suggests different actions in some circumstances: if you have the option to do further research that could feasibly improve your epistemic position, you have strong reason to pursue it in the “uncertainty” case, and no such reason in the “known zero” case. But given suitable stipulations (e.g., that no further information can possibly be gained), the same actions may be called for in both cases. For example, in the face of ineradicable cluelessness about how present-day actions affect the likelihood of far-future genocides, we should simply bracket this (unknowable) consideration and strictly prefer to support the Against Malaria Foundation over the For Malaria Foundation, because the former does more good in every way that we can possibly know.
So when utilitarians oppose malaria rather than support it, this does not (contra Lenman) indicate a failure on our part to mark the distinction between zero significance and unknown significance. It instead reflects a rational response to this distinction: one that doesn’t paralyze or prevent us from performing life-saving actions with obviously positive expected value (given all that we know).
The Separateness of Persons
Ayn Rand notoriously inferred from the law of identity (“A=A”) that her particular brand of libertarianism must be true. More sophisticated philosophers, following Nozick and Rawls, infer from the distinctness of persons (“Amy ≠ Aaron”) that utilitarianism must be false. Neither inferential leap strikes me as sensible.
In fairness to the real philosophers, they aren’t appealing to a point of trivial logic, but rather to the substantive normative datum that it makes a moral difference who is harmed or benefited. When the same person who pays a cost also receives a greater benefit, they are compensated in a way that Amy is not if she pays a cost so that Aaron—a distinct person—may receive a greater benefit.
So there is something worse about the interpersonal tradeoff: Amy suffers a pure (uncompensated) cost. Though it is less often noted, there is also something better about it: Aaron gets to enjoy a pure benefit. To observe these differences is not yet to determine any overall verdict on how they compare (all things considered).
On one way of understanding the objection, it attributes to utilitarians the view that different people’s interests are wholly fungible: the positive and negative differences “cancel out” so that we no longer see any harm (or “moral residue”, or grounds for pro tanto regret) about any process that is net-positive. Such a view would fail to mark the distinction between persons as one that has any moral significance.
But (as I argued in ‘Value Receptacles’) this is a false attribution. There’s nothing stopping utilitarians from having separate and conflicting desires for each person’s well-being. Since they are equally weighted, we will end up opting for the choice that yields the greatest net benefits. But we’ll feel differently—not strictly worse, but more conflicted between one negative and one more-strongly-positive desire—in the two-person case (compared to the one-person case). In this way, the utilitarian can—very plainly and literally—mark the significance of the distinction between persons. On my view, the distinction between persons is a normative difference that calls for a difference in attitude.
I’m struck by how hastily other philosophers leap to the conclusion that differences in moral significance must be reflected in action. For example, Otsuka & Voorhoeve (2009: 179-180) write:
[A] shift of weighting when we move to the interpersonal case can be resisted only on pain of denying the moral significance of the separateness of persons.…
These differences between the one person and the two-person case imbue the potential loss to a person with greater negative moral significance in the two-person case. You should therefore intervene in a two-person case to prevent [the trade-off].
In light of the availability of my alternative proposal for how to mark the moral significance of the separateness of persons, these inferences are logically invalid. The greater negative moral significance of Amy’s loss in the two-person case does not entail that you should intervene, unless one can further show that there is not also a correspondingly greater positive significance to Aaron’s gain. If it instead turns out that we have two equally strong and opposed reasons in the two-person case that we lack in the one-person case, this would both (i) respect the moral significance of the separateness of persons, by changing what moral reasons we have and what attitudes are called for in response; and yet (ii) call for one and the same act-response, namely, allowing the trade-off to proceed.
[See also: Acts, Attitudes, and the Separateness of Persons]
Conclusion
Philosophers sometimes assume that a morally significant difference must make a difference to how we should act. This is a mistake. A morally significant difference must make a difference to our reasons. This difference in reasons primarily suggests a difference in fitting attitudes, or our overall psychological response to the situation. But only some such differences will change how we should act.
Some changes in reasons leave unchanged which decision is overall best supported by our reasons. Some even leave unchanged the net strength of the reasons supporting this act (as when we introduce a pair of new reasons that perfectly balance each other out in the overall weighting). And some even leave unchanged our reasons for action, giving us only reasons for heightened angst or other emotional responses (as when the stakes are raised in a “clueless” fashion, without changing the expected value of any prospect).
When different attitudes rationally motivate one and the same act-response, we may mark a morally significant difference (through our different attitudinal responses) without this necessarily making any difference to how we should act.
Comments
People who tell me that I should study for my exams which will be in the future fail to recognize the separateness of moments, and that there is no super moment that contains both this moment and the moment during which I'm taking the exam.
One interesting question is how exactly normative differences should affect attitudes. I'm slightly dubious that we should feel deeply conflicted when we're benefitting one and harming another, though we've discussed that before. But it seems even odder to think that someone making a decision with positive expected value but unpredictable actual value should feel deeply conflicted and worried.
Your position here is very appealing, but I find it difficult to reconcile with your insistence that ethics should focus on "what really matters," and that what really matters is benefits and harms to sentient beings. Are you saying that attitudes can also "matter" in a way that has moral significance?
It would be very plausible for a utilitarian to say that attitudes can have instrumental value insofar as attitudes shape behaviour. But it is harder to see how attitudes can be part of a utilitarian's fundamental moral theory, as you seem to be asserting here, rather than part of their practical moral advice or their empirical account of moral psychology.
A thought experiment:
Imagine two intelligent utilitarian robots, Annie and Bertie, both capable of responding to normative reasons, and also capable of editing their own software. They are programmed to act identically in all circumstances — the only difference is that Annie is programmed like a strawman (strawrobot?) utilitarian who just adds up benefits and harms, while Bertie is a sophisticated Chappellian utilitarian whose code includes some additional lines defining the function experience_heightened_angst() and calling that function whenever Bertie encounters the sorts of situations you describe in this post.
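For concreteness, here is a rough sketch of the sort of code difference I have in mind (a minimal illustration only; every name below, from expected_net_benefit to the situation flags, is my own invention rather than anything specified in the post):

# Illustrative guess at the Annie/Bertie contrast; none of these names come from the post.

def expected_net_benefit(option):
    # Placeholder welfare calculus: total benefits minus total harms.
    return sum(option["benefits"]) - sum(option["harms"])

def choose(options):
    # Both robots pick the option with the greatest expected net benefit.
    return max(options, key=expected_net_benefit)

class Annie:
    # "Strawrobot" utilitarian: act-selection only, no attitudinal response.
    def decide(self, options, situation_flags=()):
        return choose(options)

class Bertie(Annie):
    # Identical act-selection, plus an internal attitudinal response to the
    # situations described in the post.
    def experience_heightened_angst(self, situation_flags):
        # Attitude, not action: an internal state change that never alters the choice.
        self.angst = list(situation_flags)

    def decide(self, options, situation_flags=()):
        if {"clueless_high_stakes", "interpersonal_tradeoff"} & set(situation_flags):
            self.experience_heightened_angst(situation_flags)
        return choose(options)

# Same inputs, same chosen option; only Bertie's internal state differs.
options = [{"benefits": [10], "harms": [6]}, {"benefits": [3], "harms": [0]}]
assert Annie().decide(options) == Bertie().decide(options, ("interpersonal_tradeoff",))

By construction, Annie and Bertie always return the same option for the same inputs; the only difference is the internal state that Bertie's extra lines produce.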
(Q1) If Annie gains access to Bertie's source code, should she edit her own software to include the additional lines from Bertie's code?
(Q2) If I am a robot engineer, should I make robots with Annie's code or Bertie's?