Suppose that donating a kidney involves a degree of altruism that is all-things-considered irrational.1 On this view, impartiality is a normative mistake: we have significant agent-relative reasons to give extra weight to our own interests, and the self-sacrifice involved in live kidney donation is sufficiently great that you have most reason not to do it (despite its being net-beneficial in impartial terms).
Still, it’s a happy mistake (even assuming that it’s a mistake at all), so why care if a stranger makes a mistake of this purely agent-relative sort? Usually, I want others to do as they ought. But that’s because I presume that doing as one ought would be a good thing, impartially speaking. If I became convinced of ethical egoism, I would no longer want others to do as they ought: I would want them to refrain from stealing my wallet, for example, even if the theft would be in their interests (and hence be what, according to egoism, they have “most reason” to do).
Similar issues arise for those who have a purely agent-relative conception of deontic constraints: the fact that an agent ought not to kill one to save five does not imply that bystanders should want the same thing. Maybe deontology is self-effacing, and a deontologist should want others to (wrongly) act in whatever ways utilitarianism recommends, whilst quietly keeping their own hands clean. But few deontologists seem to recognize this as an implication of their view (despite many claiming that constraints are purely agent-relative!). They tend to be horrified by the thought of others—not just themselves—violating rights for the greater good.2
Insofar as we have good reasons to help others act upon their (apparently) agent-relative reasons, it turns out that the latter reasons aren’t purely agent-relative after all. Unlike in the case of egoism, commonsense morality tells us that everyone has reason to want others to do as they have most reason to do (and so, e.g., not to be unduly pressured into acts of altruistic self-sacrifice). On this view, promoting overall well-being isn’t the most important thing. It’s more important that each person do as they have most reason to do, which often involves acting more self-interestedly. The moral goal becomes universal rationality, not universal happiness.
So, as a morally-motivated bystander, you should try to spread the bad news: “Don’t do too much good, it isn’t worth the cost!” you interrupt a would-be donor, and encourage them to go home and watch TV while others die preventable deaths. That’s the way to truly respect their rational autonomy and status as ends in themselves.
Is that the view? Doesn’t it seem weird?
This post was inspired by Scott Alexander’s question:
I find it interesting that so many people feel protective of potential kidney donors and want to protect them from self-sacrifice. This isn’t selfish (they’re trying to protect someone else). It’s not exactly altruistic (it’s preventing an act of altruism which I think everyone agrees is probably net positive). So what’s the psychological motive here?
This attitude commits them to an agent-neutral conception of constraints, and hence to my new paradox, but let’s put that aside for now.
I think I have a pretty strong psychological/sociological hypothesis in response to Scott’s question:
These people worry that acceptance or encouragement of such a large sacrifice is a trick by the powerful, one bound to end up like so many historical examples of hugely net-negative exploitation. How much of this is an evolved instinct against being duped, how much the cultural shadow of dystopian fiction, and how much simple error, I don’t know.