An objection that moved me away from utilitarianism is a variant on the demandingness problem. I've often found versions of two-level utilitarianism persuasive in solving certain problems (like Railton's paper on personal relations and alienation). But it seems to me that even here one cannot get away from the demandingness problem: given the state of the world, it's not clear that it would be best to form many personal relationships that take up our time and resources, preventing us from doing good elsewhere. Attempts by utilitarians to square these two priorities often feel squirmy.
It's been my belief that one's meta-ethics matters for how seriously we should take the demandingness problem. And I've always found the meta-ethics most associated with utilitarianism too subjectivist to be persuasive on this point. It seems you'd need a more firmly objectivist meta-ethics if you're going to justify the kinds of moral demands that a utilitarian outlook recommends.
Yeah, I'm very much a moral realist, so it seems perfectly plausible to me that *ideally*, we really should do vastly more to help others even at grave cost to ourselves. But of course none of us are morally perfect. And there's nothing in utilitarianism (properly understood) that says we should take perfection as the *baseline*, and feel bad whenever we fall short of it. We can simply accept that we're inevitably imperfect, and try to do better on the margins. I discuss the issue more here:
https://rychappell.substack.com/p/caplans-conscience-objection-to-utilitarianism
(That said, I'm also open to the possibility that some degree of partiality is actually intrinsically warranted. It's an issue I'm highly uncertain about, and certainly think people could reasonably go either way on it. I don't see a huge difference between traditional utilitarianism and agent-relative welfarist consequentialism, so even if one moves to the latter, it isn't too far to go!)
Re moral realism, I would be interested in some sort of dialogue/debate/discussion between you and Joe Carlsmith about metaethics!
I've responded to a couple of his objections to realism at:
https://www.philosophyetc.net/2021/10/ruling-out-helium-maximizing.html
and
https://rychappell.substack.com/p/metaethics-and-unconditional-mattering