Yeah, I'm very much a moral realist, so it seems perfectly plausible to me that *ideally*, we really should do vastly more to help others even at grave cost to ourselves. But of course none of us are morally perfect. And there's nothing in utilitarianism (properly understood) that says we should take perfection as the *baseline*, and feel bad whenever we fall short of it. We can simply accept that we're inevitably imperfect, and try to do better on the margins. I discuss the issue more here:
https://rychappell.substack.com/p/caplans-conscience-objection-to-utilitarianism
(That said, I'm also open to the possibility that some degree of partiality is actually intrinsically warranted. It's an issue I'm highly uncertain about, and certainly think people could reasonably go either way on it. I don't see a huge difference between traditional utilitarianism and agent-relative welfarist consequentialism, so even if one moves to the latter, it isn't too far to go!)
Re moral realism, I would be interested in some sort of dialogue/debate/discussion between you and Joe Carlsmith about metaethics!
I've responded to a couple of his objections to realism at:
https://www.philosophyetc.net/2021/10/ruling-out-helium-maximizing.html
and
https://rychappell.substack.com/p/metaethics-and-unconditional-mattering