Who said anything about evaluative beliefs? I'm talking about *preferences* (and preferability). The agent-neutral deontologist can agree that pushing the fat man increases welfare value; they just deny that this is what *matters*. It's more important, for consistent deontologists, that the agent not act so as to violate the one's rights. And so they do not *want* the agent to act so as to violate the one's rights.
The weird thing about the egoistic deontologist is that they lack these distinctively "deontological" preferences. Instead, they share the utilitarian's preferences: they want the fat man to be pushed off the bridge! Not in a gleeful, "gee it makes me so happy to see people go soaring through the air" kind of way, of course. But as an all-things-considered preference: they would be *more disappointed* if the agent respected the one's rights, and *more relieved* if the agent killed the one as a means. Those are weird attitudes for a putative deontologist to have! But if you agree with me that they're the *right* attitudes, then I feel like you've gone a long way towards agreeing that consequentialism is really the right view after all. You've at least accepted *consequentialism for bystanders*. That seems like a surprising thing for deontologists to grant!
It seems like, for example, you should probably want fewer people to believe deontology. You want them to act like utilitarians instead. They'd be acting "wrongly" to do so, by your lights; but you don't want other people to act rightly when their doing so is suboptimal. So we should all join together to try to promote a utilitarian moral code in society. I'm on board with that if you are!
"But if you agree with me that they're the *right* attitudes, then I feel like you've gone a long way towards agreeing that consequentialism is really the right view after all. You've at least accepted *consequentialism for bystanders*. That seems like a surprising thing for deontologists to grant!"
For the record, I don't think I ever agreed with that; as far as I can see, I haven't really taken a stance on which of the views you mentioned is ultimately correct. But if I had to pick one, it might indeed be what you misleadingly call egoistic deontology, and several other deontologists have defended something like it (agent-relative deontology is certainly more popular in the moral literature than the agent-neutral kind).
"So we should all join together to try to promote a utilitarian moral code in society"
That of course doesn't follow, even if I did believe it would be good for there to be more utilitarians: I believe pretty strongly that deontology is correct, and I obviously think there are pretty strong deontic constraints against lying or intentionally misleading.
Well, it seems you'd at least be committed to hoping that *others* successfully pursue that project, even if you're lamentably constrained from doing so yourself. And you might help indirectly, by trying to convince your fellow deontologists not to be too persuasive in their arguments. It's a kind of forbidden knowledge, after all: morally bad for people to have, even if (as you think) true. (There's presumably no positive obligation to try to convince people of bad truths.)
> "agent-relative deontology is certainly far more popular in the moral literature than agent-neutral one"
I don't know that that's true. People misleadingly started using the phrase "agent-centered constraints" to talk about deontic constraints, and so many deontologists assume that their view is agent-relative, since they certainly accept these constraints. But so do agent-neutral deontologists, so that isn't really any reason to attribute the agent-relative view of constraints to them. I don't think most philosophers have considered the distinction I'm talking about here at all. Like I said, I would expect most to endorse the "against others' violations" (i.e. agent-neutral) view once the distinction is raised to their attention. (E.g., most seem horrified at the thought of agents pushing people off footbridges as a means to saving others.) But I could be wrong. It'd be sociologically interesting to find out.
FWIW, I think that the majority of laypeople becoming convinced of act utilitarianism, and/or utilitarianism becoming less socially stigmatised in popular culture, newspapers, and other media, would likely have pretty bad consequences for society, given what the average person is like. But I'm not gonna debate this further with you, given that this has been going on for a long time and I have other stuff to do, as I'm sure you do too.
It probably depends on how it's done ("becoming convinced of utilitarianism" is a massively under-described process). But it'd seem interesting enough even if you were merely committed to wishing that the most conscientious, epistemically responsible, and morally motivated individuals become (competent, instrumentally rational) utilitarians.