I disagree that it robs rights of their normative authority. The deontological view of morality is that morality limits, for each agent individually, what they are allowed to do in pursuit of their goals... It limits, for me, what I am allowed to do. And it limits, for Dr. Chappell, what he is allowed to do. And so on. I don't see how any of this makes rights less authoritative - it's still true, for any conceivable rational agent x, that x ought not do y (where y is any arbitrary rights violation). Seems pretty authoritative to me!
By the way, by describing this as "egoistic" and "self-centred", it seems like you are making a very strong *psychological* claim about deontologists - do you really think that e.g. many neo-Kantian philosophers act out of self-interest and not out of a genuine sense of altruistically motivated duty? Obviously you think they are *mistaken* in their normative judgements, but by describing them as egotistical you seem to be saying something much stronger.
I'm describing the *view*, not the psychology of its *adherents*. But I guess I do think there's something morally objectionable about being primarily guided by agent-relative reasons, neglecting what strikes me as the more legitimately "moral" (impartial / agent-neutral) point of view. (But I wouldn't use the term "pretty bad" to describe someone merely for being morally flawed in this way.)
> "I don't see how any of this makes rights less authoritative"
That might be because you didn't include the distinguishing features of (what I'm calling) egoistic deontology. After all, agent-neutral deontology also limits, for each agent individually, what they are allowed to do in pursuit of their goals. And it further specifies that *we all have decisive moral reason to want people to respect these moral limits, in each instance*. The distinguishing feature of egoistic deontology, by contrast, is that it adds that *everyone else might reasonably hope that agents violate their deontic constraints* (whenever it would maximize welfare for them to do so). It is THIS further claim that I take to undermine the normative authority of deontic constraints.
"It's really important that I do X, but nobody else has any reason to hope/want that I do X" seems incoherent. In this way, egoistic deontology seems incompatible with regarding deontic constraints as important. Part of normative authority, I take it, is not just determining what an agent ought to do, but being such that the rest of us *should actually care* that it be done.
Do you really not see anything strange about a deontologist bystander quietly chanting "Push! Push! Push!" under their breath as they observe the trolley footbridge scenario unfold? (I don't think this is the actual view of most deontologists, neo-Kantian or otherwise. I think most would accept agent-neutral deontology, once the distinction is brought to their attention.)
I don't see how thinking that the world would be better if the fat man is pushed would rob rights of their authority either - *it would still be true, for the person who pushed the fat man, that he ought not have done it*. Whether I think the world contains more value either way doesn't affect that.
Now, does "Tim made the world better by pushing the fat man, but he ought not have done it" sound strange? Maybe a little. But that's mainly because, usually, when we make the world better we also have good deontic reason to do so. Not always, though: we often hear people in, e.g., crime documentaries say something like "The world is better without him, but it was still wrong to kill him".
"Do you really not see anything strange about a deontologist bystander quietly chanting "Push! Push! Push!" under their breath as they observe the trolley footbridge scenario unfold?"
Sorry, I think you may not have read the edit to my first comment; I'll quote it here:
"(Btw, being positively delighted about someone killing another person would show a highly defective moral attitude, whatever view we pick - even the utilitarian should agree with this. Most utilitarians would agree that you should probably not be super happy while torturing someone, even if you knew that it would maximise overall wellbeing.)"
Who said anything about evaluative beliefs? I'm talking about *preferences* (and preferability). The agent-neutral deontologist can agree that pushing the fat man increases welfare value; they just deny that this is what *matters*. It's more important, for consistent deontologists, that the agent not act so as to violate the one's rights. And so they do not *want* the agent to act so as to violate the one's rights.
The weird thing about the egoistic deontologist is that they lack these distinctively "deontological" preferences. Instead, they share the utilitarian's preferences: they want the fat guy to be pushed off the bridge! Not in a gleeful, "gee it makes me so happy to see people go soaring through the air" kind of way, of course. But as an all-things-considered preference: they would be *more disappointed* if the agent respected the one's rights, and *more relieved* if they kill the one as a means. Those are weird attitudes for a putative deontologist to have! But if you agree with me that they're the *right* attitudes, then I feel like you've gone a long way towards agreeing that consequentialism is really the right view after all. You've at least accepted *consequentialism for bystanders*. That seems like a surprising thing for deontologists to grant!
It seems like, for example, you should probably want fewer people to believe deontology. You want them to act like utilitarians instead. They'd be "wrong" to do so. But you don't want other people to act rightly, when their doing so is suboptimal. So we should all join together to try to promote a utilitarian moral code in society. I'm on board with that if you are!
"But if you agree with me that they're the *right* attitudes, then I feel like you've gone a long way towards agreeing that consequentialism is really the right view after all. You've at least accepted *consequentialism for bystanders*. That seems like a surprising thing for deontologists to grant!"
For the record, I don't think I ever agreed with that; as far as I can see, I haven't really taken a stance on which of the views you mentioned is ultimately correct. But if I had to pick one, it might indeed be what you misleadingly call egoistic deontology, and several other deontologists have defended something like it (agent-relative deontology is certainly more popular in the moral literature than the agent-neutral one).
"So we should all join together to try to promote a utilitarian moral code in society"
That of course doesn't follow even if I were to believe it would be good for there to be more utilitarians, because I believe pretty strongly that deontology is correct and I obviously think there are pretty strong deontic constraints on lying or intentionally misleading.
Well, it seems you'd at least be committed to hoping that *others* successfully pursue that project, even if you're lamentably constrained from doing so yourself. And you might help indirectly, by trying to convince your fellow deontologists not to be too persuasive in their arguments. It's a kind of forbidden knowledge, after all: morally bad for people to have, even if (you think it is) true. (There's presumably not any positive obligation to try to convince people of bad truths.)
> "agent-relative deontology is certainly more popular in the moral literature than the agent-neutral one"
I don't know that that's true. People misleadingly started using the phrase "agent-centered constraints" to talk about deontic constraints, and so many deontologists assume that their view is agent-relative, since they certainly accept these constraints. But so do agent-neutral deontologists, so that isn't really any reason to attribute the agent-relative view of constraints to them. I don't think most philosophers have considered the distinction I'm talking about here at all. Like I said, I would expect most to endorse the "against others' violations" (i.e. agent-neutral) view once the distinction is raised to their attention. (E.g., most seem horrified at the thought of agents pushing people off footbridges as a means to saving others.) But I could be wrong. It'd be sociologically interesting to find out.
FWIW, I think that the majority of laypeople becoming convinced of act utilitarianism, and/or utilitarianism becoming less socially stigmatised in popular culture, newspapers, and other media, would likely have pretty bad consequences for society, given what the average person is like. But I'm not gonna debate this further point with you, given that this has been going on for a long time and I have other stuff to do, as I'm sure you do too.
It probably depends how it's done ("become convinced of utilitarianism" is a massively under-described process). But it'd seem interesting enough even if you were merely committed to wishing the most conscientious, epistemically responsible, and morally-motivated individuals to become (competent, instrumentally rational) utilitarians.
Does your owing it to that person that you not do it really matter more than the deaths of five people?
It seems more like it just matters more *to you* (it's in this sense that it seems egoistic), since then you don't have to break your supposed obligation not to be the one who pushes him - but it would be better if the wind just pushed him down (or maybe you don't agree with that?).
So I don't think Richard's point is that deontologists are being selfish (I assume he agrees you can reasonably and virtuously be a deontologist), but that *what I've highlighted above* seems more egoistic (on a theoretical level) than morality should be (of course your intuitions might differ).