I know this is not the main point here, but I would really question the idea that people tithed 10% "without complaint". Tithes were often more like taxes than voluntary donations, and there were revolts against them, e.g.: https://www.jstor.org/stable/2539720
Huh, good to know -- thanks for sharing!
Yes true 🔥
I'm wondering how much overlap there is between apparent examples of reactive ethics and apparent examples of "expanding circle" moral weighting of others based upon metaphorical proximity to oneself. When people put a lot of effort into domestic political causes, it usually doesn't strike me as particularly reactive, but more like proactive altruism toward their respective political tribe first, their own nation second (possibly also including a few other select "similar" nations), and the faraway rest of the world mostly out of the calculation.
I'm not sure exactly what you have in mind, but I certainly agree it's also possible to pursue local goals in a proactive (and not merely reactive) way.
So, I think it's clearly reasonable to treat one chicken as carrying less moral weight than one human, one spider as carrying less moral weight than one chicken, etc. What I'm suggesting is that many examples we'd be tempted to explain as reactive ethics might also be well explained in terms of a similar (though in this case morally indefensible) weighting of some humans over others.
After all, we have plenty of historical and current examples of people living in close proximity with others whose interests they heavily downgrade, don't we? What if Singer had posed his question in 1700 Louisiana, and it had been "white landowner's child dying in the swimming pool" versus "slave child dying in the canal"? Or what if we asked some conservative Brahmins in India, and gave some indication of the child's caste?
Sure, it's easy to come up with dissociations between the reactive/proactive question and the discrimination between groups question. I'm asking about the extent of real world overlap, and guessing it's considerable. Usually our morally downgraded outgroups are also the people we see less often face-to-face.
Ah, I see, thanks for clarifying! I'm maybe more optimistic about most people not having such outright indefensible moral goals nowadays (e.g. explicitly racist or nationalist ones), but it is an interesting question how often something like that may be playing some role.
I think one particularly strong argument for that sort of "under-expanded circle" being at work rather than reactive ethics is how much greater political effort -- and even passionate emotion -- people put into lesser domestic issues versus greater global ones, when most of the time they're not being reactive like in the drowning child example.
Potential counter-argument: people talk a lot about doing good things, while the ethics of not doing bad things gets neglected. The issue is that goal-oriented ethics could turn one into Pol Pot. Reactive ethics protects us from this (usually, I guess): the reactive person responds with "just don't kill people; I don't care about your goals, just don't do this." Granted, reactive people can also do bad things -- they can be manipulated into thinking other people are evil and thus want to kill them. A solution is to reactively judge acts, not people.
Still, I would argue that reactive evil has done less harm. The random, emotional antisemitic pogroms in Russia were nothing compared to the goal-oriented Holocaust, for example.
I especially have a problem with utilitarianism or consequentialism here. It is so easy to "calculate" something like "well, if we just get rid of the 1% absolutely worst people, so many utilons".
Granted, altruism and charity will not do this.
Right, you can raise the worry about any executive virtue -- courage, determination, grit, intelligence, creativity, etc. -- that it makes things worse when you add it to a Nazi. We don't want *bad* people to be highly effective, or have strong agential capacities.
Still, that's not much of a reason to oppose executive virtues *in general*, unless you expect the bad in humanity to outweigh the good. I don't accept that bleak view. Although there are some costs to increasing human capabilities, I think the benefits outweigh the costs. For similar reasons, I want to empower people's moral efforts because I expect moral efforts to tend to do more good than harm on the whole.
See also: https://www.utilitarianism.net/objections-to-utilitarianism/abusability/
Your claim that goal-directedness is rare in "non-consequentialist" moral thought seems unfounded.
Take, for instance, Kant -- a common target of consequentialists -- and his doctrine of morally obligatory ends.
How do you explain the lack of enthusiasm amongst non-consequentialists (with rare exceptions) for beneficentrism, and their tendency to outright conflate effective altruism with "applied utilitarianism"? Lip service to a doctrine isn't enough to show that their *actual moral thinking* is recognizably goal-directed (in the sense under discussion here).
I guess one of the main reasons why EA is commonly conflated with utilitarianism is that the EA movement itself seems constantly to assume, explicitly or implicitly, that utilitarianism is correct. Take MacAskill's new book: he doesn't just say that affecting the future is very important, but goes much farther (i.e. strong longtermism) by appealing to utilitarian principles. Of course a deontologist will be put off by this.
(1) Why would a deontologist be put off by strong longtermism? There's no particular reason for a Rossian pluralist, for example, to reject this view of beneficence. (Many actual deontologists seem to embrace narrow person-affecting views in order to limit the scope of ethics to something closer to what I'm here calling "reactive" views; but the whole point of my post is that there's no essential reason for deontology to be so limited. It can accept utilitarian-style beneficence as a part.)
(2) For the record, that's actually not an accurate characterization of the book. (Strong longtermism is defended in another paper; the book is much more moderate and ecumenical.) See my review: https://rychappell.substack.com/p/review-of-what-we-owe-the-future
A mainly person-affecting view seems to be a direct consequence of Rossian ethics, it's not just an ad-hoc auxiliary hypothesis in order to avoid strong longtermism.
Also, the vast vast majority of deontologists are broadly Kantian and not broadly Rossian, and according to Kantian ethics it's very clear why strong longtermism would be false: if Kantianism is true, the key moral priority is to never violate any perfect duty (and helping non-existent people is not a perfect duty!). Thus strong longtermism is false if Kantianism is true, and thus the vast vast majority of deontologists should reject strong longtermism.
> "the vast vast majority of deontologists are broadly Kantian and not broadly Rossian"
What makes you think that? Most people's moral views don't adhere strictly to any particular theory; they broadly hew to stakes-sensitive "common sense morality", which is to say that they are broadly Rossian moderate deontologists, not Kantian absolutists.
“The positive case for goal-directedness is trivial: as a general rule, you’re more likely to do better if you at least try.”
Either I don’t understand your distinction between goal-directed morality and reactive morality, or ... yeah, I don’t understand it. Maybe you mean that constraints and duties don’t cover all of morality; that we also should or could have moral goals in addition to or instead of duties and obligations.
A moral goal presumably improves the world generally, as distinct from improving it only for the acting agent. It is a moral opportunity, as distinct from a moral obligation. Is there a reason to think goal direction and reaction are mutually exclusive, or antagonistic, as opposed to complementary?
I’m tempted to think reactive morality is morality with the goal of avoiding making things worse on a local level. The people around me don’t expect me to defect in our various social games. If I did, that would have an immediate negative effect on trust and cooperation, which are fundamental to social progress. If I won’t help others, I should at least not get in their way.
I suspect that this attempt to view reactive morality as a goal-oriented category misses your point somehow.
A goal-directed approach to "avoiding making things worse on a local level" might involve proactively investigating threats to community health, looking into how other groups have failed and what preparations you could make to make your group more robust, etc. That's very different from a purely reactive approach. You can certainly conceive of standard reactive ethics as playing the *role* of guarding against certain harms (particularly, ones directly caused by the agent themselves reacting poorly). But it's very different, psychologically, from having the agent actually take up that goal and try to pursue it in a broader, strategic, instrumentally rational way.
Is "avoiding making things worse on a local level" a (perhaps inferior) goal or a purely reactive approach?
That still seems like a distinction without a difference to me. Or rather, there is nothing antagonistic between the two. A requirement for becoming admirable is to avoid becoming despicable. Someone who has successfully avoided becoming despicable but had no particular positive impact on others is more admirable than someone who attempted to improve the world in a distorted way and actually made things worse.
Maybe I am just framing things differently and not really disagreeing. I really am not sure I have a good grasp on what the point is, or how the distinction works.
People who are genuinely desperate don’t have time to make the world better. People who are at a certain point in Maslow's hierarchy tend to want to make things better, without anyone needing to poke them. Does that mean the desperate are reacting, and the non-desperate are goal-directed? Everyone is goal-directed, but the goals may have more or less to do with improving the world for everyone, as opposed to improving the world for the actor alone, or even making it better for the actor at the expense of others. Avoiding acting at the expense of others seems like the primary requirement to me. World improvement goals should operate within those constraints.
I suppose that no change is costless, and so there are always people who lose in some sense whenever any significant change occurs. That is a big part of the moral calculus: what can we expect from each other in terms of non-interference, and what is none of our business? A straw-man act utilitarianism would say that everything is everyone's business. A more nuanced version would count the costs differently, and would seek to avoid the chaos that would follow if the straw-man version were adopted. But this is a digression.
Have I grasped your distinction or not? I don’t think so. I keep getting derailed into “goal-directed or not goal-directed” and “moral or prudential”. You make a valid but, I hope, trivial point that we should be critical of our moral goals, and of our goals and attitudes in general. But I don’t think you are just criticizing reactive persons for being uncritical. You are also criticizing them for being uninspired? Or inspired by an inferior goal? For pursuing an ineffective means to a worthy goal?
I'm not arguing for an "antagonism", just stressing that we shouldn't settle for a 100% reactive morality. You say, "A requirement for becoming admirable is to avoid becoming despicable." Which is fine as far as it goes. But that isn't very far: a moral view that said nothing more than "don't be despicable" would be insufficient. So I'm just trying to stress how important it is to *also* include an element of goal-directed beneficence.