
Common moral intuitions are an unprincipled mess. That’s “the trolley problem” in a nutshell. It’s also demonstrated by attempts to distinguish Singer’s drowning child case from our everyday failures to donate to life-saving charities.
Scott Alexander, in More Drowning Children, focuses on two candidate psychological explanations: (i) “Copenhagen” ethics, according to which observing or interacting with a problem makes you specially responsible for fixing it, and (ii) diminishing moral returns to saving more lives (to explain why we’re not obliged to save every drowning child in endless rescue scenarios). He goes on to explain why we shouldn’t morally endorse these principles: Copenhagen views, for example, wrongly valorize people who do nothing at all over those who are “entangled” but at least do some good (even if less than we’d like). We should instead be guided by what we’d all have most reason to choose from behind a veil of ignorance.
Definitely read Scott’s full post, if you haven’t already. In this post, I want to explore a couple more—broadly “virtue ethical”—explanations of our moral intuitions across these (and other) cases.
Empathy and Emotional Activation
The big obvious thing about a child drowning before your eyes is that it activates your moral emotions in a way that distant deaths from malaria don’t. If you imagine the kind of person who could simply watch the child drown (because they care more about not ruining their expensive suit), you find yourself having to imagine a total moral monster—someone almost unimaginably callous. (Probably not a reliable social ally, or someone it would reflect well on you to associate with.)
Now imagine the kind of person who doesn’t donate as much as would be ideal to global charities. That’s, like, everyone. So, inaction in this case is no sign at all that the agent is an unusually callous person, worth avoiding, or anything of that sort.
Hypothesis: our moral intuitions are tracking signs of bad character.
(Cynical deeper hypothesis: the reason our moral intuitions are tracking this is in order to help us to form socially valuable associations and to avoid disadvantageous associations.)
Test case: imagine an uber-empath who watches the child drown, with tears streaming down his face, because he finds just as salient two distant children whose lives he will now save by selling his suit.1
This uber-empath strikes me as more perfectly virtuous than ordinary people who don’t care so much for the distant. When people claim that it would be “wrong” to watch the child drown in order to save more lives indirectly, they inevitably imagine the agent as a robotic type who just doesn’t much care about the drowning child. If we instead “level up” the emotions and imagine that they care as much about the distant children as we do about the nearby child, I no longer feel any temptation to insist that they ought to have saved the one instead of the two. Insofar as you are tempted to offer such a condemnatory verdict about the robotic optimizer, that just goes to show that your verdict is really about the agent and not about the action per se. (I take it that if an action could reasonably be chosen by a fully informed perfectly virtuous agent, that suffices to establish that it is objectively permissible. If actually done for the wrong reasons, the agent may be blameworthy, but that’s a separate matter.)
Trolley Cases
I suspect something similar is going on in Trolley cases. We expect a psychologically healthy and decent person to feel horrified at the thought of pushing the fat man off the footbridge; no such horror seems called for when you’re merely switching the tracks.
Greene et al. have proposed that it’s the “hands on” closeness of pushing that makes the difference here. But Peter Railton’s Bus case undermines this explanation:2
Bus: You live in a city where terrorists have in recent months been suicide-bombing buses and trains. The terrorists strap explosives to themselves under their clothing, and, at busy times of the day, spot a crowded bus or train and rush aboard, triggering the bomb instantly to avoid being stopped. You are on a very crowded bus at 5:10 pm, and are struggling to get to the door at your stop. The doors are starting to close and you won’t be able to get off unless you jostle the slow-moving obese gentleman trying to exit at the same time.
Suddenly you notice a man rushing up to the bus and forcing his foot into the doorway, wedging it between the fat man and the door frame. He is reaching with one hand under his coat and a gap between the buttons reveals to you what look like explosives strapped around his chest. You can’t reach this man, but if you push the corpulent gentleman beside you hard in his direction right now, he will fall directly on top of the seeming bomber and both will end up on the empty sidewalk, while you fall backwards into the bus as the doors snap shut.
—So, if you push hard, and this man is not a bomber, then the bus will leave behind two very annoyed men on the sidewalk, and you will be left on the bus, covered with embarrassment. But if he is a bomber, the bus will be spared, and you with it, but the fat man killed as the bomber explodes underneath him.
—On the other hand, if you simply squeeze off the bus alongside the corpulent gentleman and do nothing more, and the other man is a bomber, then many people on the bus will be killed while you and the corpulent gentleman are safe on the sidewalk. But if this man is not a bomber, then no one on the bus will be hurt and you simply will have jostled a corpulent gentleman while exiting a bus, and you can apologize to him on the sidewalk.
Whatever happens, you will not be killed if there is a bomb and it goes off—you will either be on the bus when it explodes on the sidewalk, or on the sidewalk when it explodes on the bus.
Should you (a) shove the corpulent gentleman hard right now, or (b) squeeze off the bus, jostling the corpulent gentleman but doing nothing else?
Railton reports that similar numbers of students support pushing the fat man in Bus (67%) as support pulling the lever in Trolley Switch (72%). There’s no metaphysical difference between Bus and Footbridge (where only 29% support pushing); nothing in the nature of the act itself that a deontologist could hope to point to in order to differentiate them. But, as Railton notes, the two cases feel very different, emotionally, presumably as a result of subtle differences in detail that affect our social and emotional expectations (about how we and others would react to our choice in either case).
As Railton further explains in his (2020) ‘Ethical learning, natural and artificial’, moral verdicts about trolley-style cases strongly correlate with whether the evaluator would trust their roommate less if they learned that their roommate performed the act in question. He concludes (p.62):
My hypothesis is that, when making an ethical assessment, my students (and the rest of us) rely upon acquired, general, abstract causal-evaluative models of situations and agents to simulate possible actions and likely outcomes or reactions. The simulations can be quite complex: How would it feel to perform this action? Could I actually see myself doing it? What kind of person would perform it? What would others think, and could I face them? But this kind of real-time simulation and evaluation of possibilities, and associated feelings and reactions on the part of others is exactly the kind of prospective processing the human default system appears to be engaged in systematically, off and on throughout the day, as we navigate the physical and social environment.
Conclusion
The best way to explain our messy moral intuitions may be that they aren’t tracking intrinsic features of actions (per se) at all, but rather subtle signs of good vs bad character.
As a result, I think these intuitions are a very poor guide to what moral reasons for action we really have. For example, Jaeger & van Vugt (2022) note that assessing effectiveness is actually “viewed negatively” in the context of charity, which seems awfully messed up. I’ve similarly written about how “some common views about acting ethically are better understood as views about how to reliably signal virtue. [But] when virtue-signaling conflicts with actually doing good, such signaling becomes morally vicious.” (I argue that common assumptions about anonymous donation get the ethics completely backwards, for example.)
So I do think it’s important to think critically about these intuitions, and about what really matters and makes an action worth doing. Our starting intuitions may be very different from what more careful reflection would ultimately lead us to conclude.
Cynical test: imagine, first, (i) that most people, upon hearing this case, persist in condemning the uber-empath as “inhuman” for allowing the child to drown; then imagine instead (ii) that most people, upon hearing the case, endorse the uber-empath as even more caring and virtuous than the rest of us.
If your moral intuitions are swayed by how you imagine other people will respond to the uber-empath, that may suggest that you’re really just tracking social status.
Peter Railton (2014), ‘The Affective Dog and Its Rational Tale: Intuition and Attunement’, pp. 854-55.
I also think that they're obviously tracking various biases. A drowning child is much more salient than a faraway child with malaria. One reason we think that morality can't be so demanding that people would be required to spend all their time saving children is that it would be super inconvenient--we're biased by self-interest. Another reason is scope neglect--just as most people would spend similar amounts to save 2,000, 20,000, and 200,000 birds (hilariously, they'd spend more to save 20,000 than 200,000), they also don't care much more about saving a dozen children than just one. The third nameless, faceless child is not salient--and is thus ignored.
Brilliant. And it also makes sense overall, in that:
1) it fits with morality as something that evolved because humans living in groups benefited massively from cooperation beyond kin preference, and had to deal with all kinds of cheaters/free riders etc. From this pov, the key function of a "natural" moral judgement is PRECISELY to assess whether we can trust a person, ie to assess their character.
2) the latter explains very nicely why people attribute moral and competence traits rather differently based on individual acts (eg far fewer moral transgressions are sufficient to attribute a negative moral trait than is the case for competence traits)
3) and historically it also fits the trajectory of the development of formal ethics / moral philosophy. Virtue ethics goes back millennia.
VE makes profound sense for an individual who on the one hand wants to have a tool for vetting potential allies or enemies, on the other wants to be seen as a desirable ally (or occasionally a formidable enemy, I guess). It even deals (at a stretch) with the whole mess of "when obligation isn't quite the right thing" concerning the most intimate relationships.
On the other hand it's of course almost entirely useless for large-scale decision making about fungible units of sentience/suffering, because in those situations all that matters is beneficence. But we have not evolved to deal with those situations, because they just didn't occur for the vast majority of people until fairly recently. Hence the need to "invent" rational obligation-based systems (and also, incidentally, why the so-called "ethics" of care feels so insanely regressive).