A curious feature of human nature is that we’re very psychologically invested in seeing ourselves as good. When I teach applied ethics, I’m struck by how resistant students often are to any hint of moral self-critique. Meat-eaters will come up with the most transparently absurd rationalizations for disregarding all the torture that goes into producing their favorite meals. Some philosophers deny that having kids is good, simply because they’re scared of the (non-)implication that people ought to have more kids.1 And there’s obviously plenty of motivated reasoning underlying dismissals of effective altruism in the public sphere. I wish we were all more OK with just admitting moral imperfection, and with openly admiring others who, in various respects, do more good than we do.
The Role of Social Norms
Part of the issue seems to be that people feel strong social pressure not to admit to any divergence between what they (even ideally?) ought to do and what they actually do.2 Anti-hypocrisy norms seem especially damaging here: there’s a sense that it’s better to have low standards than to have high standards that you struggle to always meet, even if your actual behavior is better (in absolute terms) in the latter case. In teaching, I try to push back against this by modelling tolerance of moral mistakes: “I still eat meat sometimes, even though I don’t really think it’s justifiable.” My hope is that this helps to create a learning environment where students can be more honest with themselves—a sense of, “Oh, good, we don’t have to pretend anymore.” Otherwise, there can be an atmosphere of defensiveness when discussing such topics, as people wonder whether they are going to be subject to attack for their personal decisions. That obviously isn’t conducive to open-minded inquiry.
So I think it can be valuable to create a kind of “safe space” for moral mediocrity, and that this can even be the first step in encouraging people to appreciate that they could do better (and might even feel better about themselves if they did). In general, I think it’s hard for moral motivations to win out over conformity and immediate gratification, so the most reliable way to do better is probably to develop a community of people with shared high standards. (That’s something I find very valuable about the EA community, for example.) It’s often easier to advocate that “we all” should do something valuable (pay higher taxes, eat vegan, tithe 10% to effective charities) than to do it unilaterally, when no-one else around you is doing the same.
As a result, I think it makes sense to be pretty tolerant of people in different social circumstances who are just conforming to their local norms. But I’m inclined to take a stricter stance when it comes to intellectual demands: everyone should acknowledge moral truths, even when they struggle to live up to them. Even though I eat meat, I can certainly acknowledge that veganism is better, and celebrate when a community successfully shifts its norms to make going vegan easier. And I think this is basically the stance that people who don’t donate to effective charities should have towards effective altruism, for example. It’s fine (not great, but fine) if you prefer to spend your money on yourself. We all do, to some degree. But that’s no excuse for opposing effective philanthropy. Just be honest.
Philosophical Cover
A lot of anti-beneficentric normative theorizing strikes me as the worst kind of cope. It’s, like, systematized motivated reasoning, aimed at securing the result that your everyday actions are completely normatively optimal. It’s absurd, and I don’t understand why anyone takes it seriously.
Consider, for example, the demandingness objection to consequentialism. Many philosophers are like, “Of course we couldn’t really have decisive reason to prioritize saving children’s lives over taking intercontinental vacations every summer. What a claim!”
Many treat it as a pre-theoretic datum that most everyday acts are fully justified, in the sense that people are always doing what they have most all-things-considered reason to do. When we fail to do what’s morally optimal, that’s just because we have sufficient non-moral reason to give thousands or even millions of times more weight to our own interests. Or something like that.3
I don’t understand why anyone would take that as a datum. It doesn’t seem true to me, even just reflecting on my own decisions. I am constantly making suboptimal decisions, by any reasonable standard (moral, prudential, whatever). Force of habit is extremely strong. ‘Ugh fields’ and anxiety can prevent me from taking the first steps or even thinking about a task that would be well worth completing. There are things I intellectually recognize the value of, but just don’t emotionally care about, and it’s very hard to be motivated by such purely abstract values. But most of all, there’s just a very small space of possibilities that I regard as “live options”—things I will actually seriously consider doing—even though I don’t for a moment believe that this cozy, familiar space contains all the genuinely worthwhile options. Willpower and executive functioning are scarce cognitive resources; it’s entirely inevitable that we run much of our lives on auto-pilot, and there’s no general reason to expect optimal calibration here.
Maybe I’m unusually irrational,4 but I sure don’t get the impression that other people are superlatively reasonable and wise. Many people are terrible with money.5 More generally, it seems like everyone struggles with various forms of weakness of will and (often) moral confusion, prioritizing immediate gratification over greater future goods (hyperbolic discounting), prioritizing smaller salient values over larger less-salient ones, and so on. Given these familiar facts about human psychology, it just seems entirely to be expected that we will routinely fail to do what we have most reason to do.
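To make the hyperbolic-discounting point concrete, here is a minimal illustrative sketch (not anything from the post itself). It uses the standard hyperbolic discount curve V = A / (1 + kD); the discount rate and dollar amounts are made-up assumptions, chosen only to display the characteristic preference reversal.

```python
# Illustrative sketch of hyperbolic discounting (the standard V = A / (1 + k*D) form).
# The discount rate and dollar amounts below are made-up parameters, chosen only
# to display the characteristic preference reversal the text alludes to.

def hyperbolic_value(amount, delay_days, k=0.05):
    """Present (discounted) value of `amount` received after `delay_days`."""
    return amount / (1 + k * delay_days)

# Choosing two months in advance: $100 in 90 days beats $70 in 60 days...
print(hyperbolic_value(100, 90) > hyperbolic_value(70, 60))   # True  (~18.2 vs 17.5)

# ...but once the smaller reward is available today, the preference reverses.
print(hyperbolic_value(70, 0) > hyperbolic_value(100, 30))    # True  (70.0 vs 40.0)
```

The same pair of options gets ranked differently depending on how far away they are, which is exactly the structure of “prioritizing immediate gratification over greater future goods.”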
Our ethical theories should reflect this expectation. Indeed, one of the main purposes of a moral theory is to help us (in conjunction with relevant social science) to identify likely sources of normative error, or ways we are apt to go wrong. The forces we learn about from evolutionary and social psychology are obviously not perfectly aligned with any reasonable account of what we really have most reason to care about. So anyone who expects normative error to be rare simply cannot be thinking clearly. We should expect high moral returns from taking back the wheel and carefully surveying the terrain for neglected opportunities. (But we should not expect that any such effort will be sufficient to avoid all normative mistakes. Nor should we be excessively bothered by this basic fact of life.)
Living with Imperfection
As I’ve written previously, we should just be honest about the fact that our choices aren’t always perfectly justified. That’s not ideal, but nor is it the end of the world. It’s OK to be flawed—everyone else is too. We can all celebrate incremental improvements, and uphold norms to prevent moral backsliding (to severely below-average behavior).
The alternative is to indulge in the collective fantasy that everything we do is already ideal—that our everyday acts of selfishness and short-sightedness are actually what we have most reason to do—because our immediate narrow interests are allegedly just so much more “worth” acting upon.
If you don’t think about it too much, maybe you can get away with the fantasy. But I don’t think it survives scrutiny. And I think there’s a lot to be said for intellectual honesty, facing hard truths, and muddling through as best we can (given our myriad cognitive and motivational limitations).
As Scott Alexander wrote in ‘Nobody is Perfect, Everything is Commensurable’:
Nobody is perfect. This gives us license not to be perfect either. Instead of aiming for an impossible goal, falling short, and not doing anything at all, we set an arbitrary but achievable goal designed to encourage the most people to do as much as possible. That goal is ten percent.
Everything is commensurable. This gives us license to determine exactly how we fulfill that ten percent goal. Some people are triggered and terrified by politics. Other people are too sick to volunteer. Still others are poor and cannot give very much money. But money is a constant reminder that everything goes into the same pot, and that you can fulfill obligations in multiple equivalent ways. Some people will not be able to give ten percent of their income without excessive misery, but I bet thinking about their contribution in terms of a fungible good will help them decide how much volunteering or activism they need to reach the equivalent.
Avoiding Villainy
One non-arbitrary threshold is the distinction between having your life contribute positively vs negatively to the world as a whole. I think it makes sense to be especially concerned to ensure that one’s existence turns out to be a good thing on the whole. I also think this is very easy to achieve. Yet there’s a significant risk that most people are currently on track to fail here, simply due to how extraordinarily bad factory-farming is (and how each additional meat-eater contributes to increasing demand for factory farming).
As long as you’re not a criminal, your everyday actions are probably net-positive for humanity. (If you’re worried about environmental impact, consider offsetting with a donation to an effective climate organization like Clean Air Task Force.) Most jobs create value for others; your personal interactions are hopefully overall to the good; and if you have kids, you’re doing the essential work of keeping civilization going. Some people with immense (political, cultural, or economic) power abuse it badly in ways that make me regret their existence, but I doubt any of them are reading this blog. So I’m going to go out on a limb and say I’m glad that you exist, dear reader.
But there is a real risk that you cause a lot of harm to non-human animals. I’m not sure of the precise details, but a typical American diet could be so bad for animals that it outweighs all the good in your life, and the good that you do for others. It’s a scary thought!
There are two obvious ways to avoid this risk of outright villainy:
(1) Go vegan, or
(2) Donate sufficiently to effective animal charities to offset the harm done by your diet.6 (I’m guessing a couple hundred dollars a year would likely do the trick?)
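For what it’s worth, here is the rough shape of the calculation behind that guess, as a purely illustrative sketch: none of the figures below come from the post or from any charity evaluator, and you would want to replace them with real cost-effectiveness estimates before relying on the output.

```python
# Purely illustrative offset arithmetic. All three inputs are placeholder
# assumptions (NOT real estimates); substitute figures from a charity
# evaluator before treating the result as meaningful.

farmed_animal_years_caused = 20.0      # assumed: animal-years of factory farming a typical diet supports per year
dollars_per_animal_year_helped = 8.0   # assumed: cost for an effective charity to avert/improve one animal-year
safety_margin = 1.5                    # assumed: buffer for uncertainty in both figures above

annual_offset_donation = farmed_animal_years_caused * dollars_per_animal_year_helped * safety_margin

print(f"Illustrative annual offset: ${annual_offset_donation:.0f}")  # -> $240 with these placeholders
```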
Once you do that, and can reasonably expect your life impact to be in the green, I think you should feel good about your existence. But you needn’t stop there. I like The🔸10% Pledge because taking it almost guarantees that your life impact will be incredibly awesome, which is even better than “not villainous”! And it’s not even hard! But it’s your life—*shrug*—use your own judgment.
1. Compare: ought you to donate a kidney? Only in the sense that it would be a great thing to do. Not that it should be regarded as morally obligatory.
2. Sometimes this can result in quite implausible claims about what one would do in high-stakes circumstances: “Oh, I would absolutely throw myself in front of the trolley to save five, if I were the one on the footbridge.” Really? Remind me how many healthy kidneys you have right now?
3. Compare my previous discussion of “rationalist” conceptions of permissibility.
4. Probably true in some specific respects (e.g. social anxiety).
5. I’m really baffled by how many people hate their jobs and yet spend money very wastefully. Why not FIRE?
6. Or take other actions that are even better in expectation.
Comments

One of the main advantages of utilitarianism is that, beyond good and evil, it allows for all the intermediate shades of grey.
In my view there is a lot of room between “veganism” and welfare-indifferent omnivorism.
Holding a given level of animal-protein consumption fixed, you can displace meat with dairy (or free-range eggs), and you can also displace meat from species raised in CAFOs (pigs and chickens) with meat from ruminants fed on pasture (cows and sheep).
There is more to this than accepting “imperfection”: utilitarianism provides directions for continuous improvement.
Loved this article!! I strongly agree that anti-hypocrisy norms get in the way of a lot of positive moral change.
It’d be helpful if this phenomenon had a name (“anti-hypocrisy bias”? “cognitive-behavioral dissonance”?). I think the fact that we all know what “confirmation bias” is and can refer to it quickly helps us (even if only marginally) to resist it. Something similar would be helpful here.