Passivity norms and status quo bias
Many people endorse norms that presumptively oppose action in the face of uncertainty. They unreflectively assume that passivity is the “cautious”, risk-averse option. But this assumption isn’t necessarily true. In the face of significant status quo risk, doing nothing can easily be the more reckless option. (Imagine a train bearing down on you, or a child drowning in a pond. Doing nothing is not the safe option!) Of all my work in applied ethics, I’d especially like this conceptual point to be more widely appreciated.1 Presumptive passivity is not a good norm to uncritically or universally embrace; most people—philosophers included—need to do more to guard against such status quo bias.2
Passivity norms come in different forms. Some are especially concerned with advice. Many people feel that the moral risk of mistaken advice is greater than the moral risk of mistakenly failing to offer good advice. But there are many contexts in which we should doubt this. When thinking about the global poor, for example, what matters—their lives and well-being—is far more threatened by passivity than by action undertaken with their needs in mind. Alas, you’re far more likely to be blamed if you attempt something that backfires than if you do nothing at all. I think this mismatch (between what matters and how best to avoid blame) reflects poorly on our current social norms, and we should be more forgiving of well-intentioned efforts even when they prove counterproductive (so long as they weren’t blatantly unreasonable to begin with).
But my aim in this post isn’t to directly argue for more pro-active norms. Rather, I want to draw attention to a puzzle for those who actively argue for passivity.
Some people quietly endorse passivity norms. That seems perfectly coherent (even if I think it’s substantively mistaken).3 Others are more vocal in advising against giving risky advice. It’s this paradoxical behavior that I find most curious. I wonder why these people don’t more often appreciate the risks inherent in discouraging (possibly) good advice.
Advising against advice-giving
Exhibit A is this complaint from a philosopher:

Have philosophers considered not advising young college grads to alter their career plans so that they can maximize their earnings for the greater good? […]
I remain baffled at how people can be this confident in their ability to offer life advice to others.
For someone who claimed to be baffled by the phenomenon, this philosopher seemed very confident about trying to influence people in the opposite direction. They seemed not to even realize that there was anything risky about their position.4 Isn’t that odd? As I put it in my (2024) ‘Why Not Effective Altruism?’:
Consider that if you convince just one person not to take a course of action—such as earning to give—that would have led to their donating an extra $50k per year to GiveWell’s top charities, then you are causally responsible for approximately ten people’s deaths per year. That’s really bad!
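(To spell out the arithmetic behind that claim: the passage implicitly assumes a ballpark cost of roughly $5,000 per life saved for GiveWell’s top charities, so a forgone $50,000 per year in donations works out to about $50,000 ÷ $5,000 = 10 lives per year not saved.)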
Exhibit B: Leif Wenar’s recent Open Letter to Young EAs charges EA orgs (especially GiveWell) with a kind of combined epistemic hubris and moral recklessness:
The crucial-but-absent Socratic meta-question is, ‘Do I know enough about what I’m talking about to make recommendations that will be high stakes for other people’s lives?’
In that same document, Wenar makes recommendations (e.g., against trusting GiveWell’s research) that are high stakes for other people’s lives. But his arguments reveal that he doesn’t know what he’s talking about.5 So it’s all oddly self-undermining.
Contrasting Norms
How confident does one need to be in order to offer (hopefully) helpful advice?
At one extreme, there’s a kind of paralysis norm:
(Paralysis): Don’t offer advice unless you’re sure (or very close to sure) that it’s for the best.
But that doesn’t seem like a good norm. For one thing, it’s self-undermining, since you presumably can’t be sure that this is a good norm. So you can’t advise others to follow it. It also doesn’t seem that the above-quoted philosophers are actually following this norm themselves, since (as explained) they both offered advice that could very easily prove extremely harmful.
Alternatively, one could stick with standard norms of expected value:
(EV): Offer advice when doing so is positive in expectation.
EV seems generally reasonable to me (at least when tempered with commonsense heuristic limitations on what we should be willing to overturn on the basis of a rough calculation or intuition of instrumental value — so, no advising criminality, etc.).
Note that EV doesn’t require high confidence in any ordinary sense. Even in the face of significant uncertainty about actual consequences, we may still reasonably judge that it’s more positive in expectation to at least try to do good effectively than to not even try. (Importantly, anyone who shares that simple judgment should want to see more Effective Altruism in the world.)
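To make the shape of that judgment concrete, consider a toy calculation with purely illustrative numbers (not estimates of anything): suppose recommending earning to give has a 70% chance of prompting donations that save ten lives, and a 30% chance of backfiring in a way that costs the equivalent of two. The expected value is then 0.7 × 10 − 0.3 × 2 = 6.4 lives saved: positive in expectation, despite very real uncertainty about how any particular case turns out.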
Three Explanations
So why are many people so drawn to passivity norms for advice, when such norms seem so philosophically indefensible? I see three main possibilities.
(1) One way that advice may fail to be positive in expectation is if its quality falls below that of the advice otherwise available to the listener (including their own gut instincts), such that they would be better off listening to someone else (or just going with their gut). Now, in many everyday contexts, we either have established “expert authorities” to defer to (e.g. on medical matters), or else decisions are so subject to personal taste (e.g. who to date) that it makes little sense for outsiders to opine on the choice.
So one possibility is that the anti-advice position just stems from overgeneralizing from these other cases to contexts where passivity/reticence norms don’t apply so well. After all, sometimes we really may be in a position to identify improvements to typical behavior. Encouraging more (and more effective) charitable giving, including via earning to give, seems like a plausible candidate for being just such a case.
(2) Another possible explanation is that people are aiming to avoid blame rather than to secure good outcomes.6 But such a focus seems objectionably self-centered.
(3) Finally, one might attempt a principled justification in terms of the doing/allowing distinction. Perhaps (i) it’s just so much worse for people to be harmed as a result of our interventions than to die of preventable malaria as a result of our failure to intervene, that we should oppose philanthropic intervention even when clearly positive in expectation; and (ii) intervening to stop another’s intervention, resulting in harms like malaria not being prevented, still counts as an “allowing” rather than a “doing”.
I’m very skeptical that this combination of views is defensible in full generality. In order to avoid the paralysis argument, deontological constraints against harm have to be limited to relatively proximate, foreseeable, and broadly intentional harms. Not stuff like, “If you donate to a charity, that could result in bandits attacking the charity in order to steal the money, so you’d better just sit back and watch as kids die of preventable natural causes!” I just think it’s really clear that these kinds of downstream, agentially-mediated, unintended harms should not activate deontological constraints. If they did, then you literally could not permissibly perform any action, since anything you do has myriad long-term consequences of this broad sort. Sane deontology needs to be more restricted in scope, to just “harming as a means” or something of that (narrower) sort, which ordinary philanthropy clearly does not violate.
Conclusion
Passivity norms and status quo bias are both extremely widespread and yet rarely recognized. People will highlight “risks” from action, while implicitly neglecting far greater risks from inaction. They will worry a lot about saying the wrong thing, and not worry at all about failing to say the right thing even when it’s desperately needed. I think this is all terrible and I want it to change. But at a minimum, I hope I can help readers become more attuned to the inconsistency of those who vocally advocate against high-stakes vocalizations. That advocacy is itself a high-stakes act, and often not a very thoughtful one. In high-stakes cases especially, people should think harder about which option is really better! To help them, perhaps you can point out the “anti-advisory paradox” the next time you see it.
1. My (2022) paper ‘Pandemic Ethics and Status Quo Risk’ steps through many examples of authorities being conceptually confused about this, and describing reckless passivity as stemming from “an abundance of caution”. (They were being cautious of vaccine side-effects, but reckless of the virus. It makes no sense to describe this combination as “cautious” overall, if you agree—as these authorities did—that the latter was the greater risk.)
2. That was at the heart of my critique of Leif Wenar’s objections to GiveWell:
People are very prone to status-quo bias, and averse to salient harms. If you go out of your way to make harms from action extra-salient, while ignoring (far greater) harms from inaction, this will very predictably lead to worse decisions… Note that his “dearest test” does not involve vividly imagining your dearest ones suffering harm as a result of your inaction; only action. Wenar is here promoting a general approach to practical reasoning that is systematically biased (and predictably harmful as a result): a plain force for ill in the world.
3. Another philosopher once asked me, over dinner, why I wasn’t more worried about making a mistake in my public advocacy. I explained my views on status quo bias, and why—given the lopsided social pressures—I thought we should all be much more worried about mistakenly failing to advocate for good things. (Mistakes are inevitable, and if we couldn’t tolerate the thought of them we’d have to stunt our own agency, which—if we’re at least moderately competent to begin with—would be the greatest mistake of all. Of course, I certainly agree that we should be trying seriously to get things right—I don’t endorse epistemic recklessness.) They didn’t seem persuaded, but nor did they try to convince me to stop doing public philosophy. They were just curious.
4. It’s important to understand that debates over effective altruism are high stakes—literally life or death, for some of those affected. That makes it especially worth contributing to the debate, if you have a valuable contribution that could be expected to result in (overall) better decisions. But if you’re going to advocate for reduced life-saving aid (whether due to reduced donations or reduced focus on the effectiveness of one’s donations), that carries obvious moral risks that also ought to be taken seriously.
5. Two quick examples: (1) His first main criticism rests upon committing Parfit’s famous “1st mistake in moral mathematics” (assessing moral credit via a “share of the total” of all causal contributors, rather than recognizing that many people may simultaneously be counterfactually responsible for the full total). See section 3.3 of Parfit’s Ethics for a concise summary of the “five mistakes in moral mathematics.” Parfit is essential reading for anyone who hopes to think clearly about these topics! (2) Wenar also suggests, falsely, that GiveWell’s work has not been scrutinized by development economists. It’s all quite bizarre.
6. I think this plausibly explains a lot of status quo bias at the level of policy: the things that are mistakenly called morally “risky” are avoided because they would be politically risky. Policy-makers have systematically biased incentives, and their decisions reflect this. But the political risks in turn depend upon status quo bias on the part of the broader public, which requires some other explanation.
In some situations there is also an activity bias. For example, soccer goalies would rather dive one way or another even if staying in the middle would give them better odds. Politicians would rather change a policy so that they can take credit.
Whether a passivity bias or an activity bias dominates depends on incentives.
First, I want to say that I signed up not only because of the interesting content, but because of your clear and concise style, which makes this readable without a major commitment.
Now to the real point. As a physician, I have run into this issue in a very concrete way. People do ask for advice. My malpractice insurer has told me not to give it, even if I tell the person to make any decisions only with their own doctor. This comes up when a friend asks me about something, or when I am aware of a treatment option that the person I am speaking with is not. I am advised not to even mention it, since the listener may take it as the way to proceed; if they follow my suggestion and the case goes badly, I can be sued. I have told the insurer that if something exists that would likely be very helpful and the person is not aware of it, I cannot withhold the information, especially if we are close. They say they understand, but they hold to their stance.
Of course, most people do not have this particular problem. But I do think it generalizes: in our overly litigious society, people will hold us responsible for anything and everything that goes wrong, if they possibly can. Even if we are not thinking of lawsuits specifically, I think people are sensitive to the idea that getting involved can come back to hurt you.
Whatever the case, I am not allowed, on pain of not being covered if sued, to tell people simple facts like "there is a medicine for that; ask your doctor." If the situation is bad, I do it anyway.