Policy-makers face strong pressures not to perform risky actions. (They might also feel pressure to “do something”, but if the most effective responses involve risk, this just means that they will be pressured towards doing some worthless policy theater.)
California suffers megafires because “there are just so many reasons not to pick up the drip torch and start a prescribed burn even though it’s the safe, smart thing to do.” CA institutions implicitly treat the risks from prescribed burns as graver than those from wildfires, even though prescribed burns massively reduce overall fire risk. But because prescribed burns are intentional, they get treated as the “risky” option, subject to cumbersome regulations, while the far greater risks of inaction are neglected. We implicitly treat the status quo as risk-free, even when it isn’t. (Wouldn’t it be nice if wildfires had to apply for a permit before spreading?)
Or consider environmental regulations. As others have pointed out (I can’t find the link—comment if you have a good one), many such regulations were implemented at a time when the main danger to the environment came from new interventions further degrading the environmental status quo. But now the status quo itself contains the seeds of disaster, and our best shot at improving things is through new interventions (massively building up solar and other renewable energy, increasing urban density, etc.), which are often made more difficult by the very regulations that are supposed to protect the environment. The problem is that these regulations protect against harmful interventions, but neglect the even graver risk of harmfully failing to intervene when intervention is needed.
Pandemic ethics
Finally, consider Covid. Policy-makers disastrously opposed challenge trials to speed vaccine testing, on the grounds that deliberately exposing volunteers to Covid was “too risky”. Never mind the millions who were involuntarily exposed to Covid, without prior vaccine access, as a result of this decision. Too often, policymakers simply ignored the status quo risks of the unchecked pandemic by rejecting promising—and objectively risk-reducing—interventions as “too risky”. (Wouldn’t it be nice if the virus had to get ethics approval before spreading?)
Expanding on this theme, my paper Pandemic Ethics and Status Quo Risk, published in Public Health Ethics, makes the following points:
Medical regulations asymmetrically privilege inaction over action. This may be justified in ordinary circumstances if there’s a corresponding asymmetry of risk between action and inaction (say because the status quo is tolerably safe). But this justification transparently no longer applies in a pandemic—if anything, it is reversed: the status quo is so disastrous that we need to find ways to incentivize more and faster experimentation to discover pathways to a tolerable future.
Because of this risk reversal, institutions obstructing vaccine access out of “an abundance of caution” were in fact being objectively reckless, exposing their citizens to an increased level of (status quo) pandemic risk.
Opposition to human challenge trials involved anti-beneficent paternalism: forcing individuals, for their own good, not to promote the general good. This is deeply unreasonable, and indefensible on any sane moral view.
Much as prescribed burning can reduce the harm from wildfires, so variolation (low-dose controlled voluntary infection) with SARS-CoV-2 to provide early targeted immunity plausibly could have reduced the harm from the pandemic, and should have been seriously evaluated.
Obstructing vaccine access in hopes of assuaging “vaccine hesitancy” is morally problematic for at least three reasons:
(i) exaggerating vaccine risks (relative to status quo risks) threatens to validate and reinforce such hesitancy;
(ii) public institutions ought not to engage in strategic deception of the public, and the messaging around such obstructions amounts to moral deception;
(iii) it’s wrong to harm innocent people merely in order to convince fools not to harm themselves. Worse, vaccine obstructionist policies kill innocent people.

Doctors and scientists aren’t moral experts, and shouldn’t be blindly deferred to on policy matters. They may oppose challenge trials and other beneficial policies as ‘too risky’, not because they have a more accurate conception of what the risks actually are, but because they lack moral understanding of when risks of that magnitude can be justified. To ward off bias and ill-considered assumptions, policy-makers need to feed their empirical data through a robust process of cost-benefit analysis. It is not enough to one-sidedly consider potential costs of action, and then lazily assume that the status quo is automatically superior. (A toy sketch of such a two-sided comparison follows below.)
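To make that last methodological point concrete, here is a minimal toy sketch (mine, not from the paper) of what a two-sided cost-benefit comparison looks like. Every number below is a purely hypothetical placeholder chosen only to illustrate the structure of the calculation, not an estimate of any real risk; the key feature is simply that the status quo gets its own expected-harm term rather than being scored as zero by default.

```python
# Toy comparison: "approve a risky intervention" vs. "preserve the status quo".
# All numbers are purely illustrative placeholders, not real estimates.

def expected_harm(probability_of_harm: float, harm_if_realized: float) -> float:
    """Expected harm = probability of the bad outcome times its magnitude."""
    return probability_of_harm * harm_if_realized

# Hypothetical intervention (e.g. accelerating vaccines via challenge trials):
# a small chance of serious harm to volunteers.
intervention_harm = expected_harm(probability_of_harm=0.01, harm_if_realized=1_000)

# Hypothetical status quo (an unchecked pandemic while awaiting slower trials):
# near-certain, much larger ongoing harm.
status_quo_harm = expected_harm(probability_of_harm=0.95, harm_if_realized=100_000)

# A one-sided analysis counts only intervention_harm and treats the status quo
# as costless. A two-sided analysis compares both terms.
if intervention_harm < status_quo_harm:
    print("Two-sided analysis: intervene (lower expected harm).")
else:
    print("Two-sided analysis: hold off (status quo has lower expected harm).")
```

The one-sided reasoning criticized above amounts to comparing the intervention’s expected harm against an implicit zero; making the status quo’s expected harm explicit is what can flip the verdict.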
You can read the whole thing here. (And some nice discussion of it in the public media, over at UnHerd.)
What are some other examples of neglected status quo risks? Comments welcome.
This reminds me of this paper: https://www.nber.org/papers/w22487
People facing a major life decision who were told by a coin flip to make the change ended up happier than those told by the coin flip to stick with the status quo.