Sometimes, stakes are high. Plausible examples include:

- Climate change could be very bad (and very likely will be significantly bad—well worth mitigating, even on relatively optimistic forecasts).
- As longtermists rightly point out, astronomical opportunity costs would make human extinction the worst thing ever, so it’s very well worth investing in reasonable precautions regarding AI, biosecurity, nuclear diplomacy, etc. (More positively, it’s also extremely worthwhile to pursue forms of enduring progress in science, ethics, institutions, etc., which could robustly be expected to improve the long-term future.)
- A second Trump term could severely undermine American democracy (and would very likely bring significant further erosion of valuable democratic norms).
Sometimes, people who are politically opposed to a cause will object that it is “dangerous” to say that it is important. “If it’s so important,” they argue, “someone could use that to justify atrocities.” (One must always maintain that nothing much matters, apparently, lest some excitable crazy person get the wrong idea.)
Of course, no-one makes this “abusability” argument consistently, applying it equally to causes they and their allies think are important. (An orthodox leftist might apply it against longtermism, for example, but never against climate change or anti-fascism.) To better discourage this bad reasoning, maybe it would help to give it a clear label:
The Convenient Stakes Fallacy: when one implies that nothing can be important, simply because perceived high stakes raise the risk of desperate/harmful actions.
Thinking about Justification
The basic problem here is that:
(1) In principle, sufficiently high stakes could justify imposing smaller harms (e.g. killing one person), all else equal. We should all prefer smaller harms over greater ones, if those really are the options.
(2) In practice, we don’t really know the options. We certainly can’t just assume that “all else is equal”, or that criminal acts of violence or fraud will have no further repercussions or unintended consequences.
My own view, as explained in Naïve Instrumentalism vs Principled Proceduralism (as well as the other links, above), is that we should be strongly disposed to expect co-operative, high-integrity behavior to have better long-term results than unilateral defections and criminal acts motivated on “naive instrumentalist” grounds. The fact that we can easily imagine a violent act being “justified” if we imagine it all working out for the best is no reason at all to think that such an act would actually work out for the best. A robustly unfavorable expectation is the most clearly reasonable basis for judging naive instrumentalist rule-breaking to be morally wrong.
Moral Norms Can’t Rely on Low Stakes
The deontologist’s alternative answer—that rights violations are just intrinsically wrong—seems very non-robust in light of the fact that nobody is an absolutist and sometimes stakes are high. It also seems patently unreasonable to value, say, Donald Trump’s right to life more highly than American democracy. The obviously more reasonable grounds for being opposed to political violence is that (you reasonably expect that) political violence does not help the democratic cause. (This latter view does not require especially high confidence about particular contested questions. Obviously there’s huge uncertainty here. You just need to have the sense that the shift in probabilities is overall for the worse. And that is very much my sense.)
Similarly, as I wrote in Astronomical Cake, “stealing to give” seems like an awfully counterproductive strategy for supporting a good cause, once you take into account secondary effects like reputational costs. I could add, against the purely deontological account: it just seems patently unreasonable to value property rights more than the survival of humanity. (The leftist professors objecting to utilitarianism in the wake of SBF’s fraud must not have realized how easily they could be portrayed as neoliberal villains.) There’s no disputing the ultimate values when the stakes get this high. The thing to question is whether disreputable means will actually help (in expectation).
My sense is that many people don’t like to rely on probabilities in the way that my consequentialist account does. I’ll say, “Stealing doesn’t pay,” meaning it as a generic. Pedantic philosophers will reply, “Sometimes, stealing does pay,” revealing a very basic misunderstanding. (Compare: “Seatbelts save lives!” Pedantic philosophers: “Sometimes, seatbelts don’t save lives.” Clever, huh. Now, are you buckling in your kid or not?)
It can be comforting to cloak yourself in false certainty: “we can just know, by the intrinsic nature of the act itself, that it was overall bad and not worth doing!” But you can’t really know any such thing in advance. (Even if, per impossibile, absolutist deontology were true, it is so far from being self-evident that nobody could reasonably believe it with certainty. So some more stakes-sensitive view is going to exert influence once you take moral uncertainty into account.) What I think you can know, or at least very reasonably believe, is that an act-type is generally bad news. And that’s enough to not welcome new instances of that act-type, and to embrace practical norms that oppose them. Maybe some lucky instance will turn out for the best. You can quietly hope for that possibility. But this bare possibility doesn’t make it advisable to welcome more marginal acts of the generally-bad sort, if you’ve no special reason to doubt that they would, as generally expected, turn out badly.
So, yeah, the consequentialist story can seem messy and awkwardly tentative, compared to the “moral clarity” of a fiction. But that’s just a reflection of the messiness of the world, and the tentative (though still, I think, tolerably clear) nature of the expectations we can reasonably form about it. We should not expect reasonable ethical judgments to be completely unhinged from reasonable empirical expectations about what would, or would not, actually be for the best.
While I agree, I think it is a response to a genuine and persistent bias in our thinking, and even in how we model action. And while you are correct that it is technically false, I worry that it's necessary given the complexity of the true reasons to avoid extreme action.
We tend to think about and model people by idealizing away psychological constraints and asking what possible choices they could make. Of course, any model has to idealize potentially deterministic physical reality into a set of choices, and talking about what's physically possible for normal humans is a useful way to make this distinction. [1]
And while that's useful, it tends to create a persistent bias toward ignoring important psychological constraints: be it those that prevent a doctor from undetectably killing one patient to save ten, or those that prevent someone from harnessing the backlash effect of a political assassination by staging a false-flag operation (if assassination isn't useful because of backlash, then in theory a false-flag assassination should be).
The problem is that it's not really psychologically possible (or at least likely) for someone to engage in those sorts of acts without being hyped up in a way that makes the motives for the act relatively transparent and/or makes discovery likely. Suicide bombers have all sorts of religious encouragement, and even eco-terrorists require a support structure that assures them they are acting correctly.
But these tend not to be the kinds of explanations people find compelling -- in part because it is really kinda hard not to imagine yourself as a free actor when contemplating future choices.
---
1: It's a bit hard to define this in a principled fashion (it's not quite all possible commands the brain could give the body, because it doesn't include things like a superhuman ability to ignore crippling pain), but it's a natural idealization to make: we have a relatively clear shared concept of what is involved in normal cases, and it doesn't suffer from the kind of diagonalization worries that arise when you consider psychological constraints.
After all, one of the goals of this kind of modeling is to persuade people or guide action, and the danger in trying to identify psychological constraints is that they may not be compatible with the subject's knowledge that the constraint is being used to model their potential behavior (or, more accurately, constraints that only apply when someone doesn't know they are being used to model them aren't that useful).
> The obviously more reasonable grounds for being opposed to political violence is that (you reasonably expect that) political violence does not help the democratic cause.
Why can't we arrive here via:
Would we want unlawful killing without due process rights to become a universal law? I might be next; the people I think of as the "good guys" might be next; etc.
No matter the stakes.