The idea that one should strictly ignore tiny probabilities (no matter the stakes) is both extremely widespread and demonstrably false. (Perhaps it is motivated by the thought that we shouldn’t be vulnerable to Pascal’s muggings. But better solutions to that worry are available.)
The Proof
Many people think you should simply ignore—round down to zero—probabilities smaller than, say, 0.00000001 (or one in 100 million). To see why this is false, suppose that:
(i) a killer asteroid is on track to wipe out all life on Earth, and
(ii) a billion distinct moderately-costly actions could each independently reduce the risk of extinction by 1 in a billion.
Now, it would clearly be well worth saving the world at just moderate cost to a billion individuals. And, since each of the relevant actions is independent of the others, if all billion are worth performing then any one of them is equally worthwhile no matter how many others are performed. So it is extremely worthwhile to perform one of the risk-mitigating actions oneself, despite the moderate cost and the tiny probability that one's act makes a difference.
We can generalize the case to make the probability arbitrarily small, so long as the stakes are correspondingly increased. For arbitrarily large X, suppose that a population of X individuals is threatened with extinction, and each individual has the opportunity to independently reduce the risk of mass extinction by 1/X. This action is clearly worth taking. So it is demonstrably false that probabilities as tiny as 1/X can be categorically ignored, no matter what number you put in for X.
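The arithmetic behind this generalization is worth making explicit. The sketch below uses a made-up helper (`expected_lives_saved`) and illustrative numbers only: however large X gets, a 1/X chance of saving X lives has the same expected value, so the smallness of the probability alone can't justify rounding it to zero.

```python
def expected_lives_saved(population: int) -> float:
    """Expected lives saved by one action in the generalized asteroid case:
    a 1/X chance of sparing a population of X."""
    risk_reduction = 1 / population  # each action cuts extinction risk by 1/X
    return risk_reduction * population

# The expected value per action stays fixed at one life, no matter how
# arbitrarily tiny the probability of making a difference (1/X) becomes.
for x in (10**9, 10**12, 10**15):
    assert abs(expected_lives_saved(x) - 1.0) < 1e-9
```

The point of the invariance is just that "ignore any probability below 1/X" delivers the wrong verdict here for every choice of X.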
What about Pascalian muggers?
Pascal’s mugger threatens to torture some unimaginably huge number of sentient beings in another galaxy, unless you hand over your wallet. Seems pretty unlikely that he’s telling the truth. But, the worry goes, you should give it some non-zero probability (however tiny — call it 1/Y), and then he simply needs to add that there are more than Y lives at stake in order for it to be “worth” handing over your wallet, in terms of expected value. Since it would actually be clearly irrational to hand over your wallet on the basis of such a ludicrous claim, the critic concludes that we shouldn’t be moved by expectational reasoning based on small probabilities.
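The mugger's exploit can be sketched in a few lines. All the numbers below (`WALLET_VALUE`, the floor credence `1/Y`, the per-life value) are hypothetical illustrations, not anything from the post: the point is only that a fixed credence floor lets the mugger name stakes large enough to flip the expected-value calculation.

```python
WALLET_VALUE = 100.0   # hypothetical: what losing your wallet costs you
Y = 10**12             # hypothetical: your floor credence in his claim is 1/Y
LIFE_VALUE = 1.0       # hypothetical: value units per life at stake

def ev_of_complying(claimed_lives: float) -> float:
    # Lives saved in expectation if he's truthful, minus the certain
    # loss of the wallet.
    return (1 / Y) * claimed_lives * LIFE_VALUE - WALLET_VALUE

# With stakes merely equal to Y, complying loses in expectation...
assert ev_of_complying(Y) < 0
# ...but the mugger need only inflate his claim past Y * WALLET_VALUE:
assert ev_of_complying(2 * Y * WALLET_VALUE) > 0
```

This is why holding the credence fixed while the claimed stakes grow without bound seems to force the absurd conclusion.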
This case is very different from the asteroid case we previously considered. A couple of notable differences:
(1) The asteroid case involved objective chances, rather than made-up subjective credences.
(2) In the asteroid case, each act made a definite difference to the resulting objective probability, with vastly different outcomes guaranteed to occur if no-one acts vs if all X people do so. In the mugger case, it’s not true that Y people handing over their wallets would be guaranteed to do any good. So your one act doesn’t even make a definite probabilistic difference. It’s a different (more epistemic) sort of gamble.
Either of these features (or some third possibility) could explain what’s wrong with Pascal’s mugging, without implying that you can generally (i.e. even in our asteroid case) ignore tiny probabilities.
For what it’s worth, my diagnosis would be that the problem with Pascal’s mugging lies in epistemology rather than decision theory. If you grant even 1/Y credence to the mugger’s claims to influence Y lives, you’re being duped. However large a population Y he claims to affect, your credence should be some vastly smaller 1/Z. If he then claims to affect this larger number Z instead, your credence in that should be some vastly smaller 1/Z*. Whenever complying with the mugger would be irrational, you shouldn’t accept credences that would rationalize compliance.
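This diagnosis can be illustrated numerically. The decay rule below is a hypothetical stand-in (credence falling as 1/N² in the claimed stakes N), chosen only to show the structure: if credence shrinks faster than the claimed stakes grow, the expected stakes of complying stay bounded and escalating the threat never helps the mugger.

```python
def credence(claimed_lives: float) -> float:
    # Hypothetical decay rule: credence falls super-linearly in the
    # claimed stakes, as the diagnosis above requires (1/Z << 1/Y, etc.).
    return 1 / claimed_lives**2

def expected_lives_at_stake(claimed_lives: float) -> float:
    # Equals 1 / claimed_lives under the rule above: it shrinks as the
    # mugger's claim inflates.
    return credence(claimed_lives) * claimed_lives

# Multiplying the threat makes complying *less* attractive, not more.
assert expected_lives_at_stake(10**9) < expected_lives_at_stake(10**6)
assert expected_lives_at_stake(10**12) < expected_lives_at_stake(10**9)
```

Any decay rule faster than linear in the claimed stakes would do the same work; nothing hangs on the particular exponent.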
It’s probably a reasonable heuristic to just treat the mugger’s baseless claims as if they have literally zero chance of being true. But it doesn’t follow from this that you can similarly “round down” other small probabilities. It’s especially clear that we should not “round down” small objective chances, or probabilities that are well-grounded in our knowledge of the situation.
Conclusion
You should ignore Pascal’s mugger. You shouldn’t ignore all tiny probabilities. Whether a tiny risk should be ignored or not cannot be determined solely on the basis of how small a number it is. It also depends on whether the assigned probability is robust and well-grounded (e.g. in objective chances) or is a subjectively made-up number that could easily be off by orders of magnitude.
That’s not to say that you should ignore all subjective made-up numbers, either. Yes, we can be very confident that the Mugger was lying. But we can’t always be so confident about our situation. As I argue in X-Risk Agnosticism, it seems most reasonable to assign non-trivial subjective credence to things like AI risk. There are, after all, genuine reasons for concern there, even if it’s hard to know how to quantify it and there’s also a pretty good chance that there will turn out to be no problem after all. Given such uncertainty, moderate precautionary measures seem prudent.
The upshot, then, is that we should take seriously risks of great harm that are either (i) non-trivial in subjective likelihood, supported by genuine—non-ludicrous—reasons for concern, or (ii) objectively well-grounded, no matter how small (“trivial”) the probability in question.
We should only dismiss high-stakes risks (where the immense value at stake seems proportionate to the low chance) when the probability is both (i) trivially tiny, and (ii) not robust or objectively well-grounded, so that there is reason to expect that a vastly lower probability is actually warranted.
I have doubts about the asteroid example working against mugging.
"(1) The asteroid case involved objective chances, rather than made-up subjective credences."
In what real-life situations do we have access to "objective chances"? Never; we don't observe some platonic realm of chances. We might think that some subjective credences are better grounded than others, but in the real world that's all we have.
The whole concept of EV is kind of subjective: we only observe what happens, not parallel worlds or whatever.
I agree with the suggestion that our judgment about the *rationality* of acceding to the mugger's demand is more secure than our judgment about the *likelihood* of his carrying through with his threat. But I don't think this is enough to escape the problem that arises from the fact that the mugger can multiply the threat. Because he can multiply the threat, we have to ask ourselves: "Supposing that it was initially irrational for me to accede to the threat, would it still be irrational if the threat was multiplied by [arbitrarily high number]?" And I don't think *this* question prompts a secure negative judgment. On the face of it, a low expected utility can always be multiplied into a high expected utility. So I don't think we can escape the problem just by relying on our secure judgments about rationality.
I wonder what you think about a different way of escaping the problem. The way I think of it, when the mugger confronts you, there are at least three possible situations you might be in:
Normal: The mugger is just lying.
Demon Mugger: The mugger is actually a demon/wizard/god/whatever capable of carrying through on his threat, and will do so.
Demonic Test: The situation with the mugger is a test set up by an evil demon/wizard/god/whatever, and if you accede to the mugger's threat, the demon/wizard/god will do whatever the mugger threatened to do.
Demon Mugger and Demonic Test are both unbelievably unlikely, and more to the point, neither of them seems any more likely than the other. So they cancel each other out in the decision calculus. And while the mugger can keep increasing his threat, for every such threat there's an equal and opposite Demonic Test. So we can ignore any crazy threat the mugger might make (unless and until he gives some evidence that these threats should be taken more seriously than the corresponding Demonic Test scenarios!)