22 Comments

I discuss the same asteroid case at the end of this GPI working paper (p. 24): https://globalprioritiesinstitute.org/tiny-probabilities-and-the-value-of-the-far-future-petra-kosonen/ Probability discounters might respond with something like "collective difference-making": One ought to take into account the choices of other people and consider whether the collective has a non-negligible probability of making a difference.

author

Thanks for the reference!

I think the dialectic is similar to Parfit's famous objections to anti-aggregation:

https://www.philosophyetc.net/2012/10/parfit-on-aggregation-and-iteration.html

While it's true that the targeted view might secure the right verdict in this case by appeal to something like collective difference-making, the larger point is that the *motivation* for their view is decisively undermined once we see that it is clearly "worth it" to perform the individual action simply on the grounds that:

(i) it is worth it for everyone to do the action in question;

(ii) the value of each act is equal and independent, and does not depend upon how many others act likewise; and

(iii) if a set of acts is jointly worth doing, and the acts are of equal and independent value, then each is individually worth doing.


I have doubts about the asteroid example working against mugging.

"(1) The asteroid case involved objective chances, rather than made-up subjective credences.

In what real-life situations do we have access to "objective chances"? Never; we don't observe some Platonic realm of chances. We might think that some subjective credences are better grounded than others, but in the real world that's all we have.

The whole concept of EV is kind of subjective: we only observe what happens, not parallel worlds or whatever.

author

You can take "objective chances" to just be "sufficiently well-grounded subjective chances". E.g., assigning a 50% chance to a coin flip is very different from (and more robustly justified than) assigning some arbitrary 1/Y chance to the wild claims of Pascal's mugger.

Real-life cases that are parallel in structure to the asteroid case come up all the time when thinking about "collective harm"-type cases. Voting in a close election is another example: in a population of X voters, the chance of casting a decisive vote is typically on the order of 1/X, while the outcome affects at least X people. So whenever the total benefits of the better candidate winning are sufficient to justify the time costs of up to X voters, it will generally be worth it for any individual to vote (for the better candidate), no matter how high X is, and hence no matter how small a chance (1/X) their vote has of "making a difference".
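
To put rough numbers on this, here's a minimal sketch of the voting arithmetic; the electorate size, benefit, and cost figures are all hypothetical:

```python
# Toy model of the voting case: one vote has ~1/X chance of being
# decisive, but the outcome affects all X people. Figures are made up.

X = 10_000_000             # electorate size (assumed)
benefit_per_person = 100   # per-person value of the better outcome (assumed units)
cost_of_voting = 50        # individual time cost of voting (same units)

p_decisive = 1 / X                       # chance of casting the decisive vote
total_benefit = X * benefit_per_person   # benefit summed across the population

ev_of_voting = p_decisive * total_benefit  # = benefit_per_person, independent of X
print(ev_of_voting > cost_of_voting)       # True: worth it, however large X is
```

The 1/X chance and the X beneficiaries cancel, which is why the size of the electorate drops out of the calculation.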


My objection to this type of reasoning is that it seems very motivated: like we're just looking for any principle that lets us reject the mugger (and the wager). But why does this reasoning work?

Imagine P(A) is very uncertain and very low, so we pretend it's zero. But obviously it's not zero in the Bayesian sense, because that would mean we could never become more confident in A regardless of the evidence. So there's a split between the probability we actually believe and the probability we use for expected value calculations. That seems really bad: we're using numbers we know are fake in our EV calculations.

It also seems strange to sort probabilities into a binary of well-grounded or not well-grounded for EV purposes. Finally, what do we do about highly speculative but high probabilities: do we round up to 100%? Imagine someone said, "I'm very confident that this claim is true, with P(A) = 99%+, but I haven't thought about it much, so it could change a lot." This implies that the probability of not-A is highly speculative and near zero... what do we do?

author

Reflective equilibrium! I like Dan G.'s comment on this, from the public fb thread: https://www.facebook.com/richard.chappell/posts/pfbid02GX3CwyAsEhqDtzZbKdaeqz9rjYoq7JZy6uftrQnGMkv1854m7STj5c43Bwq8CR8El?comment_id=1047513809765705

"In the mugging, i think we have a better grip on the facts about rational decision than on priors for how likely it is the mugger can benefit arbitrary large groups of people. But there's nothing wrong with working backwards to figure out sensible priors on the basis of one's firmer grip on facts about the utility of outcomes and facts about the rationality of decisions."

I don't think that anything general can be said about "highly speculative but high probabilities". It depends on the content of the belief, and whether the "speculative" nature of it suggests that the ideal credence is likely to be significantly higher, lower, or what. I'm not suggesting that you can ignore *all* speculative low probabilities. I'm merely suggesting that being speculative and extremely low are *necessary conditions* for such a move. But the final verdict will always depend on the details.


Here's another reason why I think poorly grounded but low probabilities can (at least generally) be ignored.

Imagine I think the probability is 10^-10, but I'm highly uncertain. There should be some lower value that I'm fairly certain the real probability is at least equal to. Maybe it's 10^-20: then I could use this lower value for EV calculations. This seems much better than rounding to literal zero.
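
A minimal sketch of how that proposal differs from rounding to zero, with made-up numbers:

```python
# Instead of rounding a speculative low probability to exactly zero,
# substitute a conservative lower bound that you're fairly confident
# the true probability is at least equal to. All numbers are made up.

p_estimate = 1e-10   # best guess, but highly uncertain
p_floor = 1e-20      # fairly confident the true probability is at least this

modest_payoff = 1e12
huge_payoff = 1e30

print(p_floor * modest_payoff)  # 1e-08: negligible, so ignorable in practice
print(p_floor * huge_payoff)    # 1e+10: unlike literal zero, a big enough
                                # payoff can still register in the EV
```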

author

Suppose the mugger claims that they can realize *any* amount of value that they choose. What probability should you assign to the truth of this claim? I suggest it should be closer to zero than to any other number you can express.


I would probably assign the same value I do to other very implausible religious claims.

Here's an example of rounding down to zero leading to bad EV results. Imagine your eccentric uncle dies. Before he died, he gave a sealed box to John and to a trillion other people. He told you that he flipped a coin: if it was heads, he put his entire $1 million fortune in John's box; if tails, he put it in someone else's box. You assume that the probability it's in John's box is 0.5, and the probability for each other box is 1 in 2 trillion. If it was tails, you're not sure how he chose which box to put it in. If you could talk to his widow, you might gain valuable information: perhaps he put it in the box of a neighbor. The probability for each box other than John's is low and very uncertain. You round down the probability for each other box to zero. Since probabilities add up to 1, you're confident that John's box has the million dollars. You offer to buy it from him for $950K. But that's absurd.
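
To make the bookkeeping explicit, a quick sketch following the numbers in the example:

```python
# The uncle's boxes: rounding tiny probabilities to zero breaks coherence.

n_other_boxes = 10**12               # a trillion other recipients
p_john = 0.5                         # heads: the fortune is in John's box
p_each_other = 0.5 / n_other_boxes   # tails: 1 in 2 trillion per other box

# The honest distribution sums to 1 (up to floating point).
print(p_john + n_other_boxes * p_each_other)  # 1.0

fortune = 1_000_000
print(p_john * fortune)  # 500000.0: the correct EV of John's box

# Rounding every other box down to zero forces P(John's box) up to 1,
# making a $950K offer for a coin-flip at $1M look like a sure profit.
print(1.0 * fortune)     # 1000000.0: the absurd post-rounding valuation
```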

Dec 10, 2023 · Liked by Richard Y Chappell

Is your intuition that I have the same reason to (i) make my sacrifice leading to a 1 in a billion decrease in risk if I can reasonably predict that all others will do the same, as I do to (ii) make my sacrifice leading to a 1 in a billion decrease in risk if I can reasonably predict that no one else will do their share? This doesn't seem obvious to me.

author

I don't need anything so strong. Just that there is *a* sufficient reason in case (i) that also applies in case (ii).

For example, you might think that there are extra non-consequentialist reasons to "do your part in a valuable collective effort" that apply only in the first case. That's fine. The crucial point is just that the case for acting in (i) does not strictly *depend* upon any such non-consequentialist reasons.

You can bracket everything else, and the sheer *ex ante difference-making impact* of each person's action in case (i) is clearly worth it. And this particular reason is, by independence, exactly the same in case (ii). So there is a sufficient reason that makes acting in case (ii) clearly worth it.


Sure, but it seems like once you concede the presence of other factors at work driving the intuition that you ought to help, then the argument from intuition looks weaker--I don't know if my intuition that I should help in your case (a very strong one) is responsive to the presence of collective action issues, or to separate reasons that I have as an individual to make a moderate sacrifice that will very slightly reduce the risk of extinction. Once I try to imagine a case where I am the only one in a position to make a moderate sacrifice to very slightly reduce the risk of extinction, and no one else's actions will affect this risk, I no longer have strong judgments about the case one way or the other.

So I certainly don't dispute that there could be all-things-considered reasons to make sacrifices that would be responsive to arbitrarily small chances of great harms, independent of collective action questions, but I'm not sure your case establishes as much? Maybe I'm misunderstanding.

author

In one-person cases, it is much less intuitively transparent how the costs, odds, and potential benefits all compare. We can calculate how a utilitarian would value the prospects. But it isn't immediately obvious that we must share this evaluation of the ex ante prospects in question. That's why we need to shift to an intuitively clearer case.

In my case, the details are more transparently accessible to our intuition, since we can simply (1) assess whether the total benefit outweighs the total cost (as it clearly does), (2) conclude that the ex ante prospect of performing ALL the acts, given their total costs and benefits, must be evaluated positively on net -- i.e., the acts are collectively "worth it" in purely welfarist, difference-making terms; (3) distribute this prospective evaluation equally across EACH act that equally and independently contributes to the whole, and hence (4) conclude that each act, individually, is also "worth it" on purely welfarist grounds (offering an ex ante prospect that we must evaluate positively on net).
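
To make steps (1)-(4) concrete, here's a minimal numerical sketch; the population X, total benefit B, and per-person cost c are all hypothetical:

```python
# Steps (1)-(4) with numbers: X people can each independently reduce
# extinction risk by 1/X, at a personal cost of c. Figures are made up.

X = 10**9   # number of people who can act
B = 10**12  # total value of averting the catastrophe (assumed units)
c = 100     # cost to each individual of acting (same units)

# (1)-(2): collectively worth it iff total benefit exceeds total cost.
print(B > X * c)  # True

# (3)-(4): each act independently contributes an expected benefit of B/X,
# so it is individually worth it under exactly the same condition.
print(B / X > c)  # True: B/X > c is just B > X*c, rearranged
```

The point of the sketch is that the collective condition and the individual condition are the same inequality, so the individual verdict cannot come apart from the collective one.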

None of this reasoning depends on the absence of other reasons, since I'm not appealing to some vague intuition that you "should help". Rather, I'm appealing to the specific intuitions that (i) the acts are collectively "worth it" (i.e., offer a net positive ex ante prospect) on purely welfarist grounds, and (ii) ex ante prospects of collections of acts must cohere with the ex ante prospects of the individual acts that constitute the collection.


I thought the whole point of attributing moral significance to collective action was that, for some degree of this significance, it might be (i) obligatory for each member of a collective to make a sacrifice that has an independent chance of reducing a small risk of extinction, when and because every person is doing their part to reduce the risk, but (ii) permissible for each member of a collective not to make a sacrifice that has an independent chance of reducing a small risk of extinction, when and because a significantly large number of people are not doing their part to reduce the risk. If collectivity is significant in this way, it wouldn't follow from your examples that every individual ought to act to reduce very small extinction risks just because they ought to act as part of a collective where everyone reduces these risks.

What am I missing?

author

You're thinking about deontic status instead of rational choice (specifically, evaluating ex ante prospects). My argument is about the latter, not the former.

Dec 11, 2023 · Liked by Richard Y Chappell

Ah okay, maybe I misunderstood what you meant when you said that the opportunity to independently reduce the risk of mass extinction by 1/X is "clearly worth taking." I understood this to mean that you thought it would be wrong for individuals not to take these opportunities.

Nov 23, 2023 · Liked by Richard Y Chappell

I agree with the suggestion that our judgment about the *rationality* of acceding to the mugger's demand is more secure than our judgment about the *likelihood* of his carrying through with his threat. But I don't think this is enough to escape the problem that arises from the fact that the mugger can multiply the threat. Because he can multiply the threat, we have to ask ourselves: "Supposing that it was initially irrational for me to accede to the threat, would it still be irrational if the threat was multiplied by [arbitrarily high number]?" And I don't think *this* question prompts a secure negative judgment. On the face of it, a low expected utility can always be multiplied into a high expected utility. So I don't think we can escape the problem just by relying on our secure judgments about rationality.

I wonder what you think about a different way of escaping the problem. The way I think of it, when the mugger confronts you, there are at least three possible situations you might be in:

Normal: The mugger is just lying.

Demon Mugger: The mugger is actually a demon/wizard/god/whatever capable of carrying through on his threat, and will do so.

Demonic Test: The situation with the mugger is a test set up by an evil demon/wizard/god/whatever, and if you accede to the mugger's threat, the demon/wizard/god will do whatever the mugger threatened to do.

Demon Mugger and Demonic Test are both unbelievably unlikely, and more to the point, neither of them seems any more likely than the other. So they cancel each other out in the decision calculus. And while the mugger can keep increasing his threat, for every such threat there's an equal and opposite Demonic Test. So we can ignore any crazy threat the mugger might make (unless and until he gives some evidence that these threats should be taken more seriously than the corresponding Demonic Test scenarios!)
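
Here's the cancellation idea as a quick sketch; the credence q and harm V are arbitrary placeholders:

```python
# If Demon Mugger and Demonic Test get equal credence q, their expected
# value contributions are mirror images and drop out of the comparison.

q = 1e-30   # credence in each exotic scenario (arbitrary placeholder)
V = 1e15    # magnitude of the threatened harm (arbitrary units)

# Demon Mugger punishes refusal; Demonic Test punishes compliance.
ev_comply = q * 0 + q * (-V)   # spared by the mugger, punished by the test
ev_refuse = q * (-V) + q * 0   # punished by the mugger, spared by the test

print(ev_comply == ev_refuse)  # True: the exotic scenarios wash out, leaving
                               # only the Normal case, where paying is a loss
```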

author
Nov 23, 2023 · edited Nov 23, 2023

Demonic Test strikes me as even more unlikely. However unlikely it is that the mugger has magical demonic powers, the Test hypothesis requires the *additional* implausibility of their lying about their plans for no obvious reason. This is a small difference in plausibility compared to the immense implausibility of having demonic powers in the first place. But even a small difference in plausibility prevents a "cancelling out" story from working.

(This differs from Pascal's Wager, where I think it is substantively more plausible that an all-powerful God would reward honest inquiry than that one would jealously punish reasonable non-believers.)

> "the problem that arises from the fact that the mugger can multiply the threat..."

I guess I just have different intuitions from you here. No matter how ludicrously high a number of people the mugger claims to affect, it doesn't seem rational to me to grant them proportionate credence (because it definitely doesn't seem rational to assign high EV to complying).

> "On the face of it, a low expected utility can always be multiplied into a high expected utility"

That sort of abstract theoretical claim strikes me as much less trustworthy. You might retain it by assigning lower credence the more people the mugger claims to affect. But if you need to give a constant credence at some point (e.g. to the generic claim that the mugger can affect *any* number of lives), then I think your credence ought to be lower than 1/N for any N whatsoever. Maybe it ought to be literally zero.

(Some zero probability events are, in some sense, more likely than others. Like, God's randomly picking the number '1' from amongst all the natural numbers, vs. randomly picking either '1' or '2'. Both have p = 0, but the former is even less likely. Demonic Test and Demonic Mugger might be like that.)


> Demonic Test strikes me as even more unlikely. However unlikely it is that the mugger has magical demonic powers, the Test hypothesis requires the *additional* implausibility of their lying about their plans for no obvious reason.

I'm actually not sure we do have any rational grounds for thinking that Demonic Test is more unlikely. It's not as if Demonic Test includes everything that is implausible about Demonic Mugger, *plus* an additional implausibility. The implausibilities are just different. In Demonic Mugger we have to ask "why might it be that this mugger has crazy evil powers, and will punish me for refusing his threat?" (Maybe he somehow can't use his powers to get the money himself, and he has to carry out his threats in order to guarantee future compliance. Absurdly unlikely, but perhaps!) In Demonic Test we have to ask, "why might it be that this mugger has crazy evil powers, and will punish me for complying with his threat"? (Maybe he hates people who are weak and spineless, or who are irrational from the standpoint of decision theory. Again absurdly unlikely, but perhaps!) The possibilities we are considering are simply different. The Demonic Test scenarios aren't just (Demonic Mugger + something else). It's hard to say anything about their likelihoods, other than they are all absurdly unlikely. So it's not clear to me that one is more likely than the other.

> "That sort of abstract theoretical claim strikes me as much less trustworthy."

Does it seem more trustworthy when we assume that the low expected utility is finitely small? I was assuming that your credence in the mugger's threat is finitely small, because I think weird things happen if it's infinitesimal. E.g., if your credence is infinitesimal, then for you nothing will count as evidence in favor of the mugger's threat (I think?). Could definitely be mistaken about this! Anyway, thanks for the response.

author

> "It's hard to say anything about their likelihoods, other than they are all absurdly unlikely. So it's not clear to me that one is more likely than the other."

Yeah, I agree it's not clear. But it's especially not clear that they are precisely equal in likelihood. And if Test is even *slightly* more unlikely then the cancelling out move fails.

> "Does it seem more trustworthy when we assume that the low expected utility is finitely small?"

Probably, but I don't think we should assume that the mugger's claim warrants finite credence. I'm dubious of the Bayesian claim that nothing can count as evidence for zero probability events.

Simple counterexample: suppose God runs a fair lottery over the natural numbers. You should assign p = 0 to the number 1 being picked (any positive real credence would be too large). Then God tells you that the winning number was, in fact, 1. You should now assign this much higher credence. It's difficult to model this mathematically. But it seems clearly correct nonetheless.
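
The standard reason no positive real credence is available here, assuming countable additivity:

```latex
\text{If } P(\text{winner} = n) = p > 0 \text{ for every } n \in \mathbb{N},
\text{ then } \sum_{n=1}^{\infty} P(\text{winner} = n)
  = \sum_{n=1}^{\infty} p = \infty \neq 1.
```

So a uniform lottery over the naturals forces p = 0 for each number, which is why people sometimes reach for finitely additive or infinitesimal probabilities to model cases like this.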
