Is your intuition that I have the same reason to (i) make my sacrifice leading to a 1 in a billion decrease in risk if I can reasonably predict that all others will do the same, as I do to (ii) make my sacrifice leading to a 1 in a billion decrease in risk if I can reasonably predict that no one else will do their share? This doesn't seem obvious to me.

I don't need anything so strong. Just that there is *a* sufficient reason in case (i) that also applies in case (ii).

For example, you might think that there are extra non-consequentialist reasons to "do your part in a valuable collective effort" that apply only in the first case. That's fine. The crucial point is just that the case for acting in (i) does not strictly *depend* upon any such non-consequentialist reasons.

You can bracket everything else, and the sheer *ex ante difference-making impact* of each person's action in case (i) is clearly worth it. And this particular reason is, by independence, exactly the same in case (ii). So there is a sufficient reason that makes acting in case (ii) clearly worth it.

Sure, but it seems like once you concede the presence of other factors at work driving the intuition that you ought to help, then the argument from intuition looks weaker--I don't know if my intuition that I should help in your case (a very strong one) is responsive to the presence of collective action issues, or to separate reasons that I have as an individual to make a moderate sacrifice that will very slightly reduce the risk of extinction. Once I try to imagine a case where I am the only one in a position to make a moderate sacrifice to very slightly reduce the risk of extinction, and no one else's actions will affect this risk, I no longer have strong judgments about the case one way or the other.

So I certainly don't dispute that there could be all-things-considered reasons to make sacrifices that would be responsive to arbitrarily small chances of great harms, independent of collective action questions, but I'm not sure your case establishes as much? Maybe I'm misunderstanding.

In one-person cases, it is much less intuitively transparent how the costs, odds, and potential benefits all compare. We can calculate how a utilitarian would value the prospects. But it isn't immediately obvious that we must share this evaluation of the ex ante prospects in question. That's why we need to shift to an intuitively clearer case.

In my case, the details are more transparently accessible to our intuition, since we can simply (1) assess whether the total benefit outweighs the total cost (as it clearly does); (2) conclude that the ex ante prospect of performing ALL the acts, given their total costs and benefits, must be evaluated positively on net -- i.e., the acts are collectively "worth it" in purely welfarist, difference-making terms; (3) distribute this prospective evaluation equally across EACH act that equally and independently contributes to the whole; and hence (4) conclude that each act, individually, is also "worth it" on purely welfarist grounds (offering an ex ante prospect that we must evaluate positively on net).

None of this reasoning depends on the absence of other reasons, since I'm not appealing to some vague intuition that you "should help". Rather, I'm appealing to the specific intuitions that (i) the acts are collectively "worth it" (i.e., offer a net positive ex ante prospect) on purely welfarist grounds, and (ii) ex ante prospects of collections of acts must cohere with the ex ante prospects of the individual acts that constitute the collection.
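The arithmetic behind steps (1)-(4) can be made concrete in a minimal numerical sketch. All of the specific numbers below (the value assigned to averting extinction, the per-person cost) are hypothetical assumptions chosen purely for illustration; only the structure -- equal, independent contributions summing to a net-positive collective prospect -- comes from the argument above.

```python
import math

# Hypothetical setup: N people each make a sacrifice that independently
# reduces extinction risk by 1/N. All magnitudes are illustrative assumptions.
N = 1_000_000_000            # one person per "1 in a billion" risk reduction
value_of_survival = 1e18     # assumed welfarist value of averting extinction
risk_reduction_each = 1 / N  # each act independently reduces risk by 1/N
cost_each = 1e6              # assumed personal cost of each sacrifice

# Steps (1)-(2): collectively, the acts offer a net-positive ex ante prospect.
total_benefit = N * risk_reduction_each * value_of_survival
total_cost = N * cost_each
assert total_benefit > total_cost

# Steps (3)-(4): because the contributions are equal and independent, the
# per-act prospect is just the collective prospect divided by N -- so each
# act is individually "worth it" on the same purely welfarist grounds.
ev_each = risk_reduction_each * value_of_survival - cost_each
assert math.isclose(ev_each, (total_benefit - total_cost) / N)
assert ev_each > 0
```

The point of the sketch is the coherence constraint in (ii): under independence, the per-act ex ante value is exactly the collective value divided by N, so a positive collective evaluation forces a positive individual evaluation.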

I thought the whole point of attributing moral significance to collective action was that, for some degree of this significance, it might be (i) obligatory for each member of a collective to make a sacrifice that has an independent chance of reducing a small risk of extinction, when and because every person is doing their part to reduce the risk, but (ii) permissible for each member of a collective to not make a sacrifice that has an independent chance of reducing a small risk of extinction, when and because a significantly large number of people are not doing their part to reduce the risk. If collectivity is significant in this way, it wouldn't follow from your examples that every individual ought to act to reduce very small extinction risks just because they ought to act as part of a collective where everyone reduces these risks.

What am I missing?

You're thinking about deontic status instead of rational choice (specifically, evaluating ex ante prospects). My argument is about the latter, not the former.

Ah okay, maybe I misunderstood what you meant when you said that the opportunity to independently reduce the risk of mass extinction by 1/X is "clearly worth taking." I understood this to mean that you thought it would be wrong for individuals not to take these opportunities.

Nah, I'm generally pretty uninterested in deontic status. Compare: https://rychappell.substack.com/p/impermissibility-is-overrated