"Double or Nothing" Existence Gambles
Are they best resisted via Risk Aversion, Diminishing Marginal Value, or Partiality?
Suppose that, on your 18th birthday, God offered you a “double or nothing” gamble on your continued existence: Heads your lifespan doubles; tails you die immediately. (Or on the population of humanity: Heads he makes a copy of Earth in another galaxy; tails he destroys it.) Seems like a bad deal! But why?
First, we can stipulate that your extended lifespan (if you win the gamble) will contain twice as many basic goods—pleasures, desire satisfactions, achievements, whatever—as your default future life. So don’t worry that it would be a “be careful what you wish for” scenario of a long, miserable decline into unnatural old age. Even if it would all be a jolly good time, it still doesn’t seem worth the risk of losing everything.
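One point worth making explicit before canvassing the options: if value is simply linear in basic goods, the stipulation above makes the gamble exactly break even in expectation. Writing $V$ for the value of your default future:

$$\tfrac{1}{2}\cdot 2V \;+\; \tfrac{1}{2}\cdot 0 \;=\; V$$

So a risk-neutral expected-value maximizer is merely indifferent; explaining why the gamble seems positively bad takes something more, which is what the options below try to supply.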
Risk Aversion
This might naturally suggest adopting a risk-averse decision theory rather than maximizing (risk-neutral) expected value. But there are two reasons I’m dubious of that route:
(i) The doubled-up life (or world) doesn’t intuitively strike me as truly twice as good as the single life (world). If the prize isn’t really twice as good, we can explain turning the gamble down without any special attitude to risk, which makes risk aversion explanatorily redundant.
(ii) If extended to negative-value prospects, risk-averse decision theory implies—insanely—that we should kill ourselves (destroy the world) rather than risk tiny chances of extremely bad futures, even when our overall prospect is, in expectation, extremely positive. I find that implication absolutely intolerable.
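To make the worry in (ii) concrete, here is a toy sketch in Python. It assumes one particular formalization of risk aversion (risk-weighted expected utility in the style of Lara Buchak, with a steeply convex risk function); the risk function and all the numbers are purely illustrative, not anything the argument depends on.

```python
# Toy model: risk-weighted expected utility (REU) with a convex risk function.
# This is just one way of formalizing risk aversion; the numbers are made up.

def reu(prospect, risk=lambda q: q ** 10):
    """Risk-weighted expected utility of a prospect [(probability, value), ...].

    Outcomes are ranked from worst to best; each increment of value above the
    worst outcome is weighted by risk(chance of doing at least that well).
    A convex risk function (here q**10) encodes strong risk aversion.
    """
    outcomes = sorted(prospect, key=lambda pv: pv[1])
    total = outcomes[0][1]  # you get at least the worst outcome's value
    for i in range(1, len(outcomes)):
        prob_at_least = sum(p for p, _ in outcomes[i:])
        total += risk(prob_at_least) * (outcomes[i][1] - outcomes[i - 1][1])
    return total

def ev(prospect):
    return sum(p * v for p, v in prospect)

# The double-or-nothing gamble, with the default future worth 1 and the
# doubled future worth 2:
gamble = [(0.5, 0.0), (0.5, 2.0)]
print(ev(gamble), reu(gamble))  # EV = 1.0 (indifferent); REU ~ 0.002 (strongly rejected)

# The worry in (ii): continuing to live offers a good future (value 100) with a
# 1-in-100,000 chance of an extremely bad one; dying now is a sure 0.
life = [(0.99999, 100.0), (0.00001, -2_000_000.0)]
print(ev(life), reu(life))  # EV ~ +80 (clearly positive); REU ~ -100 (worse than a sure 0)
```

The same convexity that delivers the intuitive verdict on the gamble is what drives the second result.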
(Capped) Diminishing Marginal Value
A better option is to hold that basic goods have diminishing marginal value. This gets the intuition right, I think, and doesn’t commit us to omnicide (phew!). But, assuming that the value has a cap (which it approaches asymptotically), this view does face further challenges:
(1) It commits us to what Theron Pummer calls “comparative insensitivity to (arbitrarily) large differences”: as you approach the value cap for some basic good (or population size), adding even astronomically more of the good in question—an addition that, starting from zero, would be thought incredibly valuable—comes to count for practically nothing at all. And that seems bad!
(2) It makes evaluation hyper-sensitive to how you individuate lives. If you cap how good an individual life can be, it would make things a lot worse if everyone in existence were somehow just one big super-person (suppose we’re all just facets of God, and he lives out all our lives in sequence). Conversely, if something like the Average View in population ethics is true, then the “we are all one” metaphysics might imply that the world is vastly better than we realized (we only have to divide total welfare by 1, instead of by billions)! That seems… wrong.
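For concreteness, here is one toy capped value function, value(n) = CAP * n / (n + HALF), which approaches its cap asymptotically. The particular shape and constants are stand-ins rather than anything the view is committed to, but they show how both challenges arise.

```python
# Toy capped value function: approaches CAP asymptotically, reaching half the
# cap at HALF units of basic goods. Shape and constants are arbitrary choices.

CAP, HALF = 100.0, 1_000.0

def value(units_of_good: float) -> float:
    return CAP * units_of_good / (units_of_good + HALF)

# Challenge (1): comparative insensitivity near the cap. Going from a million
# units to a trillion units barely registers, even though that same-sized
# increase, starting from zero, would be worth nearly the whole cap.
print(value(10**12) - value(10**6))       # ~ 0.1: practically nothing
print(value(10**12 - 10**6) - value(0))   # ~ 100: nearly the entire cap

# Challenge (2): sensitivity to how lives are individuated. A billion separate
# lives, each with 1,000 units of good, versus one "super-person" who lives
# all of them in sequence:
separate_lives = 10**9 * value(1_000)     # ~ 5e10
one_super_person = value(10**9 * 1_000)   # ~ 100: about as good as two ordinary lives
print(separate_lives, one_super_person)
```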
So maybe the capped value view isn’t great either (though I don’t take these problems to rule it out of contention entirely; deflationary accounts of personal identity might rule out the “super-person” metaphysics as incoherent, for example)…
Partiality
Finally, we may consider the strategy that Tyler Cowen suggests in his conversation with Will MacAskill: partiality.
This nicely explains why we reject the population gamble: we care more about ourselves and our loved ones—and maybe even other, antecedently existing strangers—and it’s a bad deal for us to risk our lives to double the universe’s population. (Except: what if we’re Parfitians about identity, and the copied planet contains a duplicate of you—psychologically continuous with your past self? Shouldn’t that then count as doubling your future welfare? You might then need to appeal to diminishing marginal value of basic goods after all: perhaps two futures are not twice as good for you as one.)
But what about the single-life gamble? In this case, I think the Parfitian view of identity can actually help. For if what matters in identity is psychological continuity and connectedness, and we are far less psychologically connected to our distant future selves, we can reasonably have less prudential concern for them. Doubling your lifespan might then be more altruistic than prudent: it would give a great future to someone who is, by that stage, only partly you, and partly someone new.
In short, if you are partial towards your life stages that are most similar to your current self, that could explain why you needn’t value double the lifespan as being twice as good for (present) you.
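One way to model that thought (a sketch only, with notation that isn’t from the post): let $w(t)$ be your welfare at future time $t$, and let $c(t)\in[0,1]$ measure how psychologically connected present-you is to your self at $t$, declining as $t$ grows. Present-directed prudential value might then be

$$\mathrm{PV}_{\text{now}} \;=\; \int_{0}^{T} c(t)\,w(t)\,dt$$

With $c(t)$ shrinking toward zero over a lifetime, extending the upper limit from $T$ to $2T$ adds far less than another $\mathrm{PV}_{\text{now}}$’s worth for present-you, even though total impartial welfare doubles.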
But if we further think that it isn’t twice as good, impartially—if we wouldn’t wish a stranger to take the bet, for example, or aliens to gamble their own distant planet—then it seems partiality alone cannot fully explain our intuitions (unless perhaps we are to be partial towards all who exist independently of our choice—which I do think has some appeal). Even if we are partial, it’s usually at least permissible to instead choose the impartially best option. But it doesn’t seem permissible to play double-or-nothing with the universe. So there must be more to it.
Conclusion
I’m very unsure what to think about all this! I’m most confident that extreme (omnicidal) risk aversion is not the way to go. Maybe our starting intuitions were awry, and really we should be fine with the gambles after all. That would be a tough bullet to bite, though. I find some mix of partiality and diminishing marginal value to seem most intuitive, at least in moderation. But those moves also don’t seem great if taken to extremes—as may be needed to avoid other risky gambles, say with slightly more favourable odds. Very tricky! If you’ve a better solution, please share it in the comments…