16 Comments

I bite the bullet! I once found this really unintuitive, but reflecting on it, it seemed like irrational risk aversion, and it no longer seems unintuitive.

author

Yeah, that may be the best option at the end of the day. But (as flagged in the post) I don't think it's risk aversion. If I try to directly evaluate the normal life (or world) vs. the doubled-up life (world), the doubled-up one just doesn't strike me as really being twice as good.

But it's hard to reconcile this intuition of diminishing marginal value with the competing intuition that arbitrarily large increases in basic goods are never trivial. Damn incoherent intuitions!


But that's also how the organ-harvesting case seems to the deontologists. Ethics often conflicts with our intuitions, and our intuitions thus often need to be revised. This is especially so when the intuition has all the hallmarks of a bad one: it's plausibly explained by status quo bias (isn't it strange that the most valuable life seems to be the very one we happen to have ahead of us?), or by irrational risk aversion (we know that the human brain is bad at adding up large numbers -- it fails to shut up and multiply), and it seems impossible to come up with a compelling, well-justified philosophical position that lets us hold on to those intuitions.

One other intuition to correct for the status quo bias: suppose that your life was a rerun. You'd already lived, died, then been reborn. That wouldn't seem to diminish the value of your life. It's only when the extra life seems like an extra, when status quo bias is in full force, that a 1/2 chance of two lives seems less valuable.

author

Diminishing marginal value of basic goods strikes me as a view with principled appeal. It has some costs, but so does the total view's fanaticism (e.g. sacrificing utopia for a mere 1/2^trillions chance of uber-utopia). I don't think super-high confidence is warranted either way here.


I agree that both views have costs; one's just seem far greater than the other's.

Comment deleted

I think that this is a case where there is no most rational action to take. There are other cases like this: suppose you could increase the utility in the world by any number you choose. For any number you pick, you could have picked a bigger one, so there's no right action, given that every action has a better possible alternative. So in this case the expected utility increases indefinitely the more times you flip -- but with infinitely many flips, the payoff is zero. Thus there is no best action -- yet flipping infinitely many times is clearly wrong, since it guarantees no value.
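
To make the structure explicit -- a sketch, assuming each round succeeds with probability p and multiplies the current payoff by a factor k, with pk > 1 (the exact numbers in the post's setup may differ):

\[
\mathbb{E}[V_n] = (pk)^n\, V_0 \;\longrightarrow\; \infty
\quad\text{as } n \to \infty,
\qquad\text{while}\qquad
\Pr[V_n > 0] = p^n \;\longrightarrow\; 0 .
\]

Each extra flip raises expected value, yet the limiting policy "always flip again" ends with nothing almost surely -- which is exactly the tension described above.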

Comment deleted · Aug 14, 2022 (edited)

I'd give a precisely identical preference ordering: 3 > 2 > 1 > 0. The answer here, similarly, is to play as large a number of rounds as possible. We know humans are terrible at envisioning large numbers. I don't know if I would play 100 trillion rounds, but I think I ought to.

Comment deleted

I don't know if I could either, psychologically. But that just seems like an error on our parts.


The mathematical argument against double or nothing:

https://substack.com/home/post/p-85415034


Such thought experiments are obviously not meant to be realistic, but to illustrate a point. What is the point here? I take it to be that it is possible to find circumstances where utility maximizing will give bad advice. I’m tempted to think this is true of all decision procedures - they have an appropriate scope and an error rate.

Nassim Taleb comes to mind. He is always on about the risk involved in using an algorithm that makes assumptions about the statistical distribution we observe, which can lead to a form of overconfidence that serves well on average, but eventually kills you if you don’t take precautions (or maybe even if you do).

author
Aug 14, 2022 · edited

More a puzzle than a point: How should we think about these sorts of cases? How should they affect our judgments about the fundamental determinants of rational choice? If expected utility maximization gives "bad advice", what is "good advice", and how do you systematize it? (There must be a *reason* that the alternative advice is better, if indeed it is; what is that reason? It can't be that we should always prefer less risky options; we know that isn't true. So what determines which risks are genuinely worth taking?) These are some of the most fundamental questions in ethics and decision theory.


Expected utility maximization assumes we know the relevant variables, their relationships, and their statistical distributions. When we don’t, it will probably fail.

This leads to the idea of the explore/exploit trade-off. This is a concept from computer science (which I only vaguely recall) where an algorithm must deal with high uncertainty. If it spends too much time exploring the environment to improve its results, or too much time exploiting and churning out poor-quality results, that’s bad. This is an unavoidable trade-off in low-info environments. It plausibly leads to using heuristics and improving them with feedback: maximize if you think you understand everything you need to understand, use heuristics otherwise, and try to learn so you understand more and better. A hospital admin can use a lot of stats to decide where budget has more beneficial impact. An entrepreneur maybe can’t.
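
For concreteness, here is a minimal sketch of that trade-off as an epsilon-greedy bandit -- the function name, parameters, and toy reward distributions are all illustrative, not from the post or any particular library:

```python
import random

def epsilon_greedy(arms, pulls, epsilon=0.1):
    """Spend a fraction `epsilon` of pulls exploring arms at random,
    and the rest exploiting the arm with the best observed mean reward."""
    counts = [0] * len(arms)
    totals = [0.0] * len(arms)
    for _ in range(pulls):
        if random.random() < epsilon or 0 in counts:
            i = random.randrange(len(arms))  # explore: try something uncertain
        else:
            means = [t / c for t, c in zip(totals, counts)]
            i = means.index(max(means))      # exploit: use what we've learned
        reward = arms[i]()
        counts[i] += 1
        totals[i] += reward
    return [t / c if c else 0.0 for t, c in zip(totals, counts)]

# Two noisy options with unknown true means (e.g. two budget allocations).
estimates = epsilon_greedy(
    [lambda: random.gauss(1.0, 1.0), lambda: random.gauss(1.5, 1.0)],
    pulls=1000,
)
print(estimates)
```

Set epsilon too high and you mostly generate poor results; set it too low and you may lock in on the wrong option -- which is the heuristics-vs.-maximizing tension gestured at above.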

So there may not be a general solution to the problem that we can formalize. Specific contexts might be amenable to formal solutions, while others are not. Maybe there is a nice generalization we could make about this, maybe not. Systemization doesn’t necessarily exclude the idea of using heuristics or treating different contexts differently. Maybe we could maximize utility at a meta level, based on our expectations about the effectiveness of formal maximizing vs. heuristics in a specific context. At that level, has it become sort of tautological?

Expected utility maximization is a norm that is difficult to evaluate. That is, I am not sure when I am violating it. It only becomes tractable when we make assumptions about what is relevant. But relevance is subjective, and may be in error. Again, we could form estimates about our ability to get relevance right, but then do we need error estimates of our error estimates?

The application of max u in the thought experiment maybe makes too stringent assumptions about what people care about, or ought to. I always have trouble with thought experiments that are so extremely unrealistic. (Waves hands.)


“…doubled-up life (or world) doesn’t intuitively strike me as truly twice as good as the single life (world), making risk aversion explanatorily redundant.”

Isn’t risk aversion just a formalization of that intuition? Whatever your reason for thinking gambles are worth less than their “risk-neutral equivalent” sure things is the reason for acting with risk aversion. There is the behavior, and there is its label. What’s redundant? Maybe the explanation? But risk aversion is a diagnosis, not a prescription or an explanation. It doesn’t explain the behavior we observe - we need evolution and psychology for that, not decision theory or however we should classify risk aversion.
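
For what it’s worth, in the standard decision-theoretic treatment “risk aversion” just is concavity of the utility function. With an illustrative choice of utility (not anything from the post), u(x) = sqrt(x) over a good of size w:

\[
\mathbb{E}\big[u(\text{gamble})\big]
  = \tfrac{1}{2}\,u(0) + \tfrac{1}{2}\,u(2w)
  = \sqrt{\tfrac{w}{2}}
  \;<\; \sqrt{w} \;=\; u(w),
\]

so the 50/50 double-or-nothing gamble is valued below the sure thing. The curvature of u is the formal label for the intuition, not an independent explanation of it.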

Comment deleted
author

I've always thought the strongest objection to the total view was these sorts of intuitions suggesting that additional lives have diminishing marginal contributory value in this way. (I've long been drawn to variable value views for this reason, but of course they also face challenges. I don't have a settled view on the matter, though in recent years I've grown increasingly sympathetic to critical range views.)

I'm really more interested in trying to work out *what's most plausible* (all things considered) rather than dogmatically insisting upon whatever a "true utilitarian" would do. (I'm fine with only being approximately utilitarian, if that's what ultimately strikes me as most plausible!)

But a question: if this pushes you away from utilitarianism, what does it push you *towards*? If every alternative is just as bad (or worse), these puzzle cases shouldn't push you away after all. You need to actually find a better view to replace it with!

Comment deleted
author

I don't think this works; we can just stipulate that the copies are made in a distant corner of the universe that we could never reach anyway (i.e. outside our light cone). Note that the portion of the universe that it's physically possible for humans to ever reach gets smaller every day.
