24 Comments
Aug 13, 2022 · Liked by Richard Y Chappell

You should consider diminishing value of goods, and maybe of lives, but not of population in these doubling experiments. Increases in food have diminishing returns, but doubling the population doubles utility itself unless you reject the total view, and you seemed sympathetic to that view on utilitarianism.net.

I don't think the partiality move works because, like you said, it should at least be permissible to take the double-or-nothing bet. Even if we were to say "we care more about people we know," that wouldn't necessarily mean we should, or that we have the strongest moral reason to.

This is a tricky problem in my view. I think a true utilitarian should probably just take the bet. But then they should take the bet again...and again...until we cease to exist, which seems incredibly wrong. I don't really have a solution, and it would seem a bit ad hoc if I did; all the population-ethics work-arounds seem so incredibly ad hoc. This problem, which I first heard from Cowen, honestly seems like one of the better arguments against utilitarianism - better than Huemer's in your debate (even though I liked his). I was feeling more sympathetic to utilitarianism recently, but this issue pushed me away a bit more...sorry!

author

I've always thought the strongest objection to the total view was this sort of intuition, suggesting that the contributory value of additional lives diminishes at the margin in this way. (I've long been drawn to variable value views for this reason, but of course they also face challenges. I don't have a settled view on the matter, though in recent years I've grown increasingly sympathetic to critical range views.)

I'm really more interested in trying to work out *what's most plausible* (all things considered) rather than dogmatically insisting upon whatever a "true utilitarian" would do. (I'm fine with only being approximately utilitarian, if that's what ultimately strikes me as most plausible!)

But a question: if this pushes you away from utilitarianism, what does it push you *towards*? If every alternative is just as bad (or worse), these puzzle cases shouldn't push you away after all. You need to actually find a better view to replace it with!


Oh, I was mistaken in my assumption then. I apologize for assuming.

Not really toward any well defined theory in particular.


Here is one contrived way to argue against doubling, though this admittedly goes in the (has the intuition) -> (tries to find a rationalization) direction, which we should be inherently skeptical about.

If we take a longtermist perspective, then we will colonize all the stars and reach a maximum of trillions of people. The probability of this occurring is moderate, but doubling the current population does not double the probability of it occurring, nor does it double the expected population, because the same maximum number of people will be created when we colonize the universe. The probabilities and expected utility are such that we shouldn't take the deal.
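A minimal numeric sketch of this structure, in Python; the colonization probability, cap, and values below are invented purely for illustration and aren't from the post. The point is just that with a capped future, doubling the present population barely moves expected value, while losing the bet forfeits the capped future as well.

```python
# Hypothetical numbers, chosen only to illustrate the structure of the argument.
P_COLONIZE = 0.5      # assumed chance we eventually colonize and hit the cap
V_CAP = 1_000_000.0   # capped value of the colonized future (arbitrary units)
V_NOW = 1.0           # value of the present population (arbitrary units)

def expected_value(take_bet: bool) -> float:
    """Expected value when long-run value is capped at V_CAP."""
    if not take_bet:
        return V_NOW + P_COLONIZE * V_CAP
    # 51%: the present population doubles, but the capped future is unchanged.
    # 49%: extinction, so both the present and the capped future are lost.
    return 0.51 * (2 * V_NOW + P_COLONIZE * V_CAP)

print(expected_value(take_bet=False))  # 500001.0
print(expected_value(take_bet=True))   # ~255001.0: under a cap, the bet looks bad
```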

author

I don't think this works; we can just stipulate that the copies are made in a distant corner of the universe that we could never reach anyway (i.e. outside our light cone). Note that the portion of the universe that it's physically possible for humans to ever reach gets smaller every day.

Aug 13, 2022 · Liked by Richard Y Chappell

Final comment, but I used your website as a resource for the population ethics section of my latest argument: https://parrhesia.substack.com/p/in-favor-of-underpopulation-worries. So, thanks!!


I bite the bullet! I once found this really unintuitive, but reflecting on it, it seemed like irrational risk aversion, and it no longer seems unintuitive.

author

Yeah, that may be the best option at the end of the day. But (as flagged in the post) I don't think it's risk aversion. If I try to directly evaluate the normal life (or world) vs. the doubled-up life (world), the doubled-up one just doesn't really seem twice as good to me.

But it's hard to reconcile this intuition of diminishing marginal value with the competing intuition that arbitrarily large increases in basic goods are never trivial. Damn incoherent intuitions!


But that's also how the organ-harvesting case seems to the deontologists. Ethics often conflicts with our intuitions, and our intuitions thus often need to be revised. This is especially so when an intuition has all the hallmarks of a bad one: it's plausibly explained by status quo bias (isn't it strange that the most valuable lives seem to be the lives we plausibly have ahead of us?) and by irrational risk aversion (we know that the human brain is bad at adding up numbers -- it fails to shut up and multiply), and it's seemingly impossible to come up with a compelling, justified philosophical position that lets us hold on to those intuitions.

One other intuition to correct for the status quo bias: suppose that your life was a rerun. You'd already lived, died, and then been reborn. That wouldn't seem to diminish the value of your life. It's only when the extra life seems like extra, when status quo bias is in full force, that a 1/2 chance of two lives seems less valuable.

author

Diminishing marginal value of basic goods strikes me as a view with principled appeal. It has some costs, but so does the total view's fanaticism (e.g. sacrificing utopia for a mere 1/2^trillions chance of uber-utopia). I don't think super-high confidence is warranted either way here.


I agree that both views have costs; one's just seem far greater than the other's.


How many times would you accept the deal? From an expected value perspective, you should take the deal again...and again...and again till you cease to exist with probability 1.


I think this is a case where there is no most rational action to take. There are other cases like this: suppose you could increase the utility in the world by any number. For any number you choose, you could have chosen a better one, so there's no right action, given that every action has a better possible action. Here, the expected utility function increases indefinitely the more times you flip -- but at infinitely many flips, the answer is zero. Thus, there is no best action -- but flipping infinitely many times is clearly wrong, as it guarantees no value.

Aug 14, 2022 · edited Aug 14, 2022 · Liked by Richard Y Chappell

Okay, this is very interesting.

I'm not sure it's quite analogous. If you gave me the option of increasing utility by any amount, I could give you a preference ordering; I know +1000 > +100 > +1 > +0. We know what the right answer is, "as large a number as possible"; it's just that there is conceptual difficulty in conceiving of "the largest number."

In my case, you've already given a preference ordering over the number of times to play the game. You chose 1 > 0 for the simple reason that 51% * 2 > 100% * 1. It is also the case that 51% * 51% * 2 * 2 > 51% * 2 > 100% * 1...so do you play one more round?

And would you play 100 trillion rounds? The infinity example is tricky because, in some circumstances, a probability of 0 doesn't necessarily mean impossible, for confusing probability reasons (https://en.wikipedia.org/wiki/Almost_surely).
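To make the arithmetic concrete, here's a quick, purely illustrative Python sketch using the thread's 51%-to-double numbers: expected population grows without bound in the number of rounds, while the probability that anyone survives shrinks toward zero.

```python
# Expected population vs. survival probability after n rounds of the
# 51% double-or-nothing bet: each round doubles the population with
# probability 0.51, otherwise everyone ceases to exist.
def after_n_rounds(n: int, start_population: float = 1.0):
    expected = start_population * (0.51 * 2) ** n  # grows like 1.02^n
    survival = 0.51 ** n                           # shrinks toward zero
    return expected, survival

for n in (1, 10, 100, 1000):
    ev, p = after_n_rounds(n)
    print(f"n={n:5d}  expected={ev:10.3g}  P(survive)={p:.3g}")
# Each extra round raises expected value, yet playing forever guarantees extinction.
```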


I'd give a precisely identical preference ordering: 3 > 2 > 1 > 0. The answer here is similarly "as large a number as possible." We know humans are terrible at envisioning large numbers. I don't know if I would play 100 trillion rounds, but I think I ought to.


Yeah, I don't think I could bite that bullet. Maybe that's a topic for an essay of yours.


I don't know if I could either psychologically. But that just seems like an error on our parts.


The mathematical argument against double or nothing:

https://substack.com/home/post/p-85415034


Such thought experiments are obviously not meant to be realistic, but to illustrate a point. What is the point here? I take it to be that it is possible to find circumstances where utility maximization will give bad advice. I'm tempted to think this is true of all decision procedures - they have an appropriate scope and an error rate.

Nassim Taleb comes to mind. He is always on about the risk involved in using an algorithm that makes assumptions about the statistical distribution we observe, which can lead to a form of overconfidence that serves well on average but eventually kills you if you don't take precautions (or maybe even if you do).

author
Aug 14, 2022 · edited Aug 14, 2022

More a puzzle than a point: How should we think about these sorts of cases? How should they affect our judgments about the fundamental determinants of rational choice? If expected utility maximization gives "bad advice", what is "good advice", and how do you systematize it? (There must be a *reason* that the alternative advice is better, if indeed it is; what is that reason? It can't be that we should always prefer less risky options; we know that isn't true. So what determines which risks are genuinely worth taking?) These are some of the most fundamental questions in ethics and decision theory.


Expected utility maximization assumes we know the relevant variables, their relationships, and their statistical distributions. When we don’t, it will probably fail.

This leads to the idea of the explore/exploit trade-off. This is a concept from computer science (which I only vaguely recall) where an algorithm must deal with high uncertainty. If it spends too much time exploring the environment to improve its results, or too much time exploiting and producing poor-quality results, that's bad. This is an unavoidable trade-off in low-information environments. It plausibly leads to using heuristics and improving them through feedback: maximize if you think you understand everything you need to understand, use heuristics otherwise, and try to learn so you understand more and better. A hospital admin can use a lot of stats to decide where the budget has the most beneficial impact. An entrepreneur maybe can't.
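For what it's worth, the stock textbook illustration of that trade-off is an epsilon-greedy bandit; the sketch below is purely illustrative and the payoff numbers are made up. Most of the time it exploits whichever option currently looks best, and a small fraction of the time it explores, so its estimates keep improving.

```python
import random

# A minimal epsilon-greedy bandit: the agent doesn't know the true payoff
# probabilities and has to balance exploring against exploiting.
TRUE_PAYOFFS = [0.3, 0.5, 0.7]   # hidden success probability of each option
EPSILON = 0.1                    # fraction of choices spent exploring

counts = [0, 0, 0]
estimates = [0.0, 0.0, 0.0]

for _ in range(10_000):
    if random.random() < EPSILON:
        arm = random.randrange(len(TRUE_PAYOFFS))  # explore: try anything
    else:
        arm = max(range(len(estimates)), key=lambda i: estimates[i])  # exploit
    reward = 1.0 if random.random() < TRUE_PAYOFFS[arm] else 0.0
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running average

print(estimates)  # roughly recovers [0.3, 0.5, 0.7] while mostly picking the best arm
```

With EPSILON = 0 it can lock onto a bad early guess; with EPSILON = 1 it never uses what it has learned. That's the same maximize-vs.-heuristics tension at a smaller scale.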

So there may not be a general solution to the problem that we can formalize. Specific contexts might be amenable to formal solutions, while others are not. Maybe there is a nice generalization we could make about this, maybe not. Systemization doesn’t necessarily exclude the idea of using heuristics or treating different contexts differently. Maybe we could maximize utility at a meta level, based on our expectations about the effectiveness of formal maximizing vs. heuristics in a specific context. At that level, has it become sort of tautological?

Expected utility maximization is a norm that is difficult to evaluate. That is, I am not sure when I am violating it. It only becomes tractable when we make assumptions about what is relevant. But relevance is subjective, and may be in error. Again, we could form estimates about our ability to get relevance right, but then do we need error estimates of our error estimates?

The application of expected utility maximization in the thought experiment maybe makes too-stringent assumptions about what people care about, or ought to care about. I always have trouble with thought experiments that are so extremely unrealistic. (Waves hands.)


“doubled-up life (or world) doesn’t intuitively strike me as truly twice as good as the single life (world), making risk aversion explanatorily redundant.”

Isn't risk aversion just a formalization of that intuition? Whatever your reason for thinking gambles are worth less than "risk-neutral equivalent" sure things is the reason for acting with risk aversion. There is the behavior, and there is its label. What's redundant? Maybe the explanation? But risk aversion is a diagnosis, not a prescription or an explanation. It doesn't explain the behavior we observe - we need evolution and psychology for that, not decision theory (or however we should classify risk aversion).
