
Thanks for the always helpful and interesting engagement, Richard!

I'd like to clarify the Nuclear War argument a bit. I am claiming that we are clueless about whether a nuclear war in the near future would overall have good vs bad consequences over a billion-years-plus time frame continuing at least to the heat death of the universe. I do think a nuclear war would be bad for humanity! The way you summarize my claim, which depends on a certain way of thinking about what is "bad for humanity", makes my view sound more sharply in conflict with common sense than I think it actually is.

Clarifying "N-Bad" as *that* claim, it's not clear to me that denying it is commonsensical or that it should have a high prior.

(I do also make a shorter-term claim about nuclear war: That if we have a nuclear war soon, we might learn an enduring lesson about existential risk that durably convinces us to take such risks seriously, and if this even slightly decreases existential risk, then humanity would be more likely to exist in 10,000 years than without nuclear war. My claim for this argument is only that it is similar in style to and as plausible as other types of longtermist arguments; and that's grounds for something like epoche (skeptical indifference) regarding arguments of this sort.)

author
Dec 27, 2023 · edited Dec 27, 2023

Hi Eric, thanks for continuing the conversation! I remain confused by your suggestion that there are multiple ways of thinking about what is "bad for humanity" in this context. You mentioned in our previous discussion that you think that *evaluation* (and not just decision) needs to build in something like temporal discounting. But this would seem to imply that it is "worse for humanity" for (i) one human to die of malaria today, and trillions of humans to flourish a billion years hence, than for (ii) nobody to die of malaria today, but the entire population of humanity, trillions strong, to be enslaved and tortured a billion years hence.
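A rough sketch of why, assuming simple exponential discounting at some per-year rate $\delta$, with $W$ and $W'$ the undiscounted (dis)value of the far-future flourishing and torture, each on the order of $10^{13}$ life-equivalents at $T = 10^9$ years:

$$
V(\mathrm{i}) \approx -1 + W e^{-\delta T}, \qquad V(\mathrm{ii}) \approx 0 - W' e^{-\delta T},
$$

so $V(\mathrm{i}) < V(\mathrm{ii})$ whenever $(W + W')\, e^{-\delta T} < 1$, which holds for any $\delta$ above roughly $3 \times 10^{-8}$ per year. The far-future terms get discounted into irrelevance, and the comparison is settled entirely by the one present-day death.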

But that's surely just a misuse of language. There's no reasonable usage on which (i) can be sensibly described as "worse for humanity" than (ii). So you cannot coherently claim that nuclear war is "bad for humanity" unless you believe that it is bad for humanity *overall*, and not just in a local time period.

[Edited to add: You could coherently claim that (i) is worse than (ii), on grounds that we should care more about current people than about humanity at large. I'd argue (another day) that such a verdict is substantively misguided, but for this post I meant to focus more on the epistemic issues, hence the stipulation that we're evaluating these prospects "for humanity" at large, not relative to other possible values, such as more partial ones. I could judge that nuclear war would be bad *because bad for my family* -- even if it averted future extinction -- but it would misdescribe my verdict for me to describe this as "bad for humanity", because there's more to humanity than just my family.]


I do think we should care more about current people than people a billion years in the future, on a combination of the grounds of their existence, our closer causal/social relationships, and maybe also pure temporal discounting. I’m not sure we can cleanly separate badness from caring, but I won’t press on that.

Maybe part of the issue here is that I’m unsure whether to include people existing a billion years in the future as part of “humanity”. Is it important to you to phrase it in that way? I feel that phrasing it in terms of what is overall timelessly good for whatever entities happen to exist avoids that definitional problem and is also closer to the consequentialist heart of the matter, assuming that consequentialism doesn’t privilege “humanity”.

author

How about "humanity and its descendants"?

It's true that utilitarian concern is more general, but that also raises extraneous complications about whether the extinction of humanity might be judged better for other species. To make the epistemic judgment as straightforward as possible, I'm wanting here to just focus on the interests of humanity and its descendants. That's enough to suggest that we aren't *completely* clueless about reasonable long-term expectations. We can always argue more another day about whether we have good reason to expect that what's good for humanity is also a good thing more broadly :-)


I'm okay with that rephrasing, but I don't think that the question of whether the extinction of humanity is good for other species is extraneous. There is a sense in which I have the harder path dialectically: My argument requires that there is *no* action currently available to us that we can justifiably evaluate as likelier to be good vs bad overall for the billion-year-plus future. So I'm committed to a negative existential generalization. But I don't think it's dialectically correct to make this path still harder by adding qualifications like good specifically *for humanity and its descendants* unless those qualifications are very well justified. By choosing nuclear war as an example, I'm trying to choose one of the hardest cases for my cluelessness view, to see if it still works. Let's not make it still harder by adding qualifications that my opponents would not actually endorse. I assume that my dialectical opponent here is more interested in the overall goodness of the outcome than in the goodness of the outcome specifically for humans and their descendants.

Here's an analogy, but one that doesn't take advantage of that rephrasing. A space alien arrives and informs me that in a billion years planet Earth will host a quadrillion Gorks. The alien now asks me whether I think that nuclear war would be good for a group composed of humans over the next several generations, plus Gorks, considered as a group. Call this artificial group the HuGork Group. Since I know basically nothing about Gorks, I have no idea and decline to speculate on this question. But I do know that nuclear war would be bad for humans over the next several generations, and I'm happy to say that.

Regarding the HuGork Group, I'd say one of the below, in increasing order of preference:

(1.) We should just accept being 50/50 on its goodness/badness for the HuGork Group.

(2.) If we expect nuclear war to have total value negative-X for humans over the next several generations, we should also expect it to have total value negative-X for the HuGork Group (spelled out a bit more formally just after this list).

(3.) We should treat this as a question where mathematical models are more likely to mislead than illuminate.
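To spell (2.) out slightly more formally, assuming value is additive across the two sub-populations and that total ignorance about the Gorks warrants a symmetric, zero expectation for their share:

$$
\mathbb{E}[V_{\mathrm{HuGork}}] \;=\; \mathbb{E}[V_{\mathrm{human}}] + \mathbb{E}[V_{\mathrm{Gork}}] \;=\; -X + 0 \;=\; -X.
$$

Of course, option (3.) is precisely the suspicion that even this little bit of modeling may mislead more than it illuminates.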

So: I think nuclear war is bad for the future as far as we can foresee. But I don't think we can foresee its badness vs goodness overall, or its badness vs goodness for HuGorks overall. I'm not sure why I need to have an opinion about its badness considered in terms of timeless overall consequences to have an opinion about its badness.

This thought might help. Suppose baby Hitler had drowned. That event would have been bad, even if from an omniscient perspective one could foresee that it had overall good consequences. The family and neighbors knew that it was bad even if they couldn't know its consequences. Similarly, if some child I happen to know dies, I know that the death is bad even if I don't know what the child's ultimate impact on the world would have been.

author

I was thinking that we *should* expect that what's good for humanity (& descendants) is (most likely) overall good, so it's a harmless simplification. (The reasons why people mistakenly doubt this are largely independent of the long-term cluelessness question that your argument addresses.)

From your last paragraph, it sounds like you are using "bad" to just mean "pro tanto bad". But that isn't what we should be interested in. If an oracle tells you that baby Hitler would grow up to be a genocidal dictator, you should no longer regard the drowning as bad. You should instead be glad it happened (though sympathetic to his mourning family).

The neighbors' sadness isn't objectively warranted: it is only subjectively warranted because they're ignorant of the defeater -- the massive failure of the "all else is equal" clause that we usually take to apply when something pro tanto bad happens. To bring this out, suppose the neighbors are Jewish, and their children would later be murdered if baby Hitler were not to drown. They can know that there's *something* bad about the baby's drowning. But this clearly isn't the most important evaluative property in the situation. In terms of what they should care about (both their family's future well-being and the impartial good), they are *massively mistaken* in regarding the drowning as regrettable. Though they don't realize it, the correct moral goals for them to have are better served by baby Hitler's drowning.

More generally, the correct attitude to take towards an event depends on (i) what the correct moral goals are, and (ii) how the event affects those goals. If you see that event N (nuclear war) would do *some* harm (relative to the correct moral goals), you can merely conclude that N is pro tanto bad. To judge it bad *tout court*, you need reason to expect that N's outcome is *overall* worse (relative to the correct moral goals). But that's precisely what you're missing if you accept cluelessness. If you wouldn't want to ignorantly rescue baby Hitler, you shouldn't take yourself (while clueless) to have overall reason to oppose nuclear war either. After all, if it really *were* essential to prevent extinction, wouldn't you want it to go ahead?


It's complicated! I don't think we necessarily need to want what we think is overall, timelessly best for the world; and furthermore, I don't think we know whether avoiding extinction is overall, timelessly best for the world, especially if the price of avoiding extinction is nuclear war in the near future. There's something to be said for not putting much weight on conjectures about the long term, instead thinking and operating more within the short-to-medium-term sphere that we can better see and control.

Dec 27, 2023 · Liked by Richard Y Chappell

Perhaps you should call this shorter-term claim the "Ozymandias argument", after the character in Watchmen who causes a disaster that kills millions in NYC in order to get the countries of the world to unite, averting a greater disaster from all-out nuclear war.


To me, the most plausible justification for assigning higher probability to S1 than S2 is that we ought to have priors that penalize more complex laws. More generally, it seems to me that we should be specifying priors at the level of theories / mechanistic models / etc, from which we then derive our priors about propositions like S1, S2, N-bad, N-good, “value concordance”. As opposed to directly consulting our intuitions about the latter.

So in the case of nuclear war, our priors over the long-run welfare consequences should be derived from our priors over the parameters of mechanistic models that we would use to predict how the world evolves conditional on nuclear war vs no nuclear war. And it seems much less clear that there will be a privileged prior over these parameters and that this prior will favor N-bad. (It seems plausible that the appropriate response would be to have imprecise priors over these parameters, and that this would lead to an indeterminate judgement about the total welfare consequences of nuclear war.)
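To make the shape of this concrete, here is a deliberately toy sketch, with a single parameter standing in for the whole mechanistic model, of how a prior over model parameters induces a probability for N-bad, and how a family of such priors (an imprecise prior) can leave the verdict indeterminate:

```python
# Toy sketch only: one parameter "theta" stands in for a whole mechanistic
# model of how the world evolves conditional on nuclear war vs no nuclear war.
import numpy as np

rng = np.random.default_rng(0)

def prob_n_bad(prior_mean, prior_sd, n_samples=100_000):
    """Probability that long-run welfare is lower with nuclear war than without,
    modelling the welfare difference as Normal(theta, 1) with theta drawn from
    a Normal(prior_mean, prior_sd) prior over the model parameter."""
    theta = rng.normal(prior_mean, prior_sd, n_samples)  # prior over the parameter
    diff = rng.normal(theta, 1.0)                        # welfare(war) - welfare(no war)
    return np.mean(diff < 0)                             # fraction of sampled worlds where war is worse

# A single "privileged" prior yields a single, determinate probability for N-bad:
print(prob_n_bad(prior_mean=-0.1, prior_sd=2.0))  # roughly 0.52

# An imprecise prior: a family of priors we see no principled way to choose between.
family = [(-1.0, 2.0), (0.0, 2.0), (1.0, 2.0)]
probs = [prob_n_bad(m, s) for m, s in family]
print(min(probs), max(probs))  # roughly 0.33 to 0.67 -- straddles 0.5, so the
                               # family licenses no determinate verdict on N-bad vs N-good
```

Nothing hangs on the particular numbers; the point is just that any verdict on N-bad falls out of the prior over the model, rather than being consulted directly.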

Dec 26, 2023 · edited Dec 26, 2023

How do Schwitzgebel's points about the long-term unpredictability of our actions relate to MacAskill and Mogensen's "paralysis argument"?

https://globalprioritiesinstitute.org/summary-the-paralysis-argument/#:~:text=In%20%E2%80%9CThe%20Paralysis%20Argument%2C%E2%80%9D,improving%20the%20long%2Drun%20future.

BTW is it possible to embed links in the comments?

author

I think they're largely unrelated. The paralysis argument rather trades on something that is highly predictable: that among the many long-term causal consequences of our actions will be very significant harms and benefits. And while these might largely cancel each other out in expectation, if you give extra weight (as many deontologists do) to our reasons not to cause harms, then it seems every positive action will end up being prohibited (with the possible exception of longtermist-recommended actions, as then the positive expectation might be sufficiently great to outweigh the deontological asymmetry).
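A rough sketch of the structure, with $B$ and $H$ the expected long-run benefits and harms of some ordinary action and $k > 1$ the extra deontological weight on causing harm:

$$
B - kH \;\approx\; (1 - k)\,H \;<\; 0 \quad \text{when } B \approx H,
$$

so the action comes out prohibited unless $B > kH$, which is roughly the condition the longtermist exception would have to meet.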

P.S. I'm not aware of any formatting options for comments, so I think you just have to write out the link (as you did). Though note that you can usually delete everything after a '#' or '?' in a url (including the symbol itself).
