I’ve had a couple of interesting exchanges with Eric Schwitzgebel about cluelessness and longtermism. I see two main cruxes to the disagreement:
I think we can form some reasonable, commonsense expectations about overall future prospects, for example that nuclear war would be overall bad for humanity. (So, unlike Eric, I see radical agnosticism as a non-ideal—excessively skeptical—response to our current epistemic situation.)
I think that the question of what matters morally has epistemic priority over the question whether we might be clueless about what matters, such that you cannot rationally revise your view of what matters merely in order to avoid apparent cluelessness. (That would be like responding to the inconsistent triad <causal determinism, incompatibilism, free will> by wishfully rejecting causal determinism. You just can’t possibly have good reason to believe in free will without having independent reason to think that either causal determinism or incompatibilism is false. Likewise, you can’t possibly have good reason to assume we have epistemic access to what matters without first having some grasp of (i) what matters, and (ii) what kind of epistemic access we have to those specified facts or events. The question of our epistemic access to what matters is purely downstream of those two prior questions, and so cannot be assumed in advance and used to revise one’s view of those strictly prior matters.)
I’ve previously explained the latter point, so in this post I’ll focus on the first crux.
Ideal vs non-ideal rationality
I should clarify, straight-up, that I’m going to be focusing on the question of what doxastic state would be ideal to be in, given our evidence (including everything knowable a priori). If someone is very bad at reasoning, you might recommend that they not even try to determine this. A practically safer heuristic for many non-ideal agents might be something more like:
(Safe) Believe what is clearly supported by your own observations, or reported to you by credible authorities, and otherwise just suspend judgment.
Safe helps to protect agents against manipulation. Many philosophy undergrads are epistemically fickle: they find themselves blown about over the course of a semester, easily swayed by whatever argument they read most recently. Anyone with such unstable views could be easily manipulated if they trusted their own (manipulable) prima facie judgments. So it’s good practical advice for them to just suspend judgment, rather than “following arguments where they lead” (since attempting the latter would simply let others lead them anywhere, regardless of the true merits of the case).
I think that’s important and wise advice for many people. But my basis for thinking this depends upon my not being so clueless. I must have a reasonable expectation that following Safe would result in better outcomes, for these individuals, than naively “following arguments where they lead”. This is itself a contestable judgment (not clearly supported by direct observation, or expert consensus, so Safe itself would probably advise suspending belief in its own advisability). Still, I consider it highly plausible. My expectation is that at the idealized “end of inquiry”, I would have reasonably high credence that Safe is a good policy for many people to follow.
This epistemic status—of moderate-to-high credence being validated at the “end of inquiry”—is what I’m concerned with for the rest of this post. That is, I’m thinking about the ideal credence for a clear thinker to assign, not the safest policy for most people.
Cluelessness is an Extreme Claim
Claims of rationally mandatory cluelessness are quite extreme. It’s one thing to advise a non-ideal policy like Safe. It’s quite another to insist that as a matter of ideal rationality, no-one can possibly justifiably believe anything about the (temporally-unrestricted) expected value of our actions.
Schwitzgebel’s Nuclear Catastrophe Argument describes a scenario in which global nuclear war turns out to be overall good for humanity (by teaching us to take existential risks more seriously). This is certainly possible. But the suggestion that anyone should regard this scenario as just as credible as the commonsense view that global nuclear war would be overall harmful to humanity’s future prospects strikes me as quite wild. Moreover, in order to secure the result that cluelessness is forced upon us by longtermism, it isn’t enough for the proponent to suggest that they can reasonably regard themselves as clueless. They need to claim that their doxastic response to the case is the uniquely justified one, and that all of us who think nuclear war is overall bad in expectation must be making an epistemic error. That’s an incredibly strong claim, and I’ve never seen anything remotely close to an adequate defense of it.
Reasonable Expectations
Cards on the table: I expect that nuclear war would be bad for humanity. (Shocking, I know!) I don’t expect to convince a skeptic of this, any more than I could convince an inductive skeptic that the sun will rise tomorrow. Still, I think it’s clearly the warranted expectation to have here, and I find it very strange that anyone seriously denies this.
As with everyday induction, the epistemic dispute here ultimately comes down to what priors we ought to have. I think our prior probability distribution should assign greater weight to the world of:
(S1): Actual history + Sun rises tomorrow
than to:
(S2): Actual history + Sun doesn’t rise tomorrow.
Similarly, I think our priors should assign greater weight to:
(N-bad): Global nuclear war would be bad for humanity in expectation
than to:
(N-good): Global nuclear war would be good for humanity in expectation.
I don’t see anything in Schwitzgebel’s argument that changes this. Yes, we can describe possible circumstances in which N-good would be true. So what? (Surely we already knew that: nobody should ever have thought that nuclear war was necessarily overall bad.)1 He doesn’t make any case for finding it especially credible; he just asserts that his imagined scenario is not “implausible”. But that’s a far cry from its being as credible as the alternatives.
As Schwitzgebel writes, “We lack good means for evaluating these stories.” In the absence of a strong case to enhance their credibility,2 we should not much update our expectations in the face of such speculative stories. We should just stick to whatever our prior was to begin with. So the question is, what kind of prior is most reasonable:
(i) One that builds in a default assumption of value concordance across different time-frames, e.g. taking the immense near- and medium-term harms of global nuclear war to be some reason to expect worse long-term outcomes;
or
(ii) One that takes value discordance to be just as likely, and so sees no reason whatsoever to expect global nuclear war to be overall bad.
It just seems very obvious to me that (i) is more reasonable, and I hope that most readers will agree (just as I hope that most will agree that we should assign higher credence to S1 than S2, above). But if you don’t, I probably don’t have much more to offer, besides an incredulous stare.
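To make the earlier point about weak updating concrete, here is a toy Bayes-in-odds-form calculation (the numbers are purely illustrative and mine, not anything from the post or from Schwitzgebel’s argument): a merely “not implausible” story carries a likelihood ratio close to 1, and so leaves a confident prior in N-bad essentially where it started, whereas a genuinely forceful contrary case would move it substantially.

```python
# Toy illustration (my own numbers): with a likelihood ratio near 1,
# the posterior barely moves from the prior; only strong evidence
# produces a substantial revision.

def posterior(prior: float, likelihood_ratio: float) -> float:
    """Update P(N-bad) on evidence E via the odds form of Bayes' theorem,
    where likelihood_ratio = P(E | N-bad) / P(E | N-good)."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

prior = 0.95              # illustrative prior credence in N-bad
weak_story_lr = 0.9       # a speculative story that is merely "not implausible"
strong_case_lr = 0.05     # what a genuinely forceful contrary case might supply

print(posterior(prior, weak_story_lr))    # ~0.94: essentially unmoved
print(posterior(prior, strong_case_lr))   # ~0.49: a real revision
```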
Conclusion: Cluelessness and Naive Instrumentalism
I’ve written a lot about why naive utilitarianism is irrational. My objection to cluelessness ultimately comes down to the same issue: that we should strongly expect various “tried and true” ethical norms (encouraging social co-operation, avoiding unilateral violence) to lead to better outcomes, and distrust weakly-supported speculation to the contrary. In either case, true prudence requires good priors: you’ll be led badly astray without them—to the point of losing your compass on whether the lasting effects of mass murder are more likely to be bad or good.
The only thing that’s necessarily overall bad is realizing the worst possible world. Even the second-worst possible world could be (comparatively) good, if its only alternative was the very worst one! But of course things can be extremely bad without being necessarily bad.
For an example of what I take to be a credibility-enhancing strong case for revising our priors, see Holden Karnofsky’s Most Important Century series, and compare it to Schwitzgebel’s breezy one-paragraph stipulated scenario. These are really not the same.
Thanks for the always helpful and interesting engagement, Richard!
I'd like to clarify the Nuclear War argument a bit. I am claiming that we are clueless about whether a nuclear war in the near future would overall have good vs bad consequences over a billion-years-plus time frame continuing at least to the heat death of the universe. I do think a nuclear war would be bad for humanity! The way you summarize my claim, which depends on a certain way of thinking about what is "bad for humanity", makes my view sound more sharply in conflict with common sense than I think it actually is.
Clarifying "N-Bad" as *that* claim, it's not clear to me that denying it is commonsensical or that it should have a high prior.
(I do also make a shorter-term claim about nuclear war: That if we have a nuclear war soon, we might learn an enduring lesson about existential risk that durably convinces us to take such risks seriously, and if this even slightly decreases existential risk, then humanity would be more likely to exist in 10,000 years than without nuclear war. My claim for this argument is only that it is similar in style to and as plausible as other types of longtermist arguments; and that's grounds for something like epoche (skeptical indifference) regarding arguments of this sort.)
To me, the most plausible justification for assigning higher probability to S1 than S2 is that we ought to have priors that penalize more complex laws. More generally, it seems to me that we should be specifying priors at the level of theories / mechanistic models / etc, from which we then derive our priors about propositions like S1, S2, N-bad, N-good, “value concordance”. As opposed to directly consulting our intuitions about the latter.
So in the case of nuclear war, our priors over the long-run welfare consequences should be derived from our priors over the parameters of mechanistic models that we would use to predict how the world evolves conditional on nuclear war vs no nuclear war. And it seems much less clear that there will be a privileged prior over these parameters and that this prior will favor N-bad. (It seems plausible that the appropriate response would be to have imprecise priors over these parameters, and that this would lead to an indeterminate judgement about the total welfare consequences of nuclear war.)
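For what it's worth, here is a minimal sketch of how that last suggestion could play out (the model, parameter names, and numbers are all my own illustrative assumptions, not anything from the comment): if the agent's credal state is a set of priors over a "lesson-learned" risk-reduction parameter, the admissible priors can disagree even about the sign of the long-run expected value, leaving the overall verdict indeterminate.

```python
# Minimal sketch (illustrative model and numbers are mine): an imprecise
# prior -- a set of admissible values for a model parameter -- can leave
# the sign of the long-run expected value of nuclear war indeterminate.

def expected_longrun_value(risk_reduction: float,
                           near_term_harm: float = -1.0,
                           survival_value: float = 20.0) -> float:
    """Crude two-parameter model: nuclear war imposes a fixed near-term harm,
    but reduces later extinction risk by `risk_reduction`, raising the chance
    of realizing `survival_value` (long-run value conditional on survival)."""
    return near_term_harm + risk_reduction * survival_value

# Credal set: suppose the agent cannot privilege a single value of the
# "lesson learned" risk-reduction parameter, so consider a range of them.
for risk_reduction in [0.0, 0.025, 0.05, 0.075, 0.1]:
    ev = expected_longrun_value(risk_reduction)
    print(f"risk_reduction={risk_reduction:.3f}  ->  EV={ev:+.2f}")

# Output spans EV = -1.00 through EV = +1.00: some admissible priors make
# nuclear war look bad in expectation, others good, so the credal set does
# not settle the sign and the verdict comes out indeterminate.
```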