Richard Y Chappell:

I'm not claiming that these are *principles*. I just think they're obviously true claims about what we ought to believe. (It remains an open question *why*.) As I put it in the OP:

The question is, what kind of prior is most reasonable:

(i) One that builds in a default assumption of value concordance across different time-frames, e.g. taking the immense near- and medium-term harms of global nuclear war to be some reason to expect worse long-term outcomes;

or

(ii) One that takes value discordance to be just as likely, and so sees no reason whatsoever to expect global nuclear war to be overall bad.

It just seems very obvious to me that (i) is more reasonable, and I hope that most readers will agree (just as I hope that most will agree that we should assign higher credence to S1 than S2, above). But if you don’t, I probably don’t have much more to offer, besides an incredulous stare.

Jesse Clifton:

> I'm not claiming that these are *principles*. I just think they're obviously true claims about what we ought to believe. (It remains an open question *why*.)

Ok, sorry about that. I don’t understand yet, though. Is the idea that

1) it’s just a brute normative fact that our prior should have that property? (I guess when I said “principle” I meant to include this kind of view; probably I’m not using standard terminology.)

2) you have the strong intuition that there are more fundamental, as-yet-unidentified normative facts that imply that property?

3) something else?

Richard Y Chappell:

I'm remaining neutral about what the deeper explanation is (if any). I'm personally fine with taking the correct prior to be brute -- explanation has to stop somewhere -- but if further systematization turns out to be possible, that would certainly be nice. I just don't think that commonsense wisdom should be held hostage to that sort of philosophizing.

Of course, if someone comes up with a surprising new argument for why I should positively find it more likely that the world is only 5 mins old, or that nuclear war is more likely to be positive than negative on net, I'm open to considering those arguments. Maybe they'll change my mind! But in the absence of any such first-order argument, I don't regard my current beliefs as susceptible to higher-order debunking or undermining. Whatever abstract philosophical premises of that sort the skeptic tries to appeal to are going to be vastly less plausible to me than the "Moorean" first-order datum of commonsense that I started with. And if two claims are in conflict, we should give up the less credible of the two.

If you find your abstract epistemic principles more obviously credible than the reality of the past, or the instrumental badness of nuclear war, etc., then (again) I don't expect to be able to persuade you otherwise. But I do think such an epistemic prioritization would constitute a kind of poor judgment. (I also grant that the situation is symmetric: from your perspective, you could equally judge my epistemic priorities to constitute poor judgment.)

Anthony DiGiovanni:

Sounds like a false dichotomy. You could also have option (iii): A set of priors that regards it as indeterminate whether value concordance or value discordance is more likely. (I defend this kind of perspective here: https://forum.effectivealtruism.org/posts/NKx8sHcAyCiKT723b/should-you-go-with-your-best-guess-against-precise)
