The Blog Prize folks are eliciting ideas on how to “build agency” (or practical ambition). I’ll be interested to see what others come up with, since I often find exercising agency to be extremely draining—so much so that I was moved to write a paper, ‘Willpower Satisficing’, arguing (amongst other things) that it’s demands on our will, rather than our wallet, that are really relevant to “the demandingness of morality”. One upshot of that paper is the importance of good social norms (as promoted, for example, by Giving What We Can) that push us towards doing bigger and better things by default. Given that it’s easier to do what’s expected than to go against the grain, one simple method for building agency—that I think a lot of people in the EA community would vouch for—is to surround yourself with people who share your ambitions and have suitably high expectations.
In this post I want to focus on a different obstacle to exercising agency, which we might call “epistemic cheems mindset”. The original cheems mindset involves “automatically dismissing an idea on the basis that it cannot be done, or would be hard to do.” The epistemic version involves automatically dismissing an idea on the basis that it involves significant uncertainty, or could be questioned. As with the original, to adopt the epistemic cheems mindset is to preclude any hope of progress in an uncertain world, where one can always find some reason to oppose a new idea.
Progress requires a willingness to consider ideas “in a positive, can do way”; but to focus exclusively on implementation presumes that we’ve already found the best policy. Often, the biggest challenge is working out what policy we should even want to see implemented. And to make progress on that crucial prior question, we need not just good epistemics, but also epistemic ambition: a willingness to form (tentative, revisable) judgments, even in the face of uncertainty.
I think this is a very alien idea to many people. They associate epistemic responsibility with humility, and humility with a reticence to make positive claims in the face of uncertainty. This may be because people are often biased, and once they take a position they may become dogmatic and unwilling to reconsider it. But we should be clear that the problem is the dogmatism, not the position-taking. Ideally, we should attach credences (or probability judgments) to as many important claims as we can, and then continue to revise them as new evidence comes in.
It’s much more valuable to hear that a best-estimate probability for some proposition is 40%, or 65%, than to just hear the silence of a suspended judgment. Suspending judgment may keep you safe from criticism, but intelligent people should be able to contribute something more positive than that to our epistemic commons. (I’m hopeful that the growth of the forecasting community, prediction markets, etc., might gradually help to shift these norms.)
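As an aside, the "revise them as new evidence comes in" step has a precise form: Bayesian updating. Here is a minimal illustrative sketch (my own addition, not from the original post; the 40% prior and the likelihood ratio are purely made-up numbers):

```python
def bayes_update(prior: float, likelihood_ratio: float) -> float:
    """Revise a credence in light of new evidence, using Bayes' rule in odds form.

    likelihood_ratio = P(evidence | proposition) / P(evidence | not proposition)
    """
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Start from a 40% best-estimate credence, then observe evidence that is
# twice as likely if the proposition is true as if it is false.
credence = 0.40
credence = bayes_update(credence, 2.0)
print(f"revised credence: {credence:.0%}")  # revised credence: 57%
```

The point isn't the particular numbers, of course, but that a stated credence gives others something definite to update on or dispute, in a way that suspended judgment doesn't.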
This problem became particularly vivid to me when I began working on pandemic ethics. The groupthink in liberal-academic circles during the early pandemic was oppressive. Orthodoxy-enforcers would express concern about the risk of harm from (even voicing) unconventional policy ideas, without any apparent concern for the corresponding (and, in my view, far greater) risk of harm from failing to adequately explore the option space and being left with bad policies by default.
As a result, I found participating in the online discourse often stressful and unpleasant. It’s a high-stakes topic, and obviously aroused strong feelings in people. But I’m glad I pushed through, first publishing an op-ed in the Washington Post (co-authored with Peter Singer) making the moral case for experiments on human volunteers, followed by a handful of academic papers culminating in my paper on ‘Status Quo Risk’ that I summarized here.
Something I found very striking throughout this discourse was the inertial force of a certain kind of epistemic conservatism. Many people seemed outraged by my “confidence” that a policy of experimental vaccination + variolation from early in the pandemic would have had (very) high expected value—not because they thought a different point estimate would be better warranted, but just because (it seemed) they didn’t think it appropriate to form any judgment on the matter at all.
When faced with significant uncertainty, epistemic cheems mindset tells us to suspend judgment, and think no more on the matter until respected authorities tell us it’s okay to do otherwise. But again, this is a serious obstacle to progress in an uncertain world. When default outcomes are terrible (as in a pandemic) we urgently need to be capable of finding better alternatives. So we do much better, in my view, to form the best estimates that we can (given our available evidence), while of course remaining open to revising these estimates in light of new evidence.
Abandoning epistemic cheems mindset can be liberating. Once you’re committed to forming some judgment on a question—some best estimate of a policy proposal’s expected value—it takes substantive argument (addressing the first-order issues on their merits) to criticize your answer. Critics can no longer fall back on irritating charges of “over-confidence”; they need to actually address the issue, which is far more important and rewarding.
So, one piece of advice I have to offer on building agency is to first boost one’s epistemic ambitions by being willing to form judgments in the face of uncertainty. To this end, it helps to oppose epistemic cheems mindset by rejecting the assumption that there’s anything inherently dogmatic about forming judgments. I’d say it is more epistemically virtuous to use your best judgment than to suspend it. (Of course, suspending judgment is often fine, even if it isn’t ideal.) Rather than valorizing epistemic timidity, we may aim instead at an ideal of bold views, weakly held.
Ideas are worth exploring, and if we sometimes get it wrong then that’s okay: an acceptable cost for more often getting things right when it really counts. And especially in any situation where the status quo risks are severe, we should be at least as concerned about mistakenly neglecting a good policy solution as we are about mistakenly advancing a bad policy.
To fix misaligned social incentives, we should all try to blame others less for exploring new ideas—even ones that ultimately prove misguided. At least half of our epistemic sanctions should be directed towards those who are unduly conservative or closed-minded. I would even go further, and argue that excessive conservatism is much the greater risk—and so, if anything, a greater share of our epistemic sanctions should be directed against that error.
As the original cheems post concludes:
> It may be that after consideration many seemingly good ideas are non-starters, but unless we consider them in a positive, can do way—rather than reflexively shooting them down—we will never truly know. Unless we vanquish cheems mindset, we will never be as successful as we otherwise could be.
> "Something I found very striking throughout this discourse was the inertial force of a certain kind of epistemic conservatism. Many people seemed outraged by my “confidence” that a policy of experimental vaccination + variolation from early in the pandemic would have had (very) high expected value—not because they thought a different point estimate would be better warranted, but just because (it seemed) they didn’t think it appropriate to form any judgment on the matter at all."
Great insight; I've encountered this phenomenon as well. I can't recall a specific example, but I know I've made claims about affecting the course of the far future (more than a couple of decades out) and encountered people who claim that I can't know an action has positive expected value that far out. They don't seem to want to defend the view that the expected value is zero; they just assert, without argument, that it's unknowable.