Against Confidence-Policing
It's generally good for academics to share their honest, considered opinions
Everyone has opinions. Ideally, you want to hear ones that are both interesting and epistemically well-grounded. It can be especially valuable to hear well-grounded thoughts that offer novel insights, by differing from the “conventional wisdom” on an important topic. Ideally, social norms should encourage such high-value contributions to the epistemic commons, despite the risk that they could be mistaken.1
Not all contributions are helpful. Ignorant or poorly-considered contributions add “noise” that can obstruct better-considered takes. This can pollute the epistemic commons, as can social pressures to enforce orthodox or high-status opinions independently of their epistemic merits. It’s generally easy enough to just ignore wild conspiracy theories and the like, so I tend to think the greater threat here (at least in academic circles) is likely to be unthinking orthodoxy-enforcement. If social norms are to discourage any kind of epistemic contribution, that sort would be at the top of my list.
Alas, the actual social norms that seem to operate in online discussions don’t serve these goals very well. In this post I’ll focus on two in particular: lane-policing (briefly), and confidence-policing (in greater depth).
Lane-Policing
Non-experts should have the humility and self-awareness to recognize that they could very easily be missing crucial background knowledge. When venturing beyond our areas of expertise, we should obviously be receptive to correction from those who know more. Even so, within our areas of expertise, I think we should generally welcome well-considered questions and challenges from outside perspectives, since it’s always possible for experts to make mistakes, build upon questionable foundations, or overreach beyond their true expertise (see, e.g., ‘There’s no such thing as following the science’). It’s very important to identify such mistakes, and overzealous lane-policing makes such corrections less likely.
So it’s often a bad idea to admonish non-experts to shut up and “stay in [their] lane”. If they make silly mistakes, then by all means correct them. But to be epistemically closed to outside contributions and perspectives is an implicit invitation to groupthink, and should be avoided.
Confidence-Policing
Lane-policing was widely recognized and discussed following the pandemic. A different form of epistemic norm-enforcement that seems less often discussed is confidence-policing: admonishing someone for taking a confident view on a disputed topic.
Crucially, the kind of criticism I’m interested in here is not one that emerges from carefully judging the first-order merits of the issue and concluding that the target person’s credence diverges from what is objectively warranted (i.e., substantive overconfidence). Rather, it’s the purely procedural criticism of their making a confident assertion as such. The thought seems to be that observers can recognize this as epistemically out of bounds without themselves knowing anything about the issue at all. I find this thought quite extraordinary. Yet it seems surprisingly common.
Example 1: When my ‘Lessons from the Pandemic’ post was shared on Daily Nous, the first comment questioned my “authority to pronounce so confidently on complex causal matters such as these”—as though having and expressing thoughts was verboten, until granted a permission slip by suitable authorities! (As I replied, I claimed no special authority, and was very open to correction if anyone had evidence that my background assumptions were mistaken. But the points I raised were clearly worth taking seriously because it’s easy to see that (i) excessive conservatism could do immense harm in a pandemic; and (ii) there’s plenty of publicly available evidence that the medical/policy establishment is, in fact, extremely conservative in the ways I suggested, and no indication that they are adequately aware of the downsides of this.)
Example 2: Just the other day, someone commented on my ‘Philosophical Myth-busting’ post that they thought it was “staggeringly arrogant” for an academic to claim that their academic work reveals a common assumption to be “mistaken”.
In this case, it emerged, the commenter apparently thought that asserting another view to be “mistaken” entailed dogmatic certainty in this verdict (despite my explicit invitation to readers to “feel free to dispute” any of my suggestions).
This seems a very common mistake. People conflate epistemic confidence with dogmatism, as though merely judging it unlikely that there are any good counterarguments entails being disposed to dismiss good counterarguments even were one to be (surprisingly) presented.2
It’s important to appreciate that these are different things! Note, after all, that it may be true that some claim is in fact better supported than its negation (and a subject-area expert may be in a position to know this). If it is true, and one is in a position to know it, then high confidence is presumably rationally warranted. But even then, one should always remain open to the possibility of unexpectedly good counterarguments arising.
Open-Mindedness ≠ In-Betweenism
I think the core issue here is that too many people don’t understand the difference between what constitutes being open-minded vs what conventionally signals open-mindedness.3 Because dogmatists are typically very confident, many people associate the two and come to mistakenly infer that any confident agent must thereby be dogmatic. Conversely, they take epistemic timidity and in-betweenism to signal open-mindedness and epistemic virtue, regardless of the first-order merits of the case. (Ironically, this means that they are closed to the possibility that a stronger, less “moderate” verdict might actually be most warranted.)4
I think this is all bad, and people should take greater care to distinguish non-middling first-order verdicts from a dispositional lack of receptivity to new evidence or arguments. To this end, it may be helpful to keep both of the following possibilities explicitly in view:
(1) Reasonably coming to a strong conclusion, whilst remaining open to (surprising) future corrections; and
(2) Dogmatic refusal to countenance any arguments for non-“moderate” verdicts.
So: norms against dogmatism do not support indiscriminate confidence-policing. Genuine open-mindedness instead means using your best judgment without pre-judging where that will land you. Since confidence-policing prejudges matters in favour of moderate views, it conflicts with genuine open-mindedness.
Self-Promotion
A final thought is that people might be tempted to apply good norms for personal interactions to the very different context of asynchronous, online posting.
I know lots of academics feel reticent about sharing their papers on social media, etc., due to distaste for “self-promotion”. This strikes me as unfortunate, since I rather think they’d be doing the rest of us a favour by sharing their interesting new arguments! (It’s always easy enough for us to “unfollow” people we don’t find interesting, or just scroll past a particular update.)
It’s very different in offline contexts, where those around you have not specifically “opted in” to hearing your thoughts, and can’t easily/politely leave if they’re not enjoying them. Self-aggrandizing behaviour there is obnoxious and costly to others. I’m a big fan of Nagel’s classic paper, ‘Concealment and Exposure’, and the norms of reticence for personal interactions that are recommended therein:5
> What is allowed to become public and what is kept private in any given transaction will depend on what needs to be taken into collective consideration for the purposes of the transaction and what would on the contrary disrupt it if introduced into the public space… [I]f the conventions of reticence are well designed, material will be excluded if the demand for a collective or public reaction to it would interfere with the purpose of the encounter.
Online posting, by contrast, isn’t a forced interaction but simply an invitation for those who are interested to follow along. Since no-one is forced to read you, there’s minimal cost to being annoying (to some), whereas there’s significant benefit to being interesting (to others). This strongly shifts the balance of what it’s socially beneficial to share, towards greater disclosure than would be welcome, uninvited, in person.
Accordingly, we should all want to see more people sharing their ideas and opinions online, so that we can more easily find more that’s of interest to us. That goes double for academics, who are disproportionately likely to come up with valuable new ideas (if doing their jobs properly).
Conclusion
Good norms for improving the epistemic commons require us to invite novel insights (and so avoid expressing unthinking conformism), reject lane-policing, reject confidence-policing (as unreasonably prejudging the correct verdicts to reach at the end of inquiry), and encourage greater sharing of ideas—including “self-promotion”—in “opt-in” spaces than would be desirable in inescapable common areas like the workplace.
So, if you’re a generally reasonable and epistemically responsible person: start a blog, and share more of your considered opinions on important topics! It could easily help others (and the epistemic commons), and it seems unlikely to do them (or it) harm.
Or am I missing something?
1. See also Epistemic Cheems Mindset: “When faced with significant uncertainty, epistemic cheems mindset tells us to suspend judgment, and think no more on the matter until respected authorities tell us it’s okay to do otherwise. But again, this is a serious obstacle to progress in an uncertain world… Ideas are worth exploring, and if we sometimes get it wrong then that’s okay: an acceptable cost for more often getting things right when it really counts.”

2. It’s also worth stressing that even substantively bad arguments (ones that shouldn’t really lead readers to update their credences) may nonetheless have many philosophical virtues—even ones sufficient for academic publication, discussion, etc.

3. It’s interesting to compare this to the distinction between being benevolent vs conventionally signaling benevolence, which many critics of EA similarly fail to distinguish.

4. This is my biggest annoyance with the EA Forum: everyone loves “take underdog view X more seriously” posts, and hates any suggestion that “X is clearly wrong”. But you can’t determine that the former view is better just as a matter of form: it depends on the substantive details! As I put it here, too many are prioritizing “cheap signals of epistemic virtue… over actually doing the epistemically virtuous work of assessing arguments.”

5. If anything, I’m probably often more reticent in person than would be ideal, due to high agreeableness, introversion, and social anxiety: but I’m guessing my mild-mannered conflict-aversion doesn’t come through so clearly in my public writing!
Comments

I lean towards being anti confidence-policing, but anyone whose credence that it's bad (given expert disagreement) is above 70% is a dogmatist.
Confidence-policing is a new concept to me:
> Rather, it’s the purely procedural criticism of their making a confident assertion as such. The thought seems to be that observers can recognize this as epistemically out of bounds without themselves knowing anything about the issue at all.
A month ago I made a bet against Roman Yampolskiy that his credence in existential catastrophe from AI is too high. I claim that a layperson without specific knowledge of AI risk can know that his credence is too high. In this case, I think "confidence-policing" by this person would be valid. Do you agree? Is there something different going on here making this not confidence-policing?
My bet (archive.is/izuQ2):
"I bet Roman Yampolskiy that his p(doom from AI), which is currently equal to 99.999999%, will drop below 99.99% by January 1st, 2030. If I am right, Roman owes me $100. If I am wrong, and his credence does not drop below 99.99% by 2030, then I owe him $1.
""Bet accepted. To summarize the terms: If your estimate of existential risk due to AI drops from your current credence of 99.999999% to less than 99.99% before 2030, you owe me $100. If it does not drop below 99.99%, I will owe you $1 on January 1st, 2030.
Your current credence, p(doom)=99.999999%, implies that there is at most a 0.01% chance that your credence will ever drop to 99.99%, which is why you risking your $100 to my $1 seems profitable to you in expectation.
On the other hand, this bet seems profitable to me because I think your p(doom)=99.999999% is irrationally high and think that there is >1% chance that you will recognize this, say ""oops"", and throw the number out in favor of a much more reasonable p(doom)<99.99%."""