10 Comments

I lean towards being anti-confidence-policing, but anyone with a credence above 70% that it's bad (given expert disagreement) is a dogmatist.

Very moderate of you! :-)

Confidence-policing is a new concept to me:

> Rather, it’s the purely procedural criticism of their making a confident assertion as such. The thought seems to be that observers can recognize this as epistemically out of bounds without themselves knowing anything about the issue at all.

A month ago I made a bet against Roman Yampolskiy that his credence of existential catastrophe from AI is too high. I claim that a layperson without specific knowledge of AI risk can know that his credence is too high. In this case, I think "confidence-policing" by this person would be valid. Do you agree? Or is there something different going on here that makes this not confidence-policing?

My bet (archive.is/izuQ2):

"I bet Roman Yampolskiy that his p(doom from AI), which is currently equal to 99.999999%, will drop below 99.99% by January 1st, 2030. If I am right, Roman owes me $100. If I am wrong, and his credence does not drop below 99.99% by 2030, then I owe him $1.

""Bet accepted. To summarize the terms: If your estimate of existential risk due to AI drops from your current credence of 99.999999% to less than 99.99% before 2030, you owe me $100. If it does not drop below 99.99%, I will owe you $1 on January 1st, 2030.

Your current credence, p(doom)=99.999999%, implies that there is at most a 0.01% chance that your credence will ever drop to 99.99%, which is why you risking your $100 to my $1 seems profitable to you in expectation.

On the other hand, this bet seems profitable to me because I think your p(doom)=99.999999% is irrationally high and think that there is >1% chance that you will recognize this, say ""oops"", and throw the number out in favor of a much more reasonable p(doom)<99.99%."""

Yeah, I think that's different: you're making an alternative claim (perhaps based on general background knowledge) about what substantive credence is most reasonable. That seems very different from the kind of purely procedural confidence-policing I have in mind. For the latter, the policers would generally be unwilling to take any bet precisely because that would be *committal* in a way that they are opposed to.

I wonder whether what's going on in these cases is a kind of higher-order evidence thing. When we pronounce on controversial, disputed issues we (often) know that other people have different views to us, or will almost certainly come to a different view on the issue even after considering our arguments. And often those people are our epistemic peers, or at least not epistemically dismissible. So this is evidence that people with the kind of epistemic capacities we have aren't terribly reliable at evaluating the evidence. And that's evidence that, when I give my honest, considered views on such issues, there's a pretty good chance I have misevaluated the evidence. And that should lead me to have low confidence in such matters. So when someone has a very high confidence in such a matter, they're either just ignoring the higher order evidence or, insultingly, presupposing that nobody who disagrees with them is an epistemic peer.

To take a concrete case: when I publish a philosophy paper advancing some view, I can be pretty sure (on the basis of induction) that many, many excellent philosophers will disagree with my view. They might find the view interesting, but it is hardly likely to win universal or even majority assent. I should take that into account, so a high confidence in my view would be unjustified. And third parties, even when they haven't read my (fantastic) paper, can predict all this too, and so they are licensed in thinking I'd be a bit epistemically arrogant if I was very confident in the views I published. So they're usually not doing anything wrong in criticizing me for my high confidence. Either I'm misevaluating the higher-order evidence, or I'm just assuming all other philosophers are my epistemic inferiors.

I disagree!

I mean, it's an interesting argument, but I think it rests on mistaken assumptions about higher-order evidence.

Consider: By "epistemic peer", do you mean: (i) someone who has a comparable degree of *procedural* rationality / epistemic virtues, or (ii) someone who is comparably *substantively* rational, and so equally likely to be *getting things right*?

It would be incoherent to think that someone with radically different philosophical starting-points from you could still be an epistemic peer in the second sense. (If you really believed that, you would have to abandon your position entirely and become a radical skeptic.) So you shouldn't think that. But merely procedural peerhood isn't epistemically undermining (or, at least, is no more epistemically undermining than the prior modal fact that there are possible alternative views to ours that are internally coherent/defensible -- something we know perfectly well without needing to see actual "excellent philosophers" defending those possible views).

See: https://www.philosophyetc.net/2021/06/philosophical-pluralism-and-modest.html

Though, fwiw, I think plenty of actually-existing disagreement (at least on relatively specific questions) is more tractable and really just stems from ignorance. Obviously big-picture questions about *which overall philosophical worldview is correct* are open to immense reasonable dispute. But most papers aim to make a more specific point. And often, the points they make are -- once noticed -- clearly correct, in a way that even someone who disagrees on "big picture" issues should still appreciate.

Lots of the examples from my "myth-busting" post are like this. For example: the myth that "Utilitarianism doesn’t value individuals, or values them only instrumentally." I don't think anyone could reasonably believe this after reading my 'Value Receptacles' paper. They could have other objections to utilitarianism, of course. But I really think I refuted that one decisively. (I'm open to revising that judgment, but I would be genuinely *very* surprised if it happened.)

I've come across your substack via a 2008 blog post criticising suspension of judgment as not an admirable position (https://www.philosophyetc.net/2008/07/why-suspend-judgment.html?lr=1&m=1). I wrote a comment there that couldn't be posted (I've pasted it below), and reading this post alongside that one makes me want to ask a question: Why do you do philosophy; what is its aim? I'm not asking why people in general do philosophy or why philosophy has value; I'm asking you, the author of this substack, Richard Y Chappell: why do you do philosophy? And if you think that's a bad or uninteresting question, why is it?

My original comment:

I apologize for commenting so long after this was written, but I am struck by the word "admirable", which, if I've learned anything from linguistics and economics, displays a revealed preference for the approval of others. Ataraxia, however, is a therapeutic goal: it provides peace as an alternative to the cacophony of an endless debate.

Could it be that there are different goals at play for your imagined hypothetical opponent here? Sure, the judgment-suspender is less admirable to you and to others, maybe many others, but I think Pyrrhonists would respond: Sure, it's less admirable; I don't care. Philosophy isn't a tool for gaining the admiration of others for me, even if it is for you.

I take the aim of philosophy to be to seek true beliefs, or epistemically warranted degrees of belief, in interesting philosophical questions.

"Admirable" there is a normative term, indicating what *warrants* positive evaluation, not what will *actually* "gain the admiration of others". (No doubt there are things other than philosophy that would better serve the latter goal.) Feel free to substitute another term for positive epistemic evaluation, like "rational" or "wise", if you prefer. My point there was just that suspending belief isn't especially epistemically good, and shouldn't be assumed to be the ideal response to uncertainty. Adopting a best-guess credence or *degree* of belief may be epistemically better.

User was indefinitely suspended for this comment.

I'm not sure what any of this has to do with what I wrote. To promote higher standards, I'm gonna throw out a ban for "random / irrelevant commentary".
