9 Comments
Mar 29 · Liked by Richard Y Chappell

Have you seen Tim Williamson's new work on heuristics in philosophy? https://www.philosophy.ox.ac.uk/sitefiles/overfittingdraftch1.pdf

Seems similar in spirit to your last paragraph.

author

That's new to me, thanks for the pointer!


Excellent article, but it frustrates me a bit that it's necessary.

Between this epistemic mistake and the negative utilitarian stuff, it seems like there's a decent minority of philosophers who just love the idea of being pro-end-of-the-world.

I'd rather not speculate on the motivations. Maybe it is, as you say, mostly people taking formal arguments too far, but that doesn't explain the fairly unrelated negative utilitarian arguments, which (in my experience) are often endorsed by the same people as the risk-aversion arguments.

And this would be fine on its own; weird philosophical views (Lewisian modal realism, say) attract defenders all the time. But I've seen the "maybe extinction is good actually" line brought up as an argument in debates on AI risk policy and existential risk mitigation.

author

To be clear: Pettigrew does not endorse the pro-extinctionist implication!

He's officially neutral on what background assumption should be jettisoned to avoid it, but my impression -- at least from an earlier draft of the paper -- was that he thought longtermism might be more to blame. I didn't bother to discuss that here because I think it's so clear that the problem stems entirely from his interpretation of risk aversion. Taking the interests of future generations into account shouldn't lead us to wish them not to exist at all, at least when their expected well-being is positive -- or so I'd insist. (And it would be bizarre to say that it's only morally OK to allow future generations to exist insofar as we can morally ignore their interests!) But Pettigrew might not agree with my background judgments here.

More generally, I don't think there's more than a small handful of academic philosophers who are explicitly pro-extinction. Many more endorse views (e.g. the procreation asymmetry) that push in that direction, but my sense is that they usually at least try (perhaps unsuccessfully) to resist the implication. I suspect that pro-extinctionism is actually much more common amongst non-academics (e.g. folks at S-risk think tanks, along with trendy (non-EA) misanthropic environmentalist types). I could be wrong, though. It certainly is an appalling view, whoever holds it.


I think an important element is whether the probability of a future full of suffering is actually tiny. That changes the calculations a lot.

If you say "Do you want to be born, given a 1 in a million chance of suffering?", it sounds silly to refuse. But is 1 in a million the correct number?

If I say "Do you want to be reincarnated as a random animal, given a 90% chance of being born as a small fish or bug that starves within days, a 9% chance of being a factory-farmed animal spending its whole life in a cramped cage, and a 1% chance of being a successful human or animal that reaches adulthood?", the conclusions should be pretty different.
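
To make the contrast concrete, here's a rough expected-welfare calculation in Python. The welfare numbers are made up purely for illustration; only the probabilities come from the examples above.

```python
# Rough expected-welfare comparison of the two lotteries above.
# The welfare values are hypothetical, chosen only to illustrate the point.

starving_fish, caged_animal, good_life = -10.0, -50.0, 100.0

# The "1 in a million chance of suffering" lottery:
ev_optimistic = (1 - 1e-6) * good_life + 1e-6 * caged_animal

# The 90% / 9% / 1% reincarnation lottery:
ev_pessimistic = 0.90 * starving_fish + 0.09 * caged_animal + 0.01 * good_life

print(ev_optimistic)   # ~100.0: clearly worth taking
print(ev_pessimistic)  # -12.5: clearly not
```

Same structure of argument, opposite verdicts, driven entirely by which probabilities you plug in.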

author

Yes, that's a different issue. I'm using Pettigrew's numbers, which are what's relevant to his argument and to the particular kind of risk-aversion he invokes.
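
For anyone unfamiliar with the formalism: the kind of risk-aversion at issue is Buchak-style risk-weighted expected utility, which discounts each improvement over the worst case by a risk function applied to the probability of doing at least that well. Here's a minimal sketch in Python; the utilities and the risk function below are illustrative stand-ins, not Pettigrew's own numbers.

```python
# Minimal sketch of Buchak-style risk-weighted expected utility (REU).
# The gamble, utilities, and risk function are illustrative only.

def reu(lottery, r):
    """lottery: list of (probability, utility) pairs; r: risk function."""
    lottery = sorted(lottery, key=lambda pu: pu[1])   # order worst to best
    probs = [p for p, _ in lottery]
    utils = [u for _, u in lottery]
    total = utils[0]                                  # guaranteed minimum
    for j in range(1, len(utils)):
        tail = sum(probs[j:])                         # P(doing at least this well)
        total += r(tail) * (utils[j] - utils[j - 1])  # risk-weighted improvement
    return total

risk_averse = lambda p: p ** 2   # convex r underweights the good tail outcomes

# A gamble: tiny chance of a terrible future, large chance of a good one.
gamble = [(0.001, -1000.0), (0.999, 100.0)]
print(reu(gamble, risk_averse))           # ~97.8
print(sum(p * u for p, u in gamble))      # plain expected value: 98.9
```

With a convex risk function like r(p) = p², bad tail outcomes loom larger than their probability alone would suggest, and that feature is what does the work in the argument I'm criticizing.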


"Recall that risk-averse decision theories are motivated purely by intuitions about cases." Are we sure that's uncontroversial?

Also worth distinguishing risk-aversion (sensitivity to known probabilities of bad outcomes) from ambiguity aversion (sensitivity to unknown or imprecise probabilities), or maybe you've done this already.

author

It's the only motivation I've heard on their behalf. Other suggestions welcome, though!

Mar 30 · edited Mar 30

I grasp the basic distinction between intuitions-about-cases and intuitions-about-principles, but I'd like to learn more about it.
