
Sorry to start yet another thread, but I wanted to mention another thought that occurred to me while reading your post against subjectivism:

I agree that "Normative subjectivists have trouble accommodating the datum that all agents have reason to want to avoid future agony" gets at a real problem for subjectivism; but I find it telling that the strongest example you can come up with is avoiding pain. At least for me, my intuitions really are quite asymmetrical with respect to pleasure and pain, and I suspect you picked "avoiding future agony" rather than "achieving future joy" because you have the intuition that the former is a harder bullet to bite than the latter.

I think this asymmetry is why I feel intuitively compelled to rank the unfortunate child against the void in a way I don't with the happy child, and why I dislike the idea of us turning ourselves into anti-humans yet have no strong intuitive reaction against us choosing the void: our reasons for avoiding pain strike me as much more convincing and harder to argue with than our reasons for pursuing pleasure.

In general, I think utilitarianism has a harder time working out the details of what should count toward *positive* utility. This may just be my impression, but I'd guess there's far more controversy over what counts as well-being, which things contribute to it, and in what way, than over what sorts of things contribute to *negative* utility.

Maybe the reason I think of pleasure and pain as asymmetric, then, is that I find utilitarianism's arguments much more convincing when they concern suffering. So perhaps one doesn't need to adopt an extreme view like "all utility functions are bounded above by 0" to explain why it feels more intuitive to reason about preventing suffering than about promoting joy; perhaps it's a matter of moral uncertainty: no plausible competitor theory can hold that it's good to let someone suffer pointlessly, and that's more or less the strongest moral datum we have. But plausible competitors *can* disagree with utilitarian conclusions about well-being.


Yes, I agree that it's much more controversial exactly what contributes to positive well-being. (This isn't specific to utilitarianism.) FWIW, my own view is that positive hedonic states don't really matter all *that* much; they're nice and all, but the things that *really* make life worth living are more objective/external: things like loving relationships, achievements, etc. But as you note, that specific claim about the good is going to be much more controversial than "pain is bad", which makes it a bit more difficult to make specific claims about what's worth pursuing. That's why I try to keep the claim more general: the best lives, *whatever it is* that makes them worth living, are better to have exist than to not exist.


That makes sense to me; I re-read your older post on killing vs. failing to create, and I find the distinction intuitive between "strong personal reasons" to worry about people who will exist independently of our choices and "weak impersonal reasons" to worry about bringing future people into existence.

One thing I hadn't done a good job of separating out: in arguments contrasting the void with future Utopias, the Utopias are often stipulated to be filled with staggeringly large numbers of people, so that even with only weak impersonal reasons to create future lives, the number of people involved is big enough that the overall product is still a huge number. Part of my intuitive rejection of this sort of reasoning is that it feels a bit like a Pascal's mugging; but I was conflating that with a contrast between the void and Utopia *at all*.

And I guess the void still has the unique property that it precludes *any* number of future people existing, so comparisons with it will always have something of a Pascal-ish quality.

Anyway, thanks for a very interesting discussion! I really appreciate your willingness to engage with amateurs like me, and I really enjoy the blog as a whole. I loved Reasons and Persons when I read it years ago, and I'm really glad I've found a place where I can not just follow, but even participate in, discussions on the same issues.


It's only a Pascal's mugging if the one making the argument can just make up any number they want, with no independent argument for an expected range. Some people peripherally involved in long-termist arguments online undoubtedly do this, but the central figures in long-termism do make independent arguments based on the history and mathematics of population growth, growth in technology and wealth, and predictions about the colonization of space.


That's a fair point; it's definitely a lot better that the numbers filling the postulated utopias are not just ex culo.

And I don't want to keep fighting this point on an otherwise dead thread, but I do want to articulate my feeling that, at least in the formulation above, there's still something fuzzy about the math: it's not clear exactly how to multiply "weak impersonal reasons" by large numbers (and, of course, by the probability that those numbers are actually attainable) to come to clear conclusions, and it sometimes feels like the strength of these arguments derives from the stupefaction one feels at the sheer size of the large numbers.

But, as you say, it's a good reminder that the large numbers are actually, in some ways, the least controversial part of that calculation: certainly in comparison to any quantification of "weak impersonal reasons", and probably in comparison to the probabilities too. They aren't (usually) just picked to be stupendously large out of convenience, so thanks for pointing that out.
