> If the worry is just that, structurally, we're committed to accepting *some* 50/50 gamble, I guess I'd want to hear more about what view avoids this possibility, and what other problems it faces instead.
Oh sure, I agree that you can't avoid having to pick some gamble like that. I guess the question is, does the move to diminishing marginal value matter here, or do we just want to say something like, yes, expected-value-maximization says we should take some gamble of this form, but
a) your alternative pet theory probably does the same, a la your "Puzzles for Everyone" post, and
b) we shouldn't imagine we are conceptualizing both ends of the gamble correctly, so we should be wary of relying too heavily on our intuition here.
> So, in particular, an expressivist who wants there to be good lives in future, and favours others sharing this desire, could affirm this by calling good lives "intrinsically valuable".
I'm not sure I totally understand--how is this different from the expressivist just having a preference for future good lives? I suppose from their point of view, they would say "I don't think this is good just because it satisfies my preferences", but from an outside view, it seems to me hard to distinguish an opinion on "intrinsic value" from a preference, at least from the point of view of a non-moral-realist.
> I would be "pro-utopia, even on the hypothetical condition that I cease to be pro-utopia").
I guess this is where we disagree. I am basically fine with the idea of the anti-natalists winning, as long as they do so by honourable means.
> even if there isn't (yet) any definite person for whose sake we would be acting when we tell Sally not to have the miserable child.
> By parity of reasoning, there's no basis for denying the parallel claims about positive welfare being intrinsically good.
I agree that *if* we bring a child into the world, and they love their life, we can regard that as a benefit for the child...but only conditional on bringing them into the world. If I had to summarize the view I think I'm arguing for, it would be something like, "you only have to care about the benefits/harms to a person in the worlds where they actually exist"--so Sally's child is harmed by being "forced" to live in a world where they will suffer; and a person with a good life is benefited by being born in any of the worlds in which they are, in fact, born. But in the worlds where a person is not born, we don't have to weight their benefits/harms in our calculation of what to do. We can *choose* to do so, as a matter of our personal preferences, or for other instrumental reasons, but I don't see why there is any intrinsic reason to do so.
Quick counterexample to your last claim: suppose Sally flips a coin to decide whether to create a miserable child. Fortunately, the coin directs her not to. But now your view implies that Sally needn't have taken into account the interests of the child who would've been miserable. But this seems wrong. Sally was wrong to flip a coin, and take a 50% risk of causing such suffering. She should have outright (100%) chosen not to have the miserable child, and she should have done it out of concern for that child's interests.
> from an outside view, it seems to me hard to distinguish an opinion on "intrinsic value" from a preference, at least from the point of view of a non-moral-realist.
Yeah, I'm no expert on expressivism, but a couple of possibilities:
(1) The relevant thing might be that it's a special kind of universal higher-order preference: they want *everyone* to have the relevant first-order preference.
(2) Alternatively, it might be that they're in favour of blaming or otherwise morally criticizing people who don't have the relevant first-order preference.
(There may be other options!)
Sorry, I realized overnight that I missed the point that in the example where we don't create the child, the void is ranked against the world where the miserable child is born; if we can do a comparison in that case, why not in the other case?
That actually feels pretty convincing to me. I still feel conflicted about this, but I think that if I really want to believe the void isn't worse than Utopia, I really do need an explicit person-affecting view, or an explicit asymmetry between negative welfare and positive welfare.
Thanks for the very good response!