(1) On the philosophical issue: Michael covers the matter well. But to justify the asymmetry in a more straightforward way - arguments for creating net-negative lives are self-defeating, while arguments for creating net-positive lives are question-begging.
If you ground normative value in conscious beings and their preferences & interests (i.e. my life and welfare matter because there is *this* perspective from which these things matter and are valuable), then if such beings/perspectives *don't exist and never will* unless we create them, there is no non-question-begging reason to create them (i.e. the argument goes: we should create such people because their lives matter; but their lives matter only because they want to live; and they would only want to live if we create them and they exist in the first place).
In contrast, the reasons against creating bad lives are not circular, because creating such lives is self-defeating (i.e. if we create them, then their lives are bad and we shouldn't have created them). There is no circularity here - conditional on us creating them, we have reason to regret our decision, and so have reason not to create them in the first place.
(2) On the practical side - within the EA space, I do think totalist views on population don't necessarily lead to prioritizing existential risk, for two reasons. (a) Firstly, saving a life, at whatever point, creates future lives (since people have kids, who have kids, who have kids, etc.), and to the extent that global fertility rates are dropping, it's much more valuable to save lives now than in the future (especially in the locations where AMF et al. operate). (b) Secondly, unless you think there will be a population bounce-back after a near-catastrophe (e.g. one where 90% of the world is wiped out), in the sense of the remaining population having more kids than they otherwise would have had sans catastrophe (which is extremely unlikely - the converse is far more probable, in fact), such near-catastrophes simply level down the aggregate future human population. On that accounting, 100% dying in the catastrophe would indeed be only about 10% worse than 90% dying, not disproportionately worse.
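To make the arithmetic behind (a) and (b) concrete, here's a minimal sketch - the function names, fertility figures, and generation counts are all made up for illustration, not forecasts:

```python
# Illustrative totalist arithmetic for (a) and (b); every number here
# is hypothetical, chosen only to show the shape of the argument.

def descendants_of_saved_life(fertility, generations=5):
    """(a) Total lives downstream of one saved life, assuming each
    person averages `fertility` children who survive to reproduce."""
    return sum(fertility ** g for g in range(generations + 1))

# Falling fertility means a life saved now (say, 1.2 surviving kids
# per person) carries more future lives than one saved later (0.8):
print(descendants_of_saved_life(1.2))  # ~9.9 lives
print(descendants_of_saved_life(0.8))  # ~3.7 lives

def total_future_welfare(survivors, generations=10):
    """(b) Welfare summed over future generations, assuming no
    post-catastrophe fertility bounce-back (population stays level,
    at 1 unit of welfare per life per generation)."""
    return survivors * generations

baseline  = total_future_welfare(100)  # no catastrophe -> 1000
after_90  = total_future_welfare(10)   # 90% die        -> 100
after_100 = total_future_welfare(0)    # extinction     -> 0

# Extinction loses the whole future (1000 units) vs. 90% of it (900):
# only ~11% worse than the near-catastrophe, not disproportionately so.
print(baseline - after_90, baseline - after_100)
```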
Your (1) just assumes that we have structural reasons to avoid regrettable (self-conditionally negative) outcomes, but no structural reasons to pursue desirable (self-conditionally positive) ones. I don't see any reason to assume that.
But as I flagged to Michael, I think the deeper issue here is that resorting to merely structural reasons is itself undermotivated. We should just directly appreciate that miserable lives are bad, even in prospect (and, correspondingly, that awesome lives are good, even in prospect). I guess you can call this "question-begging" if you like; many obvious truths are.