Can you say more? My thought is that vivid imagination enables us to appropriately expand our moral concern. Someone might initially think they had no positive reasons of aid, and then Singer's Pond case convinces them otherwise: they come to appreciate that it would be a horrific mistake to not care, and to do nothing when they could help the child -- or anyone else in similarly dire straits.
Fictional characters are people we're not able to do anything for. So there's an unbridgeable gap between caring about them (being disposed to promote their interests insofar as you're able) and action: the "insofar as you're able" condition can never be triggered. But if you learned that a movie was *not* actually fictional, but rather depicted true and ongoing events, you would presumably recognize a pro tanto reason to help the protagonists (insofar as you are able). So I'm not seeing how the analogy helps you.
I'm wondering what you think explains the putative gap in the non-fictional case of a movie reel accurately depicting the life that Joy *really would* have if brought into existence. Caring about her life involves being disposed to promote her interests insofar as you're able. And, as argued in the OP, it promotes her interests to bring her into (happy) existence. If you explicitly think, while watching the movie reel, about how it depicts a *potential* life that could either come to be, or not, depending on your decision, my suggestion is that any well-adjusted moral agent will feel the stakes of the decision (much like in Singer's pond case).
One might always just *refuse* to acknowledge the reasons revealed by our moral emotions: an egoist could respond to Singer that they "feel" for the drowning child "during imagination", but there's a "gap" between that and "thinking they have a reason" to *do* anything to save the child's life. They might accuse Singer of "conflating that gap." (They might even accuse Singer of "insulting" them for observing that it would seem monstrous to just watch the child drown.) They can say all these things. But they aren't good or convincing things to say. Moral emotions can be normatively revealing. If you want to block the path that they reveal, you need something more than a mere assertion that you refuse to travel it.
Ha, oops, seems I misremembered my own post :-)
But here's a straightforward argument:
1. Misery is harmed by existence, such that we have reason not to create her.
2. The best explanation of (1) is that there are non-comparative harms and benefits: having a life of negative welfare is bad for one, and a life of positive welfare is good for one, when the alternative is non-existence.
3. We have reasons of beneficence to do what is good for a subject, and to avert what is bad for a subject.
So
4. We have reasons of beneficence to create good lives like Joy's.
The alternative suggestion that there are non-comparative harms but no analogous non-comparative benefits is utterly ad hoc and unprincipled.
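For readers who like seeing the logical structure laid bare, the inference from (2) and (3) to (4) can be sketched as a tiny formal derivation. This is purely illustrative: the names (`welfare`, `goodFor`, `reasonToCreate`) are my own labels, not anything from the original post, and a Lean sketch of course doesn't settle the substantive premises, only checks that the conclusion follows from them.

```lean
-- Illustrative formalization of the argument (1)-(4).
-- Premises (2) and (3) are taken as hypotheses; the theorem shows that
-- a reason to create any positive-welfare life (like Joy's) follows.
theorem create_good_lives
    {Person : Type} (welfare : Person → Int)
    (goodFor : Person → Prop) (reasonToCreate : Person → Prop)
    -- (2) non-comparative benefit: a life of positive welfare is good for one
    (noncomp_benefit : ∀ p, welfare p > 0 → goodFor p)
    -- (3) beneficence: reason to bring about what is good for a subject
    (beneficence : ∀ p, goodFor p → reasonToCreate p)
    (joy : Person) (h : welfare joy > 0) :
    reasonToCreate joy :=
  beneficence joy (noncomp_benefit joy h)
```

The parallel derivation for Misery (swapping in non-comparative harm and the aversion clause of beneficence) has exactly the same shape, which is the symmetry point being pressed: accepting one direction while rejecting the other requires rejecting a premise, not just the conclusion.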
I'm not sure what it means to say that "The experiences of Joy lack in intuitiveness": do you mean that they aren't intuitively *good*?
Given deprivationism about the harm of death, denying that positive experiences are good would seem to leave you unable to explain why death is ever bad.
Since neither Misery's nor Joy's life was precisely specified, there doesn't seem much basis for intuitive comparison. I can imagine pairings that go either way. In general, I think harms (and bad lives) are such that they can be more than compensated by sufficient benefits (and good lives). I'm glad that humanity exists, for example.
Note that nothing is either good or bad for those who *don't* exist. So if you grant that some things are good for those who exist, that's everything that *can* be granted as far as the existence of benefits is concerned.
The further moral question is when we should *care* about benefiting someone. My view is that we should *always* want to provide benefits to innocent subjects (insofar as we're able). Your view seems to be that, although Joy's life would be wonderful for her, this gives you no moral reason whatsoever to want this positive outcome to actually eventuate.
I'm trying to better understand this (because what you claim to be "intuitive" is utterly alien to me). Suppose you believe that Joy exists while watching the movie reel of her life. How do you feel about her existence? Like, suppose someone tells you that Joy was *almost* prevented from coming into existence. Would you feel relieved that this obstacle was overcome? Or indifferent? If someone tells you they would (if they could) go back in time to restore the obstacle and prevent her existence, would you be horrified? Or indifferent?
I'm hoping you're not indifferent in those cases (you agree that *that* would be objectionably nihilistic, right?). But now suppose you learn that you're mistaken about the date: actually, Joy has not been conceived yet. So you could actually firm up the obstacle and prevent her from ever coming to exist. Do you suddenly switch to feeling indifferent towards that prospect, and see *no reason at all* to want the happy future you saw to come to fruition? You think that's the "intuitive" verdict / reaction to have about this case?
(P.S. I'm not going to pursue the "package deal" tangent here because the core dispute doesn't hang on it in any way. Lots of people have prioritarian intuitions that I don't especially share, but giving extra weight to averting bads is still compatible with thinking that we always have *some* reason to want good lives to exist.)
My thought was just that it would seem *odd* for one's attitudes towards Joy's life to radically switch back and forth in the described way. As I reported in the closing footnote, it doesn't seem to me that our attitude here should depend on where we are in time.
> "I don't see how that [near-prevention] would make a difference"
I don't see how this relates to the dialectic. I asked a question: "Would you feel relieved that this obstacle was overcome? Or indifferent?" The point of the thought-experiment is just to make *vivid* the unconditional value of Joy's life, and the moral loss involved in replacing it with nothing. (Of course the value of her life does not change depending on whether or not it was almost prevented.)
When we reflect on how *different* people easily could have come into existence, I think we should feel a kind of ambivalence. To see this, imagine that we could vividly apprehend all the future possibilities. All else equal, we should have some desire that GoodLife1 be realized, and a roughly comparable desire that GoodLife2 instead be realized, and so on. But we should unambiguously prefer the disjunction of the good lives (that some good life or other be realized) over no life at all. (And, of course, prefer no life at all over any bad life.)
> "I see no philosophical problem with that horrorlessness."
Sure, as I mentioned in the footnote, I see it as akin to denying that wild animal suffering matters. Someone could consistently hold such a view, and in that sense see "no philosophical problem" with it. But it strikes me as (very) morally bad, and I hope my thought experiments can help at least *some* readers to see why. (In general, vividly imagining & focusing on an excluded interest seems like the best hope for helping someone to see why it's morally problematic to neglect that interest.) If you don't share my core intuition, there's probably not much more to discuss here: we've reached argumentative bedrock.
You explained why you disagree; that's fine, but is not the same as "demonstrating problems". The latter would require presenting reasons that anyone ought to recognize as *good* reasons. I think you're rejecting my arguments for bad reasons. (You disagree. Again, that's fine.)
> "Here is a case where I think your view is intuitively mistaken and (very) morally bad..."
No, that's clearly a bad argument. You can substitute *any* moderate benefit (e.g. feeding hungry mice) and it will seem intuitively wrong for the medical team to ignore the urgent needs of the burn victim merely in order to bring about a bunch more of the other good (making mice happy). You *obviously* can't conclude from this that there is nothing good about moderate benefits (whether to hungry mice, or to future people who would otherwise not get to exist).
> "To my mind your view is on untested ground, not known bedrock."
That's an odd claim. You're not really in any position to know how thoroughly my views here have been tested (by prior reflection). My judgment of the dialectic remains that neither of us is likely to convince the other.