Yeah, definitely World 2. I think it's fine to give moderately more weight to relieving suffering than to promoting the good, but not lexical priority or anything close to it.
If you actually vividly imagine World 2 and picture some of those wonderful lives, then ask yourself: would it be morally better for 99% of those wonderful lives never to have existed, just to prevent this *one* other bad life? That doesn't seem "intuitive" to me at all.
To clarify the position I've defended: what Rebecca and I said in our paper was not that World 1 is better than World 2 (Richard and I agree that World 2 is better). Rather, our claim was that if you can actualize any one of an infinitely ascending hierarchy of better and better worlds--including World 3, which is just like World 2 except that the suffering person instead gets a great life--then it seems objectionable to create World 2, screwing over that person for no reason, in a way that it doesn't seem objectionable to create World 1, even though World 2 is better than World 1. To get that result, you need person-directed reasons to factor into deliberation in some special way, different from how undirected reasons factor in--which Richard agrees with. We were thinking that isn't really compatible with consequentialism in the sense people usually mean it, but I guess Richard suggests above that he's not too worried about that--so there may not be a deep difference between us here after all.
I would want to live those lives. Why would you prevent someone from living a life that they clearly have good reason to want to live?
E.g. suppose you distribute the suffering so that there is just one second of suffering per (otherwise blissful) year, until the suffering reaches 1000 years (or whatever) in aggregate, after which point the years are purely blissful, without even so much as a second of suffering. It seems crazy to regard this as a bad life.