© 2025 Richard Y Chappell
This is a great post! Somehow Utilitarianism has become an easy target for difficult problems, likely, as you say, because it is sufficiently rigorous to surface them.
I'm curious whether anyone has done work on moral uncertainty and randomness for some of these cases. For example, with the Repugnant Conclusion, what does it get us to recognise that we are going to be uncertain about the actual day-to-day experience of future people? And that it will, in fact, vary from hour to hour in any case, as ours does every day? So a world that pushes a vast number of people, on average, close to the "barely worth living" line will, at any particular time, have many of them actually below that line, due to the stochastic nature of human experience.
Does it buy us anything to say that this world is, at any particular time, clearly worse for (say) the current bottom 10% than an alternative world with fewer, happier people, and that this bottom 10% might in practice represent a very considerable number? How might we account for this in our reasoning?