I agree that's a good reason to be skeptical of efforts to narrowly shape the far future. But I don't know of any longtermist projects that would fall afoul of that. The projects I'm aware of are either (i) broad efforts to advance (esp. moral) progress, or (ii) efforts narrowly targeted at the coming few years and decades, to protect against global catastrophic risks. (For a recent historical example: we'd presumably be in a much better place re: climate and pandemics now if my parents' generation had been more guided by longtermist lights!)
Those two classes of project strike me as very worthwhile, and not undermined by the limits of our knowledge. But I guess someone more skeptical than I am might instead try to implement longtermism by (literally) investing resources for future use, Ben Franklin style. So I don't really see any objection to longtermism *per se* here, as opposed to one narrowly-imagined implementation of it.
What reason, other than hubris, do we have to think our notion of what constitutes moral progress won't age like "The White Man's Burden," Manifest Destiny, or any of the other horribles of the last few centuries? Like a nature reserve, the future is something we should focus on not wrecking rather than trying to garden.
I'm not sure what that means. If an asteroid is on track to wipe us out, does deflecting it count as "gardening", or could sitting back and doing nothing count as a form of "wrecking"?
In any case, I think a sensible degree of epistemic humility doesn't entail full-blown moral skepticism (as if we should be unsure whether to bother saving innocent lives), but just calls for things like (i) avoiding value lock-in, (ii) encouraging Millian "experiments in living", and (iii) preferring *robustly* good options (e.g. increasing human knowledge and capacities) over morally *risky* ones (e.g. trapping humanity in experience machines). These are all standard longtermist ideas: https://rychappell.substack.com/p/review-of-what-we-owe-the-future#%C2%A7improving-values-and-institutions