I think there's at least a tendency for deontological ethics to give less attention to the "allowing" side of the doing/allowing distinction. So there will be cases where it's clearly worth investing in x-risk prevention, in expectational terms, yet it won't qualify as an "obligation" on most versions of commonsense deontology. Probably the easiest way to establish this is to just ramp up the uncertainty: make it a 10% chance of stopping the collision, instead of 100%. Or 1%, or 0.1%, or... until it's not obligatory.
Alternatively, you could approach it from the epistemic side. Deontology generally doesn't establish obligations to gather information that are as stringent as would be socially optimal. So suppose we don't know yet whether there are any asteroids heading our way. Are we obliged to invest in satellite arrays and early warning systems? Suppose we're not (though it would be positive expected value to gather such info). And then we all die as a result of people not bothering. Again: seems bad!
One can imagine a version of deontology that avoids these problems by explicitly building in an obligation to *positively pursue what's important* (whenever it doesn't violate rights etc.). That would fix it. But what's really doing the work then is the focus on what's important, not just on avoiding wrongdoing. The latter only helps insofar as it entails the former. And, again, real-life deontology tends not to take this superior form, at least in my experience. Compare the mystery of why more non-consequentialists don't embrace beneficentrism: https://rychappell.substack.com/p/beneficentrism
Maybe we have in mind different sorts of deontology (or are just using the term differently). I have in mind the "common-sense deontology" of Foot, Thomson, Kamm, etc. And while I agree they have questions to answer vis-à-vis uncertainty (ones they've spent time addressing), I don't see why their distinguishing between doing and allowing gives rise to those questions.
As for beneficentrism... I would think most commonsense deontologists accept an obligation to help those whom we can (when it's not too hard for us to do so). They just think it has limits (certainly temporal limits). After all, it's common sense that we have such obligations (and that they have limits), and the project is to build a theory around that data.