
I don't follow this: "Imagine that a killer asteroid is heading straight for Earth. With sufficient effort and ingenuity, humanity could work to deflect it. But no-one bothers. Everybody dies. This is clearly not a great outcome, even if no-one has done anything morally wrong (since no-one has done anything at all). This scenario poses a challenge to the adequacy of traditional morality, with its focus on moral prohibitions, or “thou shalt nots”. "

What is the challenge? Plainly it's impermissible to ignore the child drowning in the pond (/to do nothing while the child drowns in the pond—if you're reifying the idea of "doing nothing"). And plainly it's also impermissible to ignore an asteroid flying towards the planet if you're in a position to stop the collision. (One reason to be sceptical of giving weight to this idea of "doing nothing" is that it's very hard to cash out in a sensible way. Does napping count? Standing still? Ignoring someone? Trying hard to ignore someone? Etc. etc.) Unless I'm missing something, I don't see why this would be a challenge for anyone—deontologists included.


I think there's at least a tendency for deontological ethics to give less attention to the "allowing" side of the doing/allowing distinction. So there will be cases where it's clearly worth investing in x-risk prevention, in expectational terms, yet it won't qualify as an "obligation" on most versions of commonsense deontology. Probably the easiest way to establish this is to just ramp up the uncertainty: make it a 10% chance of stopping the collision, instead of 100%. Or 1%, or 0.1%, or... until it's not obligatory.

Alternatively, you could approach it from the epistemic side. Deontology generally doesn't establish obligations to gather information that are as stringent as would be socially optimal. So suppose we don't know yet whether there are any asteroids heading our way. Are we obliged to invest in satellite arrays and early warning systems? Suppose we're not (though it would be positive expected value to gather such info). And then we all die as a result of people not bothering. Again: seems bad!

One can imagine a version of deontology that avoids these problems by explicitly building in an obligation to *positively pursue what's important* (whenever it doesn't violate rights etc.). That would fix it. But what's really doing the work then is the focus on what's important, not just on avoiding wrongdoing. The latter only helps insofar as it entails the former. And, again, real-life deontology tends not to take this superior form, at least in my experience. Compare the mystery of why more non-consequentialists don't embrace beneficentrism: https://rychappell.substack.com/p/beneficentrism


Maybe we have in mind different sorts of deontology (or are just using the term differently). I have in mind the "commonsense deontology" of Foot, Thomson, Kamm, etc. And while I agree they have questions to answer vis-à-vis uncertainty (ones they've spent time addressing), I don't see why their distinguishing between doing and allowing gives rise to those questions.

As for beneficentrism... I would think most commonsense deontologists accept an obligation to help those that we can (when it's not too hard for us to do so). They just think it has limits (certainly temporal limits). After all, it's commonsense that we have such obligations (and that they have limits), and the project is to build a theory around that data.


Is something still impermissible if no one cares whether you did it or not? Who is issuing the permission?
