I don’t think I understand the robots and paper clips case. My reason for helping the robots find the paper clips is that this will secure their help in saving drowning kids. The fact that the robots care about clips is itself no reason for me to help them; it is the fact that the robots will help me if I help them that is my reason. My reason for saving drowning kids is that they are drowning. The fact that they are drowning is a reason to help because (says Scanlon) a person could reasonably reject a principle that gave that fact no weight. Are you assuming the robots are persons?

Yes, the thought is that we can add to the robots whatever is necessary to make them count as moral agents (albeit weird ones) who are eligible to participate in the contractualist's imagined agreement.

Scanlon's reason for saving drowning kids is not that they're drowning, but that they could reasonably reject a principle according to which you fail to intervene when you could easily save them. And the robots can similarly reject a principle according to which you fail to intervene in the face of easily rescued paperclips. Since Scanlon cares nothing for well-being in particular, but only for whatever principles emerge from the contractualist process, these two reasons seem to be, for the contractualist, on a par.
