I’m not that sure what I mean either. But something seems to be missing. I’m trying to put my finger on it.
When we discuss the moral assessment of large-scale policies, we can consider, among other things, how we would know that a policy is best and what institutional structure would allow us to implement it. The topic implicitly addresses questions such as: when is it good to take an action that affects moral agents without their knowledge? Without their consent? If someone became dictator, how would they constrain themselves, or even know whether they were acting as a benevolent dictator? The discussion seems to assume that everything being discussed is independent of such issues, but I don’t think it is.
Asking moral questions as if once we knew the answer, we would then be justified in unilaterally imposing it on the world seems odd to me, unless I were much more confident both of my own judgement and of others' ability to be persuaded by good reason.
When would it be wise to impose the best policy on a population that unanimously opposed and misunderstood it? What effect would treating moral agents as moral patients have on them, if we don’t assume they agree with our conclusions and consent? How would the best policy still be the best in such circumstances? I think those circumstances would have to be very unusual.
Beings can't consent to being created. But this means that whoever creates them is acting as their proxy. Such proxies could ask whether they could *expect* that their creations would subsequently grant retrospective consent, or apply whatever other criteria they thought respected their creations as potential moral agents and actual moral patients. What do the proxies owe their creations if they either calculate incorrectly or decide by some entirely different criteria?
"Asking moral questions as if once we knew the answer, we would then be justified in unilaterally imposing it on the world seems odd to me..."
I strongly disagree that this "as if" claim is apt. Asking moral questions does NOT imply that "once we knew the answer, we would then be justified in unilaterally imposing it on the world." We can ask moral questions as part of ordinary democratic discourse. That's precisely what I'm doing here: offering public reasons that (ideally) could convince *everyone* that one policy approach is better than another (and inviting correction if my arguments are mistaken).
So there is a presumption that the decision is being made within an institutional structure that preserves the moral agency of the participants. But… this is framed in consequentialist, not deontological, terms, and the constraints this presumption imposes are not addressed within the analysis. This could be explained in a couple of ways: either the safeguards embedded in the process are so reliable that we can feel confident no solution violating them will be implemented, or we don’t need safeguards at all. I think this ambiguity is what has made me uncomfortable.
I'm presuming the former, and find it weird that you find this "ambiguous". It would never occur to me, in reading a philosopher argue that we (society) should do X, to imagine that they mean we should impose a dictator who will do X against everyone else's will. That's just a ridiculously uncharitable reading of any ordinary moral-political argument.
For future reference, whenever I am arguing for a policy, you should take it as given that I am arguing for it to be implemented via the usual democratic processes.
P.S. Footnote 4 explicitly notes that I take my arguments here to be compatible with deontology.
Well, it’s a bit of a pet peeve with me. I don’t see the usual processes as particularly democratic. The US has weak safeguards, and philosophers tend to ignore this. A solution that depends on the existence of an adequately non-corrupt state doesn’t make a lot of sense in an environment that lacks this prerequisite. It is much easier to imagine a benevolent dictator than to deal with the actual obstacles to implementation.
I didn’t understand footnote 4, so I am not sure what it means to be compatible with deontology here.