Hi Jeremy, it's a great topic to work on! Three main thoughts:
(1) I take naive instrumentalism to be true as an *ideal theory*, but it doesn't follow that it is the true account of what is instrumentally rational for *humans*. We need a different, non-ideal theory that takes into account -- and corrects for -- our deep biases and higher-order unreliability.
(2) I wouldn't describe non-ideal theory as "lowering the standards of morality or rationality". Instrumental rationality is *still* about how we can (expectably) *best* achieve the correct moral goals. It's just that the answer to this ambitious question depends upon details of our nature (incl. cognitive limitations). Principled proceduralism offers guidance that's better suited to human-sized minds. (This is an important *truth* about instrumental rationality.) Our minds have lower cognitive capacity than those of ideal agents. But that doesn't mean that the guidance is aptly described as having "lower standards". In some ways, it would seem just as natural to describe principled proceduralism as insisting upon "higher standards". But I think it's most accurate to just say that the guidance is *different* (not "higher" or "lower") from what would be suitable for ideal agents.
(3) As mentioned in the OP, I think non-consequentialists are often naive instrumentalists when it comes to politics and intellectual inquiry, in ways that are predictably very bad. But maybe there's an ideal form of Rossian Pluralism (or virtue ethics) that gives sufficiently greater *non-instrumental* weight to Millian liberal virtues to properly match their deep *instrumental* value, and thereby deter "naive" violations even when agents are themselves applying a naive decision procedure? It must be possible in theory. I guess the standard worry is just how psychologically feasible it is for people to abide by this, as the value of protecting people from oppression (or whatever) is apt to be much more *salient* than more abstract values like free speech (especially since it's so dubious that the *non-instrumental* value of something so abstract could reasonably trump real harms to vulnerable people).