Yes to all of this. Here's some psychological speculation about why we do this. Part of the professional deformation of being a philosopher is a disposition to see our tool kit as having broader applicability than it really does. When it comes to applied ethics and political philosophy, this often takes the form of seeing practical problems as resolvable with philosophy alone, rather than recognizing that their resolution almost always also turns on empirical questions that the philosophical toolkit doesn't do much to illuminate.
I agree with all of this, but I think that there's a bit more to say about Setiya's point about integrity. Of course you are entirely right that there is no conflict between integrity and high stakes where we should be uncertain about the relevant consequences (because of limits both on our information and on our ability to assess our information correctly). And it may be that Setiya is here misled by the stipulated certainty of (most) thought experiments, which as you say rarely if ever matches real-life practical problems.

BUT... I think Setiya might insist that even if there is no direct conflict between integrity and high stakes in the real-life cases, the fact that there is a conflict in thought experiments still creates a problem. I can ask myself whether I should/would commit fraud if I *knew* that this maximised EV (by a lot); and this is often taken to be a dilemma for rule-consequentialism (and other "indirect" forms of consequentialism).

Now I think we both have fairly similar responses to this familiar problem, but I at least think that the upshot is that integrity for the consequentialist looks a bit different from common-sense integrity (and the same goes for other virtues/values). The way I would put it (following Hare) is that we can think critically about integrity, or think intuitively about it, and though from both standpoints we end up endorsing it, our two strands of thinking do not seamlessly mesh together. I don't think this is ultimately a big concession for consequentialists to make, but I do at least understand why some of our opponents think that the kind of integrity they believe in is not quite the same as the kind that we believe in. One way of putting the contrast is that according to us a perfect (omniscient, unbiased, clear-thinking) moral agent would have no use for integrity; and that sounds a bit weird to common sense.
I think you may have phrased the latter point a bit too strongly. Consider a moral analogue of Parfit's Hitchhiker: it may be genuinely important, in order to better achieve your ends in society, that others can *trust* you. As long as we lack omnipotence, even if we had omniscience, co-operative dispositions could still be practically important.
I guess I'm imagining an agent who is able to anticipate/fake all the signs of trust (and insofar as they know that defecting from some cooperation will make them less trusted in future, they factor that into their decision-making). Maybe I'm missing something about the case though.
Ah, right, yeah the case assumes transparency in the one-off version.
But your point about future-directed reputational concerns is a more realistic one, and suggests a respect in which the omniscient agent would still have "use" for (a kind of) integrity. They will, at least, be concerned to maintain a reputation for trustworthiness. And often the easiest way to do that is to *be* trustworthy, i.e. maintain their integrity.
This is great! I sometimes almost pride myself on being able to entertain the most absurd stipulated scenarios and get to the bone of what is really driving my moral intuitions, but this is a great wake-up call to remember to think about the real world when you are thinking about the real world, hehe.
More people than just philosophy professors are prone to this error: some people couple the tendency to pattern-overmatch from thought experiments to real-world issues with a tendency to treat the thought experiments like solved mathematical problems. Not only "this is just the trolley problem," but also "and the trolley problem compels us to believe (their preferred solution), so it's a waste of time to discuss this any further." Anyhow, I think there are more psychological propensities afoot in the first maneuver than the ones you mentioned, some of which may even afflict philosophy professors. (Enjoyed this piece a lot, thanks.)
The nice thing about dealing with EA critics is that one doesn’t need to construct straw men. They’ve erected themselves entirely out of straw.