As Parfit defines it, a theory T is “indirectly individually self-defeating when it is true that, if someone tries to achieve his T-given aims, these aims will be, on the whole, worse achieved.”
Put aside cases in which the agent fails due to personal incompetence or ignorance. The more interesting cases of indirect self-defeat are ones in which merely possessing the motive or disposition to pursue some aim tends to undermine the achievement of that very aim. How is this possible? In brief: behavioural dispositions can have causal effects other than just the actions that they directly produce. They might change our available options, influence our emotions or other mental states, or change how others behave towards us.
Perhaps the best-known example of this is the paradox of hedonism, according to which happiness tends to be better achieved by aiming at something else. An egoist who cares nothing for others thereby lacks access to the happiness that genuine love and friendship may bring, for example. Other “essential byproducts” that resist deliberate and focused pursuit in the moment include unaided sleep and spontaneity. In these cases, self-defeat results from how our own minds work. But Parfit is especially interested in “game-theoretic” cases in which self-defeat results from how others would respond to us, in a society where everyone’s dispositions were transparent.
A transparent egoist could not be trusted to keep their end of a bargain, for example, and so would miss out on important gains from co-operation. If you hope to entice someone to help you by promising them some future reward, this will only work (given transparency) if you are truly disposed to follow through on the reward. But by then, you’ve already been helped, so paying out the promised reward is no longer in your interest (supposing that the cheated party has no way to publicize or otherwise punish your perfidy). So, in being disposed to always choose the most favourable option of those available, you end up with a less favourable roster of options to choose from. In Parfit’s hitchhiker case, one may even be left to die in the desert for one’s inability to credibly promise any reward to a potential rescuer.
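To make the structure vivid, here is a minimal sketch of the hitchhiker case as a two-stage game with transparent dispositions. The payoff numbers are illustrative assumptions of mine, not anything from Parfit:

```python
# A toy model of Parfit's hitchhiker: dispositions are transparent,
# so the driver can condition her choice on what the hitchhiker
# would actually do later. Payoff values are illustrative assumptions.

from dataclasses import dataclass

RESCUE_VALUE = 100  # (assumed) value to the hitchhiker of being rescued
REWARD_COST = 10    # (assumed) cost of paying the promised reward

@dataclass
class Hitchhiker:
    keeps_promises: bool  # the agent's transparent disposition

    def would_pay_after_rescue(self) -> bool:
        # An egoist reneges once rescued: by then, paying no longer
        # serves his interest. A promise-keeper pays regardless.
        return self.keeps_promises

def driver_rescues(h: Hitchhiker) -> bool:
    # Given transparency, the driver helps only if she can see that
    # the promised reward will actually be paid.
    return h.would_pay_after_rescue()

def payoff(h: Hitchhiker) -> int:
    if not driver_rescues(h):
        return 0  # left in the desert
    cost = REWARD_COST if h.would_pay_after_rescue() else 0
    return RESCUE_VALUE - cost

print("egoist:        ", payoff(Hitchhiker(keeps_promises=False)))  # 0
print("promise-keeper:", payoff(Hitchhiker(keeps_promises=True)))   # 90
```

The egoist’s disposition to take the locally best option at the final step removes the rescue option from his menu entirely; that is the sense in which the disposition, rather than any particular act, does the damage.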
These cases suggest that egoism can be indirectly individually self-defeating. Your self-interest may (in some situations) be better achieved by making yourself less purely self-interested. If Parfit’s hitchhiker could take a pill that would make him always keep his promises, he would choose to do so, since it would make him better off—even though it would also make him cease to be an egoist. This sounds paradoxical, but it isn’t really. It’s an open possibility that the most effective way to achieve some goal may be to change yourself so that you no longer seek to pursue it.
While most of us likely approve of promise-keeping over egoism in any case, we should not rush to conclude that such “indirect self-defeat” suffices to demonstrate that an aim is objectively unjustified or irrational. (Better reasons to reject egoism were discussed previously.) For on any sensible view of what matters, it’s possible to imagine a situation in which the only way to protect what matters most is to make your future self insensitive to the very reasons it provides. After all, rationality leaves us vulnerable to coercion by those who would threaten what we most care about. Faced with such threats, Parfit argues, it would be rational to take a pill that rendered you (temporarily) utterly irrational, and hence impervious to such threats. So it can be rational (or aim-promoting) to make yourself irrational (or insensitive to the aims in question). Parfit calls this phenomenon ‘rational irrationality’.
Similar considerations also apply in the moral realm. It could be virtuous to make yourself vicious. Just imagine that an evil demon will torture everyone forever unless you take a pill that will make you come to desire (non-instrumentally) that others suffer. Further suppose that the demon will similarly torture everyone if you ever lose your new malicious desire, prior to your natural death. (Curiously, once you’ve acquired the malicious desire, it will lead you to try to rid yourself of it, so as to cause the threatened universal torture. This demonstrates how it could be vicious to make yourself virtuous. But to ensure that your initial moral corruption was not for naught, let us stipulate that your evil future self will lack the means to lose the world-saving malicious desire.) In that case, it’s a very good thing (for the world) that you have this bad (malicious) desire. This brings out that we need to carefully distinguish two very different ways of normatively assessing desires. The malicious desire is good in that it serves to promote or achieve moral aims. But it is bad in the sense of exemplifying evil attitudes and dispositions, being morally misguided, or aiming at the very opposite of the correct moral aims.
When the two conflict, which matters more? Should you prefer to achieve moral aims or exemplify them? Parfit thought the former, and I’m inclined to agree. Even so, it is worth recognizing both modes of normative assessment: it would be an impoverished moral theory that found itself unable to articulate any sense in which useful malicious desires are nonetheless criticizable.
An important upshot of Parfit’s discussion: a theory isn’t disproven just because it’s indirectly self-defeating. Here’s a quick argument of my own to that same conclusion: Any sane moral theory requires us to avoid disaster in high-stakes situations. An evil demon could make abandoning such a theory the only way to avoid disaster. So any sane moral theory is possibly self-effacing in this way. But some sane moral theory must be the correct one, so a possibly self-effacing moral theory may still be the correct one. Further, the truth or falsity of a moral theory is non-contingent: it does not depend upon which possible world is actual. So even if a theory is actually self-effacing, that no more counts against its truth than if it were merely possibly self-effacing. So, a theory’s being self-effacing doesn’t mean it’s wrong.
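For readers who like their arguments regimented, here is one schematic rendering of the steps (the notation and labels are my own, not Parfit’s):

```latex
% One way to regiment the argument (notation is mine, not Parfit's):
%   S(T): theory T is sane;  E(T): T is self-effacing;  C(T): T is correct.
\[
\begin{array}{lll}
\textbf{P1.} & \forall T\,\bigl(S(T) \rightarrow \Diamond E(T)\bigr)
  & \text{(a demon could make any sane theory efface itself)} \\
\textbf{P2.} & \exists T\,\bigl(S(T) \wedge C(T)\bigr)
  & \text{(some sane theory is the correct one)} \\
\textbf{C1.} & \exists T\,\bigl(\Diamond E(T) \wedge C(T)\bigr)
  & \text{(from P1, P2)} \\
\textbf{P3.} & \forall T\,\bigl(C(T) \rightarrow \Box C(T)\bigr)
  & \text{(correctness is non-contingent)} \\
\textbf{C2.} & E(T) \text{ does not entail } \neg C(T)
  & \text{(from C1, P3)}
\end{array}
\]
```

The last step works because, given P3, the correct theory remains correct even in the worlds where it is self-effacing; so actual self-effacement counts no more against correctness than merely possible self-effacement does.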
Conversely: just because it’s good or rational to acquire some motivation or disposition, and even if it continues to be good or rational to maintain it, it does not follow that it is in any way good or rational to act upon the motivation or disposition in question. For the benefits may stem from the mere possession of the disposition, rather than the downstream acts that it disposes you towards. Those acts may be entirely bad, even if the benefits of possession are sufficiently good to outweigh this. In our previous example, even though we should want you to acquire the world-saving malicious desire, we certainly shouldn’t want you to act upon it, gratuitously harming others.
Parfit himself focused on more prosaic moral examples with this structure, which he called “blameless wrongdoing”. He asked us to imagine that Clare has optimal motives, which include strong love for her child. This strong love sometimes causes Clare to act in suboptimal ways: providing a small benefit to her child rather than a much greater benefit to some stranger, for example. From the perspective of impartial consequentialism, Parfit suggests that Clare’s act is still wrong, despite stemming from optimal motives. But he nonetheless suggests the following defense on her behalf: “Because I am acting on a set of motives that it would be wrong for me to lose, these [wrong] acts are blameless.”
This strikes me as a point at which Parfit may have failed to take sufficient account of his own lessons. Recall our earlier example of the world-saving malicious motives. If you act on those motives and gratuitously harm someone, just because you want to see them suffer, we may reject Parfit’s suggestion that the optimality of possessing the motives renders their harmful exercise “blameless”. We can agree that it’s a good thing that you have the malicious motives, but that doesn’t change the fact that they are malicious, or ill-willed, and hence in acting upon them you act viciously, which merits disapproval. Perhaps others should refrain from expressing their disapproval, as we wouldn’t want you to lose your world-saving malicious motives. If that’s all that Parfit means by ‘blameless’, we can grant him this stipulative use of the term. But it’s worth bearing in mind that this remains compatible with judging the present agent to be “blameworthy” in the ordinary sense of being worthy of moral criticism or disapproval. Just as the agent’s motives may be doubly-assessable, as simultaneously optimal and yet morally misguided, so too may our attitudes of approval or disapproval.
One might resist such judgments on the grounds that the agent’s prior sacrifice was so immensely praiseworthy that any overall assessment of the agent must be positive. That latter claim seems right, but I am assuming here that we can offer a temporally “localized” criticism of the present agent, without thereby implying that they are eternally or overall bad.
The distinction between optimality and accuracy becomes even starker when we consider that one can privately feel disapproval without in any way expressing it. For suppose that the evil demon will now start torturing people if anyone dares to—even privately—disapprove of malice. In that case, we had all best hope to have magic pills available that will allow us to cease disapproving of malice! But it wouldn’t change the fact that malice warrants disapproval, in the same way that truth warrants belief, no matter how severely a demon might incentivize us to believe falsehoods instead. Consequentialists may rightly insist that some things matter more than having warranted attitudes. But they need not—and should not—deny these basic facts about warranted or accurate moral judgment.
Clare seems blameless in a way that the world-saving malicious agent does not. Since both agents are acting upon motives that it would be wrong for them to lose, that cannot be sufficient to explain Clare’s blamelessness. So there is more work to be done in making sense of Parfit’s cases.
I think we would do better to build upon the fact that, in loving her child, Clare’s motives aim at something (her child’s wellbeing) that is genuinely good. Even if she overweights this value relative to others, and so ends up acting incorrectly, she does not act with ill will when she prioritizes her own child over others. This may go some way towards explaining why she at least merits less disapproval than the world-saving malicious agent who gratuitously harms others. But if one further wishes to claim that Clare is entirely blameless, it may be that the only way to fully vindicate this intuition is to abandon impartiality, and hold that Clare truly makes no moral mistake in giving more weight to the wellbeing of those close to her. Such a move has its own costs, however.
(For more, see sec. 4 of Parfit’s Ethics.)