15 Comments

People who tell me that I should study for my exams, which will take place in the future, fail to recognize the separateness of moments, and that there is no super-moment that contains both this moment and the moment during which I'm taking the exam.

One interesting question is how exactly normative differences should affect attitudes. I'm slightly dubious that we should feel deeply conflicted when we're benefitting one and harming another, though we've discussed that before. But it seems even odder to think that when one makes a decision that has positive expected value but unpredictable actual value, one should feel deeply conflicted and worried.

author

Suppose you have two kids, one of whom is suffering a severe illness. A genie offers you a magic button. If you press the button, there's a 60% chance it'll cure your child completely, and a 40% chance it'll spread the illness to your second child. This is positive EV, and worth doing (those who disagree can tweak the numbers as needed); but any loving parent would feel deeply conflicted and worried while pressing the button.
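
A minimal sketch of the expected-value arithmetic here, assuming (purely for illustration) that a complete cure and a spread of the illness are equal in magnitude:

```python
# Hypothetical magnitudes, chosen only to illustrate the point; readers who
# disagree with the verdict can tweak them, as noted above.
p_cure, p_spread = 0.6, 0.4
value_cure, value_spread = +1.0, -1.0  # cure helps child 1; spread harms child 2

expected_value = p_cure * value_cure + p_spread * value_spread
print(expected_value)  # 0.2 > 0, so pressing the button is positive EV
```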

Generalizing, if a positive EV action threatens something we care deeply about, it makes sense to feel deeply conflicted and worried about it. I think we should care deeply about others, which is why I think we should feel deeply conflicted and worried about many such tradeoffs.

Jun 22, 2023 · Liked by Richard Y Chappell

It sounds like you're claiming we should feel deeply conflicted and worried (DCaW, for short) about going for a drive and altering the traffic pattern.

P1: We should care deeply about people dying or being seriously injured in car crashes. (For example, we clearly should not do things that significantly increase the net chance of causing crashes without any large benefit, such as driving intoxicated or tired.)

P2: Even when our overall reasonable expected value is close to net neutral, we should feel DCaW when the range of outcomes of an action includes things we should care deeply about.

P3: Going for a drive changes the people who will encounter death or serious injury in highly unpredictable ways, such that the reasonable expected value is close to net neutral.

C: We should feel DCaW whenever we go for a drive.

It seems to me that we ought to feel DCaW whenever (and to the extent that) it will help us make morally better decisions in the future (perhaps adjusted a bit for the fact that the DCaW is itself a form of suffering). In butterfly-effect cases like going for a drive, such a feeling doesn't seem useful at all, whereas in the sick-child case it might. Of course, this is also distinct from the question of when (morality aside) we're just psychologically disposed to feel DCaW.

author

Note that *refraining* from going for a drive, when one could have done otherwise, is equally momentous in terms of these butterfly effects.

But yeah, I at least think that *when we reflect* on how our choices will inevitably have massive but unpredictable effects on the far future, it's fitting to feel deeply conflicted about that. I agree it might not be useful. There's no guarantee that accurate/warranted attitudes will be useful. To help reconcile our positions, note that part of being a competent agent may be to redirect one's attention away from unhelpful considerations and towards more tractable ones.

Compare this old post, which distinguishes (i) the appropriate answer to a question, from (ii) whether a well-functioning agent would raise that question in the first place:

https://www.philosophyetc.net/2009/09/satisficing-and-salience.html

We don't normally think about the possible butterfly effects of our everyday actions. In many contexts (e.g., when we have more urgent issues to attend to) it could be outright irrational to overly focus on such an intractable matter. I suspect it is really this -- the mistaken question, not the answer -- that your "mental illness" intuition is tracking.


I think that there are proper preferences but not proper nonpreferential attitudes. So you should prefer a state of affairs in which the child gets medicine, but I don't think that you should feel conflicted. I think most normal humans would feel conflicted, but it seems odd to think they should. In normal cases, given unpredictable outcomes, your view implies that people should constantly feel deeply conflicted, even about mundane actions like going to the store.

author

You think it is "odd" to think that a parent should feel conflicted when gambling with the well-being of their children? On the contrary, I think this is utterly obvious. It's very often the case that we should feel pleased about (salient) good things happening, and feel negatively about (salient) bad things. Given a gamble between the two, we should typically feel conflicted. Failure to have these emotional responses indicates that one's emotions are not properly reasons-responsive, because there are obviously reasons to feel good (or bad) about good (or bad) events.

Granted, we don't actually end up responding emotionally to everything going on in the world, because we just don't have the cognitive capacity to bear it all vividly in mind. Not everything is salient. When going to the store, for example, the possible butterfly effects are not remotely salient. But we can imagine a more (rationally) ideal being who lacked these limitations, and empathetically felt the full force of all the reasons in the world.

It's an interesting question whether more things *should* (in the fittingness sense, not the practical sense) be salient to us. For example, whether someone should feel bad about taking a $5000 vacation when they could instead have saved a life with that money. But obviously if an angel were to show you a video of a child dying of malaria, and say, "This is the child whose life would've been saved if you'd donated that vacation money to AMF," you'd have to be a callous monster not to feel bad about that. Such callousness involves a failure to respond to reasons, just like failing to update beliefs in the light of new evidence involves a failure to respond to reasons.

We can imagine cases where epistemic rationality is bad for you. Maybe overconfidence helps to promote a useful sort of ambition, for example. But that doesn't change what beliefs are really *warranted*. Likewise with other attitudes.


Yeah, okay, I think this is plausible. Just as there are some cases where it's proper to feel fear or anger, there might be cases where it's proper to feel conflicted.

Jun 23, 2023 · Liked by Richard Y Chappell

Interesting. So, whereas I would have said that most situational distress over uncertainty is psychologically understandable albeit not necessarily morally good, your position is that all such distress would be morally good, and our lack of distress in the vast majority of situations is what's psychologically understandable?

author

Roughly. Though "distress" sounds like an overall negative attitude, which would only be warranted by overall negative prospects. I prefer "ambivalent", or "conflicted". (And "morally good" is ambiguous between "fitting" and "fortunate". I'm just talking about what's fitting, or warranted.)


Indeed, the vast majority of cases in which we have a reasonable balanced expectation of greatly benefiting some while greatly harming others are cases of "the butterfly effect": e.g., whenever you drive somewhere, you alter the specifics of the traffic such that different people become about as likely to be dealt a fatal crash or spared one. Yet we would regard it as a mental illness if someone were to feel "deeply conflicted and worried" about this fact.


Your position here is very appealing, but I find it difficult to reconcile with your insistence that ethics should focus on "what really matters," and that what really matters is benefits and harms to sentient beings. Are you saying that attitudes can also "matter" in a way that has moral significance?

It would be very plausible for a utilitarian to say that attitudes can have instrumental value insofar as attitudes shape behaviour. But it is harder to see how attitudes can be part of a utilitarian's fundamental moral theory, as you seem to be asserting here, rather than part of their practical moral advice or their empirical account of moral psychology.

A thought experiment:

Imagine two intelligent utilitarian robots, Annie and Bertie, both capable of responding to normative reasons, and also capable of editing their own software. They are programmed to act identically in all circumstances — the only difference is that Annie is programmed like a strawman (strawrobot?) utilitarian who just adds up benefits and harms, while Bertie is a sophisticated Chappellian utilitarian whose code includes some additional lines defining the function experience_heightened_angst() and calling that function whenever Bertie encounters the sorts of situations you describe in this post.
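
A rough sketch of that setup, where the Option type, the class structure, and the trigger condition are all hypothetical illustrations (only experience_heightened_angst() is taken from the description above):

```python
from dataclasses import dataclass

@dataclass
class Option:
    label: str
    expected_value: float   # net expected benefit minus expected harm
    worst_case_harm: float  # magnitude of the worst harm the option risks

class Annie:
    """Strawman utilitarian: simply picks the option with the highest EV."""
    def choose(self, options):
        return max(options, key=lambda o: o.expected_value)

class Bertie(Annie):
    """Acts identically to Annie; the only difference is an attitude layer."""
    def __init__(self):
        self.angst = False

    def experience_heightened_angst(self):
        # Stand-in for "feeling deeply conflicted and worried": it alters
        # Bertie's inner state, never which option gets chosen.
        self.angst = True

    def choose(self, options):
        best = super().choose(options)
        # Hypothetical trigger: a positive-EV choice that still risks
        # serious harm to someone.
        if best.expected_value > 0 and best.worst_case_harm > 0:
            self.experience_heightened_angst()
        return best
```

On the genie case above, for instance, both robots would press the button; only Bertie's angst flag would differ afterwards.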

(Q1) If Annie gains access to Bertie's source code, should she edit her own software to include the additional lines from Bertie's code?

(Q2) If I am a robot engineer, should I make robots with Annie's code or Bertie's?

author
Jun 21, 2023 · edited Jun 21, 2023

To determine *what ought to be done*, we should be guided by what really matters, and not let unimportant considerations swamp important ones. But there are all sorts of other fundamental normative facts that don't matter in this practical way. Getting the impractical normative facts right is still of interest to those who care about truth and warrant. See: https://rychappell.substack.com/p/consequentialism-beyond-action

(Especially the section, "Virtue and Value Agree: What Matters is Value, not Virtue")


I read through that post, but I'm still not sure what it means for Annie and Bertie.

One possible view:

Whether or not Annie should edit her code is a matter of *what ought to be done*, so her action should be guided by what really matters. If she added those lines, Annie would presumably experience some degree of suffering from the heightened angst whenever she confronted moral tradeoffs. Perhaps the suffering is minor, but in any case it would harm her without compensatory benefit to anyone. Therefore she should not edit her code, and should continue to have an unfitting lack of angst when confronting moral tradeoffs. (Perhaps Bertie should even edit out the angst function from his own code?)

author

Yes, pretty much. Though I have sympathy for the view that virtue has *some* (moderate) value, so there could accordingly be some reason to switch to Bertie's more accurate/virtuous code. But if the costs were too great (either due to the unpleasantness of the angst, or for extrinsic reasons -- maybe an evil demon has threatened to punish her if she switches) then it's always possible for false beliefs to prove the better option.

Similar claims apply to Cora the saintly commonsense moralist, who acts very well despite having thoroughly mistaken views about ethical theory (and mistakenly believes that superheroes act rightly when they spare the life of the world-threatening villain, etc.). It may be that her false beliefs are for the best, in her particular circumstances. That doesn't make them any less false. But it may mean we have little reason to want to see them changed.
