
I think some of Frick's arguments aren't terrible, or can be saved. Frick was getting at a general pattern of intuitively wrong conclusions that follow if you don't hold that all of our reasons should be conditional or negative with respect to there being standard bearers (people, promises, justice, other duties), or otherwise instrumental for such reasons. This seems pretty plausible across non-welfare-regarding reasons, and he gives many examples. Then, the same kinds of reasons that plausibly explain how we think we should treat promises, duties, and many other reasons can also better explain intuitions about wellbeing that many people hold, like that it's not better to have more children with good lives even if it's worse for you and your existing family, or that it's better to adopt or foster children or animals than to bring new ones into the world (both all else equal). You don't have to get to extreme cases pushing against aggregation, like the Repugnant Conclusion.

Plus, if it were the case that all of our non-welfare-regarding reasons should be standard-affecting in some way (and we do have non-welfare-regarding reasons, or our intuitions about non-welfare-regarding reasons still get at how to treat reasons in the right way, even if ultimately wrong), then this provides support for the claim that all of our reasons should be standard-affecting in the same way, including welfare-regarding ones. This is like assuming symmetry: we require an argument for treating similar cases differently, and can just deny (find unpersuasive) the arguments that try to identify morally relevant differences. Why should we treat wellbeing differently from promises, justice and other duties?

None of these seem like terrible arguments to me, whether or not we ultimately accept Frick's other arguments or his specific account.

In further support of this and for standard-affecting views more generally, inherent reasons to create standard bearers just to satisfy those standards can lead to intuitively perverse replacement-like implications, like:

1. shirking duties in order to pick up new ones in general (https://link.springer.com/article/10.1007/s11098-018-1171-y)

2. involuntarily killing and replacing everyone with happier people (https://www.tandfonline.com/doi/full/10.1080/0020174X.2019.1658631) and the logic of the larder / (mostly) happy animal farming being good, and

3. involuntarily creating new preferences/interests in someone just to satisfy them and increase their wellbeing, at the cost of their current interests. For example, making them really care about or be intensely happy counting blades of grass, far more than they cared about their loved ones or their careers. Or, on hybrid views, dramatically and involuntarily increasing the weight they give to some objective goods, forcing them to abandon their current priorities in life. This seems way too alienating/paternalistic/disrespectful. (Parfit's "global preferences" don't block this, because you can also replace global preferences with stronger and more satisfied ones.)

Preference-affecting views can avoid these problems. They can avoid the intuitively bad cases of the standard Repugnant Conclusion. They can avoid your intrapersonal Repugnant Conclusion. Preference-affecting views focused on global preferences can also avoid fixed-preference generalizations of the intrapersonal RC. However, this doesn't *really* carry over to fixed-population generalizations of the interpersonal RC, like making the torture of each of a huge number of people who are already being tortured only barely worse vs. torturing one extra person. But all the interpersonal solutions would be alienating to some people if applied to intrapersonal cases, so we have reason to treat them differently.


> "Why should we treat wellbeing differently from promises, justice and other duties?"

My linked post responding to Frick ('Against Conditional Beneficence') addresses this, esp. in sec. 2.1. The key issue is that many deontic reasons are purely *negative* in nature: it's not that keeping promises is good, but just that breaking them is bad. The analogous claim about people (that our existence can only be neutral or bad, all else equal) seems substantively very implausible.

> "logic of the larder / (mostly) happy animal farming being good"

There could be empirical/psychological reasons to oppose happy farming as corrupting, but fwiw I do think we should agree, on reflection, that it could be good:

https://rychappell.substack.com/p/accepting-merely-comparative-harms

> "dramatically and involuntarily increase the weight they give to some objective goods, forcing them to abandon their current priorities in life. This seems way too alienating/paternalistic/disrespectful."

I think many parents would agree that this can be good. Adults may have rights against certain kinds of "paternalistic" intervention, but that's no reason to deny the possibility that such intervention *could*, in principle, be good for them. (I'd simply oppose it in practice for Millian reasons: I wouldn't trust the intervener to know better than their target.)

> "Preference-affecting views can avoid... your intrapersonal Repugnant Conclusion."

At the cost that they can't explain why it's worth saving the life of someone who is temporarily depressed (to the point of lacking any future-directed desires, or at least any of sufficient strength to outweigh their current suicidal desire). That seems a more clear-cut mistake.

I also wonder how much preferentism helps when we consider an agent who desires future pleasure, but doesn't have any settled dispositions on how to balance quantity vs quality here. Maybe they just have an implicit "in whatever way would be most reasonable" proviso to their general desire for more pleasure. I guess the preferentist would just have to take this desire to be ill-defined? That seems a cost, at least (though not a decisive one).


>At the cost that they can't explain why it's worth saving the life of someone who is temporarily depressed (to the point of lacking any future-directed desires, or at least any of sufficient strength to outweigh their current suicidal desire). That seems a more clear-cut mistake.

I don't think it's a clear-cut mistake. Letting someone like that die could just mean respecting their wishes, if there really is no option preferable to them. They may be misinformed or biased because of their depression, e.g. they can't imagine themselves happy again, but they're mistaken about how the future could go.

And an objection to your response is that the reason you give to want to save them is also the same kind of reason their parents could have to let them die to focus on having another child with better prospects, or for someone to kill and replace them or anyone else, or to involuntarily manipulate someone into having different preferences. This doesn't seem plausible to me.

I suppose another possibility is that we could mistrust people's stated or apparent global preferences even if they aren't mistaken about how the future will go or how they'll feel. Then, we can idealize their global preferences instead. People typically already have certain dispositions, e.g. to feel joy in response to certain things or come to prefer certain things or come to be grateful for life or look forward to more of it, which we might consider hidden or implicit preferences. Idealization can bring those forward and give them greater weight, but idealization doesn't have to allow replacement with totally different preferences. They are not disposed to preferring absolutely anything. Few people have dispositions to love counting blades of grass. Someone who's *temporarily* depressed has non-depressed dispositions and an idealized global preference that should also reflect their non-depressed states.

Most people also have dispositions towards appreciating what are commonly considered objective goods and dispreferring what are commonly considered objective bads. And these dispositions come with specific ranges of relative weights across these apparent goods and bads. We don't get to set those weights to whatever we like for them.

Of course, this is even more ill-defined/vague.


> "the reason you give to want to save them is also the same kind of reason their parents could have to let them die to focus on having another child with better prospects"

No, I think the reason to save them is that *they* would be better-off as a result of their happier future. That's certainly not a reason to replace them. (There may, independently, be some impersonal reasons to replace them, but I think those are weaker and can typically be ignored.)


As I pointed out in the same sentence, it's not *just* whole person replacement, but also preference replacement within a person. It may be that we can treat these separately as impersonal vs personal, but we still have to respond to the personal problem. By what standard are you judging that they're better off that doesn't permit (in theory) radical involuntary preference replacement like the examples I gave? Or do you think it's in theory just fine to radically change a person's preferences against their wishes and cause them to abandon their important life projects and attachments for something else as long as they are better off overall, however you define that? (To be clear, I don't think they would be better off in some sense, but I suspect capturing that well requires us to treat some interests in an interest-affecting way.)

Maybe radical preference replacement requires (partial) identity change, so the reasons to do it are actually kind of impersonal? And this may not apply to temporarily depressed people, say, if they already have hidden preferences to be happy, etc. (EDIT: Looks like you said so in another reply.)

But then I also wonder if they have hidden preferences to live a life of wireheading, on a constant psychedelic trip or in an experience machine, and why those aren't more important.

And we can probably radically replace people's preferences without changing too much else about their identity. But then maybe identity should be defined primarily in terms of these dispositions, or "multiplicatively" in terms of them, so that radical changes to them do ~totally change identity.


You're conflating a lot of issues here.

Whether it's "just fine" to paternalistically make someone better-off against their wishes depends upon whether they have *rights* against such interference. (I'm only sympathetic to instrumentally-grounded rights. But I'd sooner accept fundamental rights than deny that life can be good. So, given that superior solution, I don't see any reason to resort to denying that life can be good, in response to the line of argument that you're trying to pursue here. Still, I'd ideally like to avoid both, as below.)

A separate issue is whether it's *true* that an individual could be made genuinely better-off through preference change. I suggested that (i) it seems like this *is* possible through moderate changes, e.g. of the sort that parents try to inculcate in their children; but (ii) sufficiently radical changes would risk changing/undermining their identity, and hence *not* qualify as a benefit to the original individual. I'm not sure what remaining "personal problem" you think is not addressed by this.

[Edit: sorry, I missed the context of which comment you were replying to here. It's not that I think temporarily depressed people have "hidden preferences" for happiness, but just that creating/restoring preferences for good things -- esp. if continuous with their past personality -- can make them better-off without threatening their identity.]


(I was taken as given that we weren't considering fundamental rights. Neither of us are sympathetic to them.)

There are fairly radical involuntary preference changes that wouldn't really affect identity much. We can *just* completely change their life goals, (moral or political) values, attachments/love and/or many other specific but important preferences. We don't really need to touch the features more often treated as integral to personal identity understood as psychological connectedness, like their memories, how they experience the world or their personality traits. If a person's identity changed a lot just from falling in or out of love, gaining or losing other attachments, changing moral/political views or changing life goals, then this would be a big problem, because these happen to almost everyone, often multiple times over their lives and sometimes in big, abrupt changes. (We can also just pick one very important preference to change, and it's still objectionable.)

And, arguably, a major depressive episode could even have larger effects on measures of psychological connectedness: it affects how someone experiences the world and several dispositions that are also considered personality traits when stable over time (e.g. within the neuroticism/emotional stability cluster or depressive personality disorder).

Also, in some cases, they already have some underlying dispositions to draw on, e.g. how they would respond to drugs, video games (or virtual reality or the experience machine), seduction or new projects.


You can't set aside the possibility of rights while simultaneously pumping intuitions about what's "objectionable" (independently of what's *good* for the affected parties).

Apply the standard "naturalization" test for whether the dispute is axiological or deontic: would it still seem as bad if it happened as a result of purely natural causes? Presumably not: as you say, people go through preference changes in ordinary life, such as via falling in love. Do we have strong reasons to try to prevent this from happening? Seems not. So it isn't bad (in those sorts of cases). Whatever residual intuition you have that *acting* to bring about this result would be "objectionable" is a purely deontic intuition, and no reason to revise our theory of value.


It could be objectionable even if it happens due to purely natural causes. It's objectionable (although possibly not all-things-considered objectionable) if and because the individual specifically prefers it to not happen. It's worse according to their prior preferences, similar to how personal reasons would count against natural death and replacement. In my view, similar reasons should apply.

People also sometimes do work to prevent otherwise "natural" preference change. People will generally avoid some highly addictive substances. People prone to addictions will avoid situations where they will even be tempted. People will work to maintain affectionate feelings for another and avoid situations that could cause their loss. Married people will keep some distance from others they would otherwise be attracted to, to avoid falling in love or cheating. People will make pledges, like the Giving What We Can Pledge, enter contracts like marriage (in part) or get tattoos to bind themselves to commitments and their current values. Some of these are just people satisfying what they take to be duties, but their own subjectively recognized duties are also preferences. Broadly speaking, people's moral views and intuitions are preferences.

Or maybe personal value is also just deontic and only impersonal reasons capture theory of value? If we're classifying things this way, then sure, but then I might deny that we need a theory of value of this kind at all (at least to explain my intuitions).


We have independent reasons to avoid preference changes that would make our lives worse. (Addictions, undermining valuable relationships, etc., will plausibly make one's life go worse on any plausible account of welfare.) And sometimes we can have commitments to some cherished project or relationship that we prioritize over our own well-being, and so resist replacement for other-regarding reasons (even when the replacement would be better for us).

But if someone just wants to count blades of grass (pathologically, without even much enjoying the process), and then a knock on their head causes them to instead pursue different things that are both more objectively worth caring about *and* more subjectively enjoyable to the agent, then that strikes me as a clear and big improvement.

Generally speaking, I don't find preferentism very plausible, in either its unrestricted or "preference-affecting" forms.


I'm stuck trying to make preferentism of some kind work, because I find hedonism and objective list theories, including hybrid accounts, too alienating; separately, I find hedonism too Goodharted, and it too hard to defend identifying and privileging any (non-subjective) objective values/goods/bads over other things. Preferentism seems to be the only account that aims exactly at what matters to the individual from their own perspective, and to do so in the way and to the degree to which it matters, like the "Platinum Rule". And then, of preferentist accounts, to avoid further Goodharting and unwanted preference change, I think I'm stuck going in a preference-affecting direction (which may very well be deontic in some way).

To be clear, I use 'preference' quite broadly as any kind of subjective evaluation, and consider pleasure and unpleasantness also kinds of preferences, specifically as "felt evaluations". So, if someone is less happy counting blades of grass, that might be worse in one way for them, even if they desire to do it and/or reflectively endorse it. I'm not confident about the particulars, though.


Ah, I meant the question rhetorically. Even if you can make a decent or even decisive counterargument against it, that doesn't make the argument for negative or conditional treatment of all reasons terrible. The argument form is fine and uses typically strong premises (treat cases similarly unless there's a specific reason to do otherwise); it just might be unsound in this case if you're right. (But I don't think you are right, and many would take your premises to be implausible or highly non-obvious. I'm not specifically sold on Frick's account, though.)

But also, besides possibly reasons regarding welfare, are there any other reasons or duties (to people, say) that shouldn’t be valued negatively or conditionally?

I think you're underestimating how bad the preference manipulation is allowed to be. The point is that you can turn someone into any other kind of person of your choice, with any other kinds of preferences, as long as overall preference satisfaction increases. It's not just parents gently guiding their children. The Millian response might work in practice now, because of our currently limited understanding of cognitive neuroscience, but in theory, and possibly in the future on humans or artificial consciousness (reprogramming), the intervener can have far more control in deciding what the target will become. It can go basically as far as killing someone and replacing them with someone entirely different of your choice, as long as preference satisfaction increases overall. That's basically the same problem, anyway.

Hybrid views place some constraints on what you can replace their preferences with, but they don't stop you from choosing with a lot of flexibility.

On whether their global preferences (about future pleasure) are ill-defined, this could be the case, and maybe that does count against the view, but the alternatives face similar problems. How do you come up with objective cardinal values for them to aggregate on their behalf, and why is that not ill-defined (specifically, vague)? Isn't this something you're also undecided about?


> "besides possibly reasons regarding welfare, are there any other reasons or duties (to people, say) that shouldn’t be valued negatively or conditionally?"

This is just the question of whether anything else is good. I'm pretty sympathetic to welfarism, but plausible non-welfarist goods could include things like natural beauty, impressive cultural attainments / perfectionist goods, etc.

> "The point is that you can turn someone into any other kind of person of your choice, with any other kinds of preferences, as long as overall preference satisfaction increases."

It wouldn't be a benefit *to that person* if you changed them so radically as to undermine their personal identity. If you're talking about killing & replacement (which seems very different from *paternalistic* interventions on an individual), one may reject that on grounds of either partiality or deontic rights violation. It's a very familiar idea in ethics that some impartially better worlds aren't ones we should want to see realized: there are other things we should care about in addition to impartial value. The suggestion that there's *nothing* (even pro tanto) better about the happier replacement is a silly overreaction -- like denying that it's in any way better to kill one to save five.
