I disagree with the argument that "you will cease to regret this change once this change has happened" undermines the moral force against making the change in the first place. To use a very extreme example, think of fictional monsters like the Borg or the Cybermen who forcibly transform their victims into more of them. One common argument they use to try to get their victims to submit is that once they have made you into one of them, you will want to be one of them. I do not think this argument is persuasive. It is possible to recognize that some events can change what you value while also having some meta-values about what kinds of changes to your values are desirable and when.
In Parfit's "What Makes your Life Go Best" he discusses a case where someone gives you a drug that you have a strong preference take more of, and a lifetime supply of the drug. Parfit argues, correctly, that doing that does not make your life go better, because you have "global preferences," which are meta-preferences about your life as a whole and what kinds of preferences you will have in the future. Because of these "global preferences" you can regret being addicted to the drug, even though now that you are addicted it is a good thing that you have a lifetime supply. Similarly, you can have global moral meta-values about when it is good or bad to create new "person-directed reasons." (Obviously this metaphor only goes so far. Creating Sally to replace Bob seems less analogous to addicting someone to a drug and more analogous to something like forcing someone to give up a fulfilling romantic relationship or life project and to adopt a different one)
Because of this, I do not think that a hybridist has to commit to a time-inconsistent view where their normative standards change depending on whether it is before or after Sally's birth. I think that they can have timeless, impartial meta-values about when it is good, bad, or regrettable to create new "person-directed reasons." This can allow them to timelessly and impartially say that the world where Bob lives is the better one, overall. I do not think this is disrespectful to Sally or that it is saying that Bob counts more than she does. It is saying that the "person-directed reasons" for valuing Bob and Sally are exactly equal, but that there are timeless, impersonal reasons to regret the addition of more "person-directed reasons."
I think it is very important for a moral system to have some kind of "global preferences"/meta-values about what values it is acceptable to add or change. I think lacking these meta-values creates all sorts of problems, like the Mere Addition Paradox, or (more extremely) not being able to explain why the Federation should resist the Borg!
Yeah, I'm not a subjectivist so I'm inclined to respond to Parfit's addiction case by appeal to objective values: "addiction is bad, and creating desires for bad states of affairs is also bad". But Sally's existence isn't bad at all! That seems a very important difference.
It's an interesting suggestion to appeal to the idea that *replacing Bob with Sally* is bad. If you took that as a basic datum, that could indeed support a very robust anti-replacement obligation. But that seems to put the cart before the horse. *Why* is it bad? Isn't that something that needs to be explained, rather than assumed? That was what the appeal to person-directed reasons was supposed to accomplish: give us some normative machinery out of which anti-replacement verdicts naturally emerge, as a *conclusion*.
It's easy enough to replace Parfit's drug example with something genuinely good. For example, imagine someone deeply in love is given a drug that makes them fall out of love and then fall deeply in love with someone else. Or imagine an author writing a beloved book series with many fans who has to stop writing it because of a rights conflict with their publisher, and who then goes on to start a new series that gains many fans. I think someone can rightfully have "global preferences" that they not lose the original good, even if the new one is as good in many ways. (I think you once described your non-subjectivist views as an "objective menu"; I am essentially arguing that forcing someone to stop eating their original "menu" item and order again is often bad, or at least not without cost.)
I do think that "person-directed reasons" provide a normative reason against replacement, what I am trying to push back against is the idea that once the replacement has happened there is no longer a normative reason to regret it. I am not trying to propose a new, assumed moral principle against replacement, I am trying to describe what it even means to have "person-directed reasons" and how they are structured. I think that a neccessary part of valuing something is having meta-rules about how values change and are added. I don't think that what Parfit described as "global preferences" are a set of extra preferences above more down-to-Earth preferences, I think they are more like meta-rules about what it even means to value things and have wellbeing (regardless of whether that wellbeing comes from subjective sources or from picking from an objective menu).
I think that in order to have values, be they personal or moral, you need these meta-values about how they change. Otherwise you get absurd arguments, like how we should try not to value anything because once you don't value anything you won't care that you don't value it. I see that argument as one end of a spectrum. Further along that spectrum is the idea of replacement, where you have gone far enough to stop valuing specific people, but not so far that you have stopped valuing human flourishing in general. The arguments against replacement I am making are not made up just to reject replacement; they are part of our value architecture, there to stop us from sliding further and further along that spectrum.
Yeah, I definitely feel the force of anti-replacement concerns, even within the scope of the objectively valuable. But I think they are clearest *pre*-replacement: given my current attachments, I wouldn't want to undergo a relationship-swap, even if it involved rewiring my brain to love my new partner just as much. I think that's just part of what it is to have genuine attachments (see also G.A. Cohen on "cherishing" values: https://www.philosophyetc.net/2008/05/question-of-conservatism-is-value.html).
The trickier question is whether the reasons for opposition persist even post-replacement. It's not so clear that they do. We do not have to (overall) regret that we grew out of our childhood desire to be a firefighter, for example. We may reasonably be glad that a past relationship ended, so as to make possible the one we value now. And so on. It seems like there can be a genuine conflict of normative perspectives across time, in these sorts of cases. I'm not sure that this can really be avoided.
An advantage of objective value here is that it at least blocks the more extreme problematic arguments, like the one that says "we should try not to value anything because once you don't value anything you won't care that you don't value it." That sort of reasoning presupposes an extreme form of subjectivism. A more objective account of well-being can easily avoid that extreme end of the "spectrum" (as you put it). Objectivist views may (at worst) offer *retrospective* approval of the replacement of one objective good with another. That doesn't seem *so* problematic to me; partly, I guess, because I don't see what systematic principles would avoid this without having even more absurd implications (e.g. that we all must deeply regret not marrying our first crush).