The Thornley paper is a good critique of pure person-affecting population ethics. I was wondering how it would affect the type of hybrid view that you advocate in your post, "Killing vs Failing to Create."
I've had some thoughts on that matter recently, especially after reading this post about goal vs desire-based thinking:
https://www.lesswrong.com/posts/iWJ5kzeqvx4kvB527/goal-thinking-vs-desire-thinking
The article is about a single individual's preferences rather than population ethics, but I think the ways of thinking about goals and desires that it explores are relevant. It talks about how some people have difficulty understanding the idea that people can have specific life-goals, and assume (wrongly) that everyone has some kind of underlying meta-goal like "extinguish all desires" that is indifferent to the specific content of those desires. Because of this, they see replacing one of a person's life goals with a different one (through brainwashing or some other type of invasive behavior modification) as good rather than bad, provided the new goal is easier to achieve than the previous one. In more extreme cases they may see death as desirable because it extinguishes all desires.
I think this can be analogized to Parfit's Impersonal Total Principle, where personal identity does not matter, only the "total quantity of whatever makes life worth living." This principle seems similar to the "extinguish all desires" attitude in that it posits a single meta-goal and does not care how exactly it is fulfilled. This threatens to collapse the distinction between killing and failing to create, because it does not recognize the strong moral reasons to value specific existing people. There needs to be some way to recognize those reasons, the same way a proponent of goal-based thinking recognizes that people have specific reasons to achieve the specific goals they hold.
Interesting! Yeah, I think that's a worry. To make the parallel argument explicit:
Suppose at time t1 Amy already exists at welfare level 50. We independently have the option to create Bobby at t2 with welfare level 40. At t3, we have the choice to bring any one then-existing person (Amy or, if he exists, Bobby) up to 100. What should we do?
We shouldn't create Bobby and then subsequently choose to benefit Amy. (To make that decision at t3 would unfairly count Bobby's greater interest for less.)
We shouldn't decline to create Bobby: since we reject person-affecting views (PAVs), his positive existence is better than nothing.
So the only option remaining is to create Bobby and then subsequently choose to benefit Bobby. But that creates time-inconsistencies if we reasonably grant some weight to person-affecting reasons (as per my hybrid view). At time t1, we should prefer to secure the later benefit for Amy, since this huge improvement for her is more important than, say, Bobby getting to exist.
Backwards induction might thus motivate us to *not* create Bobby, just so that we aren't later tempted to benefit him over Amy. But that seems messed up -- ideal moral agents shouldn't pass up a free chance to improve the world! Perhaps the ideal is instead to *resolutely* opt for creating Bobby and then benefiting Amy. (Say, e.g., "We'll create Bobby, but only on the condition that he doesn't get to claim Amy's benefit at t3.") It would be wrong to count Bobby for less at t3. But now we aren't doing that: we're *keeping a promise/resolution* that was necessary in order to incentivize bringing Bobby into existence in the first place.
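To make the time-inconsistency arithmetic concrete, here is a minimal toy quantification of the case. The discount factor K, and the choice to treat Bobby's whole boosted existence as "creation-contingent" when planning at t1, are illustrative assumptions of mine, not commitments of the hybrid view:

```python
# A minimal toy quantification of the Amy/Bobby case, just to make the
# time-inconsistency arithmetic explicit. The discount factor K and the
# decision to treat Bobby's whole boosted existence as "creation-contingent"
# at t1 are illustrative assumptions, not commitments of the hybrid view.

K = 0.4  # hypothetical discount on welfare that exists only via our choice to create

# Plans evaluated at t1 (Bobby does not yet exist):
dont_create_benefit_amy = 50        # Amy goes from 50 to 100; full weight
create_benefit_amy = 50 + K * 40    # Amy +50, plus Bobby's discounted existence at 40
create_benefit_bobby = K * 100      # Bobby's entire boosted existence is creation-contingent

print(dont_create_benefit_amy, create_benefit_amy, create_benefit_bobby)
# -> 50 66.0 40.0: at t1, "create Bobby, then benefit Amy" looks best

# Re-evaluated at t3, once Bobby exists and both claims get full weight:
amy_interest = 100 - 50    # +50
bobby_interest = 100 - 40  # +60
print("benefit Bobby" if bobby_interest > amy_interest else "benefit Amy")
# -> benefit Bobby: the t1-best plan gets reversed. Anticipating this,
#    backwards induction compares "don't create" (50) with "create, then
#    end up benefiting Bobby" (40) and tells us not to create Bobby at all.
```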
It's an interesting case, and maybe brings out that certain kinds of "past choices" can make a difference to what one ought to do after all.
I think the issue is that, if we stipulate that we have both person-directed and undirected moral reasons, there is still a danger of collapsing the distinction if we allow unlimited creation of new person-directed reasons. In your example we begin with person-directed and undirected reasons to help Amy, and purely undirected reasons to create Bobby. However, once Bobby is created, there are new person-directed reasons to help him, which puts helping him on a par with helping Amy.
It seems to me that one way out of this dilemma is to penalize the creation of new person-directed reasons in some way, so that the undirected reasons to create someone need to be especially great in order to overcome the penalty. This would probably function a lot like critical-level or critical-range utilitarianism. It might threaten to imply the Sadistic Conclusion in the critical-level case, but not in the critical-range case. This seems analogous in some ways to how people treat the creation of new personal goals: we are often reluctant to start major new life projects or serious relationships unless we are reasonably certain they will turn out well.
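Here is a rough sketch of what such a creation penalty might look like, modeled on critical-level and critical-range utilitarianism. The particular thresholds are hypothetical, chosen only to show the shape of the idea:

```python
# A rough sketch of a "penalty" on creating new person-directed reasons,
# modeled on critical-level and critical-range utilitarianism. The
# thresholds below are hypothetical, chosen only for illustration.

def critical_level_value(welfare, critical_level=45):
    """Critical-level style: a new life contributes welfare minus the critical level."""
    return welfare - critical_level

def critical_range_value(welfare, low=30, high=60):
    """Critical-range style: lives inside the range contribute no determinate
    value (simplified here to 0), which is how that variant is supposed to
    avoid the Sadistic Conclusion."""
    if welfare > high:
        return welfare - high
    if welfare < low:
        return welfare - low
    return 0

# Creating Bobby at welfare 40:
print(critical_level_value(40))   # -> -5: the undirected reasons to create him
                                  #    now have to outweigh an explicit cost
print(critical_range_value(40))   # -> 0: creation is treated as merely neutral
                                  #    rather than something we have strong reason to do
```

Either way, the undirected case for creating Bobby has to clear a bar before we take on the new person-directed reasons his existence would generate.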
I had not thought of your precommitment idea before. It also seems a worthwhile idea to explore. I am not sure that backwards induction motivating us to *not* create Bobby is really so perverse once you translate it into a realistic scenario instead of imagining it in the abstract. It sounds like normal family planning to me. Many parents choose to have fewer children than they otherwise would because they want to devote their resources to their existing children. Imagine you were to point out to them that they could have an extra child if they practiced extreme favoritism, committing to devote far fewer resources to the new child and directing the bulk of their resources towards the older kids. The new child's life would be much worse than that of their older siblings, but still worth living. I think most parents would respond to such a suggestion with horror. Maybe they have already precommitted to not create Bobby.
Thornley’s paper is absurdly clever.
The fact that Daniel Munoz (an absolutely brilliant deontologist philosopher) liked Elliott Thornley's paper shows Elliott's genius. God bless him. Elliott is one of my favorites among the long list of philosophers I respect. He does magnificent work!