38 Comments

ChatGPT wrote a poem about this argument.

Person-affecting views in population ethics,

Raise objections with their implications,

A barren rock, just as good as utopia?

Such claims bring forth ethical complications.

How can we value an empty place,

Over a world full of love and grace?

Is it fair to equate a void to paradise,

And make both seem like they're in the same place?

The value of life is in its living,

Its richness, joy, and love it's giving,

To suggest a barren rock is equal to utopia,

Is to ignore the beauty in living.

A utopia might not exist today,

But the hope for a better world leads the way,

To settle for less and call it the same,

Is to make morality just a game.

Let us strive for the best we can achieve,

For a world of happiness and love we can conceive,

Where every life has value and meaning,

And barren rocks remain barren, unfeeling.

Jan 27, 2023 · Liked by Richard Y Chappell

Sorry to start yet another thread, but I wanted to mention another thought that occurred to me while reading your post against subjectivism:

I agree that "Normative subjectivists have trouble accommodating the datum that all agents have reason to want to avoid future agony" gets at a real problem for subjectivism; but I find it telling that the strongest example you can come up with is avoiding pain. At least for me, my intuitions just really are very asymmetrical with respect to pleasure and pain, and I suspect you picked "avoiding future" agony rather than "achieving future joy" because you have the intuition that the former is a harder bullet to bite than the latter.

I think this asymmetry is why I feel intuitively bound to rank the unfortunate child against the void in a way I don't feel when it comes to the happy child; and why I don't like the idea of us turning ourselves into anti-humans, but I don't have a strong intuitive reaction against us choosing the void--I think our reasons for avoiding pain are much more convincing and inarguable than our reasons for pursuing pleasure.

I think in general, utilitarianism has a harder time working out the details of what should count toward *positive* utility--this may just be my impression, but I'd guess there's a lot more controversy over what counts as well-being, and what sorts of things contribute to it, and in what way, than over what sorts of things contribute to *negative* utility.

I think maybe the reason I think of pleasure and pain as asymmetric, then, is that I find utilitarianism's arguments much more convincing when talking about suffering. So maybe one doesn't need to adopt an extreme view like "all utility functions are bounded above by 0" to explain why it feels more intuitive to reason about preventing suffering than about promoting joy; maybe it's a matter of moral uncertainty: no plausible competitor can think it's good to let someone suffer pointlessly--that's more or less the strongest moral datum we have. But plausible competitors *can* disagree with utilitarian conclusions about well-being.

author

Yes, I agree that it's much more controversial exactly what contributes to positive well-being. (This isn't specific to utilitarianism.) FWIW, my own view is that positive hedonic states don't really matter all *that* much; they're nice and all, but the things that *really* make life worth living are more objective/external: things like loving relationships, achievements, etc. But as you note, that specific claim about the good is going to be much more controversial than "pain is bad", so it makes it a bit more difficult to make specific claims about what's worth pursuing. That's why I try to keep the claim more general: the best lives, *whatever it is* that makes them worth living, are better to have exist than to not exist.

Feb 2, 2023 · Liked by Richard Y Chappell

That makes sense to me; I re-read your older post on killing vs. failing to create, and I think "strong personal reasons" to worry about people who will exist independently of our choices, vs. "weak impersonal reasons" to worry about bringing into existence future people is a distinction I find intuitive.

I think one thing I hadn't done a good job separating out is, in arguments contrasting the void with future Utopias, often the Utopias are stipulated to be filled with staggeringly large numbers of people, so that even with only weak impersonal reasons to create future lives, the number of people involved is big enough that the overall product is still a huge number--I think part of my intuitive rejection of this sort of reasoning is it feels a bit to me like a Pascal's mugging. But I was conflating that with a contrast between the void and Utopia *at all*.

And I guess the void still has the unique property that the void precludes *any* number of future people existing, so comparisons with it will always have something of a Pascal-ish quality.

Anyway, thanks for a very interesting discussion! I really appreciate your willingness to engage with amateurs like me, and I really enjoy the blog as a whole. I loved Reasons and Persons when I read it years ago, and I'm really glad I've found a place where I can not just follow, but even participate in, discussions on the same issues.


It's only a Pascal's mugging if the one making the argument can just make up any number they want, with no independent argument for an expected range. Some people peripherally involved in long-termist arguments online undoubtedly do this, but the central figures in long-termism do make independent arguments based upon the history and mathematics of population growth, technology and wealth growth, and predictions about the colonization of space.


That's a fair point; it's definitely a lot better that the numbers filling the postulated utopias are not just ex culo.

And I don't want to keep fighting this point on an otherwise dead thread, but I just want to articulate my feeling that, at least in the formulation above, there's still something fuzzy about the math: it's not clear how exactly to multiply "weak impersonal reasons" by large numbers (and, also of course, by the probability that these numbers are actually attainable) to come to clear conclusions, and it sometimes feels like the strength of these arguments derives from the stupefaction one feels at the largeness of the large numbers.
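
(To make that concrete: here is a minimal Python sketch of the multiplication being gestured at, with every number invented for illustration rather than taken from any actual long-termist estimate:)

```python
# Toy expected-value calculation (all numbers are assumptions for illustration).
# The "large numbers" argument multiplies a per-life weight by a stipulated
# future population and by the probability that this future is attained.

weight_per_life = 1e-6    # strength of a "weak impersonal reason" per life (assumed)
future_population = 1e30  # stipulated number of future people (the large number)
p_attained = 1e-9         # probability the utopia is actually realized (assumed)

expected_value = weight_per_life * future_population * p_attained
print(expected_value)     # 1e15: the product is huge because the population
                          # term can be stipulated almost arbitrarily large
```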

But, as I say, it's a pretty good reminder that actually, the large numbers are in some ways the least controversial part of that calculation--definitely in comparison to quantifications of "weak impersonal reasons", and probably in comparison to the probabilities too--they are not (usually) just picked to be stupendously large out of convenience, so thanks for pointing that out.

Jan 26, 2023 · edited Jan 26, 2023 · Liked by Richard Y Chappell

"One might infer that good states above a sufficient level have diminishing marginal value"

Can't one just restate the original gamble, but now with the Utopia stipulated to have arbitrarily large value, instead of whatever other good it was measured in before? If value itself is the unit of evaluation, then shouldn't a non-risk-averse person be indifferent between a decent world and a 50/50 gamble with outcomes of +N value and -N value, for any N?

Even if you think there is a maximum possible value (which as you note in the other post, has its own problems), it doesn't seem outrageous to me that the maximum would be large enough to admit a gamble of this form that would still be very counterintuitive for most people to actually accept over the alternative.
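
(For concreteness, here is a minimal Python sketch of that gamble, with an invented concave utility function standing in for "diminishing marginal value"; the curvature scale and stakes are arbitrary:)

```python
import math

def eu(lottery, u):
    """Expected utility of a lottery given as [(outcome, probability), ...]."""
    return sum(p * u(x) for x, p in lottery)

linear = lambda x: x                              # value taken at face value
s = 100.0                                         # toy curvature scale (assumed)
concave = lambda x: s * (1.0 - math.exp(-x / s))  # diminishing marginal value

N = 500.0                                         # stakes relative to the decent world
gamble = [(+N, 0.5), (-N, 0.5)]

print(eu(gamble, linear))   # 0.0:    a non-risk-averse agent is indifferent
print(eu(gamble, concave))  # ~-7321: a risk-averse agent prefers the sure thing
# The restatement above: if +/-N are stipulated to be units of *value itself*,
# the concave transform has already been applied, and indifference to some
# such gamble reappears for any expected-value maximizer.
```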

To the general point: I made a similar argument in the comments to an earlier post on a similar topic, but isn't it enough to note that most people have a preference for Utopia over the void, and argue that Utopia is better on the grounds that it satisfies our current preferences more? Does there need to be an *intrinsic* reason why Utopia is better than the void?

In general, the idea of intrinsic value seems odd to me. What appeals to me about consequentialism and utilitarianism is that they are very person-centric: utility is about what's good *for people*, unlike deontology or divine command or whatever, which center "goodness" somewhere else, somewhere outside of what actually affects and matters to people.

Obviously the above is too naive a conception of utilitarianism to be all that useful: we often face dilemmas where we have to decide how to evaluate situations that are good for some people but not for others, or where we face uncertainty over how good something is, or whether it's good at all, and so we need a more complex theory to help us deal with these issues.

But when contemplating the void, it feels to me like we aren't in one of these situations: there are no people in the void, and so no one for whom it could be good or bad; the only people for whom it can be good or bad are the people contemplating it now, and so we should be free to value it however we want, with no worry of our values coming into conflict with those of the people who live in that world. As it happens, we (mostly) currently very strongly dis-prefer the void--but there's no intrinsic reason we have to, and if we were to collectively change our minds on the point, that would be fine.

author

You could also restate the gamble in terms of *risk-adjusted value*, where +/- N risk-adjusted value is just whatever it takes for a risk-averse agent to be indifferent to the gamble. But I think these restatements aren't so troubling, because we no longer have a clear idea of what the gamble is supposed to be (and hence whether it's really a bad deal). If the worry is just that, structurally, we're committed to accepting *some* 50/50 gamble, I guess I'd want to hear more about what view avoids this possibility, and what other problems it faces instead.

> In general, the idea of intrinsic value seems odd to me.

It sounds like you may be mixing up issues in normative ethics and metaethics here. Intrinsic value, as a normative concept, is just the idea of something's being non-instrumentally desirable. While I happen to be a moral realist, and think there are objective facts about this, you could just as well endorse my claims while being an expressivist. In that case, when you say "X is intrinsically valuable", you're just expressing that you're in favour of people non-instrumentally desiring X. So, in particular, an expressivist who wants there to be good lives in future, and favours others sharing this desire, could affirm this by calling good lives "intrinsically valuable". There's nothing metaphysical about it. It's just a normative claim.

> "[Why not] argue that Utopia is better on the grounds that it satisfies our current preferences more?"

Well, depending on who counts in the "we", just imagine it were otherwise. Suppose you were surrounded by anti-natalists. Mightn't you nonetheless oppose their view, and want utopia to come into existence? I sure would! As a moral realist, I happen to think this is also the *correct* attitude to have. But even if I were an expressivist, I wouldn't want my support for utopia to be conditional on others' attitudes (or even my own: I would be "pro-utopia, even on the hypothetical condition that I cease to be pro-utopia").

> "there are no people in the void, and so no one for whom it to be good or bad... and so we should be free to value it however we want"

This seems wrong. Suppose that Sally is considering having a child with a genetic condition that would cause it unbearable suffering. Clearly, it would be wrong to bring the miserable child into existence. The void is better. There's just no denying that negative welfare is impersonally and intrinsically bad: we have to oppose it, even if there isn't (yet) any definite person for whose sake we would be acting when we tell Sally not to have the miserable child.

By parity of reasoning, there's no basis for denying the parallel claims about positive welfare being intrinsically good. Just as the miserable child would be (non-comparatively) harmed by being brought into existence, and we should oppose that, so a wonderfully happy child would be (non-comparatively) benefited by being brought into existence, and we should support that. So, these conclusions are forced on us by a simple mixture of (i) basic decency and (ii) intellectual consistency.


Thanks for the very good response!

> If the worry is just that, structurally, we're committed to accepting *some* 50/50 gamble, I guess I'd want to hear more about what view avoids this possibility, and what other problems it faces instead.

Oh sure, I agree that you can't avoid having to pick some gamble like that. I guess the question is, does the move to diminishing marginal value matter here, or do we just want to say something like, yes, expected-value-maximization says we should take some gamble of this form, but

a) your alternative pet theory probably does the same, a la your "Puzzles for Everyone" post, and

b) we shouldn't imagine we are conceptualizing both ends of the gamble correctly, so we should be wary of relying too heavily on our intuition here.

> So, in particular, an expressivist who wants there to be good lives in future, and favours others sharing this desire, could affirm this by calling good lives "intrinsically valuable".

I'm not sure I totally understand--how is this different from the expressivist just having a preference for future good lives? I suppose from their point of view, they would say "I don't think this is good just because it satisfies my preferences", but from an outside view, it seems to me hard to distinguish an opinion on "intrinsic value" from a preference, at least from the point of view of a non-moral-realist.

> I would be "pro-utopia, even on the hypothetical condition that I cease to be pro-utopia").

I guess this is where we disagree. I am basically fine with the idea of the anti-natalists winning, as long as they do so by honourable means.

> even if there isn't (yet) any definite person for whose sake we would be acting when we tell Sally not to have the miserable child.

> By parity of reasoning, there's no basis for denying the parallel claims about positive welfare being intrinsically good.

I agree that *if* we bring a child into the world, and they love their life, we can regard that as a benefit for the child...but only conditional on bringing them into the world. If I had to summarize the view I think I'm arguing for, it would be something like, "you only have to care about the benefits/harms to a person in the worlds where they actually exist"--so Sally's child is harmed by being "forced" to live in a world where they will suffer; and a person with a good life is benefited by being born in any of the worlds in which they are, in fact, born. But in the worlds where a person is not born, we don't have to weight their benefits/harms in our calculation of what to do. We can *choose* to do so, as a matter of our personal preferences, or for other instrumental reasons, but I don't see why there is any intrinsic reason to do so.

author

Quick counterexample to your last claim: suppose Sally flips a coin to decide whether to create a miserable child. Fortunately, the coin directs her not to. But now your view implies that Sally needn't have taken into account the interests of the child who would've been miserable. But this seems wrong. Sally was wrong to flip a coin, and take a 50% risk of causing such suffering. She should have outright (100%) chosen not to have the miserable child, and she should have done it out of concern for that child's interests.

> from an outside view, it seems to me hard to distinguish an opinion on "intrinsic value" from a preference, at least from the point of view of a non-moral-realist.

Yeah, I'm no expert on expressivism, but a couple of possibilities:

(1) The relevant thing might be that it's a special kind of universal higher-order preference: they want *everyone* to have the relevant first-order preference.

(2) Alternatively, it might be that they're in favour of blaming or otherwise morally criticizing people who don't have the relevant first-order preference.

(There may be other options!)

Jan 27, 2023 · Liked by Richard Y Chappell

Sorry, I realized overnight that I missed the point that in the example where we don't create the child, the void is ranked against the world in which the miserable child is born; if we can do a comparison in that case, why not in the other case?

That actually feels pretty convincing to me; I still feel conflicted about this, but I think if I really want to believe that the void isn't worse than Utopia I really do need an explicit person-affecting view, or to have an explicit asymmetry between negative welfare and positive welfare.


“taking it as a premise that positive intrinsic value is possible (utopia is better than a barren rock), “

Is one an application of the other, or are they unrelated? I can think that utopia is better than a barren rock without accepting anything about intrinsic value. Am I just using the terms differently?

How is intrinsic value different from utility? I guess instrumental value counts as utility also, although it derives its utility from the end to which it serves as a means.

In this context, would extrinsic value and instrumental value be the same thing?


"The fires of the soul are great, and burn with the same light as the stars"

Merlin (HPMOR)


>"Saying this risks coming off as insulting, but I don’t mean it that way"

>[the next paragraph:] "it’s insane to deny this premise... purely negative ethical views are insane"

(I think the "not being insulting" thing may need a little work here)

author

Ha, well, I also don't want to downplay or sugarcoat how bad I think the view is. Sometimes philosophers defend crazy things! I think this is one of those times. Saying so is apt to make some people feel insulted, so I figured I should acknowledge that while clarifying that I'm not *aiming* to make anyone feel bad. But I'm basically OK with it being a foreseen side-effect of clearly conveying (i) how bad the view is, and hence (ii) why others should generally be on board with taking its rejection as a non-negotiable premise (as is needed for the rest of the post to get off the ground).

Jan 27, 2023 · Liked by Richard Y Chappell

>"a foreseen side-effect"

Now you're wounding like a deontologist!

(I originally meant to type "sounding like a deontologist," but I think it works this way as well.)


I agree with this, with one exception. I think that it is, in fact, possible to argue people out of the 'pleasure isn't good, but pain is bad' position. Among other things, even worse than implying utopia is worse than a barren rock, it implies it would be morally neutral to press a button that would make no future people ever happy again--and that utopia is no better than everyone just living slightly worthwhile lives with no suffering. That a life filled with love, good food, and general joy is no better than muzak and potatoes.

Jan 27, 2023 · Liked by Richard Y Chappell

This argument works against a crude statement like "pain bad, pleasure neutral," but fails against the following formulations:

(1) All conscious existence has negative value. What we call "pleasure" can make it less negative, and sufficient quantities of "love, good food, and general joy" can help the value of a life asymptotically approach the zero level, but they can't make existence better than nonexistence. (A toy value function with this shape is sketched after this list.)

(2) Lexical negative utilitarianism and related axiologies. (e.g. Pleasure is good, but not good enough to offset even trivial amounts of pain.)
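
(For what it's worth, formulation (1) is straightforward to make precise; here is a toy value function with that shape, invented purely for illustration:)

```python
import math

# Toy axiology for formulation (1): every conscious life has negative value,
# accumulated goods push that value toward zero asymptotically, but it never
# crosses zero. The functional form and constant k are assumptions.
def life_value(goods, k=0.01):
    return -math.exp(-k * goods)  # in [-1, 0) for goods >= 0; -> 0 as goods -> infinity

print(life_value(0))    # -1.0:     bare conscious existence
print(life_value(500))  # ~-0.0067: much "love, good food, and general joy"
# No finite quantity of goods makes life_value positive, so existence never
# beats nonexistence (valued at 0) on this view.
```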

Jan 27, 2023 · Liked by Richard Y Chappell

> (1) All conscious existence has negative value. What we call "pleasure" can make it less negative, and sufficient quantities of "love, good food, and general joy" can help the value of a life asymptotically approach the zero level, but they can't make existence better than nonexistence.

This seems like an extreme formulation to me, but I admit that something a little like it has at least some intuitive appeal to me; I often feel that I'm attracted to a sort of "palliative" version of utilitarianism: an ethics that tries to offer comfort and ease suffering. Whereas more "positive" formulations of utilitarianism leave me cold; they often leave me feeling like we are doing a "make line go up" for the sake of the *Universe* rather than for the sake of the people living within it--it feels much more right to me to say, "while we're here, we have a duty to make the world more pleasant and livable" than to say "we have a duty to remain here, and make the universe a certain way, even if no one wants it that way"...but I think what this discussion makes me realize is that it might be very hard or even impossible to formulate a logically consistent version of my view without resort to an extreme position like the position (1) that you articulate above.


>"very hard or even impossible to formulate a logically consistent version without resort to an extreme position like the position (1)"

For what it's worth, my own view is that trying to develop a logically consistent ethical system is a fundamentally misguided project, and that the ever-present temptation to borrow metaphors from mathematics (even basic ones like "good ~ positive" and "bad ~ negative") is especially likely to lead astray.


I don't think this question is generally addressable without discussing your meta-ethical views.

If welfare is good because (and only because) we as individuals care about our welfare (call this Premise C) then things being good requires actual people (whether past/present/future) to exist in the first place - otherwise there is no source of value, and no basis on which to judge what is good and what is not.

Note that this isn't necessarily constructivism, insofar as something like "X is good iff it is non-contingently desired" is plausibly mind-independent in the Shafer-Landau sense, hence yielding some kind of moral realism.

The real question, of course, is whether we should accept Premise C. I think, broadly speaking, there are two compelling reasons for it:

(a) At a positive level, we do obviously care about our own lives/freedom/happiness/etc, and as a result these things are good (possess ought-to-be-ness, have reason for existence, whatever). And if you take a step back and ask what would happen if you didn't care about these things, there *doesn't seem to be any reason for the universe to care* - there doesn't appear to be any reason, separate from your caring, for these things to matter.

(b) It would be an extremely implausible metaphysical coincidence that our welfare just happens to be good from the point of view of the universe, separately from us caring about it. For the sake of argument, consider that there metaphysically could be a planet of anti-humans - with the residents there telically desiring the anti-welfare of humans (i.e. that we die, are made slaves, are unhappy, etc), and having pro-attitudes towards the inverse of the things we have pro-attitudes to. And it's just hard to justify why we would be cosmically right and them cosmically wrong - why it just happens that the stuff we value (and not the stuff the anti-humans value) is what the universe also values in the Mackie/Platonic sense. But perhaps this is just a long-winded way of saying the evolutionary debunking arguments are compelling, unless you have Premise C and some sort of meta-ethical view that links valuation to objective value in a way that (a) avoids the coincidence, and (b) still gets you a sufficiently strong sense of mind-independence as to defeat the radical moral sceptic who keeps asking why we should care about others.

author

One challenge to Premise C is the temporary depressive. Suppose Adam doesn't care about his welfare, and is considering suicide. He also has access to a 100% reliable antidepressant pill. If he takes the pill, he will have a very happy future (which he would, at that future time, highly value). But right now, he doesn't value that at all. So he commits suicide instead. Did Adam make a mistake? I think yes: he should've taken the pill and had a happy future instead. C implies no.

For further arguments, see: https://rychappell.substack.com/i/54631002/against-subjectivism

On why my normative claims here don't rely on moral realism, see my response to JeremyL, upthread:

https://rychappell.substack.com/p/dont-valorize-the-void/comment/12255454

For my defense of moral realism against evolutionary debunking arguments, see my paper 'Knowing What Matters', summarized here: https://www.philosophyetc.net/2018/03/on-parfit-on-knowing-what-matters.html

Or you can read the full paper here: https://philpapers.org/archive/CHAKWM.pdf

Jan 28, 2023 · edited Jan 28, 2023

Hi Richard, thanks for the reply!

I'm not sure if I would be especially convinced by this thought experiment. Three points:

(1) Even if this worked as an intuition pump, it doesn't solve the deeper meta-ethical problems of how we ground objective value (if not in what we desire). And from the POV of the sceptic, this thought experiment is no better than a deontologist relying on Transplant thought experiments to defeat utilitarianism even though they can't point to appropriate grounding for side-constraints.

(2) I think the most plausible view of desire-based theories of welfare will be global/life-cycle (i.e. what does X person, as a chain-of-continuous-consciousness, intrinsically want?). That is to say, from the perspective of who we are over time, XYZ things matter, and normal human blips from depression/fatigue/etc don't change what these XYZ things are. Moreover, this gets around logical issues like there being infinitely many people at t1, t1.5, t1.25, etc, wanting the same thing Y, such that Y has infinite value.

(3) I'm not even certain that it makes sense to say that a depressed person doesn't want to be happy. They may not see the meaning of life, because life isn't happy - that doesn't mean they don't see the value of happiness. If you asked any depressed person if they would press a button that would make them magically happy, they would! I guess the upshot of this is just that I don't believe this is a fair intuition pump with respect to what we're getting at - of course we agree that taking the happy pills (rather than suicide) is reasonable; it's just that the intuition motivating this conclusion (i.e. actually, from the POV of the depressed person, happiness still matters) doesn't prove that happiness has some non-desire grounding for its value.


>> One challenge to Premise C is the temporary depressive.

A lot of these sorts of arguments feel to me like debunkings of naive versions of utilitarianism; it's true that, all else equal, someone who holds Premise C has to bite a bullet here, but in a realistic case, there will be instrumental reasons and prudential reasons and all sorts of other reasons why a Premise C-er can still dodge the question.

author

Hmm, well I'm trying to get at the deeper issue that it sure seems like we should want what's best for Adam, rather than having weird conditional desires (only wanting Adam to have a good life conditional on him -- or perhaps someone else in the imagined scenario -- *already* wanting this). So it's getting at the deep question of what's worth caring about, rather than any superficial intuition about what act-types just "seem wrong" or whatever.

To help confirm this, we can cut the agent's act of suicide (which some might find distracting) out of the picture. Suppose Adam dies from natural causes -- he's struck by lightning. This didn't thwart any of Adam's actual desires. But if he hadn't died right then, he would've quickly gotten over his depression and had a very happy future. Is it bad that Adam died? Should you regret this? I say "yes, obviously!" But C denies this. So we should reject C as incompatible with decent values.


I'm still not sure I see it; you can regret this without giving up C as long as you think the reason to regret it is that it will make his family sad or whatever. It's also not clear to me if "temporary" is meant to be understood in the sense that he was previously not depressed, and then became depressed--if so, then I think there is also probably some complicated thing about how to evaluate an individual's intertemporal preferences that complicates C, but it's the sort of thing all utilitarians have to deal with to some extent.

You can still probably clean it up to sharpen the point: Adam has no friends or family, has been miserable and unhappy his whole life, and is suddenly killed painlessly--but in all possible worlds where he doesn't die at this point, his life becomes happy and joyful immediately after (perhaps the near-death experience makes him realize that life is precious?)

I think a version like this is more convincing, although one might argue that we are implicitly imputing to Adam some sort of meta-preference (he wishes he *could* be happy enough to value his welfare?) that might be affecting our judgement, and the violation of those meta-preferences triggers C.

If you stipulate that even an "It's a Wonderful Life"-style vision of his happy future wouldn't change his mind, then, though I agree that I would probably still regret his death, it would be harder for me to argue that this isn't just due to my preferences, rather than a considered judgement about values.

author

Right, flesh out the details as needed. If it's just an arbitrary personal preference on your part to have Adam go on to have a happy life, then you'd seem committed to not judging anyone who happens to have a different preference about the case. But it doesn't seem (to me at least) *optional* to regard Adam's death here as a bad thing. Indifference towards his death (a death which makes him *worse off than he otherwise would have been*) seems objectionably callous, and a failure to appropriately value him as a person.

To bring this out more clearly, suppose that Adam's parents come to fully believe C, and so do not care *specifically for Adam's sake* that he died and missed out on a wonderful future. (Maybe they care for other reasons, but bracket that.) That would seem messed up. They aren't valuing Adam in the right kind of way -- and neither does anyone else who accepts C.

In general, I think it's a kind of philosophical error to let metaphysical qualms dictate your values. I don't think anyone would find C plausible except for feeling like they somehow *have* to affirm something like this in order to qualify as a naturalist in good standing (or some such). But really there's a kind of is/ought error happening here, and you don't need to rely on a "point of view of the universe" in order to non-instrumentally and unconditionally value others' well-being. Confused talk about the "sources of value" unfortunately has the effect of muddying this basic point, and misleading people into thinking that they have no choice but to constrain their values to accord with some form of normative subjectivism. But it's not true: you really can value whatever you want (or whatever most plausibly matters)! If you're not a realist, there's no outside force to tell you you're wrong. And if you are a realist, you should think the substantively plausible values are more likely to be correct. In neither case should you feel compelled by normative subjectivism. It's a philosophical trap.


> If it's just an arbitrary personal preference on your part to have Adam go on to have a happy life, then you'd seem committed to not judging anyone who happens to have a different preference about the case. But it doesn't seem (to me at least) *optional* to regard Adam's death here as a bad thing.

I don't think it's *arbitrary*; it's formed from the observation that in the vast majority of cases it will in fact be wrong for people to die suddenly, and blah blah blah. And I don't feel committed to judging those who feel differently; in particular, if *Adam himself* feels differently, then I don't see what grounds I have to disagree with him.

>> They aren't valuing Adam in the right kind of way -- and neither does anyone else who accepts C.

I'm not sure why they aren't: it seems to me that part of valuing other people is deferring to their judgement on what it is that would count as a valuable life for them; if their judgement is that no such life would count, then we should take them at their word.

I think part of the issue is that it's very hard to imagine that Adam's judgement is sound; maybe his depression is clouding him from imagining what a happy life might look like--and I think that's a very reasonable concern!

What's more, I think it's very difficult to conceive of how Adam could be instantaneously 100% cured of his depression and go on to live a life of joy if the capacity for that joy weren't somehow latent in him already; the very formulation of the scenario seems to be pushing on us the implicit meta-preference I mentioned before: if, within an instant, he is capable of being completely turned around, there must already be some part of him that *wants* to be turned around, and so we can't take at face value the idea that the depressive part of him is speaking on behalf of "all of him".

I even think it's very plausible that it is just a fact of human psychology, evolutionarily driven into us, that *all* human beings will always have *some* part of them that wants to keep living, and so on...in which case, no actual human being could ever be in an Adam-like position without triggering C.


The Adam case works as an intuition pump, but it falls short of being an argument. For someone inclined to accept C, it's pretty easy to accept that Adam didn't make a mistake.

On the moral realism thing — it may be true that your normative claims can be separated from your moral realism, but your writing style is so pervaded by realist turns of phrase that it's easy to see why people keep picking up on that and assuming that your realism is central to your argument. For example:

>"it’s insane to deny this premise"

>"important moral insights"

>"any theory formulated in purely negative terms... cannot possibly be correct"

>"moral theorists have misgeneralized their intuitions"

Whether or not this matters depends, I suppose, on how persuasive you want your arguments to seem to readers who do not share your moral realism.

author

Hmm, I think it would be difficult to do normative ethics in a way that didn't sound at least superficially "realist". That's part of why anti-realists like Blackburn and Gibbard have put so much work into showing that their metaethics is compatible with "talking like a realist". So I think I'd rather just urge anti-realist readers to shed their unnecessary suspicion of objectively-tinged moral discourse.


I guess what strikes me as objectionable is the conjunction of realism with the sort of strong confidence in your own normative views (and dismissal of other views) that you express in this post.

Suppose I find myself in the following situation:

(1) I believe there is a true, objectively correct axiology.

(2) I sometimes encounter otherwise reasonable-seeming people who have thought carefully about the relevant issues and concluded that there is no strong reason to prefer utopia over the barren rock.

(3) I am unable to imagine that the correct axiology could be indifferent between the barren rock and utopia, but also unable to offer any arguments supporting my own view.

What is the appropriate attitude to adopt in such a situation? I don't think there is anything wrong with saying "I am just going to take it as a premise that utopia is substantially better than the barren rock." This is just like assuming the parallel postulate and studying Euclidean rather than non-Euclidean geometry. But if I can't imagine how the parallel postulate could possibly be false, this does not justify me in calling non-Euclidean geometry "insane" or assuming I possess a fundamental insight into mathematical truth that non-Euclidean geometers lack; rather, I should take it as an opportunity to reflect on the limitations of my own geometric imagination.

The precise relevance of (1) here is a little difficult to pin down, but perhaps it is something like this: without something like (1), I would not be tempted to call the other views crazy, any more than I would be tempted to call someone crazy for thinking that tea tastes better than coffee. And if I do say "the claim that tea tastes better than coffee is insane, and any theory of hot beverages that implies this claim should be instantly disqualified," it would seem disingenuous, when encountering someone who objects that hot beverage preferences are a matter of personal taste, for me to urge them to drop their needless suspicion of objectively tinged hot beverage discourse, since they can just interpret my claims merely as an expression of my own strong preference for coffee — that clearly wasn't what I thought I was saying (or doing) when I made the claim.

author

See 'Knowing What Matters' - https://philpapers.org/rec/CHAKWM - where I set out my normative epistemology, including an explanation of why actual disagreement is irrelevant. (You could have "otherwise reasonable-seeming people" who think that torture is the only good, so either moral realists can have default trust in their normative intuitions -- including about which other views are outright crazy -- or they're inevitably led to full-blown skepticism, which I think is plainly far worse.)

> without something like (1), I would not be tempted to call the other views crazy, any more than I would be tempted to call someone crazy for thinking that tea tastes better than coffee

Note that much of 20th century metaethics precisely involved anti-realists of various stripes showing how they can avoid being committed to the sort of simple subjectivism you seem inclined to attribute to them. Instead, they defend the claim that they can be just as firm as anyone in their moral conviction (e.g.) that the Holocaust was a moral *atrocity*. See: https://plato.stanford.edu/entries/moral-cognitivism/#QuaRea

I'm not super-invested in defending the view, but I do think that the sophisticated non-cognitivist views developed by philosophers are far superior to the simple subjectivist views that non-philosophers too often fall into (and that end up diluting and distorting their normative views).

Whatever your metaethical views, we should all be appalled by the torture inflicted on factory-farmed animals. We should all be appalled by the claim that children dying overseas don't matter. And we should all be appalled by the view that everything good in life isn't really good at all: that an utterly empty universe is as good as it gets, and that the extinction of all life is a matter of (intrinsic) indifference. These are horrific claims, and any metaethical view worth taking seriously needs to be compatible with aptness of criticizing horrific moral views as such.


The section in that paper on "when actual disagreement matters" is a bit brief, so I'm not sure I fully understand the position you sketch there. It seems to be something like "disagreement matters when it is non-ideal, and we can hope to resolve it through clarifying arguments; but if there are fundamentally different worldviews, we can expect the arguments to be dialectically unsatisfying, so we don't need to worry about it too much."

Is that a fair summary? If so, I think it's a pragmatically reasonable approach to adopt (nobody wants to waste time coming up with arguments against the view that torture is the only good), but it seems difficult to reconcile with a strong commitment to metaethical realism. (In other words, your treatment of the skeptical argument from disagreement leaves me uncertain whether you think you have a counterargument to the skeptic or are just claiming that you don't need one.)

>"we should all be appalled by the view that everything good in life isn't really good at all, and that the extinction of all life is a matter of (intrinsic) indifference"

I agree with the other normative claims in your last paragraph, but this last one (which comes back to the original topic of your post here) I think is just wrong, quite independently of any metaethical disagreements. I could just as easily say that "we should all be appalled" by the view that it is worth risking an eternal dystopia merely in order to ensure humanity's continued existence. On either side, prematurely rejecting ideas that deserve serious consideration represents a failure of moral imagination that risks denying us access to the actual moral truth (if such a thing exists) or preventing us from satisfactorily clarifying the structure of our own moral attitudes (if it does not).


>> If welfare is good because (and only because) we as individuals care about our welfare (call this Premise C) then things being good requires actual people (whether past/present/future) to exist in the first place - otherwise there is no source of value, and no basis on which to judge what is good and what is not.

>> It would be an extremely implausibly metaphysical coincidence that our welfare just happens to be good from the point of view of the universe, separately from us caring about it.

Yeah, this is the basic intuition that seems so compelling to me; I am not a moral realist, and TBH, I don't find your version of "intrinsically valuable" for non-moral-realists very compelling; but I think your example above points out that consistency might require me to assign value to the void in a certain way to be in accordance with what human beings actually currently value.

And while I think there's room to get around this, most of the avenues for doing so open the door to other things we disvalue based on our current value system being re-ranked as well.

My intuition is something like, in a world without humans, there would be no humans to be unhappy about that, so who cares? But this is basically the same as, in a world with anti-humans, anti-humans would be the measure of goodness, so who cares? And while I think that is very plausible, I still feel *we shouldn't create a world full of anti-humans*--because it conflicts with my values *now*. I don't have that feeling for *we shouldn't create a world devoid of people*, but I think on a first glance, at least, it's not clear why I should treat these cases differently.

Deleted · Feb 10, 2023 · Liked by Richard Y Chappell
Comment deleted
author

Hedonism is bad, but I guess that's an improvement nonetheless... ;-)
