I don't think this question is generally addressable without discussing your meta-ethical views.

If welfare is good because (and only because) we as individuals care about our welfare (call this Premise C), then things being good requires actual people (whether past/present/future) to exist in the first place - otherwise there is no source of value, and no basis on which to judge what is good and what is not.

Note that this isn't necessarily constructivism, insofar as a principle like "X is good iff it is non-contingently desired" is plausibly mind-independent in the Shafer-Landau sense, hence yielding some kind of moral realism.

The real question, of course, is whether we should accept Premise C. I think, broadly speaking, there are two compelling reasons for it:

(a) At a positive level, we do obviously care about our own lives/freedom/happiness/etc, and as a result these things are good (possess ought-to-be-ness, have reason for existence, whatever). And if you take a step back and ask what would happen if you didn't care about these things, there *doesn't seem to be any reason for the universe to care* - there doesn't appear to be any reason, separate from your caring, for these things to matter.

(b) It would be an extremely implausible metaphysical coincidence that our welfare just happens to be good from the point of view of the universe, separately from us caring about it. For the sake of argument, consider that there metaphysically could be a planet of anti-humans, whose residents telically desire the anti-welfare of humans (i.e. that we die, are made slaves, are unhappy, etc.) and have pro-attitudes towards the inverse of everything we have pro-attitudes towards. And it's just hard to justify why we would be cosmically right and they cosmically wrong - why it just happens that the stuff we value (and not the stuff the anti-humans value) is what the universe also values in the Mackie/Platonic sense. But perhaps this is just a long-winded way of saying the evolutionary debunking arguments are compelling, unless you have Premise C and some sort of meta-ethical view that links valuation to objective value in a way that (a) avoids the coincidence, and (b) still gets you a sufficiently strong sense of mind-independence as to defeat the radical moral sceptic who keeps asking why we should care about others.

One challenge to Premise C is the temporary depressive. Suppose Adam doesn't care about his welfare, and is considering suicide. He also has access to a 100% reliable antidepressant pill. If he takes the pill, he will have a very happy future (which he would, at that future time, highly value). But right now, he doesn't value that at all. So he commits suicide instead. Did Adam make a mistake? I think yes: he should've taken the pill and had a happy future instead. C implies no.

For further arguments, see: https://rychappell.substack.com/i/54631002/against-subjectivism

On why my normative claims here don't rely on moral realism, see my response to JeremyL, upthread:

https://rychappell.substack.com/p/dont-valorize-the-void/comment/12255454

For my defense of moral realism against evolutionary debunking arguments, see my paper 'Knowing What Matters', summarized here: https://www.philosophyetc.net/2018/03/on-parfit-on-knowing-what-matters.html

Or you can read the full paper here: https://philpapers.org/archive/CHAKWM.pdf

Hi Richard, thanks for the reply!

I'm not sure if I would be especially convinced by this thought experiment. Three points:

(1) Even if this worked as an intuition pump, it doesn't solve the deeper meta-ethical problem of how we ground objective value (if not in what we desire). And from the POV of the sceptic, this thought experiment is no better than a deontologist's reliance on Transplant thought experiments to defeat utilitarianism even though they can't point to appropriate grounding for side-constraints.

(2) I think the most plausible version of desire-based theories of welfare will be global/life-cycle (i.e. what does X person, as a chain-of-continuous-consciousness, intrinsically want?). That is to say, from the perspective of who we are over time, XYZ things matter, and normal human blips from depression/fatigue/etc. don't change what these XYZ things are. Moreover, this gets around logical issues like there being infinitely many person-stages at t1, t1.25, t1.5, etc., all wanting the same thing Y, such that Y has infinite value.
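
To make that worry concrete, here is a rough sketch of my own (the fixed per-desirer weight $w$ is purely an illustrative assumption): if each momentary person-stage counted as a separate desirer of $Y$, each contributing some weight $w > 0$ to its value, then

$$V(Y) = \sum_{n=1}^{\infty} w = \infty,$$

whereas the life-cycle view counts one desirer per whole life, leaving $V(Y) = w$ finite.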

(3) I'm not even certain that it makes sense to say that a depressed person doesn't want to be happy. They may not see the meaning of life, because life isn't happy - that doesn't mean they don't see the value of happiness. If you asked any depressed person whether they would press a button that would make them magically happy, they would! I guess the upshot of this is just that I don't believe this is a fair intuition pump with respect to what we're getting at - of course we agree that taking the happy pills (rather than suicide) is reasonable; it's just that the intuition motivating this conclusion (i.e. that, from the POV of the depressed person, happiness still matters) doesn't prove that happiness has some sort of non-desire grounding for its value.

>> One challenge to Premise C is the temporary depressive.

A lot of these sorts of arguments have the feel of debunkings of naive versions of utilitarianism; it's true that, all else equal, someone who holds Premise C has to bite a bullet here, but in a realistic case, there will be instrumental reasons and prudential reasons and all sorts of other reasons why a Premise C-er can still dodge the question.

Hmm, well I'm trying to get at the deeper issue that it sure seems like we should want what's best for Adam, rather than having weird conditional desires (only wanting Adam to have a good life conditional on him -- or perhaps someone else in the imagined scenario -- *already* wanting this). So it's getting at the deep question of what's worth caring about, rather than any superficial intuition about what act-types just "seem wrong" or whatever.

To help confirm this, we can cut the agent's act of suicide (which some might find distracting) out of the picture. Suppose Adam dies from natural causes -- he's struck by lightning. This didn't thwart any of Adam's actual desires. But if he hadn't died right then, he would've quickly gotten over his depression and had a very happy future. Is it bad that Adam died? Should you regret this? I say "yes, obviously!" But C denies this. So we should reject C as incompatible with decent values.

I'm still not sure I see it; you can regret this without giving up C as long as you think the reason to regret it is that it will make his family sad or whatever. It's also not clear to me if "temporary" is meant to be understood in the sense that he was previously not depressed, and then became depressed--if so, then I think there is probably some complication about how to evaluate an individual's intertemporal preferences that makes C harder to apply, but it's the sort of thing all utilitarians have to deal with to some extent.

You can still probably clean it up to sharpen the point: Adam has no friends or family, has been miserable and unhappy his whole life, and is suddenly killed painlessly--but in all possible worlds where he doesn't die at this point, his life becomes happy and joyful immediately after (perhaps the near-death experience makes him realize that life is precious?)

I think a version like this is more convincing, although one might argue that we are implicitly imputing to Adam some sort of meta-preference (he wishes he *could* be happy enough to value his welfare?) that might be affecting our judgement, and the violation of those meta-preferences triggers C.

If you stipulate that even an "It's a Wonderful Life"-style vision of his happy future wouldn't change his mind, then, though I agree that I would probably still regret his death, it would be harder for me to argue that this isn't just due to my preferences, rather than a considered judgement about values.

Right, flesh out the details as needed. If it's just an arbitrary personal preference on your part to have Adam go on to have a happy life, then you'd seem committed to not judging anyone who happens to have a different preference about the case. But it doesn't seem (to me at least) *optional* to regard Adam's death here as a bad thing. Indifference towards his death (a death which makes him *worse off than he otherwise would have been*) seems objectionably callous, and a failure to appropriately value him as a person.

To bring this out more clearly, suppose that Adam's parents come to fully believe C, and so do not care *specifically for Adam's sake* that he died and missed out on a wonderful future. (Maybe they care for other reasons, but bracket that.) That would seem messed up. They aren't valuing Adam in the right kind of way -- and neither does anyone else who accepts C.

In general, I think it's a kind of philosophical error to let metaphysical qualms dictate your values. I don't think anyone would find C plausible except for feeling like they somehow *have* to affirm something like this in order to qualify as a naturalist in good standing (or some such). But really there's a kind of is/ought error happening here, and you don't need to rely on a "point of view of the universe" in order to non-instrumentally and unconditionally value others' well-being. Confused talk about the "sources of value" unfortunately has the effect of muddying this basic point, and misleading people into thinking that they have no choice but to constrain their values to accord with some form of normative subjectivism. But it's not true: you really can value whatever you want (or whatever most plausibly matters)! If you're not a realist, there's no outside force to tell you you're wrong. And if you are a realist, you should think the substantively plausible values are more likely to be correct. In neither case should you feel compelled by normative subjectivism. It's a philosophical trap.

> If it's just an arbitrary personal preference on your part to have Adam go on to have a happy life, then you'd seem committed to not judging anyone who happens to have a different preference about the case. But it doesn't seem (to me at least) *optional* to regard Adam's death here as a bad thing.

I don't think it's *arbitrary*; it's formed from the observation that in the vast majority of cases it will in fact be bad for people to die suddenly, and blah blah blah. And I don't feel committed to judging those who feel differently; in particular, if *Adam himself* feels differently, then I don't see what grounds I have to disagree with him.

>> They aren't valuing Adam in the right kind of way -- and neither does anyone else who accepts C.

I'm not sure why they aren't: it seems to me that part of valuing other people is deferring to their judgement on what it is that would count as a valuable life for them; if their judgement is that no such life would count, then we should take them at their word.

I think part of the issue is that it's very hard to imagine that Adam's judgement is sound; maybe his depression is clouding his ability to imagine what a happy life might look like--and I think that's a very reasonable concern!

What's more, I think it's very difficult to conceive of how Adam could be instantaneously 100% cured of his depression and go on to live a life of joy if the capacity for that joy weren't somehow latent in him already; the very formulation of the scenario seems to be pushing on us the implicit meta-preference I mentioned before: if, within an instant, he is capable of being completely turned around, there must already be some part of him that *wants* to be turned around, and so we can't take at face value the idea that the depressive part of him is speaking on behalf of "all of him".

I even think it's very plausible that it is just a fact of human psychology, evolutionarily driven into us, that *all* human beings will always have *some* part of them that wants to keep living, and so on...in which case, no actual human being could ever be in an Adam-like position without triggering C.

The Adam case works as an intuition pump, but it falls short of being an argument. For someone inclined to accept C, it's pretty easy to accept that Adam didn't make a mistake.

On the moral realism thing — it may be true that your normative claims can be separated from your moral realism, but your writing style is so pervaded by realist turns of phrase that it's easy to see why people keep picking up on that and assuming that your realism is central to your argument. For example:

>"it’s insane to deny this premise"

>"important moral insights"

>"any theory formulated in purely negative terms... cannot possibly be correct"

>"moral theorists have misgeneralized their intuitions"

Whether or not this matters depends, I suppose, on how persuasive you want your arguments to seem to readers who do not share your moral realism.

Hmm, I think it would be difficult to do normative ethics in a way that didn't sound at least superficially "realist". That's part of why anti-realists like Blackburn and Gibbard have put so much work into showing that their metaethics is compatible with "talking like a realist". So I think I'd rather just urge anti-realist readers to shed their unnecessary suspicion of objectively-tinged moral discourse.

I guess what strikes me as objectionable is the conjunction of realism with the sort of strong confidence in your own normative views (and dismissal of other views) that you express in this post.

Suppose I find myself in the following situation:

(1) I believe there is a true, objectively correct axiology.

(2) I sometimes encounter otherwise reasonable-seeming people who have thought carefully about the relevant issues and concluded that there is no strong reason to prefer utopia over the barren rock.

(3) I am unable to imagine that the correct axiology could be indifferent between the barren rock and utopia, but also unable to offer any arguments supporting my own view.

What is the appropriate attitude to adopt in such a situation? I don't think there is anything wrong with saying "I am just going to take it as a premise that utopia is substantially better than the barren rock." This is just like assuming the parallel postulate and studying Euclidean rather than non-Euclidean geometry. But if I can't imagine how the parallel postulate could possibly be false, this does not justify me in calling non-Euclidean geometry "insane" or assuming I possess a fundamental insight into mathematical truth that non-Euclidean geometers lack; rather, I should take it as an opportunity to reflect on the limitations of my own geometric imagination.

The precise relevance of (1) here is a little difficult to pin down, but perhaps it is something like this: without something like (1), I would not be tempted to call the other views crazy, any more than I would be tempted to call someone crazy for thinking that tea tastes better than coffee. And if I do say "the claim that tea tastes better than coffee is insane, and any theory of hot beverages that implies this claim should be instantly disqualified," it would seem disingenuous, when encountering someone who objects that hot beverage preferences are a matter of personal taste, for me to urge them to drop their needless suspicion of objectively-tinged hot beverage discourse, since they can just interpret my claims merely as an expression of my own strong preference for coffee. That clearly wasn't what I thought I was saying (or doing) when I made the claim.

See 'Knowing What Matters' - https://philpapers.org/rec/CHAKWM - where I set out my normative epistemology, including an explanation of why actual disagreement is irrelevant. (You could have "otherwise reasonable-seeming people" who think that torture is the only good, so either moral realists can have default trust in their normative intuitions -- including about which other views are outright crazy -- or they're inevitably led to full-blown skepticism, which I think is plainly far worse.)

> without something like (1), I would not be tempted to call the other views crazy, any more than I would be tempted to call someone crazy for thinking that tea tastes better than coffee

Note that much of 20th century metaethics precisely involved anti-realists of various stripes showing how they can avoid being committed to the sort of simple subjectivism you seem inclined to attribute to them. Instead, they defend the claim that they can be just as firm as anyone in their moral conviction (e.g.) that the Holocaust was a moral *atrocity*. See: https://plato.stanford.edu/entries/moral-cognitivism/#QuaRea

I'm not super-invested in defending the view, but I do think that the sophisticated non-cognitivist views developed by philosophers are far superior to the simple subjectivist views that non-philosophers too often fall into (and that end up diluting and distorting their normative views).

Whatever your metaethical views, we should all be appalled by the torture inflicted on factory-farmed animals. We should all be appalled by the claim that children dying overseas don't matter. And we should all be appalled by the view that everything good in life isn't really good at all: that an utterly empty universe is as good as it gets, and that the extinction of all life is a matter of (intrinsic) indifference. These are horrific claims, and any metaethical view worth taking seriously needs to be compatible with the aptness of criticizing horrific moral views as such.

The section in that paper on "when actual disagreement matters" is a bit brief, so I'm not sure I fully understand the position you sketch there. It seems to be something like "disagreement matters when it is non-ideal, and we can hope to resolve it through clarifying arguments; but if there are fundamentally different worldviews, we can expect the arguments to be dialectically unsatisfying, so we don't need to worry about it too much."

Is that a fair summary? If so, I think it's a pragmatically reasonable approach to adopt (nobody wants to waste time coming up with arguments against the view that torture is the only good), but it seems difficult to reconcile with a strong commitment to metaethical realism. (In other words, your treatment of the skeptical argument from disagreement leaves me uncertain whether you think you have a counterargument to the skeptic or are just claiming that you don't need one.)

>"we should all be appalled by the view that everything good in life isn't really good at all, and that the extinction of all life is a matter of (intrinsic) indifference"

I agree with the other normative claims in your last paragraph, but this last one (which comes back to the original topic of your post here) I think is just wrong, quite independently of any metaethical disagreements. I could just as easily say that "we should all be appalled" by the view that it is worth risking an eternal dystopia merely in order to ensure humanity's continued existence. On either side, prematurely rejecting ideas that deserve serious consideration represents a failure of moral imagination that risks denying us access to the actual moral truth (if such a thing exists) or preventing us from satisfactorily clarifying the structure of our own moral attitudes (if it does not).

Yep, fair summary! In general (including, e.g., external world skepticism) I don't think it's possible to present non-question-begging counterarguments against radical skepticism. We can just explain why we reject some of the skeptic's premises (as I do in response to Street earlier in the paper -- see especially my discussion of the "moral lottery"), and hence why we aren't ourselves committed to sharing their skepticism. For more on my generally anti-skeptical approach to philosophy, see: https://www.philosophyetc.net/2009/02/skepticism-rationality-and-default.html

> I think [anti-nihilism about positive value] is just wrong, quite independently of any metaethical disagreements

Fair enough! There I was just wanting to stress the point that you shouldn't let metaethical qualms prevent you from making full-blooded moral judgments.

> prematurely rejecting ideas...

Sure, no-one wants to do that. I'm very much a fallibilist, and think even our verdicts about which claims are crazy/disqualifying must always be held somewhat tentatively and open to revision. I'm always interested to read arguments for nihilism, solipsism, radical skepticism, or whatever. But still, in the meantime, I think it's right to treat the fact that a view implies nihilism (or whatever) as disqualifying, and so we should be prompted to rethink some distinctions in the way I propose in the OP. (It's fine if you have a different list of claims you currently regard as disqualifying.) After all, there are also downsides to being too indecisive or philosophically non-committal.

Regarding skepticism, I think Moorean responses fail pretty hard in the face of moral disagreement. It is as if I say "here is one hand, here is another," and my apparently good-faith interlocutor replies "I agree the first thing is a hand, but the second is obviously an octopus; but don't worry, you really do have another hand, it's right there!" [points at my shoe]. Enough experiences of this type would (and perhaps should?) push me towards skepticism concerning the existence and knowability of an external material world.

So, hooray for fallibilism! (But also: Boo for indecisiveness!)

>> If welfare is good because (and only because) we as individuals care about our welfare (call this Premise C) then things being good requires actual people (whether past/present/future) to exist in the first place - otherwise there is no source of value, and no basis on which to judge what is good and what is not.

>> It would be an extremely implausible metaphysical coincidence that our welfare just happens to be good from the point of view of the universe, separately from us caring about it.

Yeah, this is the basic intuition that seems so compelling to me; I am not a moral realist, and TBH, I don't find your version of "intrinsically valuable" for non-moral-realists very compelling; but I think your example above points out that consistency might require me to assign value to the void in a certain way to be in accordance with what human beings actually currently value.

And while I think there's room to get around this, most of the avenues for doing so open the door to other things we disvalue based on our current value system being re-ranked as well.

My intuition is something like: in a world without humans, there would be no humans to be unhappy about that, so who cares? But this is basically the same as: in a world full of anti-humans, anti-humans would be the measure of goodness, so who cares? And while I think that is very plausible, I still feel *we shouldn't create a world full of anti-humans*--because it conflicts with my values *now*. I don't have that feeling for *we shouldn't create a world devoid of people*, but I think on a first glance, at least, it's not clear why I should treat these cases differently.
