
"One might infer that good states above a sufficient level have diminishing marginal value"

Can't one just restate the original gamble, but now with Utopia stipulated to have arbitrarily large value, instead of whatever other good it was measured in before? If value itself is the unit of evaluation, then shouldn't a non-risk-averse person be indifferent between a decent world and a 50/50 gamble with outcomes of +N value and −N value, for any N?
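
To make the arithmetic explicit (a minimal sketch; reading the gamble's outcomes as relative to the decent world's value, which I'll call V, is my gloss): the expected value of the gamble is

$$\mathbb{E}[\text{gamble}] = \tfrac{1}{2}(V + N) + \tfrac{1}{2}(V - N) = V,$$

so a pure expected-value maximizer is indifferent for every N, and since the stakes are already denominated in value itself, there is no further unit in which their marginal value could diminish.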

Even if you think there is a maximum possible value (which as you note in the other post, has its own problems), it doesn't seem outrageous to me that the maximum would be large enough to admit a gamble of this form that would still be very counterintuitive for most people to actually accept over the alternative.

To the general point: I made a similar argument in the comments on an earlier post, but isn't it enough to note that most people have a preference for Utopia over the void, and argue that Utopia is better on the grounds that it satisfies our current preferences more? Does there need to be an *intrinsic* reason why Utopia is better than the void?

In general, the idea of intrinsic value seems odd to me. What appeals to me about consequentialism and utilitarianism is that they are very person-centric: utility is about what's good *for people*, unlike deontology or divine command or whatever, which center "goodness" somewhere else, somewhere outside of what actually affects and matters to people.

Obviously the above is too naive a conception of utilitarianism to be all that useful: we often face dilemmas where we have to decide how to evaluate situations that are good for some people but not for others, or where we face uncertainty over how good something is, or whether it's good at all, and so we need a more complex theory to help us deal with these issues.

But when contemplating the void, it feels to me like we aren't in one of these situations: there are no people in the void, and so no one for whom it could be good or bad; the only people for whom it can be good or bad are those of us contemplating it now, and so we should be free to value it however we want, with no worry of our values coming into conflict with those of the people who would live in that world. As it happens, we (mostly) currently very strongly dis-prefer the void--but there's no intrinsic reason we have to, and if we were to collectively change our minds on the point, that would be fine.


You could also restate the gamble in terms of *risk-adjusted value*, where +/- N risk-adjusted value is just whatever it takes for a risk-averse agent to be indifferent to the gamble. But I think these restatements aren't so troubling, because we no longer have a clear idea of what the gamble is supposed to be (and hence whether it's really a bad deal). If the worry is just that, structurally, we're committed to accepting *some* 50/50 gamble, I guess I'd want to hear more about what view avoids this possibility, and what other problems it faces instead.
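
To spell out that restatement (a sketch under a standard expected-utility picture; u, V, and x± are my notation, not anything from the thread): let u be the risk-averse agent's increasing, concave utility function and V the decent world, and choose outcomes x+ and x− such that

$$u(x_+) = u(V) + N \qquad \text{and} \qquad u(x_-) = u(V) - N.$$

Then

$$\mathbb{E}[u] = \tfrac{1}{2}\,u(x_+) + \tfrac{1}{2}\,u(x_-) = u(V),$$

so the agent is exactly indifferent, and the original gamble reappears one level up, now denominated in risk-adjusted units.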

> In general, the idea of intrinsic value seems odd to me.

It sounds like you may be mixing up issues in normative ethics and metaethics here. Intrinsic value, as a normative concept, is just the idea of something's being non-instrumentally desirable. While I happen to be a moral realist, and think there are objective facts about this, you could just as well endorse my claims while being an expressivist. In that case, when you say "X is intrinsically valuable", you're just expressing that you're in favour of people non-instrumentally desiring X. So, in particular, an expressivist who wants there to be good lives in future, and favours others sharing this desire, could affirm this by calling good lives "intrinsically valuable". There's nothing metaphysical about it. It's just a normative claim.

> "[Why not] argue that Utopia is better on the grounds that it satisfies our current preferences more?"

Well, depending on who counts in the "we", just imagine it were otherwise. Suppose you were surrounded by anti-natalists. Mightn't you nonetheless oppose their view, and want utopia to come into existence? I sure would! As a moral realist, I happen to think this is also the *correct* attitude to have. But even if I were an expressivist, I wouldn't want my support for utopia to be conditional on others' attitudes (or even my own: I would be "pro-utopia, even on the hypothetical condition that I cease to be pro-utopia").

> "there are no people in the void, and so no one for whom it to be good or bad... and so we should be free to value it however we want"

This seems wrong. Suppose that Sally is considering having a child with a genetic condition that would cause it unbearable suffering. Clearly, it would be wrong to bring the miserable child into existence. The void is better. There's just no denying that negative welfare is impersonally and intrinsically bad: we have to oppose it, even if there isn't (yet) any definite person for whose sake we would be acting when we tell Sally not to have the miserable child.

By parity of reasoning, there's no basis for denying the parallel claims about positive welfare being intrinsically good. Just as the miserable child would be (non-comparatively) harmed by being brought into existence, and we should oppose that, so a wonderfully happy child would be (non-comparatively) benefited by being brought into existence, and we should support that. So, these conclusions are forced on us by a simple mixture of (i) basic decency and (ii) intellectual consistency.


Thanks for the very good response!

> If the worry is just that, structurally, we're committed to accepting *some* 50/50 gamble, I guess I'd want to hear more about what view avoids this possibility, and what other problems it faces instead.

Oh sure, I agree that you can't avoid having to pick some gamble like that. I guess the question is: does the move to diminishing marginal value matter here, or do we just want to say something like, yes, expected-value maximization says we should take some gamble of this form, but

a) your alternative pet theory probably does the same, a la your "Puzzles for Everyone" post, and

b) we shouldn't imagine we are conceptualizing both ends of the gamble correctly, so we should be wary of relying too heavily on our intuition here.

> So, in particular, an expressivist who wants there to be good lives in future, and favours others sharing this desire, could affirm this by calling good lives "intrinsically valuable".

I'm not sure I totally understand--how is this different from the expressivist just having a preference for future good lives? I suppose that from their point of view they would say "I don't think this is good just because it satisfies my preferences", but from the outside, it seems hard for a non-moral-realist to distinguish an opinion about "intrinsic value" from a preference.

> I would be "pro-utopia, even on the hypothetical condition that I cease to be pro-utopia".

I guess this is where we disagree. I am basically fine with the idea of the anti-natalists winning, as long as they do so by honourable means.

> even if there isn't (yet) any definite person for whose sake we would be acting when we tell Sally not to have the miserable child.

> By parity of reasoning, there's no basis for denying the parallel claims about positive welfare being intrinsically good.

I agree that *if* we bring a child into the world, and they love their life, we can regard that as a benefit for the child...but only conditional on bringing them into the world. If I had to summarize the view I think I'm arguing for, it would be something like, "you only have to care about the benefits/harms to a person in the worlds where they actually exist"--so Sally's child is harmed by being "forced" to live in a world where they will suffer; and a person with a good life is benefited by being born in any of the worlds in which they are, in fact, born. But in the worlds where a person is not born, we don't have to weight their benefits/harms in our calculation of what to do. We can *choose* to do so, as a matter of our personal preferences, or for other instrumental reasons, but I don't see why there is any intrinsic reason to do so.


Quick counterexample to your last claim: suppose Sally flips a coin to decide whether to create a miserable child. Fortunately, the coin directs her not to. Your view now implies that Sally needn't have taken into account the interests of the child who would've been miserable. But this seems wrong. Sally was wrong to flip a coin and take a 50% risk of causing such suffering. She should have outright (100%) chosen not to have the miserable child, and she should have done so out of concern for that child's interests.

> from an outside view, it seems to me hard to distinguish an opinion on "intrinsic value" from a preference, at least from the point of view of a non-moral-realist.

Yeah, I'm no expert on expressivism, but a couple of possibilities:

(1) The relevant thing might be that it's a special kind of universal higher-order preference: they want *everyone* to have the relevant first-order preference.

(2) Alternatively, it might be that they're in favour of blaming or otherwise morally criticizing people who don't have the relevant first-order preference.

(There may be other options!)


Sorry, I realized overnight that I'd missed a point: in the example where we don't create the child, the void is being ranked against the world where the miserable child is born. If we can make that comparison in that case, why not in the other case?

That actually feels pretty convincing to me. I still feel conflicted about this, but I think that if I want to hold on to the belief that the void isn't worse than Utopia, I really do need an explicit person-affecting view, or an explicit asymmetry between negative welfare and positive welfare.
