24 Comments

It seems plausible that someone whose life had negative hedonic value could overall be better off than someone whose life had positive hedonic value. Let me try to sketch out a comparison.

Let's imagine one person Sal, who is a perfectionist and overachiever. Sal involves herself in many worthy causes and because of the effort she puts in, she is mostly successful by any reasonable standard. Because she is such a perfectionist and has extremely high standards, she does not derive much pleasure from her achievements. She has friends and family and her life contains many objectively good things, and while she is pleasant to be around, she obtains at most a mild degree of pleasure from her life. She lives a long life. And in the last five years of her life, even though she suffers from a painful and debilitating illness, she pushes on to do lots of good for others (e.g. she involves herself in effective charitable work). She does this out of a sense of duty and derives no pleasure from it. On her last day, while cooking a meal, she slips and falls in the kitchen and breaks her hip. While she is lying there, her house catches fire and she burns painfully to death. Overall, Sal seems to have lived a good life even though the last few years and especially her final moments put her total hedonic value on the negative end of the ledger.

Hal is a petty bully and drug addict. He lives into his twenties more or less on a constant high and dies painlessly of an overdose. Overall, netting off any pain he experiences from periodic drug withdrawal (whenever he cannot get his hands on drugs), the hedonic value of his life is barely positive.

Let me make three points:

a) I think Sal's life is better than Hal's all things considered.

b) A life with negative hedonic value is not necessarily worse, all things considered, than one with positive hedonic value, and hence avoiding such results cannot be a motivation for the multiplicative account.

c) Your multiplicative account (or some account like yours) can explain why Sal's life is better than Hal's. Sal's pleasures are derived from a genuine if understated appreciation of objective goods and hence are greatly valuable. A good deal of the pain in her life, especially that which is due to her illness, is borne by her in order to do something that she takes to be worthwhile. Hence its disvalue is less than it otherwise would have been. Hal's pleasures are empty and superficial, and his pains arise from his pursuit of empty pleasure.


There are a few smaller problems, but these are the big ones.

1) It implies hypersensitivity. Let's say that pleasure that you get from friends is more valuable than the pleasure you get from other sources. Presumably if your friends were slowly replaced by zombies, that would make it so that they no longer multiply your pleasure value. However, as the amount of pleasure you get from your friends tends towards infinity, that would mean that each millisecond of zombification decreases your well-being by an arbitrarily large amount. This is really implausible; if zombification takes 10 years, it doesn't seem like one second of your friends becoming slightly more zombie-ish could decrease your well-being by an arbitrarily large amount -- having a more deleterious effect on utility than the worst crimes in human history. The same idea can be applied to the other things that are on the objective list.

Maybe I'm misunderstanding and it caps out at 1, where 1 is just a very good life. But if that's true, then you have to accept that there's some pleasure cap, which seems implausible. You also still get hypersensitivity, because when a life is at the positive extreme, increasing the multiplying factor could be arbitrarily good (or maybe good until it reaches 1, but at 1 things are very good). While this doesn't look like hypersensitivity, in that it only changes the final score on the 0-to-1 scale by a very small amount, remember that 1 is the best possible life and 0 is the worst, so small differences in the final scores will still be enormous.

2) It seems to result in equal pleasures and pains not offsetting. Suppose that every time I see someone I get a headache, such that the overall quality of interacting with them is neutral. The pleasure precisely cancels out the badness of the headache. In this case, it seems strange to say that it's actively good to interact with them. However, on an OLT account, it seems it would be, because the pain's badness is left unchanged by the interaction, but the pleasure's goodness increases because I'm interacting with another person. You could get around this, though, by taking the view to be about momentary net hedonic value, so that pleasures and pains cancel out.
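
To make the arithmetic in this objection concrete, here is a minimal sketch; the specific numbers and the 1.5x social-pleasure boost are purely illustrative, and it assumes the reading on which the boosted pleasure and the unboosted pain are simply added together:

```python
# Illustrative numbers only: an interaction that is hedonically neutral.
pleasure_from_interaction = 10
headache = -10
print(pleasure_from_interaction + headache)  # 0: hedonically neutral

# On the reading above, pleasure from interacting with another person gets
# boosted (say by 1.5x, a made-up figure) while the pain is left unchanged,
# so the hedonically neutral interaction comes out actively good.
social_multiplier = 1.5
print(social_multiplier * pleasure_from_interaction + headache)  # 5.0: counted as good
```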

3) These views also violate the following intuitively plausible constraint.

Pleasure and Non-Hedonic Dominance: For any two lives, if one life contains both more pleasure and more non-hedonic goods than the other life, that life is better.

This violates it in the following way. Suppose that the only two goods on the objective list are knowledge and pleasure, and that pleasure from knowledge is twice as good. Suppose that, to dramatically simplify things, the relevant feature with regard to knowledge is the number of facts one knows. Person 1 knows 10,000 facts and has 8,000 units of pleasure. Person 2 knows 5,000 facts and has 6,000 units of pleasure. However, all of person 2's pleasure comes from their acquisition of facts (they are an avid reader of the dictionary and encyclopedia!), while none of person 1's pleasure comes from their acquisition of facts. Person 1 would then have 8,000 units of well-being, while person 2 would have 12,000 units of well-being.
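
A small sketch of that arithmetic (assuming, as in the simplified setup above, that only pleasure contributes units of well-being and that pleasure derived from knowledge counts double):

```python
def well_being(pleasure_from_knowledge, other_pleasure, knowledge_multiplier=2):
    # Simplified scoring from the example: pleasure derived from knowledge
    # counts double; all other pleasure counts at face value.
    return knowledge_multiplier * pleasure_from_knowledge + other_pleasure

# Person 1: 10,000 facts, 8,000 units of pleasure, none of it from knowledge.
person_1 = well_being(pleasure_from_knowledge=0, other_pleasure=8_000)
# Person 2: 5,000 facts, 6,000 units of pleasure, all of it from knowledge.
person_2 = well_being(pleasure_from_knowledge=6_000, other_pleasure=0)

print(person_1, person_2)  # 8000 12000
# Person 1 has both more pleasure and more knowledge, yet scores lower,
# which is the violation of Pleasure and Non-Hedonic Dominance.
```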

4) This gets a really wrong result when it comes to simultaneous experience of both pleasure and pain. Suppose a person simultaneously experiences some vast amount of pleasure from a source that's on the objective list and some far greater amount of pain. Suppose the multiplier is 2. Suppose additionally that they experience 2 billion units of pain and 1.5 billion units of pleasure from friendship, and that vicious torture causes overall about 100 million units of pain. Thus, their mental states, considered in isolation, are far worse than those of a person being tortured. The objective list theorist who adopts the multiplier view has to think that this person is very well off -- for their pleasure is multiplied to be greater than the pain. However, the notion that a person who every second has experiences that are hedonically far worse than torture is well off is totally absurd. You can get around this, though.
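
Making the stipulated numbers explicit (the simple "multiply the appreciated pleasure, then add the pain" aggregation is just my reading of the view being criticized):

```python
pain = -2_000_000_000                # 2 billion units of pain
friendship_pleasure = 1_500_000_000  # 1.5 billion units of pleasure from friendship
multiplier = 2                       # the stipulated objective-list multiplier
torture = -100_000_000               # vicious torture: about 100 million units of pain

# The mental states considered in isolation: five times worse than torture.
net_hedonic = friendship_pleasure + pain
print(net_hedonic, net_hedonic / torture)  # -500000000 5.0

# On the multiplier view (as read here), the boosted pleasure outweighs the
# pain, so the person comes out as very well off.
print(multiplier * friendship_pleasure + pain)  # 1000000000
```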

5) Not sure if I'm missing something, but this seems to hold that if you're at 0 on the objective list score, then being at 1 on the hedonism score would be equivalent to being at 0 on the hedonism score, which is wildly implausible. Your later solution avoids this, though, by taking it to be about momentary net hedonic value, so that pleasures and pains cancel out.

I'm glad you started writing about well-being, so that we finally have something to disagree about. One clarifying question: is this over the course of lifetimes or moments, when we add up the values to multiply?

One other question: are the intervals regular? So, for example, is the difference between .55 and .65 the same as between .65 and .75?


Thanks for these objections!

re: 1, yeah, I was implicitly assuming diminishing asymptotic returns to pleasure. (My intuitions vacillate, but this seems plausible to me about half the time.) If small differences in score between 0 and 1 are truly "enormous" when translated into value, then won't correspondingly small-seeming differences in non-hedonic score indicate correspondingly "enormous" non-hedonic differences? Especially if we restrict the scale, e.g. so that the non-hedonically worst and best lives only differ by 0.5 or less in non-hedonic score. So I think hypersensitivity is avoided.
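
One way to picture this, as a rough sketch: the mapping raw/(1 + raw) below is just an illustrative choice of asymptotically diminishing score, not anything officially proposed in the post.

```python
def score(raw):
    # Map a non-negative raw hedonic total into [0, 1) with asymptotically
    # diminishing returns. This particular mapping is an illustrative
    # assumption, not the post's official proposal.
    return raw / (1 + raw)

print(score(99), score(9999))  # 0.99 and 0.9999
# A gap of about 0.01 in score hides a roughly 100x gap in raw value, so
# "small" score differences near the top really are enormous when translated
# back into value; the same reasoning is meant to apply to a non-hedonic
# scale restricted to a narrow band.
```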

re: 2, I think I'm OK with that implication! I don't think interacting with a stranger necessarily has non-hedonic value, but for someone who's really important to you, I think such interactions are (at least in moderation) plausibly positive even if you get enough of a headache that it's hedonically neutral. I don't think this sort of value aggregates additively.

re: 3, I guess the implications here depend on whether the multiplicative model is applied to lifetime scores or momentary/episodic scores. But for someone who has "hybrid"-style intuitions (that objective value only has welfare value when subjectively appreciated in some way), it seems natural for them to reject the dominance constraint when the putatively "dominating" life doesn't actually *secure* the objective value (due to the lack of necessary appreciation).

re: 4, agreed, seems important to formulate the view in a way that avoids that implication! Seems like their hedonic score should count as super low, and sufficient to outweigh the positive boost from the appreciated objective goods.

re: 5, probably best to exclude zero from the range of allowed scores! Take (0,1) as an open rather than closed interval (if I'm remembering my math terms right). Also most plausible, I think, to restrict non-hedonic scores further, as mentioned in a bullet point.

> "is this over the course of lifetimes or moments, when we add up the values to multiply?"

Good question! Not sure -- would have to think more about which option is least costly overall.

> "are the intervals regular?"

Not for the composite scores, at least. Maybe for the input scores? Not sure.


Re 1 If there is infinite pleasure, or some unfathomable amount of pleasure, at a score of 1, then any change near 1 will have unfathomable significance -- even very small changes. You can't hold both that there's a multiplicative effect for immense pleasure when you have lots of objective list goods and that small amounts of goods don't produce unfathomable boosts to well-being (they produce a lot if they multiply it at all). This is hard to see because we don't have good intuitions about these numbers (particularly because of vagueness), but if it multiplies the value of something arbitrarily great at all, then you get hypersensitivity.

Re 2 Seems to me intuitively like a cost to the theory.

Re 3 If they have those intuitions, that's true, but that does seem very implausible.

Re 4 What if we spread the pleasures and pains out across moments? So you have an unfathomably pleasurable experience for one second that is good on the objective list, and an unfathomably painful experience the next moment; it seems that you'd be badly off if the painful experience were twice as painful as the pleasurable experience was pleasurable. If we accept that it makes no difference to your well-being whether the pain occurs in the same second as the pleasure or a second apart, and we accept that if they occurred simultaneously, leaving you net horrifically miserable each second, you'd be poorly off, then the same must be true when they're spaced a second apart. Thus, your view commits you to really implausible things in cases where you have oscillating pleasure of unfathomable goodness followed by even greater pain, where the pleasure is on the objective list.

The momentary view seems a lot better than the lifetime view, especially if we accept reductionism about personal identity -- which is pretty obvious.

If the intervals are regular for the input scores, then the pleasure scores would be undefined if we accept that pleasure can scale up to infinity.


I lay out what I think is one of the biggest problems for such a view here.

https://benthams.substack.com/p/the-agony-challenge-for-objective


Some possible objections:

1. It seems reverse-prioritarian in a sense. If A is better off than B non-hedonically, then increasing A's hedonic welfare by x is better than increasing B's hedonic welfare by x, but this seems backwards to me. This is especially bad if there are non-hedonic bads (maybe being a victim of mistreatment or injustice, being hated, or having sufficiently inaccurate beliefs: not just ignorance, but strongly believing wrong things, possibly because of deception or manipulation), since then we can have a life that's overall non-hedonically bad. Maybe you can just stack standard prioritarianism on top of the overall welfare to try to fix this, though.

2. It requires value to be bounded, at least per "moment" of value in each individual.

3. It seems wrong to me for an overall miserable life where someone doesn't appreciate their non-hedonic goods to ever count as good. Restricting the range of non-hedonic value can help (like you suggested to ensure positive hedonic lives are good), but it seems ad hoc and hard to motivate independently. Hybrid views might help, but this might involve double-counting subjective value: if someone appreciates friendship, you multiply their subjective (preference-based or hedonic?) appreciation of friendship by their subjective hedonic appreciation of friendship.


Thanks, some good objections here!

re: 1, do you think it's possible to come up with an alternative formula for incentivizing "well-roundedness" that doesn't have this reverse-prioritarian implication?

re: 2, I wonder if that might be considered a feature rather than a bug? Seems to help with "double or nothing" existence gambles, for example -- https://rychappell.substack.com/p/double-or-nothing-existence-gambles -- and in general, unbounded value seems to create significant difficulties for decision theory (fanaticism, etc.). Are there comparable problems for bounded value? I guess it eventually starts to seem *insufficiently* sensitive to additional increments of dis/value -- maybe especially problematic in the negative direction. E.g. if it implied that someone with a sufficiently terrible life should be willing to take a 50/50 gamble that would either bring them back to neutral or extend their suffering by a zillion times as long, that sure couldn't be right.

re: 3, yeah, I definitely feel the appeal of hybrid views that take *appreciation* of objective goods to be what's really valuable here. If we understand "appreciation" in terms of positive value *judgments* (rather than hedonically positive feelings), I wonder if that might help avoid the double-counting worry?

FWIW, I do think a hedonically mildly-negative life could, with appropriate appreciation of non-hedonic goods, count as positive overall.


Sorry for the late response.

Did you intend for this to be lifetime welfare or momentary welfare?

Re 1, I think Josh's suggestion avoids the reverse-prioritarian problem and promotes well-roundedness. If I understand correctly, it's (equivalent to) the Euclidean distance between the empty life/moment and the maximal life/moment, minus the Euclidean distance between the actual life/moment and the maximal life/moment:

where x is hedonic value and y is non-hedonic value, and each is 0 in the absence of the corresponding goods and bads:

sqrt(x_max^2 + y_max^2) - sqrt((x - x_max)^2 + (y - y_max)^2)

It also works intuitively for miserable lives with neutral hedonic value: such individuals should focus on their hedonic welfare.

I was also thinking

-(x-x_max)*(y-y_max) + x_max*y_max, which basically reverses your function, but I think Josh's distance function is a cleaner solution to the problem of y not mattering when x=x_max and x not mattering when y=y_max.
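
A quick numerical sketch of the two candidate functions (taking x_max = y_max = 1 purely for illustration), mainly to show the contrast about y still mattering when x = x_max:

```python
from math import sqrt

X_MAX, Y_MAX = 1.0, 1.0  # assumed maxima, purely for illustration

def distance_welfare(x, y):
    # Josh's suggestion: distance from the empty point (0, 0) to the maximal
    # point, minus distance from the actual point (x, y) to the maximal point.
    return sqrt(X_MAX**2 + Y_MAX**2) - sqrt((x - X_MAX)**2 + (y - Y_MAX)**2)

def reversed_product(x, y):
    # The alternative above: -(x - x_max)(y - y_max) + x_max * y_max.
    return -(x - X_MAX) * (y - Y_MAX) + X_MAX * Y_MAX

# Well-roundedness is still rewarded: a balanced (0.5, 0.5) life beats a
# lopsided (0.9, 0.1) life even though the two dimensions sum to the same total.
print(distance_welfare(0.5, 0.5), distance_welfare(0.9, 0.1))  # ~0.707 vs ~0.509

# And unlike the reversed product, y still matters once x is maxed out.
print(reversed_product(1.0, 0.2), reversed_product(1.0, 0.8))  # 1.0 and 1.0
print(distance_welfare(1.0, 0.2), distance_welfare(1.0, 0.8))  # ~0.614 and ~1.214
```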

Re 2, I assumed that you were assuming pleasure and suffering were cardinally measurable and bounded per experience, and this seems to be an empirical claim we might not want to commit to (e.g. maybe suffering can be unbounded). If you're instead taking pleasure and suffering and squashing their lifetime totals to be bounded for each individual (e.g. with a sigmoid function https://en.wikipedia.org/wiki/Sigmoid_function) and summing across individuals, then this worsens the Repugnant Conclusion and replaceability: given a positive life, it's better to have two positive lives each with half the total (unsquashed) hedonic and non-hedonic welfare. If you're only squashing the value per person per moment, then this doesn't solve the double or nothing problem.
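
For concreteness, a minimal sketch of the replaceability point, assuming tanh as the squashing function and the product model for combining the two squashed scores (both choices are mine, just for illustration):

```python
from math import tanh

def life_score(hedonic_total, nonhedonic_total):
    # Squash each (unbounded) lifetime total into (-1, 1) with tanh, then
    # multiply, as on the product model. tanh is just one illustrative
    # sigmoid-style squashing function; the post doesn't commit to one.
    return tanh(hedonic_total) * tanh(nonhedonic_total)

# One good life vs. two lives, each with half the raw totals:
one_life = life_score(8, 8)
two_half_lives = 2 * life_score(4, 4)
print(one_life, two_half_lives)  # ~1.0 vs ~2.0

# Once the totals are large enough for the squashing to bite, splitting one
# life into two half-lives raises the population total, which is the
# replaceability worry described above.
```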

I agree maximizing EV is problematic whether value is bounded or unbounded. Your example with the negative life is interesting. I'm sympathetic to maximizing the EV of a bounded function of the difference, and I think that can avoid fanaticism and implausible insensitivity and is more psychologically plausible, but it's problematic in other ways, too. There's also just stochastic dominance instead of EV maximization: https://arxiv.org/abs/1807.10895

There's also a broader objection here, which is that once we start dealing with such arbitrary functional forms, it seems we've either given up moral realism (not that I was ever very sympathetic) or accepted moral indeterminacy.

Re 3, "If we understand "appreciation" in terms of positive value *judgments* (rather than hedonically positive feelings), I wonder if that might help avoid the double-counting worry?"

I would guess a hedonically positive feeling is actually a kind of positive value judgement, if we tried to define pleasure in functional terms. But I suppose there could be other kinds of positive value judgements.

Also, maybe it's just fine anyway: double-counting is just amplifying the value. If something is an objective good, then appreciating it hedonically should count more than appreciating something that isn't an objective good hedonically.


Thanks, those papers look interesting!


Applying a logarithm to both sides, we get u'(a, b) = log(a) + log(b), with -inf <= log(a), log(b) <= 0, positive lives having u'(a, b) > log(0.25) and negative u'(a, b) < log(0.25).

Ultimately this is just the additive approach with bounded positive contribution for each term.


"Applying a logarithm to both sides, we get u'(a, b) = log(a) + log(b), with -inf <= log(a), log(b) <= 0, positive lives having u'(a, b) > log(0.25) and negative u'(a, b) < log(0.25)."

This is correct.

"Ultimately this is just the additive approach with bounded positive contribution for each term."

Huh? How are you drawing that conclusion? All you did was convert u(a, b)=ab into u(a,b)=10^[log(a)+log(b)]. Just having a plus sign in some version of the equation doesn't make it the additive approach in any meaningful way.

The substance of the additive approach is that any increase of x in one objective list value (a) will always result in an increase of x in total wellbeing (u), but for this model, an increase of x in a will only result in an increase of x in u if b is already maxed out at 1. Otherwise, increases in total wellbeing will always be <x and will be weighted depending on the initial values of a and b. Even with a bounded positive contribution for each term, the additive approach does not factor well-roundedness into overall wellbeing the way this model does.


To clarify, those approaches give the same ordinal utility function. Take a world-state w = (a, b), and u(w) = u_a(a) * u_b(b), where 0 <= u_{a, b} <= 1. Then for u'_a(a) = log u_a(a), u'_b(b) = log u_b(b), u'(w) = u'_a(a) + u'_b(b), you always have -inf <= u'_{a, b} <= 0 and u(w_1) > u(w_2) if and only if u'(w_1) > u'(w_2). Therefore, up to a monotone transformation of base utility functions, you will run into exactly the same ethical issues with both theories.
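
A quick sanity check of the ordinal-equivalence claim, as a sketch (the sampling range is arbitrary, chosen to stay away from zero so the logs are finite):

```python
import math
import random

def u(a, b):
    # Original multiplicative score, with 0 < a, b <= 1.
    return a * b

def u_log(a, b):
    # Log-transformed version: an additive score in (-inf, 0].
    return math.log(a) + math.log(b)

# The two scores always agree on which of two lives is better, i.e. they
# induce the same ordinal ranking.
random.seed(0)
for _ in range(1000):
    a1, b1, a2, b2 = (random.uniform(0.01, 1.0) for _ in range(4))
    assert (u(a1, b1) > u(a2, b2)) == (u_log(a1, b1) > u_log(a2, b2))
print("same ranking on every sampled pair")
```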

I don't believe there is any objective way to assign a specific amount of utilons to pain/friendship/whatever: at best you can say you prefer this feeling of pain to that. Thus, we can pick any arbitrary function that preserves this order. Note the assignment in original version is just as arbitrary: pain is not a number in [0, 1].


Ah. Yes, obviously any ordinal utility function can be transformed, by a monotonically increasing transformation, into any other utility function representing the same preference relation, but this model is clearly using cardinal utility, because a large aspect of it is trying to represent the intuition of well-roundedness (diminishing MRS is meaningless under ordinal utility theory). So your underlying contention with the model is just its assumption of cardinal utility.

I agree that utility can't be objectively measured, but measurability =/= cardinality. Surely the order of preferences isn't the only meaningful information a utility function carries; it seems intuitively true to me that, without measuring utility values, we can have a rough understanding that the difference between states of the world A and B is larger than the difference between B and C, yet that is a meaningless statement under ordinal utility.


Logarithmic diminishing marginal value is a version of diminishing marginal value that specifically *doesn't* have a bounded positive contribution for each term.


this has nothing to do with logarithmic diminishing marginal value. they were just applying a log to both sides to demonstrate that the utility function can be monotonically transformed to an additive function with a constant slope (with the implicit assumption of ordinal utility theory)


Conceptually, you are right, but formally, this is identical to logarithmic diminishing marginal value.


One other worry: this dramatically underspecifies population ethics. Presumably two lives with net score 1/2 wouldn't be as good as one with score 1. This seems especially weird given the utilitarian aggregation that results if we have the intuition that one should act as they would if they were to live everyone's life and experience everything that is experienced.


Yeah, I flagged some related issues with the 'capped' view in my 'double or nothing' post: https://rychappell.substack.com/p/double-or-nothing-existence-gambles

I think the "act as if you were to sequentially live everyone's life" heuristic implicitly assumes a simple aggregative (total) view. But one who rejects this axiology could nonetheless build it into an "as if" clause for the sake of the heuristic. I don't have any independent intuition that the boundaries between people don't matter. Quite the opposite: it seems obvious that we've distinctive reasons to regret uncompensated harms (e.g. to the child in Omelas) that we wouldn't have if it was just a passing harm to one stage of a super-person that was more than compensated by larger benefits to other stages.


This is basically a Cobb–Douglas utility function, isn't it?

https://en.wikipedia.org/wiki/Cobb%E2%80%93Douglas_production_function?wprov=sfla1


Well, both functions involve convex indifference curves with diminishing MRS, but this model is constrained to 0 <= {x,y} <= 1, which is very important to the intuitions behind the design.
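
For what it's worth, a tiny sketch of the relationship (the 0.25 cutoff is the one mentioned upthread):

```python
def cobb_douglas(x, y, alpha=1.0, beta=1.0):
    # General Cobb-Douglas form: u = x^alpha * y^beta.
    return (x ** alpha) * (y ** beta)

# With alpha = beta = 1 and both inputs restricted to [0, 1], this is just the
# product model under discussion; the [0, 1] bounds (and the 0.25 cutoff for a
# positive life mentioned upthread) are what the general form doesn't build in.
print(cobb_douglas(0.5, 0.5))  # 0.25
```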


This is intriguing. What data could we test the predictive power on?


It doesn't really predict anything. It aggregates some assumptions, allowing you to compare things. Maybe the prediction would be that this thing should score higher than that thing.

Since it is making interpersonal comparisons, it probably is not falsifiable.
