"Applying a logarithm to both sides, we get u'(a, b) = log(a) + log(b), with -inf <= log(a), log(b) <= 0, positive lives having u'(a, b) > log(0.25) and negative u'(a, b) < log(0.25)."
This is correct.
"Ultimately this is just the additive approach with bounded positive contribution for each term."
"Applying a logarithm to both sides, we get u'(a, b) = log(a) + log(b), with -inf <= log(a), log(b) <= 0, positive lives having u'(a, b) > log(0.25) and negative u'(a, b) < log(0.25)."
This is correct.
"Ultimately this is just the additive approach with bounded positive contribution for each term."
Huh? How are you drawing that conclusion? All you did was convert u(a, b) = ab into u(a, b) = 10^[log(a) + log(b)]. Just having a plus sign in some version of the equation doesn't make it the additive approach in any meaningful way.
The substance of the additive approach is that an increase of x in one objective-list value (a) always produces an increase of x in total wellbeing (u). In this model, by contrast, an increase of x in a produces an increase of x in u only if b is already maxed out at 1. Otherwise the increase in total wellbeing is always less than x, weighted by the initial values of a and b. Even with a bounded positive contribution for each term, the additive approach does not factor well-roundedness into overall wellbeing the way this model does.
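A minimal numeric sketch of this point, with hypothetical values of a, b, and x (not taken from the original model):

```python
# Under the multiplicative model u(a, b) = a * b, raising a by x only
# raises u by the full x when b is maxed out at 1; otherwise the gain
# is scaled down by b.
def u(a, b):
    return a * b

a, x = 0.5, 0.2

# b = 1: the gain in u equals x exactly.
gain_full = u(a + x, 1.0) - u(a, 1.0)     # ≈ 0.2

# b = 0.5: the gain is scaled down by b, so it is less than x.
gain_partial = u(a + x, 0.5) - u(a, 0.5)  # ≈ 0.1

print(gain_full, gain_partial)
```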
To clarify, those approaches give the same ordinal utility function. Take a world-state w = (a, b), and u(w) = u_a(a) * u_b(b), where 0 <= u_{a, b} <= 1. Then for u'_a(a) = log u_a(a), u'_b(b) = log u_b(b), u'(w) = u'_a(a) + u'_b(b), you always have -inf <= u'_{a, b} <= 0 and u(w_1) > u(w_2) if and only if u'(w_1) > u'(w_2). Therefore, up to a monotone transformation of base utility functions, you will run into exactly the same ethical issues with both theories.
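A quick sketch of the ordinal-equivalence claim above, checked on hypothetical random sample points (assuming the simplest case u_a(a) = a, u_b(b) = b):

```python
import math
import random

# u(w) = a * b and u'(w) = log(a) + log(b) rank every pair of
# world-states identically, since log is strictly increasing.
def u(w):
    a, b = w
    return a * b

def u_prime(w):
    a, b = w
    # math.log raises on 0, so map 0 to -inf by hand.
    log = lambda t: math.log(t) if t > 0 else float("-inf")
    return log(a) + log(b)

random.seed(0)
same_order = all(
    (u(w1) > u(w2)) == (u_prime(w1) > u_prime(w2))
    for w1, w2 in (
        ((random.random(), random.random()),
         (random.random(), random.random()))
        for _ in range(1000)
    )
)
print(same_order)  # True
```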
I don't believe there is any objective way to assign a specific amount of utilons to pain/friendship/whatever: at best you can say you prefer this feeling of pain to that one. Thus, we can pick any arbitrary function that preserves this order. Note that the assignment in the original version is just as arbitrary: pain is not a number in [0, 1].
Ah. Yes, obviously all ordinal utility functions can be transformed into any other monotonically increasing utility function (assuming the same preference relation), but this model is clearly using cardinal utility, because a large aspect of it is trying to represent the intuition of well-roundedness (a diminishing marginal rate of substitution is meaningless under ordinal utility theory). So your underlying contention with the model is just its assumption of cardinal utility.
I agree that utility can't be objectively measured, but measurability =/= cardinality. Surely the order of preferences isn't the only meaningful message of utility functions; it seems intuitively true to me that, without measuring utility values, we can have a rough understanding that the difference between states of the world A and B is larger than the difference between B and C, yet that is a meaningless statement under ordinal utility theory.
"Applying a logarithm to both sides, we get u'(a, b) = log(a) + log(b), with -inf <= log(a), log(b) <= 0, positive lives having u'(a, b) > log(0.25) and negative u'(a, b) < log(0.25)."
This is correct.
"Ultimately this is just the additive approach with bounded positive contribution for each term."
Huh? How are you drawing that conclusion? All you did was convert u(a, b)=ab into u(a,b)=10^[long(a)+log(b)]. Just having a plus sign in some version of the equation doesn't make it the additive approach in any meaningful way.
The substance of the additive approach is that any increase of x in one objective list value (a) will always result in an increase of x in total wellbeing (u), but for this model, an increase of x in a will only result in an increase of x in u if b is already maxed out at 1. Otherwise, increases in total wellbeing will always be <x and will be weighted depending on the initial values of a and b. Even with a bounded positive contribution for each term, the additive approach does not factor well-roundedness into overall wellbeing the way this model does.
To clarify, those approaches give the same ordinal utility function. Take a world-state w = (a, b), and u(w) = u_a(a) * u_b(b), where 0 <= u_{a, b} <= 1. Then for u'_a(a) = log u_a(a), u'_b(b) = log u_b(b), u'(w) = u'_a(a) + u'_b(b), you always have -inf <= u'_{a, b} <= 0 and u(w_1) > u(w_2) if and only if u'(w_1) > u'(w_2). Therefore, up to a monotone transformation of base utility functions, you will run into exactly the same ethical issues with both theories.
I don't believe there is any objective way to assign a specific amount of utilons to pain/friendship/whatever: at best you can say you prefer this feeling of pain to that. Thus, we can pick any arbitrary function that preserves this order. Note the assignment in original version is just as arbitrary: pain is not a number in [0, 1].
Ah. Yes, obv all ordinal utility functions can be transformed into literally any other monotonically increasing utility function (assuming the same preference relation), but this model is clearly using cardinal utility because a large aspect of it is trying to represent the intuition of well-roundedness (diminishing MRS is meaningless under ordinal utility theory). So your underlying contention with the model is just its assumption of cardinal utility.
I agree that utility can't be objectively measured, but measurability =/= cardinality. Surely the order of preferences isn't the only meaningful message of utility functions; it seems intuitively true to me that, without measuring utility values, we can have a rough understanding that the difference between states of the world A and B is larger than the difference between B and C, yet that is a meaningless statement under ordinal utility