I think your analogy here of probability as shuffling prospects is quite interesting, and somewhat useful for thinking about possible worlds.

But I also believe it's not prima facie obvious that maximizing expected value is the right way to aggregate utility. Maximizing the sum of expected utilities does not immediately imply maximizing the sum of expected values.
In individual cases, this becomes quite clear. Many people do not regard a guaranteed $50 as equivalent to a 50% lottery between $100 and $0. In practice, people don't put their entire financial portfolio in stocks, even though stocks tend to have higher expected returns than other assets (e.g., bonds), because stocks also have a greater variance of returns, which the mean (expected return) alone doesn't capture. There's nothing innately wrong with these preferences; I think utilitarians should take them to be as "valid" as any other set of preferences so long as they are consistent (VNM-rational).
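To make this concrete, here's a minimal worked example (the square-root utility function is purely my own illustrative assumption): with $u(x) = \sqrt{x}$, the lottery's expected utility is

$$0.5 \cdot \sqrt{100} + 0.5 \cdot \sqrt{0} = 5 < \sqrt{50} \approx 7.07 = u(50),$$

so a risk-averse agent strictly prefers the guaranteed $50, even though both prospects have the same expected value of $50.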
If one takes "expected value" to mean the aggregate of expected utilities, I would be more inclined to agree. There's no obvious reason why the utility function of the social planner, i.e., the aggregated utility function taking all preferences into account, should be risk-averse. However, I think it's reasonable for individual utility functions to take on any form they like.
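One way to see why (a standard observation, with notation I'm introducing here): if the planner's objective is the sum of individual expected utilities, then by linearity of expectation

$$\mathbb{E}\Big[\sum_i u_i(X_i)\Big] = \sum_i \mathbb{E}\big[u_i(X_i)\big],$$

so the planner's objective is linear, and hence risk-neutral, in utility, even when each individual $u_i$ is concave in money. Aggregate-level risk aversion would require some further concave transform of total utility, which a simple sum doesn't supply.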
In this sense, my intuitions about maximizing utility largely converge with maximizing EV at the aggregate level, though not necessarily at the individual level. Anyway, thank you for your post!
Right, I agree that the sort of argument I offered for maximizing EV at mid-sized population levels doesn't automatically carry over to the individual case. If we imagine starting with $50 in each of two boxes (representing each side of a coin flip, or whatever), we could reasonably be opposed to clustering all of the potential payoff into a single box. This is because the relevant locus of moral concern is *us*, and our interests, not *each separate $50* (and its chance of being awarded to us).
So we have a principled explanation of the intuitive difference that you're pointing to here.