I'm not seeing why it's problematic to only be able to say that an action is good relative to a specific concrete counterfactual or relative to a counterfactual expectation, rather than be able to say that an action is good end stop.
I don't know that it's "problematic"; but it would seem *limited*. Ordinarily, we think we can distinguish good and bad acts, not just better or worse ones. (Compare pressing '9' in Button Pusher vs saving 1 life in Fire Rescue. These seem importantly different!) It seems an advantage for a view if it can make sense of more of our pre-theoretic thought.
But ordinarily we also think we can distinguish between tall and short people, not just taller and shorter ones! Why not just say our pre-theoretic thought is missing a contextual factor?
Take a 'tall' (in context A) person and a 'short' (in context B) person of equal height. They don't, on reflection, seem "different" in the way that pressing '9' in Button Pusher and saving 1 in Fire Rescue seem importantly different.
Here's the analogy I'm seeing.
Consider a 6'3" tall person who is "tall" in ordinary contexts and "short" in a basketball context. There is one underlying amount of height they have, and for most purposes it's a lot, but for basketball purposes it's not nearly enough.
Consider the act of saving one person in the Button Pusher context and saving one person in the Fire Rescue context. There is one underlying amount of good that is produced in both contexts, but we call it "good" in the Fire Rescue context and "bad" in the Button Pusher context. I think the important difference is that we usually mean to be commenting on the quality of character or intentions or will, rather than the act itself. And the kind of character or intentions or will that produces a saving of one in the Fire Rescue context is the kind of character or intentions or will that is likely to produce a saving of 9 in the Button Pusher context.
Maybe a more natural comparison is comparing an ordinary person off the street who saves one in Fire Rescue and a firefighter who saves one in Fire Rescue. For a person off the street, that's really good, but for a firefighter, we might think this person was shirking. There's something importantly different about the two people who save the same number, but there's also something importantly the same about them, just as with height when a person is trying to help someone put a suitcase in the overhead or trying to land a slam dunk in basketball.
It's tricky to pin down the disagreement here, because we seem to agree that there is an important difference, and that it has something to do with reasonable expectations regarding quality of character etc. Though I will say that I don't think the assessment is *directly* about character: someone who saves one in Fire Rescue for the wrong reasons (hoping to get their name in the local paper or whatever) might lack virtue, but the act is still good. So, while I think the relevant baseline for assessing acts is set relative to what would be expected of a minimally decent person, I am still evaluating *acts*, not *characters* or intentions or what have you.
Here's a possible test: I take it that contextualists are committed to different conversational contexts yielding different verdicts about whether to apply a predicate to a particular token (e.g. person). The *very same person* can be judged both 'tall' and 'not tall', according to speakers in different contexts. And I take it that Norcross' contextualism similarly implies that the *very same act* of passerby S running into the burning building to save exactly one person can be assessed as both 'good' and 'not good' by speakers in different contexts, such as more or less altruistic subcultures. But my view implies that the normative facts are fixed by the circumstances of the act itself, not by speaker context. If a bunch of ultra-ambitious effective altruists deride S's saving one as "bad", I think there's an important sense in which they're *misjudging* it.
Good point on the acts vs characters. (Though I also want to say that the character trait that leads someone to do good when it will get their name in the local paper is a better trait than the one that leads people to sit by when a building is burning.)
I think I'll disagree with you on that last paragraph.
An off-duty firefighter saves one person in the Fire Rescue case. When we are thinking of him as a random person on the street, we think of it as a good act, but when we are thinking of him as a firefighter who regularly saves three or four in similar situations, we think of it as a bad act.
Maybe another version that gets the intuitions going: consider an early 19th-century slaveowner who emancipates each of the people they enslaved when they turn 50. Evaluated in a context mainly consisting of other contemporary slaveowners in their state, they seem to be doing something good. Evaluated in a context that considers 19th-century abolitionists, or more modern people, they seem to be doing something very bad by waiting until the enslaved people turn 50. Both of these judgments seem importantly right.
I think the ultra-ambitious people in your example are probably focusing on the wrong thing for many kinds of conversations. But I don't think there's anything *absolutely* wrong about their judgments, as long as they still appropriately recognize how much better the act is than the alternative of saving none.
Perhaps related: David Estlund's "Just and Juster". https://philpapers.org/archive/ESTJAJ.pdf
From the beginning: "While such a comparative theory would suffice for purposes of choice, it seems like a disadvantage that it would not find any legitimate meaning in the statement that slavery is unjust ..."
"Similar variation is found in our use of ‘harm’: intuitively, Agent harms nine people by pushing the button that kills them when he could have saved them, even though they still would have died had he done nothing."
I don't think that's right—and it's certainly not intuitive—although it might turn on the mechanism of action. Ten bullets are heading toward ten individuals, one to each. Pressing button n lowers 10-n shields in front of 10-n individuals, protecting them from the bullets heading in their direction. Pressing button 9 would lower a shield in front of a single person, leaving the other nine shields out of the way. If that's the setup, then the person who presses button 9 doesn't kill anyone—nor do they harm anyone. They let the 9 be harmed / let them die.
What mechanism are you imagining?
Something more like causal overdetermination: pressing button n pulls n gun triggers (and prevents any other triggering), but if you do nothing then another cause will trigger all 10.
If that's how it works, though, you're just killing the n, and harming them merely in virtue of the fact that you're killing them. At least, under those circumstances, that you killed them is sufficient for it to be the case that you harmed them. The comparative stuff is by-the-by?
If the numbers only went down to 1 rather than 0, I don't think I'd be inclined to say that pressing '1' harms anyone in the morally important sense. (It doesn't make anyone worse off than they would've been in the relevant alternative, for example.)
Harming as worse-off-making is a view, but I don't think it much connects up with any natural-language sense of harm. Which is fine: define your terms as you like. But then using 'harm' without qualification just reads as the natural sense of the term, with all its baggage, etc., which I don't think a technical notion (worse-off-making) is entitled to.
Is it not true that if act consequentialism finds its moral equilibrium at some minimally fitting attitude, then global consequentialism doesn't necessarily follow from it (objects and such can't have attitudes)? Objects would have to have their equilibrium at something more concrete. My understanding is that you believe global consequentialism is not actually different from act consequentialism. How would you reconcile those two views?
Sorry, I don't follow. You might need to expand on what you have in mind.
If the utilitarian equilibrium that separates good and bad is a minimally fitting attitude, then act utilitarianism seems to be fundamentally distinct from global utilitarianism. This is because objects can't have attitudes, so deciding whether an object is good or bad on utilitarian grounds would require a tool other than FA (fitting attitudes). I was pointing out that you reject the claim that global and act utilitarianism are different, which, if true, would rule out fitting attitudes as the equilibrium for act util.
Ok, so for assessing *actions* I've suggested that we could identify a principled baseline expectation (or moral "zero point") based on something like what a minimally decent person could do (without warranting blame). When assessing mere objects, e.g. whether a lightning strike is good or bad, I'm more inclined to just go with the "natural" account where the baseline is just *what would have happened otherwise*. E.g., "it's good that this injured animal was struck by lightning (dying instantly rather than suffering a slow and miserable death)."
Whatever the right way to evaluate "mere objects" is, it'll be the same for both act and global consequentialism.
I wasn't saying that act and global util would have different procedures for evaluating the moral worth of an object, but rather that because objects would have a different "moral zero point" than acts, the theory that evaluates objects seems distinct from the theory that evaluates acts. Really, though, I agree that global and act util are the same. However, I just don't think that either theory could tell us anything beyond what the optimal action or object is. It seems like whatever moral zero point you choose to use is just personal opinion. No single moral zero point is implied by consequentialist axiology. I wrote a post about this a while back (https://substack.com/home/post/p-142647816?r=38263l&utm_campaign=post&utm_medium=web). Although I don't evaluate the minimally decent person there, it addresses the same general issue.
Hi Richard! This is a bit unrelated to the post, but I have a few questions on utilitarianism. I have very strong consequentialist/utilitarian intuitions. However, it seems like every utilitarian theory I come across has some counterexamples.
What is your personal favorite utilitarian theory? Would you be willing to defend it against some counterexamples?
Sure, have a look at my post on 'Bleeding Heart Consequentialism' - https://www.goodthoughts.blog/p/bleeding-heart-consequentialism - and feel free to leave a comment there with your favorite counterexamples.