39 Comments
Apr 21, 2023 · Liked by Richard Y Chappell

Since there doesn't seem to be anything about this on the utilitarianism.net website either, I think it would be good to say something about how your debunking arguments work for virtue ethics as well as deontology. In this case, I think it's because utilitarianism can just swallow virtue ethics whole.

The main point is that utilitarianism focuses on completely different moral concepts than virtue ethics. It's not about right character, but about what's overall worth promoting or worthwhile. The general argument from virtue ethicists goes that the other theories are too narrowly focused on actions and that ethics needs a broader focus on character, but in fact utilitarianism is much broader than virtue ethics, as you've explained. If this category of thing, what's overall worthwhile, exists at all, then there's no real conflict with virtue ethics.

The real debate isn't about which actions or character traits are right, but about whether there's something higher that justifies them: whether characters, dispositions, and actions are instrumentally right because they promote some independent good, or not.

I'm pretty sure that's the actual difference between utilitarianism and virtue ethics, and it's super obvious. I don't get why most intro philosophy material misses this and says utilitarianism is a theory of right action like deontology, while virtue ethics is the broader view.

On the virtue ethics view, some dispositions are just good in themselves. But to a utilitarian, this idea is a straight-up mistake: accidentally looking at things that are instrumental goods and thinking that they are fundamental. It's not even clear what it means for a disposition to be good in itself. You're not saying it should be objectively promoted, or that the greatest expression of it would be best. But that's because virtue ethics doesn't believe that this higher level of thing even exists.

So the consequentialist has an easy way of taking in virtue ethics, right? They can just accept all the bits about practical wisdom, not getting too hung up on objective morality when deciding things, and focusing on growing virtues instead of running utilitarian calculations all the time. Consequentialists are cool with these ideas and treat them as useful fictions that help them promote the overall objective good. It fits in pretty well with their own way of thinking.

But virtue ethicists aren't having it. They say the parts of virtue ethics that consequentialists treat as useful fictions are actually way more real and closer to how we think and live. Virtue ethicists want these ideas to guide our actions, since they're more down-to-earth and relatable than those big, abstract moral principles. I think the virtue ethicist would probably make some kind of appeal to intuition or debunking argument here and say that there's just way less of a track record for the concept of what's overall worthwhile than for the concept of virtue. So we should treat virtue as the most real thing and derive our normative concepts from it. But then, I'm not a virtue ethicist.

author
Apr 21, 2023 · edited Apr 21, 2023

Yeah, that all sounds plausible. I'm a little wary of making strong claims about virtue ethics because I don't feel like I have a good grasp of precisely what the view is meant to be. But the basic intuition that *character matters* is one I certainly agree with, and think can be accommodated very well from within consequentialism -- see my two-dimensional account here: https://rychappell.substack.com/p/consequentialism-beyond-action

Stronger claims that would be in conflict with my view would be claims like (i) virtue is *fundamental*, and not explicable in terms of *either* aiming at *or* systematically promoting the good; or (ii) virtuous agents wouldn't give much weight to beneficence, or would otherwise reject utilitarian prescriptions about what matters and what ought to be done. But I don't have much sense of why one would think either of those things. I'd need to hear the arguments for such an anti-utilitarian conception of virtue ethics in order to productively engage with it. (For point (ii) specifically, I imagine my objections to standard deontology would carry over pretty straightforwardly.)

Jun 13 · Liked by Richard Y Chappell

I've read your post. I'm unsure whether your view is totalist or not. However, from what I gathered, you give weight to both happiness and suffering. In that case, I believe this thought experiment is quite interesting.

Suppose there are 2 worlds: World X and World Y.

World X - There is some number of people who live neutral lives.

World Y - One person suffers very horrible torture, but everyone else has very slightly positive lives. If there are enough people in World Y, would your theory say that it is better than World X?

author

Yeah, I'm not fully sold on totalism, and so don't have a firm view on how best to deal with quantity-quality tradeoffs in extreme cases. Intuitively, I'd prefer a moderately-sized utopian world A over a gargantuan world Z of just slightly positive lives, for example. But I am inclined to think that either could outweigh the badness of one very bad life. I don't think there's any principled reason to give absolute priority to avoiding bad lives in the way those pushing your sort of intuition sometimes suggest. And it seems like the force of the intuition is easily debunked in terms of the "one person" being disproportionately psychologically salient to us.

Jun 13 · Liked by Richard Y Chappell

"And it seems like the force of the intuition is easily debunked in terms of the "one person" being disproportionately psychologically salient to us."

Ok, wow, I think you are spot on, actually. I always imagine myself in the place of that one person.

I do have a different worry, though. What do you think about attempts to redefine happiness in terms of avoiding suffering? For example, some people say that we eat to avoid the suffering of hunger, we entertain ourselves to avoid the suffering of boredom, we visit new places to avoid a dull life, etc. Such attempts are usually made by negative utilitarians.

author

They're clearly false? I mean, avoiding unpleasant hunger pangs is indeed *one* reason to eat. But also, it's possible to have positively *pleasant* experiences -- some foods taste better than others, after all.

If all we wanted to do was avoid bad experiences, we could achieve that by killing ourselves. But I don't regret existing, or see it as nothing but potential harms to (at best) avoid. Quite the opposite: I find much of my life to be good, and the things that tend to bother me most are the *lacks* of other good things that I want and hope for. (Very much a "first world problems" situation! But it's worth appreciating what a privilege it is to be in such a situation.)

See also: Don't Valorize the Void https://www.goodthoughts.blog/p/dont-valorize-the-void


What do you think about the "Creating Hell to Please the Blissful" example, as presented here? (https://centerforreducingsuffering.org/comparing-repugnant-conclusions/#The_corresponding_implication_of_offsetting_views_is_more_repugnant)

author

It's very hard to really picture a scenario in which the extra bliss factually outweighs the suffering of those in Hell. And if we struggle to picture a scenario in which the former factually outweighs the latter, it's unsurprising that we're intuitively reluctant to claim that the former will morally outweigh the latter.

I find some plausibility to the view that extra bliss has diminishing marginal value. So that could make it *even harder* to outweigh intense suffering. But any harm can, in principle, be outweighed by sufficient goods. As a general rule, I think it's silly (and bad philosophical methodology) to play intuition games comparing unimaginably vast quantities of some minor good vs smaller quantities of more salient harms. It's entirely predictable that intuition will favor the latter, simply due to cognitive biases.


I think I found what was bugging me the entire time. Someone put it into words on his blog. It's a sort of denial of experiences with positive hedonic tone.

"I find it plausible that negative experiences have negative value for an individual; more specifically, it is plausible that some experiences have a negative hedonic tone (quality) and that they are bad for the individual who has the experiences. A related form of hedonism is that experiences have a negative, neutral, or positive hedonic tone. Now, someone might object, if there are experiences with a negative hedonic tone that are bad for an individual, why are experiences with a positive hedonic tone not good for an individual? Because there are no experiences with a positive hedonic tone, and there will never be any. To be clear, I do not deny the existence of pleasure in the Epicurean sense of katastematic (static) pleasure, which includes tranquillity and the absence of pain, trouble and worry, and which ‘can be varied, though not increased’ (Annas 1993, 188, 336). In daily life, it is common to use phrases such as ‘this is very pleasant,’ which is fine; I do not object to that usage of words. If I were at a spa, I would perhaps say that my experience is pleasant, but I would not mean that the experience has a positive hedonic tone or quality or that it is above neutral. I would be comparing it to other experiences that I often have, which have more negative aspects, such as feelings of discomfort. When I carefully consider my experiences, I cannot detect that I have ever experienced anything that I would say is or warrants being called positive, on the plus side, above neutral or the like. This includes what are commonly considered peak events in life, such as major accomplishments. At such times, my main feeling has been relief, sometimes combined with excitement about what I will do in the future, but the feeling has not had a positive quality (being excited need not feel positive). In contrast, I often have decidedly negative experiences. Actually, the phrases ‘negative well-being’ and ‘negative experiences’ are unfortunate because if something is negative, it sounds as if there is a positive counterpart. Better names may be ‘problematic moments in life’ and ‘problematic experiences,’ because unproblematic, which seems to be the opposite of problematic, does not imply positive."

Jun 17, 2023 · Liked by Richard Y Chappell

Fantastic post - and I agree wholeheartedly with 99% of this! I would, however, love your take on my intuitive justification for a more motive-consequentialist approach, and on why I think deontology and consequentialism are actually completely compatible beliefs if you start with consequentialism. Or maybe you aren't arguing against deontology so much as you're arguing for consequentialism? Anyways:

Motive consequentialism: It seems to me that there is a strong utilitarian argument for considering the intention behind actions, if we consider consequences across time to be of equal importance. Showing an intention to do bad is an indication that a person has a high probability of doing more bad in the future, regardless of the present consequences of their actions (e.g. a failed murder attempt that results in the would-be victim accidentally meeting the love of their life is consequentially a good thing). So attempted murder is bad, even though there are no bad outcomes.

Deontology: However, it's impossible to determine intent definitively (with current technology), so the best we can do as a society is to create rules that prevent actions that *typically* result in consequentially bad outcomes, like don't murder and don't steal. So rules make perfect sense, so long as they are created on consequentialist foundations (and I think most of the laws we have are). Then the only realistic way to operate throughout life is to judge people based on whether their actions follow these rules/principles. Every situation is, of course, case by case, so given the existence of a moral oracle we would not need rules at all, as every action could be judged independently from a consequentialist point of view; but the fact is that no such thing exists. That's why courts exist: to create precedents for judging situations that are ambiguous given our current rules, e.g. killing in self-defense. So when it comes to practically judging the morality of an action, deontology makes the most sense.

So to me, the most sensible/practical view in normative ethics is deontology with rules created on motive-consequentialist foundations.

author

I think this is just a verbal disagreement. I agree that it's good to have (and follow) generally-reliable rules (unless one can somehow *know* that one is in the "exceptional" case, but we should generally be very skeptical of people's ability to really know this in practice). This is effectively R.M. Hare's "two-level consequentialism", which I defend here:

https://www.utilitarianism.net/utilitarianism-and-practical-ethics/#respecting-commonsense-moral-norms

I don't call this "deontology" (or "motive utilitarianism", for that matter), because those are names for competing *moral theories*, not for the moral practice of following good rules. One and the same practice can be defended on different theoretical grounds, and I think act consequentialism gives the correct explanation of why it's worth following that practice. Those other theories mistakenly posit *non-instrumental* reasons for following rules, or the best motives, even in cases when it's not actually ideal.

Jun 17, 2023 · Liked by Richard Y Chappell

That does make sense, thank you! So if I understand, what I'm suggesting is actually just act consequentialism with a rules-based decision procedure, whereas deontology and motive utilitarianism inherently encompass views that ignore utilitarian consequences in favor of other values.

author

Yes, precisely.

Jun 1, 2023 · Liked by Richard Y Chappell

It just doesn’t seem to me that this targets the most plausible forms of deontology. Sure, do or allow distinctions might seem weird when the stakes are 5 whole lives, it makes more sense for smaller stakes. It seems worse to trip someone then to prevent them from being tripped. Both same outcome but the fact that it was done by someone does make it worse. Even if by a tiny amount or only in certain contexts. Same with just general deontic constraints. Sure, keeping a promise can’t nearly outweigh saving lives but perhaps it can have some force against it. After all, it’s clear that we’d rather they did both. If you can keep a promise to a loved one or betray it for a single utility to yourself, it doesn’t just seem like thats totally worth it, it seems obligatory in a way that chasing that 1 utility even in another context doesn’t.

author
Jun 1, 2023 · edited Jun 1, 2023

I agree that keeping promises "seems obligatory". It is, after all, a part of our commonly-accepted "morality system". The question is what normative status this system has, and whether we should take its claimed authority at face value. Most plausibly, I think it has a kind of general instrumental value, and it's not worth undermining generally beneficial rules for a tiny one-off benefit. I don't think it's plausible to attribute *fundamental*, non-instrumental significance to promises, doing-allowing, or other paradigmatically "deontological" concepts or properties, so if you can construct a case where the usual instrumental value would be missing, I think it would be most plausible to conclude that there's really no normative reason there at all. (Though again, it's important to stress that this doesn't undermine our actual commitment to following the best rules even when it *seems*, prima facie, that we'd better promote the good by breaking them, because we can appreciate that such appearances are apt to be misleading in real life.)


"so if you can construct a case where the usual instrumental value would be missing, I think it would be most plausible to conclude that there's really no normative reason there at all" This just doesn't at all seem obvious to me. The exact opposite. It seems crazy that you should break a promise to a passed loved one for say 1 utility to yourself. Keeping the promise would lose the world a single utility but THAT SEEMS WORTH IT!

author

It's a bit hard to imagine the case, since I'd ordinarily expect breaking a promise to a passed loved one to be painful, e.g. to your feelings of integrity/loyalty/whatnot. If it undermined your self-conception of the relationship, that would presumably be a harm greater than 1 utile. Maybe best to imagine that the agent never had such feelings to begin with -- suppose you never intended to keep the promise (maybe it was a nonsensical thing requested in delirium, and you merely acceded to it in order to give your delirious loved one peace in their final moments), and the question is just whether a situation arises in which you have the opportunity to break it. If the opportunity arises, you get a slight benefit. Is it better if the opportunity arises? Maybe; intuitively the "harm" is already done when you lack any commitment to the "promise" to begin with.

That said, if you really have the intuition that promises have non-instrumental value, you can always add that into the consequentialist mix of values to be promoted (so long as you agree that you should break one promise to prevent your future self from breaking five others of comparable weight, for example).

Jun 1, 2023 · Liked by Richard Y Chappell

Maybe break 1 promise to prevent 5 broken promises, but breaking 1 promise (-500 utility overall) for 501 utility is more the point I'm getting across. That still seems kinda wacky even if promises are included in "utility". But perhaps it wouldn't if I saw the consequentialist route fleshed out. I'm more than sympathetic to that, but I don't think you go that route, and for me, as a particularist, I'm just on a quest for acceptance of far greater pluralism than we see today.

author
Jun 1, 2023 · edited Jun 1, 2023

Yeah, I think the tricky issue is just getting a vivid sense of what that "501 utility" consists in, such that it's recognizably *better overall for the people involved* than keeping the promise would be. My sense is that a lot of intuitions here involve implicitly excluding indirect costs (and so straw-manning the consequentialist view, since it obviously doesn't endorse ignoring indirect costs), but once you really consider all the costs it's much harder to make sense of the intuition that promise-keeping is "worth" bringing about an overall *worse* state of affairs.

Apr 17, 2023 · Liked by Richard Y Chappell

A pessimistic response: I have exactly the opposite methodological starting point as you. I feel extremely comfortable with the notions of "ought" or "right/wrong action," but I have no understanding of what it means for something to be "morally worthwhile." To the extent I can understand this concept, it is purely by attempting to reduce the notion to one about the strength of reasons that I ought to act in one way or another. [I would say the same about the idea of "the good."]

I suppose I would be somewhat relieved to find out that much debate in normative theory is based on philosophers talking at cross-purposes vis-a-vis "rightness" and "worthwhileness" (though it seems to me that plenty of consequentialist and utilitarian philosophers have felt very comfortable with making their theories into theories of right and wrong), but I don't know how debates in ethics should proceed if we're really starting at such different places.

author

Interesting! Just to clarify: is the notion of *preferability*, or *what you have reason to care about* similarly obscure to you? (I take those to be roughly synonymous with "worthwhile".)


Moral preferability is actually very non-obscure to me, in that we can talk about State 1 being preferred to State 2 if I ought to ensure that State 1 obtains rather than State 2. I think preferability has to be reduced to choice in this way for it to make sense in the moral context, though.

In a similar vein, I have no objections to saying that State 1 generates more utility for Patient than State 2, but only insofar as utility is a function of well-structured preferences on the part of Patient as to which states of the world Patient would choose to be in. I think that the early economic theorists may have said much the same thing: utility measures are just a nice way of representing a rational individual's preferences and have no independent significance.

At least for now, it seems to me that the same thing obtains in the realm of moral claims about "the good" or "expected good." If this concept has any significance, it is only in representing moral choices about which states of affairs we should bring about over others. This isn't to say that I necessarily disagree with the conclusions utilitarians or consequentialists come to about which states of affairs to choose--only that these arguments don't make sense to me as involving claims about freestanding notions of what is "best" or most "good."

author

Hmm, that's definitely not what I mean by 'preferability'. As I put it in sec 1.3 of this paper - https://www.dropbox.com/s/dxv8vusnf6228aw/Chappell-NewParadoxDeontology.pdf?dl=0 - I instead mean 'preferability' in the *fitting attitudes* sense. I assume that among our familiar mental states (beliefs, desires, etc.) are items describable as *all things considered preferences*, and that such states can be (or fail to be) warranted by their objects. Further:

"This claim is *not* conceptually reducible either to claims about value or to claims about permissible choice. It's a conceptually wide open question *what it is fitting to prefer*, and how these fittingness facts relate to questions of permissibility and value. Deontologists typically hold that we should care about things other than promoting value, whereas consequentialists may doubt whether we should care about deontic status at all. If one were to employ only subscripted, reducible concepts of preferability-sub-value and preferability-sub-deontic-status, it would be difficult to even articulate this disagreement. But we can further ask about preferability in the irreducible *fitting attitudes* sense, on which it is a conceptually open (and highly substantive) normative question what we should all-things-considered prefer."

I hope it's clear that we should prefer some possible worlds (e.g. those containing pareto improvements) over others, and that this is not *just* to say that we should choose them. As I discuss in the linked paper, some deontologists argue that fitting choice and fitting preference can diverge (though I think that's a very strange position, and arguably commits one to a form of instrumental irrationality). More importantly, preferences can range over a much wider scope of possible worlds than choices can -- we can have preferences over completely inaccessible states of affairs that we could never possibly choose or even influence, e.g. whether or not lightning strikes a child on the opposite side of the world tonight. And they tie in with a whole range of other emotional responses -- e.g., it makes sense to feel *disappointed* if a dispreferred outcome occurs, and to *hope* that preferred ones eventuate. None of this has anything essentially to do with choices. Someone totally incapable of action (due to a magic spell causing targeted mental paralysis blocking the formation of any practical intentions, say) could still have preferences over possible worlds, and these preferences could be more or less fitting.

Does none of that make sense to you?


I would prefer a state of the world in which a child is not struck by lightning, but this feels explained by the fact that were I given the ability to choose which state of the world obtained, I would choose the state of the world in which the child is not injured.

I also have a grip on feeling disappointed when certain outcomes come about, and on hoping that certain outcomes come about, but I'm not sure how these feelings tie to anything related to morality.

author

Suppose an evil demon tells you that if you ever *choose* the state of affairs in which the child is not injured, he'll torture everyone for eternity. I hope you would then no longer choose that. But you should certainly still *prefer* that the child escapes injury (just without any input from your cursed agency). So preferences aren't reducible to choices.

Apr 19, 2023 · Liked by Richard Y Chappell

Ah okay, got it--it sounds to me like what you mean by states of the world I "prefer" is what I would ordinarily call states of the world I "hope" for. To me, preference is a more choice-laden term than hope. In your example I think I would probably not say that I "prefer" for the child to escape injury, since what I would really want is for the child to escape injury conditional on my not being involved in the child escaping injury--an odd preference, I think. It would feel more natural in that context to say that I *hope* the child escapes injury.

Similarly, I might say that I "prefer" to date potential partners with attribute X, and might say that I "hope" my friend dates potential partners with the same attribute X, since it isn't really my place to intervene in my friend's choice of partners. If I stated that I "preferred" for my friend to date people with attribute X, I think that might come across as controlling [who am I to make that kind of statement?] precisely because it indicates an effort to affect their choices with my own.

Granted, I see that there's overlap between the concepts. I might both prefer that the candidate I vote for wins her election and hope that she does.

Apr 17, 2023 · Liked by Richard Y Chappell

I suspect some people feel that the options are either 100% utilitarianism or something radically different. Your "utilitarian-ish" position falls outside this. They think something like: once you concede a slight bit of ground around the edges to deontology, viewpoint relativity, or value pluralism, you will quickly be forced to slip into a more standard common-sense morality. Why do people think this? I find that sort of thinking somewhat intuitive, but I don't see a good reason for it.


It does, thanks! Though I don't immediately see why it matters at all what states of the world I or any other person hope for, independent of actions people take to bring them about. I guess I'll have to read your paper more carefully.

author

It may not much matter what anyone's actual hopes are. But I think it matters a great deal what we *ought* to hope for, as this could plausibly constrain or even explain what we ought to do.

deleted · Apr 15
Comment deleted
author

Have you considered the possibility that what you find "obvious" might actually be wrong? I can generally tolerate commentators who are either arrogant or ignorant, but the combination of the two is extremely grating.

Here's a demonstrable error: you conflate "what matters" with "axiology", but these are not the same thing. (Axiology is theorizing about value; but a deontologist could hold that respecting rights *matters more* than promoting value.) There is nothing "narrow" about the question of what fundamentally matters.

deleted · May 9, 2023 · Liked by Richard Y Chappell
Comment deleted
author

Yes, that's my view! You start from each (individual) sentient being mattering, and build up the view from there.


I'm kinda confused by this. Utilitarianism doesn't say to maximise everyone's welfare, but total (or average... bla bla, but you know what I mean) welfare. The utility monster surely laughs in the face of everyone mattering. Utilitarian impartiality functions exactly by pretending moral boundaries between persons don't exist. That seems to fundamentally oppose the idea that there even could be a morally mattering "us".

author

Have you read my 'Value Receptacles' paper?

https://philpapers.org/rec/CHAVR

A general theme of my work is that utilitarianism says to maximize overall welfare precisely *because* that's how you respond to valuing everyone equally (rather than valuing something other than persons, like distributions):

https://rychappell.substack.com/p/theses-on-mattering

On utility monsters specifically, you might want to check out: https://philpapers.org/rec/CHANUM

Jun 3, 2023 · Liked by Richard Y Chappell

Been reading through and this is a really useful reply, thx

deleted · Apr 30, 2023 · edited Apr 30, 2023
Comment deleted
author

Evil pleasures don't seem good, which strikes me as an excellent reason to switch away from classical utilitarianism and towards a form of desert-adjusted utilitarianism (or welfarist consequentialism) instead: https://www.utilitarianism.net/theories-of-wellbeing/#the-evil-pleasures-objection

That said, if the deeper issue is just that you have lexicalist intuitions that no amount of good (even completely innocent, unrelated goods, not repugnant divine sadism or anything) could outweigh the badness of one person suffering, then I think that's just not compatible with properly valuing the positive things in life. ("Don't valorize the void!") Certainly the one person suffering could reasonably wish to not exist, and it would be understandable (albeit, strictly speaking, immoral) for them to fail to care about anyone else and just wish the whole world's destruction if that was what it took to put an end to their agony. But that's just to note that agony can make us very self-centered. We certainly shouldn't defer to the moral perspective of a person being tortured, any more than we should defer to that of a person reeling from grief, or suffering from major depression, or any other obviously distorting state of mind.

Comment deleted
author

One choice point is whether you ascribe desert to *people* or to specific *interests* of theirs. The case of evil pleasure merely motivates discounting evil interests; it doesn't necessarily mean that you can disregard other (more innocent) interests that the person has, such as their interest in avoiding suffering.

But if you go for a full-blown retributivist account on which people can deserve to suffer, a simple proportional account would still imply that a finite being (who presumably could not have been *infinitely* bad) does not deserve infinite suffering, but merely enough suffering to balance out their (net) badness.

(If someone claims that "offending God's dignity" is infinitely bad, you should just reject that claim as ridiculous.)

deleted · May 1, 2023 · Liked by Richard Y Chappell
Comment deleted
author

Ha, well, until you read another philosopher, anyway!

Best wishes, in any case :-)
