72 Comments

Since there doesn't seem to be anything about this on the utilitarianism.net website either, I think it would be good to say something about how your debunking arguments work for virtue ethics as well as deontology. In this case, I think it's because utilitarianism can just swallow virtue ethics whole.

The main point is that utilitarianism focuses on completely different moral concepts than virtue ethics does. It's not about right character, but about what's overall worth promoting or worthwhile. The general argument from virtue ethicists goes that the other theories are too narrowly focused on actions, while virtue ethics is broader because it focuses on character; but in fact utilitarianism is much broader than virtue ethics, as you've explained. If this category of thing, what's overall worthwhile, exists at all, then there's no real conflict with virtue ethics.

The real debate isn't about what the right actions or character are, but about whether there's something higher that justifies them: whether characters, dispositions, and actions are instrumentally right because they promote some further good, or not.

I'm pretty sure that's the actual difference between utilitarianism and virtue ethics, and it seems super obvious. I don't get why most intro philosophy stuff misses this and says utilitarianism is a theory of right action like deontology, while virtue ethics is broader.

On the virtue ethics view, some dispositions are just good in themselves. But to a utilitarian, this idea is just a straight-up mistake: looking at things that are instrumental goods and accidentally taking them to be fundamental. It's not even clear what it means for a disposition to be good in itself: you're not saying it should be objectively promoted, or that the greatest expression of it is best. But that's because virtue ethics doesn't hold that this higher level of thing even exists.

So the consequentialist has an easy way of taking in virtue ethics, right? They can just accept all the bits about practical wisdom, not getting too hung up on objective morality when deciding what to do, and focusing on cultivating virtues instead of running utilitarian calculations all the time. Consequentialists are fine with these ideas and treat them as useful fictions that help them promote the overall objective good. It fits in pretty well with their own way of thinking.

But virtue ethicists aren't having it. They say the parts of virtue ethics that consequentialists treat as fictions are actually way more real and closer to how we think and live. Virtue ethicists want these ideas to guide our actions, since they're more down-to-earth and relatable than those big, abstract moral principles. I think the virtue ethicist would probably make some kind of appeal to intuition or debunking argument here and say that there's just way less track record for the concept of what's overall worthwhile than for the concept of virtue. So we should treat virtue as the most real thing and derive our other normative concepts from it. But then, I'm not a virtue ethicist.

Expand full comment

Yeah, that all sounds plausible. I'm a little wary of making strong claims about virtue ethics because I don't feel like I have a good grasp of precisely what the view is meant to be. But the basic intuition that *character matters* is one I certainly agree with, and think can be accommodated very well from within consequentialism -- see my two-dimensional account here: https://rychappell.substack.com/p/consequentialism-beyond-action

Stronger claims that would be in conflict with my view would be claims like (i) virtue is *fundamental*, and not explicable in terms of *either* aiming at *or* systematically promoting the good; or (ii) virtuous agents wouldn't give much weight to beneficence, or would otherwise reject utilitarian prescriptions about what matters and what ought to be done. But I don't have much sense of why one would think either of those things. I'd need to hear the arguments for such an anti-utilitarian conception of virtue ethics in order to productively engage with it. (For point (ii) specifically, I imagine my objections to standard deontology would carry over pretty straightforwardly.)

Expand full comment

I suspect some people feel that the options are either 100% utilitarian or something radically different. Your "utilitarian-ish" position falls outside this. They think something like: once you concede a bit of ground around the edges to deontology, viewpoint-relativity, or value pluralism, you will quickly be forced to slip into a more standard common-sense morality. Why do people think this? I find that sort of thinking somewhat intuitive, but I don't see a good reason for it.

Expand full comment

I don't think there are 100% utilitarians, actually. If pushed, all of them have to posit some basic assumption that is common sense or something. This is done at the beginning of the post (sentience as the basic value).

You just can't justify everything through calculations, in the same sense that you need axioms when you do math. No axiomatic system is "truer" than another.

You could, for example, declare "knowledge-acquisition" the prime value and derive an entirely different morality where it's OK to sacrifice entire peoples if they are not conducive to that end.

The counter to that is that utilitarians still choose basic values kinda intuitively (humanly, if you will). This opens up the possibility that you could choose other values as basic axioms. After all, not everyone's intuition is the same, and even if there's a majority, there's no reason to take the majority intuition (that's also an assumption).

Viewed like that, utilitarianism is simply the same thing that everyone else is doing (standard morality, as you say), just with a bit more math and consistency.

Expand full comment

Your explanations of utilitarianism have been very helpful. It does seem like a very rational position now. However, I wonder why some thought experiments against utilitarianism seem counterintuitive.

For example, I remember listening to some philosophers debating utilitarianism, and one of them presented a thought experiment which he said he believed to be the "strongest utilitarianism defeater". It went something like this:

In front of you is a red button. If you press it, everyone on Earth will be subjected to the worst possible torture for a very large but nevertheless finite duration of time (like 9999999999999999999999999 years). Pressing it would also create one person who will experience the maximal possible amount of bliss for an actual eternity.

Utilitarianism would say that pressing the button is the right thing to do. But it seems sort of counterintuitive. Is it because of some biases of our brain? These kinds of thought experiments are probably my last reservation about utilitarianism.

Expand full comment

I think we intuitively rebel against the idea that one life could really contain so much positive well-being as to outweigh the extreme suffering of billions. It's in many ways more intuitive to think there's a "cap" or upper limit on individual well-being, and more generally that individual welfare goods like happiness have diminishing marginal value. (For example: a "double or nothing" gamble on your life expectancy seems like a bad deal for an individual, even if the extra years were guaranteed to be in full health.)

The existence of this more general intuition about well-being undermines "utility monster"-style objections like your one above. After all, there are two possibilities:

(1) If the welfare-cap intuition is accurate, then utilitarianism does not in fact have the implication ascribed. Utilitarianism tells us to maximize overall well-being. If one person cannot have more than X positive well-being, it will never tell us to impose more than X harm merely to benefit one person (who is above the zero baseline).

(2) If the welfare-cap intuition is inaccurate, then this suffices to explain away the utility-monster objection. Indeed, helping a negative utility monster to relieve some of their (intuitively uncapped) immense suffering - at lesser cost to the rest of us - doesn't seem wrong at all: https://philpapers.org/rec/CHANUM

Expand full comment

Hello, Richard! We haven't spoken in a while, but I haven't ceased thinking about your utilitarian theory. I came up with a sort of repugnant conclusion, but with animals and humans together.

Let's say that we have finite resources and we want to create the best possible population. For our choice of value-bearers, we are presented with animals and humans.

According to utilitarianism, we should choose to create the species that are maximally efficient at converting resources into value. But here we run into a problem. If a certain species of non-human animal is the most efficient one, then utilitarianism would tell us to create a world with only that species and no humans. If humans are the most efficient at converting resources into value, then utilitarianism would tell us to create a world with only humans and no animals.

However, this world (where there are only humans and no animals) lacks the diversity of wildlife. Is it really good? Furthermore, this view would make us look at animals not as precious creatures, but rather as inefficient sinks. Wouldn't this be repugnant?

Expand full comment

There could be strong instrumental reasons to promote wildlife and biodiversity, including that many people appreciate it. So at some point the marginal value of adding an interesting new species of animal could outweigh the marginal value of adding another person, even if the person is more intrinsically valuable (in proportion to resources used).

If you further think that biodiversity has *intrinsic* value, then obviously you'll need to abandon utilitarianism in order to accommodate that. You could instead adopt a near-utilitarian view of the sort I discuss here:

https://www.utilitarianism.net/near-utilitarian-alternatives/#environmental-value

Finally, even supposing that humans are both intrinsically and instrumentally more valuable, I don't think it follows that "this view would make us look at animals not as precious creatures, but rather as inefficient sinks." We should view any intrinsically valuable being as precious. That's the fitting response, on my view. We should, perhaps, view human persons as *even more precious*. But an extra-positive attitude towards one does not imply a negative attitude towards the other. To think otherwise is a very fundamental moral mistake, in my view.

(This is important because I don't think any rival view can actually *deny* that humans are more valuable than some animals. What the other views do is stick their heads in the sand and refuse to *openly acknowledge* such comparative facts. But the value facts are what they are, whether people want to acknowledge them or not. Better to be honest about them and reflect carefully on what follows, IMO.)

Expand full comment

That really makes sense. Thank you a lot!

Expand full comment

Fantastic post - and I agree wholeheartedly with 99% of this! I would, however, love your take on my intuitive justification for a more motive-consequentialist approach, and why I think deontology and consequentialism are actually completely compatible beliefs if you start with consequentialism. Or maybe you aren't arguing against deontology so much as you're arguing for consequentialism? Anyways:

Motive consequentialism: It seems to me that there is a strong utilitarian argument for considering the intention behind actions, if we consider consequences across time to be of equal importance. Showing an intention to do bad indicates that a person has a high probability of doing more bad in the future, regardless of the present consequences of their actions (e.g. a failed murder attempt that results in the would-be victim accidentally meeting the love of their life is consequentially a good thing). So attempted murder is bad, even though there are no bad outcomes.

Deontology: However, it's impossible to determine intent definitively (with current technology), so the best we can do as a society is to create rules that prevent actions that *typically* result in consequentially bad outcomes, like don't murder and don't steal. So rules make perfect sense, so long as they are created on consequentialist foundations (and I think most of the laws we have are). Then the only realistic way to operate throughout life is to judge people based on whether their actions follow these rules/principles. Every situation is, of course, case by case, so given the existence of a moral oracle we would not need rules at all, as every action could be judged independently from a consequentialist point of view; but the fact is that no such thing exists. That's why courts exist: to create precedents for judging situations that are ambiguous given our current rules, e.g. killing in self-defense. So when it comes to practically judging the morality of an action, deontology makes the most sense.

So to me, the most sensible/practical view in normative ethics is deontology with rules built on motive-consequentialist foundations.

Expand full comment

I think this is just a verbal disagreement. I agree that it's good to have (and follow) generally-reliable rules (unless one can somehow *know* that one is in the "exceptional" case, but we should generally be very skeptical of people's ability to really know this in practice). This is effectively R.M. Hare's "two-level consequentialism", which I defend here:

https://www.utilitarianism.net/utilitarianism-and-practical-ethics/#respecting-commonsense-moral-norms

I don't call this "deontology" (or "motive utilitarianism", for that matter), because those are names for competing *moral theories*, not for the moral practice of following good rules. One and the same practice can be defended on different theoretical grounds, and I think act consequentialism gives the correct explanation of why it's worth following that practice. Those other theories mistakenly posit *non-instrumental* reasons for following rules, or the best motives, even in cases when it's not actually ideal.

Expand full comment

That does make sense, thank you! So if I understand, what I'm suggesting is actually just consequentialism: act consequentialism with a rules-based decision procedure, whereas deontology and motive utilitarianism inherently encompass views that ignore utilitarian consequences in favor of other values.

Expand full comment

Yes, precisely.

Expand full comment

It just doesn't seem to me that this targets the most plausible forms of deontology. Sure, doing/allowing distinctions might seem weird when the stakes are 5 whole lives; they make more sense at smaller stakes. It seems worse to trip someone than to fail to prevent them from being tripped. Same outcome in both cases, but the fact that it was done by someone does make it worse, even if by a tiny amount or only in certain contexts. Same with general deontic constraints. Sure, keeping a promise can't nearly outweigh saving lives, but perhaps it can have some force against it. After all, it's clear that we'd rather they did both. If you can keep a promise to a loved one or betray it for a single utility to yourself, keeping it doesn't just seem totally worth it; it seems obligatory in a way that chasing that 1 utility in another context doesn't.

Expand full comment

I agree that keeping promises "seems obligatory". It is, after all, a part of our commonly-accepted "morality system". The question is what normative status this system has, and whether we should take its claimed authority at face value. Most plausibly, I think it has a kind of general instrumental value, and it's not worth undermining generally beneficial rules for a tiny one-off benefit. I don't think it's plausible to attribute *fundamental*, non-instrumental significance to promises, doing-allowing, or other paradigmatically "deontological" concepts or properties, so if you can construct a case where the usual instrumental value would be missing, I think it would be most plausible to conclude that there's really no normative reason there at all. (Though again, it's important to stress that this doesn't undermine our actual commitment to following the best rules even when it *seems*, prima facie, that we'd better promote the good by breaking them, because we can appreciate that such appearances are apt to be misleading in real life.)

Expand full comment

"so if you can construct a case where the usual instrumental value would be missing, I think it would be most plausible to conclude that there's really no normative reason there at all" This just doesn't at all seem obvious to me. The exact opposite. It seems crazy that you should break a promise to a passed loved one for say 1 utility to yourself. Keeping the promise would lose the world a single utility but THAT SEEMS WORTH IT!

Expand full comment

It's a bit hard to imagine the case, since I'd ordinarily expect breaking a promise to a passed loved one to be painful, e.g. to your feelings of integrity/loyalty/whatnot. If it undermined your self-conception of the relationship, that would presumably be a harm greater than 1 utile. Maybe best to imagine that the agent never had such feelings to begin with -- suppose you never intended to keep the promise (maybe it was a nonsensical thing requested in delirium, and you merely acceded to it in order to give your delirious loved one peace in their final moments), and the question is just whether a situation arises in which you have the opportunity to break it. If the opportunity arises, you get a slight benefit. Is it better if the opportunity arises? Maybe; intuitively the "harm" is already done when you lack any commitment to the "promise" to begin with.

That said, if you really have the intuition that promises have non-instrumental value, you can always add that into the consequentialist mix of values to be promoted (so long as you agree that you should break one promise to prevent your future self from breaking five others of comparable weight, for example).

Expand full comment

Maybe break 1 promise to prevent 5 broken promises, sure; but breaking 1 promise (-500 utility overall) for 501 utility is more the point I'm getting at. That still seems kinda wacky, even if promises are included in "utility". But perhaps it wouldn't if I saw the consequentialist route fleshed out. I'm more than sympathetic to that, but I don't think you go that route, and for me, as a particularist, I'm just on a quest for acceptance of far greater pluralism than we see today.

Expand full comment

Yeah, I think the tricky issue is just getting a vivid sense of what that "501 utility" consists in, such that it's recognizably *better overall for the people involved* than keeping the promise would be. My sense is that a lot of intuitions here involve implicitly excluding indirect costs (and so straw-manning the consequentialist view, since it obviously doesn't endorse ignoring indirect costs), but once you really consider all the costs it's much harder to make sense of the intuition that promise-keeping is "worth" bringing about an overall *worse* state of affairs.

Expand full comment

A pessimistic response: I have exactly the opposite methodological starting point as you. I feel extremely comfortable with the notions of "ought" or "right/wrong action," but I have no understanding of what it means for something to be "morally worthwhile." To the extent I can understand this concept, it is purely by attempting to reduce the notion to one about the strength of reasons that I ought to act in one way or another. [I would say the same about the idea of "the good."]

I suppose I would be somewhat relieved to find out that much debate in normative theory is based on philosophers talking at cross-purposes vis-a-vis "rightness" and "worthwhileness" (though it seems to me that plenty of consequentialist and utilitarian philosophers have felt very comfortable with making their theories into theories of right and wrong), but I don't know how debates in ethics should proceed if we're really starting from such different places.

Expand full comment

Interesting! Just to clarify: is the notion of *preferability*, or *what you have reason to care about* similarly obscure to you? (I take those to be roughly synonymous with "worthwhile".)

Expand full comment

Moral preferability is actually very non-obscure to me, in that we can talk about State 1 being preferred to State 2 if I ought to ensure that State 1 obtains rather than State 2. I think preferability has to be reduced to choice in this way for it to make sense in the moral context, though.

In a similar vein, I have no objections to saying that State 1 generates more utility to Patient than State 2, but only insofar as utility is a function of well-structured preferences on the part of Patient as to which states of the world Patient would choose to be in. I think that the early economic theorists may have said much the same thing: utility measures are just a nice way of representing a rational individual's preferences and have no independent significance.

At least for now, it seems to me that the same thing obtains in the realm of moral claims about "the good" or "expected good." If this concept has any significance, it is only in representing moral choices about which states of affairs we should bring about over others. This isn't to say that I necessarily disagree with the conclusions utilitarians or consequentialists come to about which states of affairs to choose--only that these arguments don't make sense to me as involving claims about freestanding notions of what is "best" or most "good."

Expand full comment

Hmm, that's definitely not what I mean by 'preferability'. As I put it in sec 1.3 of this paper - https://www.dropbox.com/s/dxv8vusnf6228aw/Chappell-NewParadoxDeontology.pdf?dl=0 - I instead mean 'preferability' in the *fitting attitudes* sense. I assume that among our familiar mental states (beliefs, desires, etc.) are items describable as *all things considered preferences*, and that such states can be (or fail to be) warranted by their objects. Further:

"This claim is *not* conceptually reducible either to claims about value or to claims about permissible choice. It's a conceptually wide open question *what it is fitting to prefer*, and how these fittingness facts relate to questions of permissibility and value. Deontologists typically hold that we should care about things other than promoting value, whereas consequentialists may doubt whether we should care about deontic status at all. If one were to employ only subscripted, reducible concepts of preferability-sub-value and preferability-sub-deontic-status, it would be difficult to even articulate this disagreement. But we can further ask about preferability in the irreducible *fitting attitudes* sense, on which it is a conceptually open (and highly substantive) normative question what we should all-things-considered prefer."

I hope it's clear that we should prefer some possible worlds (e.g. those containing pareto improvements) over others, and that this is not *just* to say that we should choose them. As I discuss in the linked paper, some deontologists argue that fitting choice and fitting preference can diverge (though I think that's a very strange position, and arguably commits one to a form of instrumental irrationality). More importantly, preferences can range over a much wider scope of possible worlds than choices can -- we can have preferences over completely inaccessible states of affairs that we could never possibly choose or even influence, e.g. whether or not lightning strikes a child on the opposite side of the world tonight. And they tie in with a whole range of other emotional responses -- e.g., it makes sense to feel *disappointed* if a dispreferred outcome occurs, and to *hope* that preferred ones eventuate. None of this has anything essentially to do with choices. Someone totally incapable of action (due to a magic spell causing targeted mental paralysis blocking the formation of any practical intentions, say) could still have preferences over possible worlds, and these preferences could be more or less fitting.

Does none of that make sense to you?

Expand full comment

I would prefer a state of the world in which a child is not struck by lightning, but this feels explained by the fact that, were I given the ability to choose which state of the world obtained, I would choose the state of the world in which the child is not injured.

I also have a grip on feeling disappointed when certain outcomes come about, and hoping that certain outcomes come about, but I'm not sure how these feelings tie to anything related to morality.

Expand full comment

Suppose an evil demon tells you that if you ever *choose* the state of affairs in which the child is not injured, he'll torture everyone for eternity. I hope you would then no longer choose that. But you should certainly still *prefer* that the child escapes injury (just without any input from your cursed agency). So preferences aren't reducible to choices.

Expand full comment

Ah okay, got it--it sounds to me like what you mean by states of the world I "prefer" is what I would ordinarily call states of the world I "hope" for. To me, preference is a more choice-laden term than hope. In your example I think I would probably not say that I "prefer" for the child to escape injury, since what I would really want is for the child to escape injury conditional on my not being involved in the child escaping injury--an odd preference, I think. It would feel more natural in that context to say that I *hope* the child escapes injury.

Similarly, I might say that I "prefer" to date potential partners with attribute X, and might say that I "hope" my friend dates potential partners with the same attribute X, since it isn't really my place to intervene in my friend's choice of partners. If I stated that I "preferred" for my friend to date people with attribute X, I think that might come across as controlling [who am I to make that kind of statement?] precisely because it indicates an effort to affect their choices with my own.

Granted, I see that there's overlap between the concepts. I might both prefer that the candidate I vote for wins her election and hope that she does.

Expand full comment

What’s your answer to utility monsters?

Expand full comment

Short answer is that it just reveals our intuition that individual well-being is capped (in the same way as, e.g., the intuition that it wouldn't be worth gambling "double or nothing" on your expected happy lifespan - beyond a certain point, extra years intuitively have steeply diminishing marginal value). Maybe that intuition is right; maybe it isn't. Utilitarianism can accommodate whichever answer turns out to be true, so the utility monster intuition isn't a real threat to the core view.

Long answer: https://philpapers.org/rec/CHANUM

Expand full comment

Have you read Nick Bostrom's paper Sharing the World with Digital Minds? He makes the argument that if AIs can be conscious, then they would essentially be utility monsters. Even just by brute force, they could be copied cheaply and quickly and would easily overwhelm humans. Although Bostrom thinks some resources should go to humans, on a strict utilitarian reading the logic doesn't hold. I think it's a pretty strong argument against your downplaying of utility monsters.

Paper here: https://nickbostrom.com/papers/digital-minds.pdf

Expand full comment

Interesting paper! I don't think it's accurate to describe anything that could "easily overwhelm" our interests as "utility monsters". As I argue in my paper, the distinctive feature of Nozick's objection is that it appeals to the (dubious) idea of *a single individual* outweighing the interests of *all other individuals*. The idea that the interests of a large (and especially resource-efficient) group could outweigh those of a small (or resource-inefficient) group isn't so obviously objectionable. As Bostrom himself notes, pretty much *any* moral view (short of egoism or blatantly discriminatory views) will have *that* implication.

Realistically, people simply aren't going to accept a policy that is radically against their interests. But that's nothing to do with ethics. It's just ordinary selfishness: like how Americans don't want to redistribute their wealth to all those "utility monsters" in the developing world.

Expand full comment

It may not be the classical utility monster, but effectively it is. There could be trillions or quadrillions (or more) of these AI copies against our eight billion. From straightforward utilitarian reasoning, they are simply better utility maximizers, and that doesn't leave much, if anything, for us. You say that's not really ethics, but resource distribution is very much a matter of ethics. If utilitarianism doesn't have anything to say about humanity, where does that leave us?

Expand full comment

What's not ethics is the "us vs them" intuition that drives the sense that there's anything "objectionable" about efficient allocation.

It's like objecting to egalitarianism by saying, "What if we could be replaced by another species that was better at promoting equality? I don't like the sound of that!" Not liking the sound of something is not the same as having a moral objection.

Expand full comment

Not liking the implication of something drives ethics all the time. The Repugnant Conclusion is used to argue against utilitarianism. The Murderer at the Door is used to argue against deontology. These arguments aren't saying that there is something wrong with the deductive reasoning; rather, because the conclusion is bad, it points to something wrong with the premises.

"What's not ethics is the "us vs them" intuition that drives the sense that there's anything "objectionable" about efficient allocation."

Why is that not ethics? We take for granted that there are times when, as individuals, we should make sacrifices for the Greater Good. This is just looking at it from a population level instead of an individual one. The values you think we as a society should promote determine how you think this allocation should go. That's what Rawls was doing with the Veil of Ignorance.

Expand full comment

It does, thanks! Though I don't immediately see why it matters at all what states of the world I or any other person hope for, independent of the actions people take to bring them about. I guess I'll have to read your paper more carefully.

Expand full comment

It may not much matter what anyone's actual hopes are. But I think it matters a great deal what we *ought* to hope for, as this could plausibly constrain or even explain what we ought to do.

Expand full comment
Comment deleted (Jun 13)
Expand full comment

Yeah, I'm not fully sold on totalism, and so don't have a firm view on how best to deal with quantity-quality tradeoffs in extreme cases. Intuitively, I'd prefer a moderately-sized utopian world A over a gargantuan world Z of just slightly positive lives, for example. But I am inclined to think that either could outweigh the badness of one very bad life. I don't think there's any principled reason to give absolute priority to avoiding bad lives in the way those pushing your sort of intuition sometimes suggest. And it seems like the force of the intuition is easily debunked in terms of the "one person" being disproportionately psychologically salient to us.

Expand full comment
Comment deleted (Jun 13)
Expand full comment

They're clearly false? I mean, avoiding unpleasant hunger pangs is indeed *one* reason to eat. But also, it's possible to have positively *pleasant* experiences -- some foods taste better than others, after all.

If all we wanted to do was avoid bad experiences, we could achieve that by killing ourselves. But I don't regret existing, or see it as nothing but potential harms to (at best) avoid. Quite the opposite: I find much of my life to be good, and the things that tend to bother me most are the *lacks* of other good things that I want and hope for. (Very much a "first world problems" situation! But it's worth appreciating what a privilege it is to be in such a situation.)

See also: Don't Valorize the Void https://www.goodthoughts.blog/p/dont-valorize-the-void

Expand full comment
Comment deleted (Jun 17)
Expand full comment

It's very hard to really picture a scenario in which the extra bliss factually outweighs the suffering of those in Hell. And if we struggle to picture a scenario in which the former factually outweighs the latter, it's unsurprising that we're intuitively reluctant to claim that the former will morally outweigh the latter.

I find some plausibility to the view that extra bliss has diminishing marginal value. So that could make it *even harder* to outweigh intense suffering. But any harm can, in principle, be outweighed by sufficient goods. As a general rule, I think it's silly (and bad philosophical methodology) to play intuition games comparing unimaginably vast quantities of some minor good vs smaller quantities of more salient harms. It's entirely predictable that intuition will favor the latter, simply due to cognitive biases.

Expand full comment
Comment deleted (Aug 12)
Expand full comment
Comment deleted (Apr 15)
Expand full comment

Have you considered the possibility that what you find "obvious" might actually be wrong? I can generally tolerate commentators who are either arrogant or ignorant, but the combination of the two is extremely grating.

Here's a demonstrable error: you conflate "what matters" with "axiology", but these are not the same thing. (Axiology is theorizing about value; but a deontologist could hold that respecting rights *matters more* than promoting value.) There is nothing "narrow" about the question of what fundamentally matters.

Expand full comment
Comment deleted (May 9, 2023)
Expand full comment

Yes, that's my view! You start from each (individual) sentient being mattering, and build up the view from there.

Expand full comment

I'm kinda confused by this. Utilitarianism doesn't say to maximise everyone's welfare, but total (or average... bla bla, but you know what I mean) welfare. The utility monster surely laughs in the face of everyone mattering. Utilitarian impartiality functions precisely by pretending moral boundaries between persons don't exist. That seems to fundamentally oppose the idea that there even could be a morally mattering "us".

Expand full comment

Have you read my 'Value Receptacles' paper?

https://philpapers.org/rec/CHAVR

A general theme of my work is that utilitarianism says to maximize overall welfare precisely *because* that's how you respond to valuing everyone equally (rather than valuing something other than persons, like distributions):

https://rychappell.substack.com/p/theses-on-mattering

On utility monsters specifically, you might want to check out: https://philpapers.org/rec/CHANUM

Expand full comment

Been reading through and this is a really useful reply, thx

Expand full comment
Comment deleted (Apr 30, 2023, edited)
Expand full comment

Evil pleasures don't seem good, which strikes me as an excellent reason to switch away from classical utilitarianism and towards a form of desert-adjusted utilitarianism (or welfarist consequentialism) instead: https://www.utilitarianism.net/theories-of-wellbeing/#the-evil-pleasures-objection

That said, if the deeper issue is just that you have lexicalist intuitions that no amount of good (even completely innocent, unrelated goods, not repugnant divine sadism or anything) could outweigh the badness of one person suffering, then I think that's just not compatible with properly valuing the positive things in life. ("Don't valorize the void!") Certainly the one person suffering could reasonably wish to not exist, and it would be understandable (albeit, strictly speaking, immoral) for them to fail to care about anyone else and just wish the whole world's destruction if that was what it took to put an end to their agony. But that's just to note that agony can make us very self-centered. We certainly shouldn't defer to the moral perspective of a person being tortured, any more than we should defer to that of a person reeling from grief, or suffering from major depression, or any other obviously distorting state of mind.

Expand full comment
Comment deleted (May 1, 2023)
Expand full comment

One choice point is whether you ascribe desert to *people* or to specific *interests* of theirs. The case of evil pleasure merely motivates discounting evil interests; it doesn't necessarily mean that you can disregard other (more innocent) interests that the person has, such as their interest in avoiding suffering.

But if you go for a full-blown retributivist account on which people can deserve to suffer, a simple proportional account would still imply that a finite being (who presumably could not have been *infinitely* bad) does not deserve infinite suffering, but merely enough suffering to balance out their (net) badness.

(If someone claims that "offending God's dignity" is infinitely bad, you should just reject that claim as ridiculous.)

Expand full comment
Comment deleted (May 1, 2023)
Expand full comment

Ha, well, until you read another philosopher, anyway!

Best wishes, in any case :-)

Expand full comment