89 Comments
Apr 23 · Liked by Richard Y Chappell

I believe you are most wrong about moral realism. I don't know exactly where you stand on every auxiliary issue, but:

(a) I don't think there are any good arguments for moral realism, and I think much of moral realism's appeal stems from misleading implications about the supposed practical consequences of antirealism. I think many of these misleading ways of framing antirealism are rhetorical in nature and rooted in biases and misunderstandings of antirealism, and that this is quite similar to how people critical of utilitarianism uniquely frame it as some kind of monstrous and absurd position.

(b) I don't think most nonphilosophers are moral realists or antirealists, and more generally I don't think moral realism enjoys any sort of presumption in its favor, e.g., I don't grant that it's "more intuitive" or that it does a better job of capturing how people are typically disposed to speak or think than antirealism (though I also don't think people are antirealists).

You ask what the most helpful or persuasive point I could make to start you down the right track would be. I don't know for sure, since I'm not very familiar with many of your background beliefs, but I'd start with this: I think there is little good reason to think that most nonphilosophers speak, think, or act like moral realists. Rather, I think moral realism is a position largely confined to academics and people influenced by academics. Whether people are moral realists or not is an empirical question, and the empirical data simply don't support the notion that moral realism is a "commonsense" view. I don't know where you stand on this issue, but I think it's an important place to start.

I came to this conclusion after many years of specifically focusing on the psychology of metaethics and in particular the question of whether nonphilosophers are moral realists or not. Early studies suggested that most people would give realist responses to some questions about metaethics and antirealist responses to other questions. However, I came to question the methods used in these studies and launched a large project (which culminated in my dissertation) to evaluate how participants interpreted these questions. I came to the conclusion that most people were not interpreting them as researchers intended (and frequently didn’t interpret them as questions about metaethics at all). I suspect the best explanation for this is that ordinary people don’t have explicit stances or implicit commitments to metaethical theories, and that metaethics has very little to do with ordinary moral thought and language. The case for this is largely derived from my own data and my critiques and analyses of research on this topic. It’d be very difficult to summarize it but I could elaborate on specific points.

The more general takeaway is that I don't think moral realism enjoys any special kind of intuitive priority; I suspect that the reason some people are disposed towards moral realism has more to do with path-dependent idiosyncrasies of their particular cultural backgrounds and educations than with any such priority.

author · Apr 23 · edited Apr 23

I'm not personally at all deferential to what non-philosophers think about philosophical questions, so that isn't a crux for me. I'm more concerned about anti-realism struggling to accommodate fundamental fallibility. It just seems to me that we should all regard ourselves (both individually and collectively) as potentially mistaken about what's truly important in life.

I also don't see much motivation for anti-realism about ethics that wouldn't quickly spill out into the rest of philosophy. Like, it just seems undeniable that there can be abstract truths (e.g. in answer to philosophical questions) about which people can disagree, and about which any of us could be wrong.

Apr 24 · Liked by Richard Y Chappell

I don't really take it to be a data point in need of explanation that our moral views "could be wrong" in the sense of being factually mistaken. What does seem pressing, and is sort of in the same vicinity, is that we be able to explain or accommodate the fact that we take our moral judgements to be revisable through reflection, and that there's some concern that our current moral judgements are not the ones we would make upon full reflection. I think that can be accommodated by almost any anti-realist view. For example, if a value is just a desire or some non-cognitive state, I might well be concerned about the possibility that I wouldn't possess that same desire or non-cognitive state upon fuller reflection, and so I subject such states (and their subjects) to rigorous reflection.

Even if you do take "our moral views could be factually mistaken" to be a data point, I think some anti-realist views could accommodate that: for example, a David Lewis-style view on which what's valuable is just what I'd be disposed to value under ideal conditions. Surely I can be wrong about what's valuable on a construal like that.

author

Yeah, that's fair. We might distinguish "weak" and "strong" forms of fallibility. I agree both that (i) weak fallibility, such that moral reflection is worthwhile, is the most important component to accommodate, and (ii) various anti-realists can accommodate this.

But I also find strong fallibility -- such that even our procedurally idealized selves are not guaranteed to get things right (cf. "ideally coherent Caligula") -- so plausible that it strikes me as a strong reason to favor moral realism.


I think the ideally coherent eccentric-type thought experiments are interesting. My own intuition about such cases is that such people are acting irrationally, wrongly, not doing what they ought to do, etc., but I don't think that tells me that they're getting any stance-independent facts wrong; rather, it tells me that I use words such as "irrational", "wrong", etc. in ways that refer to my own standards and values, the norms of behavior that I accept, and so on. So when I say that Caligula is irrational, I'm saying or expressing something about how her behavior isn't in line with my standards. On such an account, it may be pointed out that when Caligula calls herself rational, she's not actually disagreeing with me, but I don't have any intuition that this is a big issue, and it's definitely not as big an issue as not being able to call Caligula irrational.


I see it somewhat similarly, though I think I'd just dispense with calling them irrational. I find nothing irrational about a person having different values from me and acting in accordance with those values, no matter how weird I find those values. I suspect that, insofar as people do judge such beings as "irrational," this could be due to an inability to fully distance ourselves from our own values. When I imagine *myself* counting blades of grass forever, or whatever, that seems really unappealing. Or I may think of a typical human doing so, and recognize that they may be suffering or missing out by their own lights.

I suspect if we regularly encountered aliens or AGIs with genuinely weird values, we'd have time to internalize and become comfortable with beings who genuinely just don't care about the things we care about, and insofar as those beings acted efficaciously towards achieving their ends, we may find their actions weird, or threatening, or a big waste of time (relative to our values), but I can't see myself judging them to be making mistakes. I just don't see any reason to think that other beings are "incorrect" if they don't care about the things I care about. That strikes me as a category mistake. What we value just isn't the sort of thing I think we could be incorrect about.

author · Apr 25 · edited Apr 25

Do you draw any distinction between values and tastes/preferences, or are they all equivalent to you? I can certainly distance myself from my own tastes/preferences, and even entertain hypotheticals where I or others value different things and are perfectly happy in doing so. There's no failure of imagination there. Still, in *evaluating* those hypotheticals, I draw upon my (actual) values (which are not purely hedonistic), so some will strike me as worse than the imagined agent realizes.

Part of the appeal of realism, as I see it, is that it can explain how this practice is reasonable: if my values are "getting it right" (as I hope my beliefs, in general, are), then I can continue to apply them even to situations in which the state of *my having those values* is no longer present. (An anti-realist *could* also endorse this practice--they could simply *insist* that their values have universal applicability in this way--but it seems a bit parochial or arbitrary if there's no chance that their values are objectively correct, such that the targets of their negative evaluations are actually making any *mistake*. Too much like criticizing someone for a mere difference in taste, say.)


Okay, that's fair if that's not a crux.

I'm not sure I understand your concern about antirealism struggling to accommodate fundamental fallibility. I don't see my own view as having any struggle with this at all because I suspect that conditional on my view there wouldn't be anything to be fallible or infallible about. Could you clarify what you mean by "truly"?

I probably am an antirealist or a quietist about most other things, too, so that probably wouldn't be a big concern for me.

author

Like, suppose we were all wondering whether wild animal suffering matters. And suppose we concluded that it doesn't. So we're unbothered by the prospect of wild animal suffering, and we don't bother to euthanize badly injured animals, etc. It seems to me that we could be mistaken in this case: maybe we *should* be more bothered by wild animal suffering, and we should euthanize badly injured animals rather than leave them to a long drawn-out death. But anti-realism, I take it, cannot accommodate the possibility that this could all be true even though (as stipulated in the scenario) none of us accept it.

If you're an anti-realist about the truth of anti-realism itself, does that mean that realists make no mistake in rejecting your view?


I appreciate the example, but I'm still a bit concerned about your prior use of "truly." I think things "truly" matter on an antirealist framework. I just don't think "truly" mattering requires stance-independence. I wouldn't say, for instance, that my family doesn't "truly matter" to me. I don’t think either of us want to get caught up in terminological disputes, so I’m happy to set this term aside, but I want to flag its use not because I want to get hung up on the word “truly” but because much of my defense of antirealism and objections to moral realism center on what I take to be incautious use of language that gives what I consider a false impression of something undesirable or objectionable about antirealism. I think the use of “truly” could be leveraged, if unintentionally, for such purposes.

For instance, I would object to someone saying that, as an antirealist, I do not think anything “truly matters.” Since I think antirealism is the correct position, the sense in which I think things truly matter is the antirealist sense, not the realist sense. If anything, I might even say I think that realists don’t think things truly matter. I’m not being glib about that: I don’t like the “realist” and “antirealist” framing. I do think value is real and that things really and truly matter. I just don’t think this involves stance-independence.

To address the example you gave: my main concern is that your objection turns on antirealists being insensitive to a problem that would only be a problem if realism were true. For comparison, imagine you were having a dispute with a person who believed in Bigfoot. Supposing you don’t believe in Bigfoot, they could say the following:

“I'm concerned about Bigfoot skeptics struggling to accommodate fundamental fallibility. It just seems to me that we should all regard ourselves (both individually and collectively) as potentially mistaken about Bigfoot’s shoe size.”

As a Bigfoot skeptic, one would not think there was any such being to have a shoe size, and thus one couldn't be mistaken about whether Bigfoot's shoe size was, say, 26 rather than 31.

The Bigfoot believer may point out that Bigfoot skeptics seem to be infallible regarding judgments about Bigfoot's shoe size. This seems like a strange objection: of course they can't be wrong about what Bigfoot's shoe size is, conditional on Bigfoot not existing; there isn't anything for them to be wrong about!

Just so, I don’t think there are stance-independent moral facts. The type of fallibility you’re concerned with seems to be the possibility of being mistaken about the stance-independent normative moral facts. One moral realist may think that X is wrong, and another may think X is not wrong. And both may acknowledge their fallibility on the matter. However, as an antirealist, I acknowledge no such fallibility. Yet the antirealist isn’t “struggling” in this regard. One cannot struggle with a problem if there is no problem to struggle with.

As an antirealist, I would be fallible about what the stance-independent moral facts were, if moral realism were true. And I am fallible about whether or not moral realism is true. So I simply don’t see a struggle here: I would be fallible about what the stance-independent moral facts were, if there were such things.

I'm not an antirealist about the truth of antirealism (though I may be an antirealist in some respects; I wouldn't typically call myself one, though). I think antirealism about certain realist positions is true. As for what I'm an antirealist about: I'm an antirealist about all stance-independent normativity, and about at least most metaphysics.

author

I guess this is just a bedrock disagreement: it seems clear to me that there is a real question whether we ought to care about (e.g.) wild animal suffering. I don't expect to be able to persuade you on this point. But it is the main reason why I am a moral realist: because I think these sorts of questions are real questions, and not like asking about Bigfoot's shoe size.


Do you think there's a way for us to navigate or resolve that disagreement? I don't know if it's simply the result of incorrigible and perhaps inscrutable psychological differences, or whether we approach philosophical questions from different starting points or employ different methods that we could discuss.

I suspect many philosophical problems remain intractable due to differences in more fundamental beliefs, commitments, or attitudes. And it may be worthwhile to discuss what those might be.


I'm not sure how you come to the conclusion that most nonphilosophers are not moral realists. To start with, most nonphilosophers believe in a Creator God that endowed the universe with moral properties.

author

Many theists seem to be divine subjectivists (cf. "divine command theory" -- a cultural relativist might similarly call their view "cultural command theory").


I don't see any good reason to think most theists are moral realists. Simply because a person believes in a creator god doesn't mean that they think God endowed the universe with specifically *stance-independent* moral properties. I don't think theism entails or even strongly implies moral realism; it's consistent both with antirealism and with taking no stance on the matter at all.

I've seen little indication that most religious systems explicitly endorse moral realism, and even if they did, it would still be an open empirical question whether most laypeople endorsed that particular article of faith, since the fact that something is a part of official religious doctrine doesn't tell us what proportion of adherents to the religion believe it, or are even aware of it. Because of this, I don't think it's reasonable to infer from the fact that people are theists that they subscribe to any specific doctrines or philosophical positions. For instance, if someone says they're a "Catholic" on a survey, it doesn't follow that they believe all official Catholic doctrine. What any given individual actually believes is an empirical question and the only way to know whether theists are moral realists is by conducting relevant empirical research.

With respect to the empirical research that has been done, there is little good evidence most of the populations that have been studied are moral realists. So at present I think we're dealing with an empirical question for which there is little empirical evidence that convincingly establishes that most people are moral realists.

Apr 23 · edited Apr 23 · Liked by Richard Y Chappell

I'll preface this by saying I'm a new reader, so if you have written on this topic elsewhere, I apologize. Can you explain to me how utilitarians think about utility and preferences?

It is intuitive to me that an individual can have a complete and transitive preference ordering across the infinite possible states of the world. But from Arrow's theorem, we know that you cannot aggregate individual (ordinal) preferences into a social preference ranking without violating at least one seemingly reasonable condition. So in order to determine the "greatest good for the greatest number of people," I think you have to accept that cardinal utility exists.

Unlike rankings (ordinal utility), I find the notion of cardinal utility, in the sense of preferring world-state X over world-state Y by some amount z percent, much less intuitive. I suppose you might be able to deduce your own cardinal utility over world states by ranking gambles across states, but I don't know how this can be applied across people.
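
To make the gamble idea concrete, here is a minimal sketch of what I have in mind (my own toy illustration: the state names and indifference probabilities are invented, and this is just the standard von Neumann-Morgenstern move, not anything from a particular source):

```python
# Toy sketch of eliciting cardinal (interval-scale) utility from gambles,
# in the von Neumann-Morgenstern style. All states and numbers are invented.

# Fix a scale by convention: utility 1 for a best reference state and 0 for a
# worst one. Only the interval structure is meaningful: any positive affine
# rescaling u' = a*u + b (with a > 0) carries exactly the same information.
BEST, WORST = "best_world", "worst_world"

# Suppose introspection yields: I'm indifferent between "status_quo" for sure
# and a gamble giving BEST with probability 0.70 (else WORST). Then
# u(status_quo) = 0.70 * 1 + 0.30 * 0 = 0.70.
indifference_probs = {"status_quo": 0.70, "mild_headache_world": 0.65}

def utility(state: str) -> float:
    """Cardinal utility of a state on the 0-1 scale fixed above."""
    if state == BEST:
        return 1.0
    if state == WORST:
        return 0.0
    return indifference_probs[state]

# Differences between states are meaningful on this scale; ratios and the zero
# point are not, and nothing here licenses comparisons across *people* -- that
# further step is exactly what I'm unsure about.
print(utility("status_quo") - utility("mild_headache_world"))  # ~0.05
```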

For a concrete concern, what determines the pleasure and pain scale? Suppose a painless death, or the state of non-existence (absent considerations of an afterlife), is assigned zero. Then a slightly painful death might be assigned negative one, which is infinitely(?) worse than a painless death. A slightly more painful death is negative two, which is twice(?) as bad as a slightly painful death. I suppose a state of infinite pain could be assigned zero, but that is problematic because a state of infinite pain doesn't exist, in the sense that there can always be a worse state of pain.

This is less an objection and more an expression of my own curiosity over how utilitarians think about this stuff.

author

I tend to think more about well-being than about preferences: https://www.utilitarianism.net/theories-of-well-being/

The idea that X headaches might add up to be equal in aggregate harm to the loss of 1 quality-adjusted life year seems pretty intuitive to me. (Loss of 1 QALY would then be X times worse than 1 headache.) As Parfit said, there's no deep puzzle of interpersonal comparisons: we can be very confident that one person is harmed less by a papercut than another person is by being beheaded.


Harm alone is not sufficient. A surgeon harms the patient, in expectation of bringing about a long term benefit. Boxers harm each other, but are not considered morally blameworthy unless they violate the rules.

But perhaps this is a quibble. All other things held equal, beheadings are worse than paper cuts. Does comparison of harm give us interpersonal comparisons?

The comment specifically mentioned cardinal utility. If we stipulate that beheadings are worse than paper cuts, that might establish ordinal utility. What is the step to get to cardinal utility? Or is ordinal good enough?

Apr 23 · edited Apr 23 · Liked by Richard Y Chappell

Personally, I think several approaches are needed in combination.

1) Approach One: Risk. i.e. gambles as you said (also known as lotteries). This is basically the Harsanyi school, developed further by Broome's Weighing Goods and Weighing Lives. Other papers in this spirit are:

- Mongin and Pivato 2015 https://www.sciencedirect.com/science/article/abs/pii/S0022053115000022

- McCarthy et al Utilitarianism with and Without Expected Utility https://www.sciencedirect.com/science/article/pii/S0304406820300045

- Wakker et al 2023 https://personal.eur.nl/wakker/pdf/deu.pdf

[Note: Peter Wakker is an authority on this sort of thing.]

- Stéphane Zuber has new papers I haven't read yet e.g. https://www.iza.org/publications/dp/16561/utilitarianism-is-implied-by-social-and-individual-dominance

2) Approach Two: Time. One paper here is by Loren Fryxell https://lorenfryxell.com/papers/xu.pdf

3) Approach Three: "Introspected Utility Differences". This means introspected *ordinal* comparisons of the sizes of differences in utility. So, for example, I judge the difference between apple and orange (a-o) to be greater than the difference between orange and grape (o-g), etc. (see the toy sketch at the end of this comment).

- Ch. 4 ("Difference Measurement") of Krantz et al., The Foundations of Measurement

- A more recent paper is Kobberling "Strength of preference and cardinal utility". https://www.jstor.org/stable/25056023

I view cardinal utility as a separate issue from interpersonal comparisons. Comparison with non-existence is another issue separate from cardinality. For issues about scales (e.g. ordinal, cardinal, ratio, etc.), Foundations of Measurement is one good source (I think it even has videos on youtube), and the beginning of John E. Roemer's Theories of Distributive Justice covers some stuff from a different angle.
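
And here is the toy sketch of approach three promised above (entirely my own illustration; the items and numbers are invented): introspected orderings of utility differences constrain which numerical assignments are admissible, and the admissible ones agree up to a positive affine transformation, which is what makes the resulting scale cardinal rather than merely ordinal.

```python
# Toy sketch of "difference measurement": ordinal judgments about the sizes of
# utility differences constrain an interval scale. Items and numbers invented.

# Each judgment says the first gap is strictly larger than the second:
# here, (apple - orange) > (orange - grape).
judgments = [(("apple", "orange"), ("orange", "grape"))]

def consistent(u, judgments):
    """True if the utility assignment u respects every judged gap ordering."""
    return all(u[a] - u[b] > u[c] - u[d] for (a, b), (c, d) in judgments)

u1 = {"apple": 1.0, "orange": 0.4, "grape": 0.1}   # one admissible assignment
u2 = {k: 3 * v + 7 for k, v in u1.items()}         # positive affine rescaling
print(consistent(u1, judgments), consistent(u2, judgments))  # True True
```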

Apr 23 · Liked by Richard Y Chappell

Hm, this is difficult, as I probably disagree with you on almost everything. I guess I'll start with moral realism. I always found the evidence (intuitions, progress, convergence) pretty weak and easy to explain socio-evolutionarily, with anti-realism then winning out on parsimony grounds. What would you say is the best case for moral realism?

author

Because the truth of moral realism is non-contingent, I don't think parsimony considerations weigh heavily at all. (Parsimony can't usually establish that something is *impossible*.) I expand upon this point a bit here:

https://www.philosophyetc.net/2013/03/the-possibility-of-moral-realism.html

It's hard to argue against error theory. (And anti-realism, I take it, is just error theory with sprinkles on top.) But I see five main reasons to favor realism:

1. The argument from obvious moral truths (gratuitous torture really is wrong)

2. The argument from intelligibility (there needs to be a possible property for moral claims to be *about*)

3. The “might as well” argument (you can't go *wrong* by believing in normativity, whereas it maximizes your chances of believing as you truly *ought*)

4. Companions in Guilt (global normative nihilism -- incl. about epistemic and instrumental rationality -- seems implausible)

5. The indispensability argument (implicit acceptance of normative reasons seems a precondition for rational inquiry and practice).

Apr 23 · Liked by Richard Y Chappell

Isn't the truth of moral realism contingent on objective moral facts existing? If you mean that, per moral realism, these objective moral facts are non-contingent, then sure, but that doesn't help for an anti-realist. I'm probably confused and will check that article out.

I’m not an error theorist. I want to say there can be moral facts and truths but only in the same way there are rules of chess or monopoly. They’re totally fictional and constructed. You seem to think that’s not enough given your argument from intelligibility but I don’t see that.

I’m also an anti-realist about normativity generally and honestly struggle to see the big deal. If anything, moral realism seems more plausible than realism about epistemic and instrumental rationality.

I can see the attraction of the might as well argument but ultimately I want to believe that which is true and I think that’s anti-realism (although ofc I might be wrong).

Gratuitous torture is wrong imo, and I think both realists and anti-realists can say that. Realists think it's true that it's wrong in virtue of corresponding with stance-independent facts. Anti-realists deny that. I don't have, or think I need, a matter-of-fact theory of what I mean when I say it's wrong. It will depend case by case. I might mean it's wrong almost analytically, since by "wrong" I'm importing a bunch of fictionalised morality, or maybe it's "wrong" in virtue of my preferences or general society deeming it such. When put like that, neither seems "obvious" to me (and anecdotally, most day-to-day people I speak to are confused, agnostic, or leaning towards anti-realism).

I’ve never found the indispensability argument particularly convincing. There’s a lot to go into though. Maybe you have a specific paper or article or video in mind about it.

author

Ok, putting aside error theory, and looking more specifically at realist vs anti-realist explanations of obvious moral truths: there my worry is that anti-realism makes the moral facts too parochial, and can't account for our (or general society's) fundamental fallibility. Like, we can imagine a world in which everyone is (bizarrely) pro-torturing-puppies. That wouldn't make it right. Anti-realists can respond by rigidifying on our actual attitudes: we're anti-torture, even as applied to people who are themselves pro-torture. But my 2-D argument explains why I find this unsatisfying:

https://philpapers.org/rec/CHAMSA-2

(In short: imagine you have temporary amnesia, and can't remember whether our actual society is pro- or anti-torture. You should *not* be certain that whatever we all think is *thereby* right. We could -- esp. if we turn out to be pro-torture -- be horribly mistaken!)


Would torturing puppies be wrong if everyone thought otherwise? "Yes" doesn't seem obvious to me. I can kinda see the intuition behind it, but why think that intuition is reliable? It seems like the result of status quo bias and of our being used to a social morality that is fictional but useful.

I also think we can say yes in a way that's consistent with anti-realism anyway. It's wrong in virtue of *potentially whatever normative theory* the realist believes in. The realist just thinks that theory tracks objective facts whereas the anti-realist doesn't, but I don't see how that helps. I'll be sure to read your 2-D argument article though; it sounds interesting.

Likewise, if everyone thought otherwise, that would include all the moral realists, who'd be defending, as passionately as they do now, how obvious it is that torturing puppies is objectively right.

author

Moral realism is no guarantee against error, for sure. The key point is more that it leaves logical room for acknowledging error (and hence for fallibility).

Anti-realism seems to imply that I'm guaranteed to be right. For example, if the relevant normative standard is provided by *whatever normative theory I believe*, then I can be certain that I believe the true normative theory. But I shouldn't be certain of that.


If the correct normative theory is whatever you believe it is, and you believe a particular normative theory, and we're infallible about what our own beliefs are, then it strikes me as trivially true that if you're an antirealist you're infallible about what your moral beliefs are. That seems to follow as a matter of stipulation. So I'm puzzled why you'd say that we shouldn't be certain of that. Under the circumstances, we should be!

As far as the claim that moral antirealism seems to imply we're guaranteed to be right: right about what, exactly? If taken in the abstract, without specifying what it is we're supposed to be right about, that may make the position sound really implausible. But *if* the position holds that moral facts are facts about what our preferences are, and that we have infallible access to what our preferences are, then it doesn't strike me as a problem at all.

After all, a realist may think that we do have infallible access to our preferences. In which case, what's the issue with the antirealist saying we're infallible about our moral values? If, on the other hand, the realist thinks we aren't infallible about what our preferences are, then it wouldn't even be true that the antirealist thinks we're infallible about our moral values. Either way, I fail to see an issue here.

[Edited to add]: Also, there are several forms of antirealism where we clearly can be wrong about our moral beliefs, or where right/wrong don't apply. Some noncognitivist views would deny that we are or even could be right, cultural relativists can be mistaken about what is right or wrong according to their culture, and proponents of constructivist positions could be mistaken about what is right or wrong according to whatever constructivist procedures they adhere to. A subjectivist may also think we can be mistaken about our own values or preferences. Given this, at best I think one might only say that some forms of antirealism imply that we're guaranteed to be right, but many don't imply this at all, and might even specifically hold that we are fallible.


The argument from intelligibility seems odd. Wouldn't this imply that there are no unintelligible seemingly necessary properties?

author

It's like conceivability arguments: There could be unintelligible properties, but we presumably couldn't positively conceive of them. The inference is in the other direction: that any apparent property we *can* intelligibly conceive of is thereby a genuinely possible property. If non-contingent, then it's necessary in virtue of being possible. But notice that we could be wrong about the property being non-contingent (you can't build the modal status into it and still have the conceivability argument work). And there could be illusions of intelligibility, where the idea in question doesn't ultimately make any sense at all (that's my tentative view of haecceities, for example; I could understand someone thinking the same thing of normativity). So I don't think it's an especially strong argument.


Are you familiar with the argument? I'm not. On the face of it, moral claims could be nonpropositional or could be about desires or cultural standards, which would make them both intelligible and consistent with antirealism. So presumably the argument involves something specific that would exclude these views.

Apr 23 · Liked by Richard Y Chappell

Thanks for this post. I am pretty much in agreement with all you wrote. I am not an academic, so I don't know if my thoughts will be as well honed as what you are looking for here. I hope they're still worthwhile to you.

The main question I had when reading your post was about your statement that reasoned inquiry is the best way to get to the truth. In absolute terms, I agree, but there is always someone with more mental capacity (i.e., more intelligent) and more moral capacity (i.e., more sensitive to suffering) than me, who has worked through their own emotional biases to do better moral reasoning than me. So I think that to a large extent the moral questions of life come down to this: who do I trust for guidance? I think that addressing this question is worthwhile. Do you have a set of criteria?

Currently, I judge someone holistically by how closely their thinking and behavior approximate the behavior that I have already come to accept as moral. For example: Do they have faith-based beliefs? That would be a con. Do they parent well? That would be a pro. Are they vegan? That would be a pro.

I'm curious if you could address heuristics that you use for this process.

author

Yeah, that seems very reasonable! I once posted some more general thoughts on philosophical deference (and its limits) here:

https://www.philosophyetc.net/2018/02/philosophical-expertise-deference-and.html

The core idea being that you should try to find people who seem to do a good job of capturing what you *would* believe upon further reflection (and relevant capacities).

I don't know that I have any good heuristics to add to this. Except, maybe, signals of intellectual virtue: like, they seem open to considering objections and diverse perspectives, are neither too dogmatic nor too indecisive or excessively skeptical, etc. If you agree on a lot, and then find that when you pursue a rare disagreement, you most often end up concluding that they had the right of it, then that would obviously be a very good sign. But otherwise, yeah, the main guidance probably just has to be whether they seem to be *getting the most important things right*, by your lights.

Apr 25 · Liked by Richard Y Chappell

Thank you for linking to that other post of yours. Yes, what you wrote is a helpful way to think about it. Defer to people who are like "You 2.0": you have reason to believe your views would be the same as theirs if you had a better-reasoned opinion.


A few things that I disagree with you about, though I think you know about most of these things:

1) I'm confused as to why you don't take theism at all seriously. Yes, the problem of evil is a big problem, but you compare that to the evidence on the other side--fine-tuning, the existence of psychophysical laws, the anthropic stuff, the fact that there are laws at all--and it's hard to be super confident that these together aren't enough to outweigh the problem of evil. I was making a spreadsheet with rough numbers for the Bayesian force of various considerations, and even when I took the problem of evil to favor atheism at 100,000 to 1, theism won out overall.

2) I think the case for hedonism about well-being is very compelling based on lopsided lives and that your objection plainly fails https://benthams.substack.com/p/lopsided-lives-a-deep-dive (I know that's a big topic so no need to reinvestigate it, but you did ask for disagreements).

3) Hmm...other than that you're basically correct about everything (except your villainous and dastardly tentative support for halfing in sleeping beauty).

author · Apr 23 · edited Apr 23

I think it's very unclear what to think about things like fine-tuning and "the fact there are laws at all". They seem like the kinds of things where trying to put numbers on how much they favor theism is more likely to lead one astray than to illuminate. I don't doubt that you can make a spreadsheet with numbers that end up favoring theism. When dealing with matters of great unclarity, numbers can be made to show almost anything. But it's garbage-in-garbage-out.

I just think the problem of evil is *very clearly* decisive, whereas none of the arguments for theism strike me as *clearly* any good at all. (Maybe they count for a little, or maybe not even that. They don't strike me as remotely in the same ballpark as the problem of evil.)


Why don't you find the fine-tuning argument to be clearly strong evidence? The basic conceptual point that nearly all possible laws produce nothing interesting seems clearly correct. Theism has lots of evidence supporting it--see here https://benthams.substack.com/p/the-evidence-for-and-against-theism for 21 pieces of at least decent evidence. In contrast, atheism has only one real argument going for it--the problem of evil. Even if you assign a low Bayes factor to each of those pieces of evidence--say each only favors theism by 2 to 1--together they make theism over 2 million times likelier than it would have otherwise been, easily enough to swamp the problem of evil.
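
Here's the arithmetic behind that, as a quick sketch (treating each piece as an independent Bayes factor and starting from even prior odds; the figures are just the ones quoted in this thread, not further estimates):

```python
# Quick check of the odds arithmetic, treating each piece of evidence as an
# independent Bayes factor and starting from even prior odds. The 2:1 and
# 100,000:1 figures are the ones quoted in this thread.
n_pieces = 21
bf_per_piece = 2              # each piece favors theism 2:1
bf_evil = 1 / 100_000         # problem of evil favors atheism 100,000:1

combined = bf_per_piece ** n_pieces    # 2**21 = 2,097,152
net_odds = combined * bf_evil          # ~21:1 in favor of theism overall
print(f"{combined:,}", round(net_odds, 1))  # 2,097,152 21.0
```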

One very convincing argument can be outweighed by a dozen only slightly convincing arguments. It's simply bad Bayesianism to put almost all of your trust in the single best argument.

To be clear, I don't endorse getting all your views from a Bayes spreadsheet. But it can be at least illuminating, showing how even if you take some argument very seriously, it can be outweighed by enough contrary arguments. It's easy to get attached to a few arguments that seem obviously right, especially ones we spend a lot of time thinking and defending, but it's important to not let the argument you feel strongly about be the sole determinant of your beliefs.

author

Two main thoughts:

(1) Those 21 pieces aren't *independent* of each other. About half are variations on the fine-tuning idea, and most likely rise or fall together. I'd give about a 50% chance that they all count for ~nothing. (This would be so, for example, if a world like ours has high intrinsic probability, even in the absence of theism.)

(2) Lots of weak (but definite) evidence can add up to strong evidence, but a multitude of flimsy arguments (that are not *clearly* any evidence at all) count for very little. Cf. conspiracy theories.


What's the solution to fine-tuning that you think might explain the half or so of them? To me they seem mostly independent--I don't know how to explain the laws other than just positing that they exist brutely.

(2) is true, but if there are a bunch of pieces of evidence, then together they can amount to strong evidence. If something might be evidence, then it's evidence in expectation.

author

The general solution would be that no solution is needed: these facts just are brute. E.g., it seems reasonably credible to think that there's just no sense to be given to the idea that the fundamental laws and constants of the universe really could have been different from what they are, or that there was in any sense a "low chance" that they are these ones rather than others. If the whole conceptual framework underlying fine-tuning concerns is confused, then that whole raft of arguments is undermined together.

May 5·edited May 5

Wait, I thought you were a Bayesian? I don't see how what you just said is compatible with that. I thought the whole point of Bayesianism is that you have to distribute your priors over all the conceivable scenarios in some way that is (to the best of your ability) independent of the evidence, prior to updating. As a Bayesian, I don't think you are allowed to go all squishy and say things like, well maybe probabilities don't make sense if there are brute facts? You just gotta go ahead and have credences no matter what.

It also doesn't actually matter whether the values of the constants are necessary or contingent, from a metaphysical point of view. Because from an epistemology point of view, as long as you don't know *why* any specific constants would be necessary, you would still have to spread your priors over all the different possible ways they might "necessarily be". Or in other words, metaphysical necessity does not imply epistemological necessity. (That would only be so if we infallibly knew all facts about metaphysics.)

Now, maybe you do want to use a Bayesian framework, and you are just saying that your subjective Bayesian priors just happen to assign a 50% probability to the particular proposition:

P: "(It is a brute fact that:) it is is a fundamental law that the complex physical structures needed to support conscious life will exist no matter what. That is, no matter how tiny a sliver of parameter space of QFT that requires, those are the constants of nature that will in fact exist:"

But it's nuts to assign so high a probability to such a specific "brute fact" as P, when there are so many other possible law-selecting-rules that could have been considered. [I actually agree with you that Bentham's Bulldog's 21 facts are not as strongly independent as he thinks. But that doesn't mean it is reasonable to run to the opposite extreme and collapse them onto a single coin flip.]

It would also be quite easy for a theist to defeat the argument from evil if they are allowed to be equally unprincipled. I'd warm up by saying that, on the assumption of moral realism, it is "reasonably credible to think" that our human "conceptual framework underlying [argument-from-evil] concerns" is likely to be confused in various ways about good and evil and their relationship. I might express some doubt about whether there is really any sense to be made of the idea that we are in a position to assign probabilities to what God would do [actually I wouldn't, but I'm helping myself to it for the sake of the parody]. Then, I'd turn around and announce that I just happened to place 50% of my subjective credence (conditional on theism) on the proposition Q: "It is morally better for God to eventually bring greater good out of evils, rather than not having any evils in the first place." If it helps to make Q any more plausible, we could call it a brute fact. Perhaps I am missing something, but I genuinely don't see how this is any worse than the other argument.

Sorry if that comes across as more snarky than originally intended. Mostly I'm just surprised, since while I already knew you were confident of atheism, it seems I might have been mistaken about how committed you are to Bayesian methodology.


But that's absurdly implausible! It's certainly conceivable that the laws could have been different and not life-permitting, and that's what you'd expect prior to knowing that we exist (a world without laws is simpler). So the prior of laws that generate interesting things must be low.

If it's licit to say that the probability of something we observe is high conditional on naturalism, even if there's no reason to think that, then the theist can say that evil on theism is just brute--it's now a low chance event.

Apr 23 · Liked by Richard Y Chappell

Nice article. You're generally very reasonable and I appreciate your tone/open-mindedness.

I think that some forms of utilitarianism have major counterintuitive implications, like double-or-nothing gambles and repugnant-conclusion-type results.

Now, granted--I don't have a good answer to these. But they concern me because some of the infinite-ethics, big-gamble, and population-ethics-type questions point toward fanaticism and a very different ethical orientation. For example:

1. If maxipok is the true path, it undermines a bunch of other ethical ideas. What if the average person increases x-risk? What if economic wealth and prosperity increase x-risk?

2. If meat eaters produce way more harm than good, it undermines a bunch of ethical perspectives about saving lives. Maybe even existing as a vegan causes a bunch of harm to animals. Do we want more people living happy lives if they hurt animals so much that it's a net negative? What does that say about the moral value of humanity as a whole?

Maybe it's best to avoid seriously considering such questions because you look like a crazy fanatic, but it's plausible to me that what we should be doing ethically could be way way different from what EA is doing.

Again, I don't have a really good answer, but if you said "how could Richard Chappell be maximally wrong?" it would be about one or two assumptions that flip our ethical world upside down. I don't know which ones. I don't know how to deal with these questions.

I've taken comfort in focusing more narrowly on questions like "If a couple is going to pick an embryo, is it ethical to pick the one you expect to live the best life?" because the big questions are so hard.

author

Yeah, I'm actually pretty sympathetic to some form of axiological diminishing value in order to avoid fanaticism and double-or-nothing gambles:

https://rychappell.substack.com/p/double-or-nothing-existence-gambles

Note that this is actually a "puzzle for everyone", not something that is successfully avoided by non-consequentialists simply by virtue of averting their eyes. They owe an answer, just like the consequentialist does. https://rychappell.substack.com/p/puzzles-for-everyone

On the meat-eater problem, I remain moderately optimistic, for reasons outlined here: https://forum.effectivealtruism.org/posts/dmEwQZSbPsYhFay2G/ea-worldviews-need-rethinking?commentId=6fB9oLKCi4K2uWgSk

(But I grant it's at least *possible* that human lives could be bad on net. It seems important to work out whether or not that's true.)

Apr 23 · Liked by Richard Y Chappell

1. Re double-or-nothing/fanaticism: I think we already hashed it out in the comments of that post :). You might be right. My point is kind of "if you're wrong on something like this, you could be VERY wrong in terms of moral impact." I offer no real solution, ha! But I appreciate you facing the tough questions, and investigating them thoroughly seems like the best approach. I like Joe Carlsmith's idea about safely making it to superintelligence and then having it handle the tough questions.

2. Re puzzles for everyone: I agree these are puzzles for everyone. I used to be very interested in the Huemer-style critiques of utilitarianism, but I feel like they aren't so relevant anymore, because I think his worldview should converge on a very similar type of longtermist worldview, since current rights violations pale in comparison to future expected utility.

3. Re meat eaters: Seems plausible we'll get some solution like lab-grown meat. I am thinking through a post about genetically engineering animals to experience bliss instead of suffering. I posted a rough draft on the EA Forum, but I'm going to flesh it out much more. If we could have factory farms filled not with suffering but with happy animals, then everything flips on its head. https://forum.effectivealtruism.org/posts/ycBemPCfeJcFdx7kR/genetic-enhancement-to-turn-animal-suffering-to-animal-bliss

Thanks


Re 1: Does anyone claim maxipok is "the true path"? Take e.g. from existential-risk.com/concept:

"At best, maxipok is a rule of thumb or a prima facie suggestion. It is not a principle of absolute validity, since there clearly are moral ends other than the prevention of existential catastrophe. The principle's usefulness is as an aid to prioritization. Unrestricted altruism is not so common that we can afford to fritter it away on a plethora of feel-good projects of suboptimal efficacy. If benefiting humanity by increasing existential safety achieves expected good on a scale many orders of magnitude greater than that of alternative contributions, we would do well to focus on this most efficient philanthropy."


I'm not sure. I don't think it really undermines my point though if there are no true 100% adherents.


I guess I'm not sure exactly what your point is wrt maxipok. "This rule of thumb could have bad consequences if followed to the extreme" seems true for most rules of thumb.

These seem like hard cases for which there's probably not enough empirical evidence to be highly confident in an answer. But the *principles* seem relatively clear: if the recommendations of maxipok start looking bad, then revert to the principle that maxipok was just a heuristic for, i.e. something like total utilitarianism. (We run into the normal cluelessness concerns here, which is an issue -- but that issue doesn't seem to have anything to do with maxipok.)

Does this address what you're getting at, or have I missed the point you're making?


To clarify, my point is that the set of priorities and beliefs we have is highly dependent on our moral principles. Most people aren't highly dedicated to longtermist concerns, but if they were, it might radically change our _overall_ moral outlook. I'm not saying that maxipok is inherently good or bad, just that two utilitarians could come to extremely different conclusions depending on how seriously they take long-term risk. So this is a way that Richard (and I) could be radically wrong. There are lots of plausible ideas that could radically change our ethical worldviews, and I am probably wrong about some of them.

Does that clarify my perspective?


Yes, that clarifies things. This sounds like Bostrom's idea of "crucial considerations" -- there are probably ideas out there that would cause us to dramatically change what we think of e.g. increasing population. What should we do in light of this? Seems tricky.

(I guess what I was getting at is that I'm still not sure maxipok is a great example of this, as it's not something I expect people to be fanatical about / put a huge amount of weight on? But your other examples illustrate it well, so I think I understand your point.)


Yes, I am not sure what to do.

I'll have to look into that "crucial considerations" idea. Bostrom is incredibly insightful. Thanks for the convo.


I am afraid I may not have the time to engage in a sustained back-and-forth on this point, but it seems to be the kind of thing you're inviting, so let me make a stab: while I more or less accept the three principles you've listed above, I would say that I do generally reject beneficentrism as you've defined it. I may misunderstand your definition, but I reject utilitarianism as an ethical theory.

I suppose the beneficentrism part hinges on what we mean by "general" welfare. If you simply mean *net* welfare - so that your project should be to make sure that you do more good than harm to whatever number of individuals you affect - it doesn't bother me so much. But I do have a problem with a view that sees us as having to promote "general" welfare in any broader sense of maximization: that everyone's life projects should include promoting the welfare of their polity or their world as a whole, in ways that involve benefitting as many people as possible (or providing the largest total benefit).

Most people in history have not held such a maximizing view, and it's not clear to me why they should. Instead we accept a relatively strong partialist account, in which one is obligated to promote the welfare of those one is directly engaged with - co-workers, family, friends, fellow organization members, maybe neighbours - but going beyond that is supererogatory. (Beyond that circle there are *harms* that one is obligated not to cause, but harm and benefit are not symmetrical.)

I think the case for this view (or contrariwise for utilitarianism) goes down to deep foundations, possibly including internalism vs. externalism on moral motivation. But an old blog post of mine lays out a starting position:

https://loveofallwisdom.com/blog/2015/01/of-drowning-children-near-and-far-ii/

author · Apr 23 · edited Apr 23

Beneficentrism is meant to be *much* weaker than utilitarianism!

Here's a claim of beneficentrism: we would do well, morally speaking, to dedicate at least 10% of our efforts or resources to doing as much good as possible (via permissible means). Whether this is obligatory or supererogatory doesn't much interest me. The more important normative claim is just that this is clearly a *very worthwhile* thing to do, very much better than largely ignoring utilitarian considerations.

That still leaves plenty of room (90%!) for partiality and personal projects. So it strikes me as pretty hard to deny.


Interesting; I like this response. The obligatory/supererogatory distinction is huge for me in this case. As long as you can grant that the claim is supererogatory rather than obligatory, then I don't think I would feel a need to deny it.

author · Apr 23 · edited Apr 23

Glad to hear it! In case you're interested: a background reading on why you should care less about the obligatory/supererogatory distinction:

https://rychappell.substack.com/p/impermissibility-is-overrated


I've been chewing this over and thinking the backgrounder may highlight where my deeper disagreement with you lies. I decided to write a post expressing that disagreement, which will go up on my Substack in a week.

author

Thanks, looking forward to it!

Apr 23 · Liked by Richard Y Chappell

I think that beneficentrism states that helping people generally is centrally important. I don't think it's committed to the idea that each of us *thinking* about helping people generally is the best way to accomplish general helping of people. It's quite compatible with the idea that, given limited information and limited ability to help, strongly partial reasoning and acting may well be the historically best way to help people generally. (You can be much more confident that the actions you take to benefit those who are near and dear to you will actually result in outcomes that they actually prefer - those that are farther away, you would likely have much less ability to do things for, and those that you have interacted with less, you would be more likely to get their preferences wrong.)

My personal thought (which may not be Richard's) is that in the modern world, with better information flow, greater access to information processing, and generally easier spread of travel and influence around the world (particularly by the wealthy), many of us might do better by explicitly considering those that we don't have partial connections to, than just following our intuitions. But that still leaves room for the optimal strategy to be one in which most people are engaging in lots of partiality most of the time.


I also posted on the question earlier this week, though addressing a position more extreme than I take yours to be: https://loveofallwisdom.substack.com/p/you-dont-have-to-drop-philosophy

Apr 26 · Liked by Richard Y Chappell

Hi! This isn't about something you're necessarily wrong about: I'm not sure and you may very well be right and I'm wrong. But I think you missed an important consideration in your New Paradox of Deontology (https://rychappell.substack.com/p/a-new-paradox-of-deontology) that can make rejecting 4 reasonable.

To start, I'd like to reformulate your thought experiment somewhat to bring it more in line with scenarios that motivated premise 1 in the first place:

New Organ Harvesting: a doctor has 5 patients, themselves victims of attempted murder, in need of organs. Coincidentally, the doctor also has a healthy patient whose organs can save the five victims. The doctor decides to murder the healthy patient to save the other five. How strongly should you hope that their attempt to save the other five succeeds?

(Note: this isn't the only interpretation of your thought experiment; perhaps the protagonist can prevent the murder attempts in the first place. But let's stick with this one for now.)

I like this formulation because it actually illustrates what using people as means entails: I find it difficult to reason about situations where "using people as means" is just stipulated, because it does not actually bring out the relevant intuitions.

Now, I argue, given the illegitimate method through which the doctor saves lives, it can be reasonable to actually discount those benefits of the act as "tainted" by the murder of another person. Consider:

Saved by Organ Harvesting: you are a victim of a murder attempt, successfully saved by the doctor in the scenario above using the organ of the murdered patient. How thankful should you be for this?

Versus:

Saved by Organ Donation: you are a victim of a murder attempt. You are saved by the organ of another person, who signed up for organ donation and died in a car crash. How thankful should you be for this?

It seems to me that it's totally reasonable to not feel very thankful in the first scenario, and indeed to feel that the attempt to save you is "tainted" by using the healthy patient's organ. In contrast, I think you should feel very thankful in the second scenario, albeit sad at another person's untimely death.

This doesn't mean we should discount the benefits of saving people to zero. But perhaps some degree of discounting, such that the difference between Successful and Failed Prevention is about as bad as a generic killing, is justified.

Now, I'm actually not sure it's possible to coherently accommodate the discounting intuition. Presumably, the discount should be applied multiplicatively to the benefits of the action achieved through evil means. Presumably, it is applied only to the expected benefits of the action, not all future consequences: it seems like your future joy matters just as much even if you are saved by organ harvesting. Maybe just those assumptions are unsustainable.

What do you think?

author

Interesting! I think the switch to first-personal feelings about moral "taint" is likely distorting. (It's a well known psychological bias that people associate objects with moral taint, e.g. would feel creeped out by learning that the second-hand sweater they just bought was previously worn by a serial killer. It seems part of our "purity" cognitive module that may serve a social purpose, but obviously isn't tracking fundamental moral truths.)

The crucial moral question isn't so much whether the beneficiaries might feel "tainted" (they might, human psychology is weird) but how strongly impartial spectators should want their lives to be saved (given that all the moral "costs" have already been paid, and cannot be refunded). Even in the Organ Harvesting case, it just seems really clear to me that it would be disrespectful to the five innocents to say that saving each life, at this point and using the tainted means, is worth less than 1/5 of saving an ordinary life.

I think it makes sense to feel a little conflicted about the means, because (at least as a deontologist) you should regret that the tainted means became available in the first place: you regret that the doctor killed the innocent one. But given that that's already done, I don't think it should *at all* weaken your desire that the five now be saved rather than the transplant operations be bungled and all die.

Apr 26 · Liked by Richard Y Chappell

Yeah, I'm also suspicious of "moral taint" considerations. But I think this intuition is broadly applicable, and not just caused by the ickiness of receiving an organ from a murdered patient. Consider that a gift of stolen property feels inherently less valuable than an honestly earned gift, and so on. Consider also that the benefits of terror bombing feel less valuable because of the methods taken to achieve them.

In fact, I was mostly motivated by the doctrine of double effect in my reasoning: I think it nicely explains the difference between achieving a goal through good means that regretfully have evil byproducts (tactical bombing) and achieving it through outright evil means (terror bombing). The benefits of the first come from a good cause and may be straightforwardly weighed against the harms, while the benefits of the second are discounted by their evil cause.

author

I guess I'm precisely trying to push back against the idea that benefits can ever reasonably be "discounted by their evil cause" in this way. On my view, reasonable opposition to terror bombing cannot stem from thinking that the benefits matter less, but only from thinking that [avoiding] the evil matters more. Once the evil is done, though, we should still want the good effects to happen as much as ever. (To have the evil *without* the good would be ever so much worse.) I'm afraid I'm just repeating myself now, though!

Apr 26 · edited Apr 26 · Liked by Richard Y Chappell

Yeah, I guess I disagree. I think "it matters not just what outcomes you achieve, but how you achieve them" and "achieving something through evil means inherently diminishes its value" are very appropriate beliefs for a deontologist. I also think you can't evaluate benefits and costs in these cases separately, even if the cost is already paid, because the benefit is logically and causally connected to the cost (as opposed to cases where the cost is an unfortunate side effect rather than a necessary means).

I still see where you're coming from, and I'm also somewhat suspicious of my intuitions here. I guess I would be able to evaluate them against utilitarian intuitions if there were a complete decision theory incorporating them: then I could just run them against a bunch of cases and see which one produces the least bad results.

Generally, utilitarianism has the advantage of a worked-out decision theory: one can't as easily run into apparent paradoxes inside utilitarianism; instead, the conclusions are just bullets you have to take.

Apr 23 · Liked by Richard Y Chappell

I'm not a professional philosopher, and this isn't something I think you're wrong about, but it's something I often wonder about when reading your posts disputing deontology, and I'd be interested to hear more:

In your ethical theory vs. practice post, and the linked utilitarianism.net article on rights-based objections, you discuss the case of the doctor who murders a patient to save 5. The argument as I understand it goes: in practice, the doctor is highly unlikely to be correct about the consequences of their actions; in particular, taking into account the likely reaction of others, the murder is probably net negative.

But this has always struck me as rather pat, since there's no reason we have to take others' reaction to the doctor as fixed. After all, you could make similar objections to other (imo obviously correct) utilitarian arguments, for example "eliminating slavery might be net negative, not least when you consider the reaction of current slaveholders".

I think the obvious thing for a 19th-century utilitarian faced with that argument to think would be: we should promote the utilitarian worldview until enough people agree with it that they would no longer react negatively in a way that negates the benefits of ending slavery.

So, I think you're still faced with the question: are people _wrong_ to react negatively to the doctor? Would it be better to convince people to accept murdering doctors? Even if the consequences of a doctor murdering a patient for their organs are very negative _now_, should utilitarians be working toward a world where they aren't? Where the doctor is recognized as a humanitarian?

And if not, why not? Simply practical reasons, such as the difficulty of actually convincing people?

More broadly, utilitarianism feels correct to me when I think on the margin, but when I think about enacting broad value change, I think there are a bunch of different options that utilitarianism might recommend, some of which seem fine, some bizarre, and some horrifying. Do you think deontological ideas, or ideas from other ethical frameworks, have a role to play in deciding between different Schelling points far away from our current equilibrium?

author

I think there's an important difference between (i) instituting an organ lottery [the "winner" gets harvested], and (ii) a random person violating rights.

There are plausibly good utilitarian reasons for favoring something like (i). It's not currently politically feasible -- maybe it never will be -- but it seems reasonable for utilitarians to lament this and try to promote the utilitarian worldview so that people better understand why this would actually be a good policy.

By contrast, I don't see a strong utilitarian case for promoting (ii). Rights are really useful! I like Gibbard's classic paper on this: https://philpapers.org/rec/GIBUAH

I just think it's a very robust fact about human nature that we can't trust people with the *discretion* to kill each other only when it would be for the utilitarian best. I think we (reliably and predictably) do much better supporting rights against wanton murder, and *opposing* discretionary killings. I expect this would still be true even if more people had more utilitarian attitudes.


God and Hedonistic Act Utilitarianism. What Matthew Adelstein just said. Also, Simon Rosenqvist is fantastic - https://philpapers.org/rec/ROSHAU

I believe that God exists and I believe that Hedonistic Act Utilitarianism [or Classical (mostly Benthamite) Utilitarianism] is true.


I haven't yet gone through _all_ of your writing, but one obvious question is "why sentient, not sapient?" I.e., why should we weigh, e.g., animals, fetuses, infants, or critically-impaired dementia patients the same way as we weigh sapient, capable-of-thinking beings?

author

I don't know that we should weigh them the same way. I think the value of raw hedonic feels (pleasure vs suffering) plausibly doesn't depend upon what further capacities the being has. But I'm not a hedonist: I think a lot more matters beyond just hedonic feels. Most of the value in life, I think, is only accessible to more cognitively advanced "persons" (or "sapient beings"). So in that sense, I would agree that persons matter more. It's better to save the life of a person than a goldfish, for example.

(The most principled view in this vicinity would also imply significant differences in moral status even within persons. If people tried to implement this in practice, I expect that they would do very badly. So moral equality might be something of a noble lie -- or at least a mere approximation rather than a literal truth.)


"Here’s a recent example."

The example asserts that a specific comment of yours is a "dishonest way to describe" something and that that specific comment instantiates "the ideological fanatic mindset". That's not saying that you, the person, are generally dishonest or that you generally behave like an ideological fanatic.

Your subsequent reply in that thread expresses a norm against proclaiming to know better than another person what that person thinks, yet you immediately break that norm by claiming to know (mistakenly) what I think. I never said or thought that what you wrote in that thread was "the full extent of your thoughts", that I had a claim to your time, or that I knew what you'd "considered at length". My comments were about the arguments written in that comment thread. The thread was prompted by you claiming (mistakenly) to know that those holding a different view "feel nothing" - another example of you breaking the norm you later expressed.

Since then I've read other texts by you, here and on the pro-utilitarian website. My objections to your view were not covered or answered there. I will move my comments to a place where discussion of those kinds of objections to your view can proceed openly, something all available evidence indicates can't happen here.

author

You can write whatever you want elsewhere. Making a new account to evade my comment ban and continue to impose yourself in my space (after I've made it clear that I don't want to interact with you) is creepy stalker behavior. Please respect others' boundaries.
