I believe you are most wrong about moral realism. I don't know exactly where you stand on every auxiliary issue, but:
(a) I don't think there are any good arguments for moral realism, and I think much of moral realism's appeal stems from misleading implications about the supposed practical consequences of antirealism. I think many of these misleading ways of framing antirealism are rhetorical in nature and rooted in biases and misunderstandings of antirealism, and that this is quite similar to how people critical of utilitarianism uniquely frame it as some kind of monstrous and absurd position.
(b) I don't think most nonphilosophers are moral realists or antirealists, and more generally I don't think moral realism enjoys any sort of presumption in its favor, e.g., I don't grant that it's "more intuitive" or that it does a better job of capturing how people are typically disposed to speak or think than antirealism (though I also don't think people are antirealists).
You ask what the most helpful or persuasive point I can make to start you down the right track may be. I don’t know for sure, since I am not super familiar with many of your background beliefs, but I’d start with this: I think there is little good reason to think that most nonphilosophers speak, think, or act like moral realists. Rather, I think moral realism is a position largely confined to academics and people influenced by academics. I think the questions about whether people are moral realists or not are empirical questions, and that the empirical data simply doesn’t support the notion that moral realism is a “commonsense” view. I don’t know where you stand on this issue, but I think it’s an important place to start.
I came to this conclusion after many years of specifically focusing on the psychology of metaethics and in particular the question of whether nonphilosophers are moral realists or not. Early studies suggested that most people would give realist responses to some questions about metaethics and antirealist responses to other questions. However, I came to question the methods used in these studies and launched a large project (which culminated in my dissertation) to evaluate how participants interpreted these questions. I came to the conclusion that most people were not interpreting them as researchers intended (and frequently didn’t interpret them as questions about metaethics at all). I suspect the best explanation for this is that ordinary people don’t have explicit stances or implicit commitments to metaethical theories, and that metaethics has very little to do with ordinary moral thought and language. The case for this is largely derived from my own data and my critiques and analyses of research on this topic. It’d be very difficult to summarize it but I could elaborate on specific points.
The more general takeaway is that I don’t think moral realism enjoys any special kind of intuitive priority, and I suspect that the reason why some people are disposed towards moral realism has more to do with path dependent idiosyncrasies in their particular cultural backgrounds and educations.
I'm not personally at all deferential to what non-philosophers think about philosophical questions, so that isn't a crux for me. I'm more concerned about anti-realism struggling to accommodate fundamental fallibility. It just seems to me that we should all regard ourselves (both individually and collectively) as potentially mistaken about what's truly important in life.
I also don't see much motivation for anti-realism about ethics that wouldn't quickly spill out into the rest of philosophy. Like, it just seems undeniable that there can be abstract truths (e.g. in answer to philosophical questions) about which people can disagree and any of us could be wrong about the answers.
I don’t really take it to be a data point in need of explanation that our moral views “could be wrong” in the sense of being factually mistaken. What does seem pressing, and in the same vicinity, is that we be able to explain or accommodate the fact that we take our moral judgements to be revisable through reflection, and that there’s some concern that our current moral judgements are not the ones we would make upon full reflection. I think that can be accommodated by almost any antirealist view. For example, if a value is just a desire or some non-cognitive state, I might well be concerned about the possibility that I wouldn’t possess that same desire or non-cognitive state upon fuller reflection, and so I subject such states (and their subjects) to rigorous reflection.
Even if you do take “our moral views could be factually mistaken” to be a data point, I think some antirealist views could accommodate that: for example, a David Lewis-type view on which what’s valuable is just what I’d be disposed to value under ideal conditions. Surely I can be wrong about what’s valuable on a construal like that.
Yeah, that's fair. We might distinguish "weak" and "strong" forms of fallibility. I agree both that (i) weak fallibility, such that moral reflection is worthwhile, is the most important component to accommodate, and (ii) various anti-realists can accommodate this.
But I also find strong fallibility -- such that even our procedurally idealized selves are not guaranteed to get things right (cf. "ideally coherent Caligula") -- so plausible that it strikes me as a strong reason to favor moral realism.
I think the ideally coherent eccentric type of thought experiment is interesting. My own intuition about such cases is that such people are acting irrationally, wrongly, not doing what they ought to do, etc., but I don’t think that tells me that they’re getting any stance-independent facts wrong. Rather, it tells me that I use words such as “irrational,” “wrong,” etc. in ways that refer to my own standards and values, the norms of behavior that I accept, and so on. So when I say that Caligula is irrational, I’m saying or expressing something about how her behavior isn’t in line with my standards. On such an account, it may be pointed out that when Caligula calls herself rational, she’s not actually disagreeing with me. But I don’t have any intuition that this is a big issue, and definitely not as big an issue as not being able to call Caligula irrational.
I see it somewhat similarly, though I think I'd just dispense with calling them irrational. I find nothing irrational about a person having different values from me and acting in accordance with those values, no matter how weird I find those values. I suspect that, insofar as people do judge such beings as "irrational," this could be due to an inability to fully distance ourselves from our own values. When I imagine *myself* counting blades of grass forever, or whatever, that seems really unappealing. Or I may think of a typical human doing so, and recognize that they may be suffering or missing out by their own lights.
I suspect if we regularly encountered aliens or AGIs with genuinely weird values, we'd have time to internalize and become comfortable with beings who genuinely just don't care about the things we care about, and insofar as those beings acted efficaciously towards achieving their ends, we may find their actions weird, or threatening, or a big waste of time (relative to our values), but I can't see myself judging them to be making mistakes. I just don't see any reason to think that other beings are "incorrect" if they don't care about the things I care about. That strikes me as a category mistake. What we value just isn't the sort of thing I think we could be incorrect about.
Do you draw any distinction between values and tastes/preferences, or are they all equivalent to you? I can certainly distance myself from my own tastes/preferences, and even entertain hypotheticals where I or others value different things and are perfectly happy in doing so. There's no failure of imagination there. Still, in *evaluating* those hypotheticals, I draw upon my (actual) values (which are not purely hedonistic), so some will strike me as worse than the imagined agent realizes.
I see part of the appeal of realism as being that it can explain how this practice is reasonable: if my values are "getting it right" (as I hope my beliefs, in general, are), then I can continue to apply them even to situations in which the state of *my having those values* is no longer present. (An anti-realist *could* also endorse this practice -- they could simply *insist* that their values have universal applicability in this way -- but it seems parochial or arbitrary if there's no chance that their values are objectively correct, such that the targets of their negative evaluations are actually making any *mistake*. Too much like criticizing someone for a mere difference in taste, say.)
I wouldn’t say they’re all equivalent. I have more fundamental goals (e.g., be a good person, be healthy), but I also have impulses, proximal desires, compulsions, and other mental states that I can take a step back from and regulate (or fail to do so) with respect to more fundamental goals. All of these could be described as expressions of preference or value. The phenomenology of all of these states isn’t the same, and they probably result from different cognitive systems. I also have views about which states I’d like to be in the driver’s seat, but I don’t always act in accordance with my more fundamental goals. I think phenomenology is at best only a rough guide to what’s going on cognitively and it’d probably be best to look at the relevant cognitive science on the matter.
I don’t see any appeal in realism. I don’t think the practice you describe is reasonable. I don’t think it makes sense to think of values as “getting it right.” It doesn’t seem to me that there’d be anything to get it right about. Since I don’t think such practices are reasonable, on my view a good account is one that says they’re not (or at least doesn’t say they are).
I don’t understand what the issue is with the antirealist that you describe in the parenthetical. There is nothing any more or less parochial or arbitrary about having preferences about how other people act than about what I eat. One can simply have preferences about anything. Your framing makes it seem like there’s something especially odd or objectionable about certain preferences or values as an antirealist, but it’s not clear to me what you mean or why you think that.
Most of us have an antecedent commitment to certain tolerance norms. E.g. it strikes us as perverse to form (let alone act upon) strong preferences about others' tastes. Liberal norms dictate that we "mind our own business" to an extent, and only see others' preferences or behavior as properly evaluable insofar as they affect others (or are otherwise *morally* significant). Of course, as a thorough-going anti-realist, you'll just say that this is a mere meta-preference we have, on a par with any other. But it reinforces my sense that anti-realism is deeply alien (at least to me, but in this respect I also suspect to most others).
I'm a bit puzzled by these remarks. Maybe you can help clarify the points you're making here. You make some remarks about what "most of us" think regarding tolerance norms, but I'm not clear on what you take most of us to think: are you making a claim about what most of us think with respect to normative ethics, metaethics, or both?
You end up leveraging these remarks to make a point about how you have a sense that antirealism is "deeply alien." But I'm not sure what it is about antirealism that you think is deeply alien, or to whom. You end by saying "at least to me" but you add "but in this respect I also suspect to most others."
So is the claim that antirealism is alien to how most people think? Do you think your realist views are alien to how most people think, or do you think they aren't, and better accord with how most people think than antirealism?
Okay, that's fair if that's not a crux.
I'm not sure I understand your concern about antirealism struggling to accommodate fundamental fallibility. I don't see my own view as having any struggle with this at all, because I suspect that conditional on my view there wouldn't be anything to be fallible or infallible about. Could you clarify what you mean by "truly"?
I probably am an antirealist or a quietist about most other things, too, so that probably wouldn't be a big concern for me.
Like, suppose we were all wondering whether wild animal suffering matters. And suppose we concluded that it doesn't. So we're unbothered by the prospect of wild animal suffering, and we don't bother to euthanize badly injured animals, etc. It seems to me that we could be mistaken in this case: maybe we *should* be more bothered by wild animal suffering, and we should euthanize badly injured animals rather than leave them to a long drawn-out death. But anti-realism, I take it, cannot accommodate the possibility that this could all be true even though (as stipulated in the scenario) none of us accept it.
If you're an anti-realist about the truth of anti-realism itself, does that mean that realists make no mistake in rejecting your view?
I appreciate the example, but I'm still a bit concerned about your prior use of "truly." I think things "truly" matter on an antirealist framework; I just don't think "truly" mattering requires stance-independence. I wouldn't say, for instance, that my family doesn't "truly matter" to me. I don’t think either of us wants to get caught up in terminological disputes, so I’m happy to set this term aside. But I want to flag its use, not because I want to get hung up on the word “truly,” but because much of my defense of antirealism and my objections to moral realism center on what I take to be incautious use of language that gives a false impression of something undesirable or objectionable about antirealism. I think the use of “truly” could be leveraged, if unintentionally, for such purposes.
For instance, I would object to someone saying that, as an antirealist, I do not think anything “truly matters.” Since I think antirealism is the correct position, the sense in which I think things truly matter is the antirealist sense, not the realist sense. If anything, I might even say I think that realists don’t think things truly matter. I’m not being glib about that: I don’t like the “realist” and “antirealist” framing. I do think value is real and that things really and truly matter. I just don’t think this involves stance-independence.
To address the example you gave: my main concern is that your objection turns on antirealists being insensitive to a problem that would only be a problem if realism were true. For comparison, imagine you were having a dispute with a person who believed in Bigfoot. Supposing you don’t believe in Bigfoot, they could say the following:
“I'm concerned about Bigfoot skeptics struggling to accommodate fundamental fallibility. It just seems to me that we should all regard ourselves (both individually and collectively) as potentially mistaken about Bigfoot’s shoe size.”
As a Bigfoot skeptic, one would not think there was any such being, and so no shoe size for it to have; thus one couldn’t be mistaken about whether Bigfoot’s shoe size was, say, 26 rather than 31.
The Bigfoot believer may point out that Bigfoot skeptics seem to be infallible regarding judgments about Bigfoot’s shoe size. This seems like a strange objection: of course they can’t be wrong about what Bigfoot’s shoe size is conditional on Bigfoot not existing: there isn’t anything for them to be wrong about!
Just so, I don’t think there are stance-independent moral facts. The type of fallibility you’re concerned with seems to be the possibility of being mistaken about the stance-independent normative moral facts. One moral realist may think that X is wrong, and another may think X is not wrong. And both may acknowledge their fallibility on the matter. However, as an antirealist, I acknowledge no such fallibility. Yet the antirealist isn’t “struggling” in this regard. One cannot struggle with a problem if there is no problem to struggle with.
As an antirealist, I would be fallible about what the stance-independent moral facts were, if moral realism were true. And I am fallible about whether or not moral realism is true. So I simply don’t see a struggle here: I would be fallible about what the stance-independent moral facts were, if there were such things.
I’m not an antirealist about the truth of antirealism (though I may be an antirealist in some respects; I wouldn’t typically call myself one, though). I think antirealism about certain realist positions is true. With respect to what I’m an antirealist about: I’m an antirealist about all stance-independent normativity, and about at least most metaphysics.
I guess this is just a bedrock disagreement: it seems clear to me that there is a real question whether we ought to care about (e.g.) wild animal suffering. I don't expect to be able to persuade you on this point. But it is the main reason why I am a moral realist: because I think these sorts of questions are real questions, and not like asking about Bigfoot's shoe size.
Do you think there's a way for us to navigate or resolve that disagreement? I don't know if it's simply the result of incorrigible and perhaps inscrutable psychological differences, or whether we approach philosophical questions from different starting points or employ different methods that we could discuss.
I suspect many philosophical problems remain intractable due to differences in more fundamental beliefs, commitments, or attitudes. And it may be worthwhile to discuss what those might be.
That's fair. Thanks for engaging with me on the issue. I would definitely be interested in having a chat some time, since I think an actual conversation would be a lot more efficient than an online exchange.
Personally, I am very interested in understanding the methods and metaphilosophical perspective of philosophers I disagree with, so I'd be curious to hear more about your perspective on those things. I've read your blog posts here and there over the years but not systematically to build up a good sense of your overall approach. If there's anything specific that would give me a sense of where you stand on foundational issues I'd be happy to check that out.
As an aside: I tend to emphasize disagreement, but the only normative moral theory I ever endorsed was utilitarianism and I've always felt it was unfairly maligned, so that may be a place where there'd be considerable agreement.
I'm not sure how you come to the conclusion that most nonphilosophers are not moral realists. To start with, most nonphilosophers believe in a Creator God that endowed the universe with moral properties.
Many theists seem to be divine subjectivists (cf. "divine command theory" -- a cultural relativist might similarly call their view "cultural command theory").
I don't see any good reason to think most theists are moral realists. Simply because a person believes in a creator god doesn't mean that they think God endowed the universe with, specifically *stance-independent* moral properties. I don't think theism entails or even strongly implies moral realism; it's consistent with both antirealism and with taking no stance on the matter at all.
I've seen little indication that most religious systems explicitly endorse moral realism, and even if they did, it would still be an open empirical question whether most laypeople endorsed that particular article of faith, since the fact that something is a part of official religious doctrine doesn't tell us what proportion of adherents to the religion believe it, or are even aware of it. Because of this, I don't think it's reasonable to infer from the fact that people are theists that they subscribe to any specific doctrines or philosophical positions. For instance, if someone says they're a "Catholic" on a survey, it doesn't follow that they believe all official Catholic doctrine. What any given individual actually believes is an empirical question and the only way to know whether theists are moral realists is by conducting relevant empirical research.
With respect to the empirical research that has been done, there is little good evidence most of the populations that have been studied are moral realists. So at present I think we're dealing with an empirical question for which there is little empirical evidence that convincingly establishes that most people are moral realists.
I believe you are most wrong about moral realism. I don't know exactly where you stand on every auxiliary issue, but:
(a) I don't think there are any good arguments for moral realism, and I think much of moral realism's appeal stems from misleading implications about the supposed practical consequences of antirealism. I think many of these misleading ways of framing antirealism are rhetorical in nature and rooted in biases and misunderstandings of antirealism, and that this is quite similar to how people critical of utilitarianism uniquely frame it as some kind of monstrous and absurd position.
(b) I don't think most nonphilosophers are moral realists or antirealists, and more generally I don't think moral realism enjoys any sort of presumption in its favor, e.g., I don't grant that it's "more intuitive" or that it does a better job of capturing how people are typically disposed to speak or think than antirealism (though I also don't think people are antirealists).
You ask what the most helpful or persuasive point I can make to start you down the right track may be. I don’t know for sure, since I am not super familiar with many of your background beliefs, but I’d start with this: I think there is little good reason to think that most nonphilosophers speak, think, or act like moral realists. Rather, I think moral realism is a position largely confined to academics and people influenced by academics. I think the questions about whether people are moral realists or not are empirical questions, and that the empirical data simply doesn’t support the notion that moral realism is a “commonsense” view. I don’t know where you stand on this issue, but I think it’s an important place to start.
I came to this conclusion after many years of specifically focusing on the psychology of metaethics and in particular the question of whether nonphilosophers are moral realists or not. Early studies suggested that most people would give realist responses to some questions about metaethics and antirealist responses to other questions. However, I came to question the methods used in these studies and launched a large project (which culminated in my dissertation) to evaluate how participants interpreted these questions. I came to the conclusion that most people were not interpreting them as researchers intended (and frequently didn’t interpret them as questions about metaethics at all). I suspect the best explanation for this is that ordinary people don’t have explicit stances or implicit commitments to metaethical theories, and that metaethics has very little to do with ordinary moral thought and language. The case for this is largely derived from my own data and my critiques and analyses of research on this topic. It’d be very difficult to summarize it but I could elaborate on specific points.
The more general takeaway is that I don’t think moral realism enjoys any special kind of intuitive priority, and I suspect that the reason why some people are disposed towards moral realism has more to do with path dependent idiosyncrasies in their particular cultural backgrounds and educations.
I'm not personally at all deferential to what non-philosophers think about philosophical questions, so that isn't a crux for me. I'm more concerned about anti-realism struggling to accommodate fundamental fallibility. It just seems to me that we should all regard ourselves (both individually and collectively) as potentially mistaken about what's truly important in life.
I also don't see much motivation for anti-realism about ethics that wouldn't quickly spill out into the rest of philosophy. Like, it just seems undeniable that there can be abstract truths (e.g. in answer to philosophical questions) about which people can disagree and any of us could be wrong about the answers.
I don’t really take it to be a data point in need of explanation that our moral views “could be wrong” in the sense of being factually mistaken. What does seem pressing, and sort of in the same vicinity, is that we be able to explain/accommodate the fact that we take our moral judgements to be revisable through reflection, and that there’s some concern about our current moral judgements not being the ones that we would make upon full reflection. I think that can be accommodated by almost any anti realist view. For example if a value is just a desire or some non cognitive state, I might well be concerned about the possibility that I wouldn’t possess that same desire or non cognitive state upon fuller reflection, and so I subject such states/their subjects to rigorous reflection.
Even if you do take “our moral views could be factually mistaken” to be a data point, I think some anti realist views could accommodate that- for example, a David Lewis type view where what’s valuable is just what id be disposed to value under ideal conditions. Surely I can be wrong about what’s valuable on a construal like that.
Yeah, that's fair. We might distinguish "weak" and "strong" forms of fallibility. I agree both that (i) weak fallibility, such that moral reflection is worthwhile, is the most important component to accommodate, and (ii) various anti-realists can accommodate this.
But I also find strong fallibility -- such that even our procedurally idealized selves are not guaranteed to get things right (cf. "ideally coherent Caligula") -- so plausible that it strikes me as a strong reason to favor moral realism.
I think the ideally coherent eccentric type thought experiments are interesting. I think that my own intuition about such cases is that such people are acting irrationally, wrongly, not doing what they ought to do, etc. but I don’t think that tells me that they’re getting any stance independent facts wrong- rather, that tells me that I use words such as “irrational”, “wrong”, etc in ways that refer to my own standards/values, the norms of behavior that I accept, etc. So when I say that Caligula is irrational, I’m saying or expressing something regarding how her behavior isn’t in line with my standards. On such an account, it may be pointed out that when Caligula calls herself rational, then she’s not actually disagreeing with me- but I don’t have any intuition that this is a big issue, definitely not as big of an issue as not being able to call Caligula irrational.
I see it somewhat similarly, though I think I'd just dispense with calling them irrational. I find nothing irrational about a person having different values from me and acting in accordance with those values, no matter how weird I find those values. I suspect that, insofar as people do judge such beings as "irrational," this could be due to an inability to fully distance ourselves from our own values. When I imagine *myself* counting blades of grass forever, or whatever, that seems really unappealing. Or I may think of a typical human doing so, and recognize that they may be suffering or missing out by their own lights.
I suspect if we regularly encountered aliens or AGIs with genuinely weird values, we'd have time to internalize and become comfortable with beings who genuinely just don't care about the things we care about, and insofar as those beings acted efficaciously towards achieving their ends, we may find their actions weird, or threatening, or a big waste of time (relative to our values), but I can't see myself judging them to be making mistakes. I just don't see any reason to think that other beings are "incorrect" if they don't care about the things I care about. That strikes me as a category mistake. What we value just isn't the sort of thing I think we could be incorrect about.
Do you draw any distinction between values and tastes/preferences, or are they all equivalent to you? I can certainly distance myself from my own tastes/preferences, and even entertain hypotheticals where I or others value different things and are perfectly happy in doing so. There's no failure of imagination there. Still, in *evaluating* those hypotheticals, I draw upon my (actual) values (which are not purely hedonistic), so some will strike me as worse than the imagined agent realizes.
I see part of the appeal of realism is that it can explain how this practice is reasonable: if my values are "getting it right" (like I hope my beliefs, in general, are) then I can continue to apply them even to situations in which the state of *my having those values* is no longer present. (An anti-realist *could* also endorse this practice--they could simply *insist* that their values have universal applicability in this way--but it seems a bit parochial or arbitrary-seeming if there's no chance that their values are objectively correct, such that the targets of their negative evaluations are actually making any *mistake*. Too much like criticizing someone for having a mere difference in taste, say.)
I wouldn’t say they’re all equivalent. I have more fundamental goals (e.g., be a good person, be healthy), but I also have impulses, proximal desires, compulsions, and other mental states that I can take a step back from and regulate (or fail to do so) with respect to more fundamental goals. All of these could be described as expressions of preference or value. The phenomenology of all of these states isn’t the same, and they probably result from different cognitive systems. I also have views about which states I’d like to be in the driver’s seat, but I don’t always act in accordance with my more fundamental goals. I think phenomenology is at best only a rough guide to what’s going on cognitively and it’d probably be best to look at the relevant cognitive science on the matter.
I don’t see any appeal in realism. I don’t think the practice you describe is reasonable. I don’t think it makes sense to think of values as “getting it right.” It doesn’t seem to me that there’d be anything to get it right about. Since I don’t think such practices are reasonable, on my view a good account is one that says they’re not (or at least doesn’t say they are).
I don’t understand what the issue is with the antirealist that you describe in the parenthetical. There is nothing any more or less parochial or arbitrary about having preferences about how other people act than about what I eat. One can simply have preferences about anything. Your framing makes it seem like there’s something especially odd or objectionable about certain preferences or values as an antirealist, but it’s not clear to me what you mean or why you think that.
Most of us have an antecedent commitment to certain tolerance norms. E.g. it strikes us as perverse to form (let alone act upon) strong preferences about others' tastes. Liberal norms dictate that we "mind our own business" to an extent, and only see others' preferences or behavior as properly evaluable insofar as they affect others (or are otherwise *morally* significant). Of course, as a thorough-going anti-realist, you'll just say that this is a mere meta-preference we have, on a par with any other. But it reinforces my sense that anti-realism is deeply alien (at least to me, but in this respect I also suspect to most others).
I'm a bit puzzled by these remarks. Maybe you can help clarify the points you're making here. You make some remarks about what "most of us" think regarding tolerance norms, but I'm not clear on what you take most of us to think: are you making a claim about what most of us think with respect to normative ethics, metaethics, or both?
You end up leveraging these remarks to make a point about how you have a sense that antirealism is "deeply alien." But I'm not sure what it is about antirealism that you think is deeply alien, or to whom. You end by saying "at least to me" but you add "but in this respect I also suspect most others."
So is the claim that antirealism is alien to how most people think? Do you think your realist views are alien to how most people think, or do you think they aren't, and better accord with how most people think than antirealism?
Okay, that's fair if that's not a crux.
I'm not sure I understand your concern about antirealism struggling to accommodate fundamental fallibility. I don't see my own view as having any struggle with this at all because I suspect that conditional on my view there wouldn't be anything to be fallible or infallible about. Could you clarify what you mean by "truly"?
I probably am an antirealist or a quietist about most other things, too, so that probably wouldn't be a big concern for me.
Like, suppose we were all wondering whether wild animal suffering matters. And suppose we concluded that it doesn't. So we're unbothered by the prospect of wild animal suffering, and we don't bother to euthanize badly injured animals, etc. It seems to me that we could be mistaken in this case: maybe we *should* be more bothered by wild animal suffering, and we should euthanize badly injured animals rather than leave them to a long drawn-out death. But anti-realism, I take it, cannot accommodate the possibility that this could all be true even though (as stipulated in the scenario) none of us accept it.
If you're an anti-realist about the truth of anti-realism itself, does that mean that realists make no mistake in rejecting your view?
I appreciate the example, but I'm still a bit concerned about your prior use of "truly." I think things "truly" matter on an antirealist framework; I just don't think "truly" mattering requires stance-independence. I wouldn't say, for instance, that my family doesn't "truly matter" to me. I don't think either of us wants to get caught up in terminological disputes, so I'm happy to set the term aside. But I want to flag its use, not because I'm hung up on the word "truly," but because much of my defense of antirealism, and many of my objections to moral realism, center on what I take to be incautious uses of language that give a false impression that there is something undesirable or objectionable about antirealism. The use of "truly" could be leveraged, if unintentionally, for such purposes.
For instance, I would object to someone saying that, as an antirealist, I do not think anything “truly matters.” Since I think antirealism is the correct position, the sense in which I think things truly matter is the antirealist sense, not the realist sense. If anything, I might even say I think that realists don’t think things truly matter. I’m not being glib about that: I don’t like the “realist” and “antirealist” framing. I do think value is real and that things really and truly matter. I just don’t think this involves stance-independence.
To address the example you gave: my main concern is that your objection turns on antirealists being insensitive to a problem that would only be a problem if realism were true. For comparison, imagine you were having a dispute with a person who believed in Bigfoot. Supposing you don’t believe in Bigfoot, they could say the following:
“I'm concerned about Bigfoot skeptics struggling to accommodate fundamental fallibility. It just seems to me that we should all regard ourselves (both individually and collectively) as potentially mistaken about Bigfoot’s shoe size.”
As a Bigfoot skeptic, one would not think there was any such being with a shoe size at all, and thus one couldn't be mistaken about whether Bigfoot's shoe size was, say, 26 rather than 31.
The Bigfoot believer may point out that Bigfoot skeptics seem to regard themselves as infallible in their judgments about Bigfoot's shoe size. This is a strange objection: of course they can't be wrong about what Bigfoot's shoe size is, conditional on Bigfoot not existing; there isn't anything for them to be wrong about!
Just so, I don’t think there are stance-independent moral facts. The type of fallibility you’re concerned with seems to be the possibility of being mistaken about the stance-independent normative moral facts. One moral realist may think that X is wrong, and another may think X is not wrong. And both may acknowledge their fallibility on the matter. However, as an antirealist, I acknowledge no such fallibility. Yet the antirealist isn’t “struggling” in this regard. One cannot struggle with a problem if there is no problem to struggle with.
As an antirealist, I would be fallible about what the stance-independent moral facts were, if moral realism were true. And I am fallible about whether or not moral realism is true. So I simply don’t see a struggle here: I would be fallible about what the stance-independent moral facts were, if there were such things.
I’m not an antirealist about the truth of antirealism (though I may be an antirealist in some other respects; I wouldn’t typically call myself one, though). I think antirealism about certain putatively realist domains is true. With respect to what I’m an antirealist about: I’m an antirealist about all stance-independent normativity, and about at least most metaphysics.
I guess this is just a bedrock disagreement: it seems clear to me that there is a real question whether we ought to care about (e.g.) wild animal suffering. I don't expect to be able to persuade you on this point. But it is the main reason why I am a moral realist: because I think these sorts of questions are real questions, and not like asking about Bigfoot's shoe size.
Do you think there's a way for us to navigate or resolve that disagreement? I don't know if it's simply the result of incorrigible and perhaps inscrutable psychological differences, or whether we approach philosophical questions from different starting points or employ different methods that we could discuss.
I suspect many philosophical problems remain intractable due to differences in more fundamental beliefs, commitments, or attitudes. And it may be worthwhile to discuss what those might be.
Hmm, not sure!
That's fair. Thanks for engaging with me on the issue. I would definitely be interested in having a chat some time, since I think an actual conversation would be a lot more efficient than an online exchange.
Personally, I am very interested in understanding the methods and metaphilosophical perspective of philosophers I disagree with, so I'd be curious to hear more about your perspective on those things. I've read your blog posts here and there over the years, but not systematically enough to build up a good sense of your overall approach. If there's anything specific that would give me a sense of where you stand on foundational issues, I'd be happy to check that out.
As an aside: I tend to emphasize disagreement, but the only normative moral theory I ever endorsed was utilitarianism and I've always felt it was unfairly maligned, so that may be a place where there'd be considerable agreement.
I'm not sure how you come to the conclusion that most nonphilosophers are not moral realists. To start with, most nonphilosophers believe in a Creator God that endowed the universe with moral properties.
Many theists seem to be divine subjectivists (cf. "divine command theory" -- a cultural relativist might similarly call their view "cultural command theory").
I don't see any good reason to think most theists are moral realists. Simply because a person believes in a creator god doesn't mean that they think God endowed the universe with, specifically *stance-independent* moral properties. I don't think theism entails or even strongly implies moral realism; it's consistent with both antirealism and with taking no stance on the matter at all.
I've seen little indication that most religious systems explicitly endorse moral realism. Even if they did, it would still be an open empirical question whether most laypeople endorsed that particular article of faith: the fact that something is part of official religious doctrine doesn't tell us what proportion of adherents believe it, or are even aware of it. Because of this, I don't think it's reasonable to infer from the fact that people are theists that they subscribe to any specific doctrine or philosophical position. For instance, if someone identifies as "Catholic" on a survey, it doesn't follow that they believe all official Catholic doctrine. What any given individual actually believes is an empirical question, and the only way to know whether theists are moral realists is to conduct the relevant empirical research.
As for the empirical research that has been done: there is little good evidence that most of the populations studied so far are moral realists. So at present we're dealing with an empirical question, and the available evidence does not convincingly establish that most people are moral realists.