> I think it’s important that moral philosophers be rationalists (in the above sense). Sentimentalism (as understood here) is a kind of anti-philosophy, a refusal to reflect systematically on what matters. But we need such reflection, if we are to have any hope of uncovering new truths, or improving upon our untutored reactions. While it’s always possible for systematic thought to lead one further astray—an inconsistent Nazi is better than a consistent one!—careful philosophical inquiry remains our most reliable means of non-accidentally improving our epistemic position on moral matters. Or so I believe.
I know I've made this point in your comment section before, but the argument expressed in this paragraph still just seems bizarre to me.* Yes, your 'sentimentalism' is an opposition to systematic ethical theory (although it is not a 'refusal', as typically 'sentimentalists' have cogent arguments for why systematic theorising is a mistaken practice; they don't just declare that it's icky and for girls) - that much is true. But it simply does not follow that we must do such systematic theory 'if we are to have any hope of uncovering new truths, or improving upon our untutored reactions'. Moral thinking can proceed unaided by systematic theory - Arendt's 'thinking without a bannister' - and still allow us to improve upon naïveté and irrationality.
To give just one example, Williams, who you mention, famously made a quite important intervention to try to rescue the reputation of shame as a moral emotion, arguing that it could be much more productive and (indeed) conducive to self-respect and moral agency than has typically been assumed. None of his reasoning assumes anything like systematic moral theory, and his conclusions cannot be reconstructed as a systematic theory, but - if his arguments are sound - they offer a powerful 'uncovering' of truth and the possibility of 'improving upon our untutored reactions'. If you want to deny this, you have to deny that his arguments are sound (not an unreasonable position), and thus have to get into the first-order issues at play - which is already to admit that there is first-order substance to non-systematic ethical thinking.
Much the same, of course, could be said of Arendt or (certain readings of) Anscombe, and of many forms of Humeanism. Indeed, 'sentimentalism' is no more an 'anti-philosophy' than Humeanism is: arguing that a certain style of reasoning (viz., systematic moral theory) has limits is not the same as a refusal to think, and indeed is quite the opposite (https://personal.lse.ac.uk/ROBERT49/teaching/ph103/pdf/Hume_1748_Enquiry12_OnAcademicalOrSkepticalPhilosophy.pdf). (It is worth noting that Hume was a sentimentalist in the no-scare-quotes sense!)
There are, of course, arguments to be made for systematic moral theorising - for example, various full-blooded forms of moral realism seem to entail sufficiently strong deontic logics to render particularist 'sentimentalism' incoherent. But the idea that we should back systematic theory because without it we would have no way to get 'better' at morality** seems completely empty to me - first, because (as mentioned above) it's not the only way to think about ethics; second, because it's not clear that systematic moral philosophy actually has a particularly great track record of contributing to 'better' ethics.
*The last time I commented in this vein, you said you weren't making an argument, but in this paragraph it pretty clearly is an argument.
**I use this vague formulation to avoid the disputes about moral knowledge that follow from your use of the term 'epistemic position', which could very easily be accused of begging the question.
Fair enough. It's certainly true that sentimentalist anti-theorists can still offer incremental improvements upon our untutored reactions. But they do seem essentially limited in scope, and unwilling to countenance the more radical moral reforms that systematic theorizing might urge upon us. And I think we ought to be willing to at least consider such systematic revisions, even when they aren't emotionally appealing.
For some concrete examples of (what I see as) complacent sentimentalism, see the dismissive criticisms of EA from philosophers like Amia Srinivasan, Alice Crary, and Kathleen Stock:
https://www.lrb.co.uk/the-paper/v37/n18/amia-srinivasan/stop-the-robot-apocalypse
https://www.radicalphilosophy.com/article/against-effective-altruism
https://unherd.com/2022/09/effective-altruism-is-the-new-woke/
It's easy enough to identify instances of what you'd see as complacent sentimentalism, but one can reject both realism and sentimentalism. I'm a moral antirealist and simultaneously a proponent of longtermism, and I am very disinclined to favor moral positions merely because they're emotionally appealing.
The same could be true for many other antirealists. I don't know. That seems like an empirical question. Which is why I'd want to emphasize a more general point:
questions about the psychological consequences of different metaethical positions are, ultimately, empirical questions. Even if certain types of sentimentalism led to complacency, that doesn't mean any and all rejections of rationalist approaches, or of moral realism, would lead to complacency.
There is, at the same time, a kind of reverse, potentially undesirable consequence of not favoring antirealist or sentimentalist views. What if we would consider ourselves, on reflection, to be better off if we endorsed normative moral positions that more closely matched our sentiments, or our subjective preferences, but we don't do so because we mistakenly believe that normative moral theories deserve serious attention, and ought potentially to factor into our deliberations? In other words, if moral antirealists like myself are right, moral realists may be wasting a lot of time considering moral systems that they wouldn't, on reflection, endorse or want to endorse if they were moral antirealists.
In any case, I'm skeptical that antirealism will turn out to have substantive negative practical consequences - perhaps some associated views would, but not antirealism itself, or what I'd take to be a more accurate model of morality, and of normativity more generally.
Yes, agreed that it's an empirical question. I'm merely suggesting that there's a stronger rational connection between anti-realism and sentimentalist complacency, in that it *makes sense* for (at least certain kinds of) anti-realists to (often) be more complacent than realists. But even if I'm right about that, nothing immediately follows since people don't always make sense, and there might be other (e.g. sociological) factors that have more influence.
On the "reverse" potential risk that you raise, one interesting asymmetry here is that if realism is true, it *really matters* that we get it right. But if realism is false, it doesn't really matter that we get it wrong. At worst, we merely get less of what we want. But it then doesn't matter whether we get more or less of what we want. So the stakes are much lower if realism is false than if it is true. That might give us more practical reason to guard against the risks of messing up conditional on realism being true, compared to the risks of messing up conditional on it being false.
>"the stakes are much lower if realism is false than if it is true"
This seems exactly backwards to me, for reasons similar to those I mentioned in the other thread. If non-naturalist moral realism is true, then there must be some objectively correct axiology, so I have reason to find and adopt a self-consistent axiology that conforms as closely as possible to my basic intuitions about what "really matters." But the best candidate for such an axiology is (roughly) nihilism. So if I am spending time thinking about what "really matters," then I am spending time thinking about the fact that nothing really matters at all. But if I am thinking about what matters *to me*, then I am thinking about the rich variety of things I care about and can direct my actions towards.
So if realism is true, the "stakes" are precisely nothing; if realism is false, the stakes are everything I care about.
(I realize that many moral realists reject nihilism; however, I cannot see that they have any grounds for doing so that do not rest on arbitrary stipulations or linguistic conventions. My own desires and preferences seem to provide a stronger foundation for moral thinking, and a *much* stronger foundation for moral action, than anything "objective" could possibly do.)
The thing about it "really mattering" if realism is true is that things might "really" matter that don't matter-to-us, in the subjective sense that I, or sentimentalists, care about. What if the objective moral facts turn out to require us to do things radically at odds with what we want to do? What if they require us to scream at tables or destroy the universe? For me, it merely matters whether they align with what I want at all.
This notion of things "really" mattering is a big part of why I reject realism: It's not just that I think it's false, but that I wouldn't care if it were true. I don't care about what "really" matters, I care only about what matters to me. And I don't accept the framing that if realists are right, that they've captured what "really" matters.
What realism captures is what stance-independently matters, but why should I accept the verbal framing that this is what "really" matters, other than in the sense that by "really" we just mean stance-independently? To say the stance-independent moral facts are what "really" matters gives the impression that this kind of mattering is somehow more important or better in some way. I don't think it is.
For instance, if I endorse utilitarianism, and some form of deontology were “true” in a realist sense, I would have no more inclination to comply with the deontological moral facts than I would if antirealism were true.
When you say "But if realism is false, it doesn't really matter that we got it wrong,” it depends what you mean by “really matter.” It doesn’t stance-independently matter, but it matters to me. And it mattering to me is precisely the kind of mattering that I think really matters. That is, I think antirealist conceptions of things mattering are what really matters, and realist conceptions don’t really matter.
Likewise, when you say, “So the stakes are much lower if realism is false than if it is true,” I don’t agree. This frames the stakes in terms of stakes being high or low with respect to a realist’s conception of value, which the antirealist can (and I do) reject. As such, I also do not concede that the stakes are lower.
As a result, not only am I not willing to grant that there are stance-independent facts, I am also not willing to grant realists free rein to frame the dialectic in terms that sound favorable - that is, for realists to claim that if they're correct, they've identified what "really" matters. To say that, if realism is true, things “really” matter is a framing device that makes realism seem more appealing for reasons other than its plausibility.
It's not clear to me at all what the sentimentalism / rationalism distinction has to do with the radical / incremental distinction. Surely systematic theorising can fall prey just as easily to an idealisation of our naïve intuitions? And on the other hand, an openness to emotional connections can create the potential for quite radical shifts in moral perspective - as, say, was experienced by many white Americans reading Uncle Tom's Cabin before the US Civil War (or, to give an example less bounded by time, as has been experienced by generations of smart, precocious young men reading Augustine for the first time). Your descriptions of how sentimentalism can be complacent and rationalism radical do indeed seem to map onto real mechanisms, but they're not the only mechanisms at play here. Zooming out, there just doesn't seem to be any general correlation between being radical and being systematic,* and indeed Srinivasan's objection to EA is precisely that it is far too complacent about some things. You might argue that it's radical about others, but now you and she are disagreeing on 'what are the most important things to be radical about in ethics' - which brings us back to the object level.
* Unless you're implicitly defining 'radical' to mean 'radical in a good way', in which case you can argue there's a connection but only in a question-begging way. I'm not saying this is what you mean, but this is quite a common way I see the word used so I just wanted to head it off explicitly.
I think that accepting moral realism also affects how sympathetic we'll be to various theoretical virtues. Simplicity doesn't matter if you're an anti-realist; it plausibly does if you're a realist. Similarly, anti-realists will be less moved by arguments like "your view entails a puzzling type of strongly emergent value," and by arguments that appeal to the historical track record of various moral theories. I also think anti-realists would be more likely to be particularists -- if there's no fact of the matter, we may expect our intuitions to be a hodgepodge of different moral sentiments.
Overall, it seems like moral realism makes normative ethics more interesting and robust.
I'm not sure that anti-realists shouldn't be moved by simplicity considerations. Scientific anti-realists seem to take simplicity pretty seriously for reasons that have nothing to do with a theory's literal truth, such as fruitfulness. Denying that there is normative authority doesn't mean that nothing can matter to the anti-realist. I think that moral anti-realists who think science is up to something interesting (whether they're scientific anti-realists or not) are likely to care about simplicity at least a little bit.
I actually find those considerations fairly plausible, though I don't know if such considerations would result in substantive practical differences in how people act (for better or worse). I'm curious about the last remark you left though. You say:
"Overall, it seems like moral realism makes normative ethics more interesting and robust. "
Do you think that's a reason to think realism is more likely to be true?
Only if we have pre-theoretic intuitions that we should be able to make significant moral progress.
Well, okay. But this also seems to reveal a potential bias. If moral realism would make normative theory more interesting and robust, that would provide an incentive for endorsing moral realism so that you can engage in an activity you find more interesting (and, I imagine, more meaningful).
In other words, if moral realism does make normative ethics more interesting and robust, this seems to be at least some evidence against realism and in favor of antirealism, because it represents a motivational bias that could prompt people to favor realism over antirealism for reasons unrelated to realism being more likely to be true.
In short: if philosophers would prefer that realism, rather than antirealism, be true, we have reason to be at least a little bit more skeptical of people's commitment to realism, since they have an incentive to endorse it for reasons other than its plausibility.
I don't think it's any sort of evidence *for* anti-realism. But I agree that it *undermines* the case for treating normative ethicists' moral realism as itself constituting "evidence" for realism. In the same way that philosophers of religion favouring theism is not evidence of theism's truth. (But nor is it positive evidence for atheism. It's instead to say that something people might have mistaken for evidence one way is instead evidentially neutral.)
More generally, though, "Philosophers believe P" is typically not much of a reason (if any reason at all) to believe P, at least when P is a first-order philosophical claim, rather than a higher-order claim about the state of the discourse. So I don't think there was much evidential force there in the first place, for the motivational bias factor to undermine!
I think it’s a little bit of indirect evidence for antirealism, only insofar as people believing a claim is evidence for the claim, and in particular insofar as people who specifically study a topic (philosophers in this case) believe the claim. I don’t take either to serve as much evidence for a claim, but I don’t take it to be no evidence at all.
At least one of the tasks I take on as an antirealist is explaining why people endorse moral realism, and in particular why philosophers would endorse realism. If I can show that realism is appealing for reasons other than it being true or there being good reasons to think it’s true, this goes some way in explaining why people would be moral realists even if moral realism isn’t true.
It may also be able to partially explain why moral realism is the most common view among philosophers: it could be due, in part, to selection effects. If people study a topic because it’s interesting, and being an antirealist makes moral philosophy less interesting, this could cause fewer people inclined towards antirealism to study moral philosophy, and it could cause those who do to be more inclined towards realism. While this is speculative, I suspect it is true. And I suspect at least part of the reason why views like mine are less popular is that people with views like mine lose interest in the field and move on to other topics.
In any case, I completely agree that “Philosophers believe P” is not much of a reason to believe P. I do, after all, hold positions so uncommon that almost nobody endorses them. As far as I know, Bentham’s Bulldog has claimed that most philosophers endorsing moral realism is good evidence for moral realism. Since I don’t think that it is, I hope we can convince them that it is, at best, not very good evidence.
I'm not sure that simplicity and realism interact the way you suggest! The best defenses of Occam's Razor that I'm aware of are ones that show that a methodology of always following Occam's Razor is more likely to eventually reach the truth with no more revision than necessary, compared to other methodologies, but don't show that simplicity is itself connected to truth (as in, simpler theories are no more likely to be true than more complex ones). It seems that an anti-realist could replace the role of truth in this argument with something else that is taken to be the telos of theorizing (we need some sort of normativity to explain why truth or anything else would be a telos of theorizing) but simplicity might still fall out as a characteristic of good theorizing, whether it's aimed at truth or something else.
I'm not sure what the other thing would be.
Interesting! I knew there were important "exceptions", of course -- Dancy, Gibbard, etc. But it's surprising to learn that there's no apparent correlation here at all, at least in the latest philpapers survey.
(That doesn't strictly rule out the possibility that there's a weak rational connection of the sort I'm positing; it might just have been counterbalanced by other sociological factors, like the influence of Dancy as you suggest in your other comment. But still... curious!)
>"I worry that this metaethical view will swiftly lead to sentimentalist complacency, at least for most people."
Assuming you believe that metaethical views can be true or false, is it more important to adopt metaethical views that are true or metaethical views that have good psychological consequences (e.g. avoiding "sentimentalist complacency")? How much attention should we pay to such psychological consequences when deciding what metaethical views to accept?
Extreme cases aside, we should probably just try to work out what's true. The practical implications are more relevant to our choice of what to focus on discursively, in terms of pushing back against others' false beliefs. People believe all sorts of false things. Maybe most don't matter much. But if a belief is both false *and* harmful, then we've extra reason to spend some time arguing against it.
My reaction to Sebo’s repugnant conclusion is to think that there’s no fact of the matter. I don’t endorse the rationalist or the sentimentalist’s views, and I think the correct response is to view this as a false dichotomy.
I certainly don’t endorse the conclusion that we should be rationalists about the matter. Why should we be? When you say, “Sentimentalism (as understood here) is a kind of anti-philosophy, a refusal to reflect systematically on what matters,” I do not agree. I don’t think it is, or at least I don’t think that it must be, anti-philosophical.
I am sympathetic to the sentimentalist, but such sympathies need not stem from a “refusal” to “reflect systematically on what matters,” as though this were a legitimate project, but from a rejection of the notion that there is anything substantive to systematically reflect on in the first place. And to insist that there is (not that you’re doing so) would beg the question against me or anyone else who denies this on philosophical grounds. For comparison, it wouldn’t be anti-philosophical for an atheist to refuse to systematically reflect on the nature of the Trinity. One can, on philosophical grounds, deny theism, and, as a result, such questions become moot. Likewise from a potential sentimentalist’s perspective, or my own.
I’m a bit puzzled as to why you think sentimentalism could lead to complacency. Why do you think it would do so? And why would giving up moral realism have any implications for your normative values? To me, this sounds a bit like someone saying that if they came to believe their favorite food wasn’t objectively tasty, and was merely subjectively tasty, that they’d become less interested in eating it, and more indifferent between eating their favorite foods and foods they despise. And that would strike me as a very strange reaction. I don’t think food would taste any better if gastronomic realism were true, and I don’t think people’s lives would matter any more to me if moral realism were true.
Regarding normative authority: I find this to be one of the most perplexing features of realism of all. I am only interested in acting in accordance with my personal values. If the objective moral facts were inconsistent with my goals, I wouldn’t care at all, and would simply not comply with them. If realists insisted that I’d be making some kind of “mistake,” and could somehow show this to be true, all this would lead me to conclude is that I am committed to making certain kinds of mistakes, which I’d then proceed to make. The kind of “authority” moral realists seem to want doesn’t have any teeth. We could just ignore it, and there are no meaningful consequences. The only consequence seems to be that my actions wouldn’t be anointed with particular terminological designations, as though the mere act of labeling an action “bad” or “wrong” were some kind of cosmic sanction against it.
In your conclusion, you state, “Moral realism, with its associated belief in stance-independent moral truths, encourages uncomfortable yet intrinsically plausible principles like impartiality.”
What do you mean by “intrinsically” plausible?
“This seems an important point. For the truth (on robust realism) may diverge significantly from your personal values—and what’s more, you can appreciate that there’s some sense in which the true values matter more than your personal values do. “
This is the kind of remark I find truly perplexing. “True moral values,” don’t matter more *to me* than my personal values do. And my personal values are the only kind of mattering that matters to me. In other words, the only things that matter to me are precisely those things that matter to me, and it doesn’t matter to me whether something “matters” independently of how much it matters to me. I am baffled at the notion that anyone else would care how much things “matter,” rather than how much things matter to them. Why does it matter to you how much something “matters”?
By "complacency", I don't mean "moral disinterest". One might be a complacent moral fanatic (as indeed I think most fanatics are -- they're not exactly known for rigorously questioning their moral assumptions). I'm talking about a lack of concern for *improving* one's moral perspective, correcting for errors or oversights, etc. Just as we typically aren't concerned about trying to avoid "gastronomic error" (since we don't believe that there is any such thing).
Do you not have any conception of moral progress? If you can't even conceive of someone making a moral mistake (or coming to appreciate that they made a mistake in the past, and so being concerned to avoid similar mistakes in future), then I'm not really sure where to begin. I guess my writing on this topic will just remain incomprehensible to you!
Efforts to improve are at least somewhat zero sum. I can’t spend all day trying to improve everything. While I may not care about which moral theory is correct, I do care about having a better understanding of how best to act in accordance with my values, to understand the relevant nonmoral facts, to work on biases and errors in reasoning, and so on. As such, while I may be complacent towards rarefied normative theory, I am, if anything, less complacent about actually living up to my values. So sure, perhaps I’m complacent in the particular respect you outline here, but that’s a complacency I’m quite happy with.
I can think of ways of describing “moral realism” that I’d endorse, but they wouldn’t have anything to do with moral realism. I do think people can make moral mistakes as well, but again, my understanding of that is in a thoroughly antirealist form. So more generally, I’d be happy to say that there can be progress, that people can make mistakes, and so on; I just don’t think of these in realist terms.
I'd be curious to hear your approach to teaching these subjects. My guess is that many contemporary undergraduates are inclined towards anti-realism about morality, and I can't imagine you're happy to let moral realism "just remain incomprehensible" to them. What percentage of your students typically find moral realism incomprehensible, and how many of them can you get to understand it over the course of a semester? Are there any readings you have found particularly helpful in getting them to find the view comprehensible, perhaps even plausible?
I've only taught metaethics a few times (and not very recently), but I actually don't recall any students expressing Lance's total incomprehension of moral realism. Many find it implausible, but that's a very different matter. (Most undergrads have an incoherent mix of intuitions, at least some of which arguably favour realism. They all agree that abolishing slavery was moral progress, for example.)
A few readings I like that worked well:
* Shafer-Landau, R. (2010) ‘Ethics as Philosophy: A Defense of Ethical Nonnaturalism’ in T. Horgan & M. Timmons (eds.) Metaethics After Moore. Oxford: Clarendon.
* Bedke, M. (2010) ‘Might all normativity be queer?’ Australasian Journal of Philosophy 88 (1): 41-58.
* Harman, E. (2015) ‘Is it Reasonable to “Rely on Intuitions” in Ethics?’ in A. Byrne, J. Cohen, G. Rosen, & S. Shiffrin (eds.), The Norton Introduction to Philosophy.
>"Many find it implausible, but that's a very different matter."
Is it really so different? I used to think moral realism was implausible, but I am increasingly coming around to the view that I merely find it incomprehensible. (In particular, I don't understand why moral realists are so confident that moral norms can be distinguished from gastronomic norms in a way that makes error possible in the former case but not in the latter.) Perhaps upon further reflection, your students might come to see that they don't really reject moral realism, they just fail to comprehend it?
Thanks for the reading suggestions!
That's exactly what I think. I suspect more students would share my view that (at least some forms of) moral realism are not false, but unintelligible. But since this view isn't well-represented in the academic literature, they aren't introduced to this notion in lectures, articles, or textbooks. As such, it isn't a salient option.
I regard non-naturalist moral realism as unintelligible but I don't think moral naturalism always is. Instead, I think those accounts tend to be trivial or false (or both). So technically I don't think moral realism is incomprehensible, so much as specific forms of it.
In any case, I have not myself encountered anyone who appeared to endorse this view. However, I don't think most people have any particular views about realism or antirealism at all, so it wouldn't surprise me if very few people shared my views on the matter. For the same reason, I don't think many people would think the Many Worlds interpretation of quantum mechanics is "incomprehensible," because they don't hold a position on it at all, or even know what it is.
I agree on the incoherent mix of intuitions part, but only up to a point: I think people only begin to express such intuitions once they begin engaging in philosophy, and that, while this does occur outside academic contexts, for the most part people don't so much have incoherent metaethical intuitions as they have no particular metaethical intuitions at all. I endorse a combination of descriptive metaethical indeterminacy and variability, and lean more towards the indeterminacy part. These views aren't common but they've at least appeared in the literature in this paper:
Gill, M. B. (2009). Indeterminacy and variability in meta-ethics. Philosophical Studies, 145(2), 215-234.
“Only on this view, I think, does it make sense for us to constrain our personal values in light of possible views that we’re confident we would never ourselves endorse.”
I have trouble understanding the phrase, “constrain our personal values.” My values are my values. What does it mean to constrain them? That I think of myself as fallible, so I may be mistaken? That I think of others as disagreeing, and not being obviously mistaken to do so? That I tolerate other persons living according to other values? Maybe I missed something important in the article; nothing really seems to fit here for me.
Taking moral fallibility/uncertainty into account, e.g. by "picking an option that scores slightly less well according to your own values, but vastly better according to other plausible (yet alien-to-you) values."
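To make that kind of hedging concrete, here is a minimal worked illustration (the credences and scores are hypothetical, chosen only for the sake of the example, and are not drawn from the post or this thread): suppose you give credence 0.9 to your own values and 0.1 to a plausible-but-alien rival view, and score two options under each.

```latex
% Hypothetical numbers for illustration only.
% EC(X) = credence-weighted score of option X across the two views
% (one simple way of modelling "taking moral uncertainty into account").
\begin{align*}
EC(A) &= 0.9 \times 10 + 0.1 \times 0   = 9.0 \\
EC(B) &= 0.9 \times 9  + 0.1 \times 100 = 18.1
\end{align*}
```

Option B scores slightly worse by your own lights (9 vs. 10) but vastly better on the rival view (100 vs. 0), so weighting by your uncertainty favours B - one way of cashing out what it could mean to "constrain" one's personal values in light of views one doesn't endorse.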