Richard, thanks for this very insightful post. I'm thinking about working on related topics for part of my dissertation, so I had some thoughts I wanted to run by you.
To begin with, rather than being false, isn't naive instrumentalism in fact the true theory of instrumental rationality if anything is? That is, naive instrumentalism describes how an ideally rational agent reasons. Not only that, but this fact is fairly easy to appreciate. What is hard to appreciate is that humans ought not to follow the true theory of instrumental rationality because humans are not ideally rational. What I find interesting is that philosophers in the consequentialist tradition have (as you observe) been the ones to appreciate this most clearly, whereas non-consequentialists often assume that knowing the ideal moral goals is sufficient to enable a good-willed, naively-instrumental person to be moral. However, as you say, this in fact has nothing to do with core moral differences between consequentialism and non-consequentialism, but is instead a dispute about practical rationality - so what explains this difference?
One possible explanation is that as a substantive matter, non-consequentialists tend to believe that the ideal moral goals are ones which a good-willed, morally knowledgeable human being will do best at following by being naively instrumental. For example, the Rossian prima facie duties seem to be like this - or at least Ross and his followers seem to believe that they are.
Another possibility is that non-consequentialists believe (perhaps implicitly) that lowering the standards of morality or rationality in response to human imperfection is morally unjustified and/or tends to make us worse people because it removes a source of (internal and external) pressure for self-improvement. I don't care much for the in-principle objection, but I do think that the second point has been neglected in the consequentialist tradition, whereas (e.g.) virtue ethicists have always been impressed by it. It happens to be true about human beings, though it isn't true in the abstract, that we are habit-forming agents, and the choices we make now shape the way we make future choices. To this extent, there's pressure on the non-ideal theory of rationality to strike a balance between accommodating and correcting human imperfection.
What I am now thinking about is the prospects for an ecumenical (i.e. theory-neutral) synthesis of these ideas. Since these questions about rationality are properly independent from disputes over what you call the core content of morality, this seems reasonably achievable. But I may have overlooked some reasons for pessimism here. For example, one reason to be pessimistic is that it's really true that for some non-consequentialist theories, naively instrumental pursuit of the ideal moral goals is as good as anything else human beings could manage.
(Let's use Rossian deontology as a case study. The duty of beneficence seems to be an obvious candidate for one that naively instrumental humans would fail to successfully discharge; however, given the relatively lax demands of the duty of beneficence as Ross conceives of it, this may not in fact be true. (After all, the types of actions commonsensically associated with beneficence, like giving money to the homeless and volunteering at soup kitchens, are genuinely beneficent! They just often fail to be effectively beneficent.) Insofar as the duties of non-maleficence, fidelity, reparations, and gratitude are limited to face-to-face human relations, I think naively-instrumental, good-willed human beings probably do as well as any alternative. The question is whether it is plausible, by Rossian lights, to have such a restrictive understanding of the scope of these duties. If we understood, for instance, the question of what to do about various forms of social injustice as falling within the scope of the duty of reparations, then I think there's a strong case to be made that naively-instrumental, good-willed human beings often go wrong and would do better by being principled proceduralists.)
Hi Jeremy, it's a great topic to work on! Three main thoughts:
(1) I take naive instrumentalism to be true as an *ideal theory*, but it doesn't follow that it is the true account of what is instrumentally rational for *humans*. We need a different non-ideal theory, that takes into account -- and corrects for -- our deep biases and higher-order unreliability.
(2) I wouldn't describe non-ideal theory as "lowering the standards of morality or rationality". Instrumental rationality is *still* about how we can (expectably) *best* achieve the correct moral goals. It's just that the answer to this ambitious question depends upon details of our nature (incl. cognitive limitations). Principled proceduralism offers guidance that's better suited to human-sized minds. (This is an important *truth* about instrumental rationality.) Our minds have lower cognitive capacity than those of ideal agents. But that doesn't mean that the guidance is aptly described as having "lower standards". In some ways, it would seem just as natural to describe principled proceduralism as insisting upon "higher standards". But I think it's most accurate to just say that the guidance is *different* (not "higher" or "lower") from what would be suitable for ideal agents.
(3) As mentioned in the OP, I think non-consequentialists are often naive instrumentalists when it comes to politics and intellectual inquiry, in ways that are predictably very bad. But maybe there's an ideal form of Rossian Pluralism (or virtue ethics) that gives sufficiently greater *non-instrumental* weight to Millian liberal virtues to properly match their deep *instrumental* value, and thereby deter "naive" violations even when agents are themselves applying a naive decision procedure? It must be possible in theory. I guess the standard worry is just how psychologically feasible it is for people to abide by this, as the value of protecting people from oppression (or whatever) is apt to be much more *salient* than more abstract values like free speech (especially since it's so dubious that the *non-instrumental* value of something so abstract could reasonably trump real harms to vulnerable people).
I think this distinction sounds like it's missing the point.
The ethics question that is implicit is "how should people reason when it comes to moral questions?" So if you say that you are a utilitarian but you don't reason in a utilitarian way, then you seem to have changed the target of the conversation.
I strongly disagree, for two reasons.
Firstly, it confuses moral theories and decision procedures. Ethical theories simply aren't, in the first instance, theories of "how people should reason". They're usually characterized as theories about *what makes an action right or wrong*; my (somewhat heterodox) view is that they would do better to be framed around the question of *what is fundamentally most important, or worth caring about*. But either way, these fundamental questions of ethical theory are very different from the practical question you raise. If you come to ethical theory thinking that the rival views are answering that practical question, you will come away badly confused.
As a theory, utilitarianism has *implications* for which decision procedure you should use. It says you should adopt whatever decision procedure is such that *your adopting it (i.e. reasoning in that way) would have the best consequences*. Which particular decision procedure actually meets this description is an empirical question, not a philosophical one. (Note that any theory X need not recommend reasoning in an X-ish way. For example, if the fate of the world depended upon your adopting an irrational decision procedure, any sane moral theory will agree that you ought to do precisely that -- make yourself irrational, if you can, in order to save the world.)
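To make that structural point concrete, here is a minimal sketch in Python. Everything in it -- the candidate procedures and the value estimates -- is an illustrative placeholder, not an empirical claim:

```python
# A minimal sketch of the point above: utilitarianism's criterion for
# *selecting* a decision procedure is distinct from the criterion of
# rightness itself. All procedure names and value estimates here are
# illustrative placeholders, not empirical claims.

# Hypothetical expected value of *adopting* each procedure (i.e., of
# habitually reasoning that way), including downstream effects on one's
# reliability, habits, and relationships.
expected_value_of_adopting = {
    "naive instrumentalism": 40.0,     # explicit EV calculation per act
    "principled proceduralism": 75.0,  # follow reliable norms/heuristics
    "coin flipping": 5.0,
}

# The theory recommends whichever procedure is such that adopting it has
# the best consequences. Note that the maximization ranges over
# *procedures*, not individual acts, so the winner need not itself
# mention utility at all.
best = max(expected_value_of_adopting, key=expected_value_of_adopting.get)
print(best)  # -> principled proceduralism (on these made-up numbers)
```

The design point is just that the maximization ranges over procedures rather than acts, which is why a theory X need not recommend reasoning in an X-ish way.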
Secondly, my whole point is that *you don't know* what it is to "reason in a utilitarian way". To answer that, you would need to combine utilitarian goals with a theory of *instrumental rationality*. Many people *assume* naive instrumentalism, without even realizing it. They have a picture in mind that actually involves the combination of utilitarianism + naive instrumentalism, but *mis-label* this picture as "reasoning in a utilitarian way". The whole point of my post was to explain why this is a mistake.
True "reasoning in a utilitarian way" involves combining utilitarian goals with whatever is the *correct* theory of instrumental rationality. I've offered some indications of what I think this involves. The precise details are open to question. But I'm pretty sure that naive instrumentalism is not the right answer, and so the caricatured understanding of "utilitarian reasoning" is actually an outright misconception.
This is a great article. It seems people constantly use naive utilitarianism as an argument against utilitarianism, which just seems wild to me. Even granting the argument that being self-effacing is bad (which seems to be true), utilitarianism arguably isn’t even self-effacing some of the time. If utilitarianism is defined as “bring about the outcome with the most net positive utility” as opposed to “take the action that will result in the most net positive utility,” it’s not even telling you to use a different theory when the practical implication is to not always think about what’s actually going to maximize utility.
"This latter point is broadly under-theorized. Decision theory provides a kind of ideal theory of instrumental rationality, applicable to cognitively unlimited and unbiased angels, perhaps."
There has been some work on this kind of thing, though I haven't explored it yet:
https://easley.economics.cornell.edu/docs/Constructive.Decision.Theory.pdf
https://mkcamara.github.io/ctc.pdf
Great post, Richard! Very interesting.
Just wanted to point out that you misspelt "if". Right under the subtitle of "Two Conceptions of Consequentialism" you spelt if --> iff
Ah, sorry, that's philosopher-jargon abbreviation for "if and only if"!
You have a typo: "that nobody really thinks it is instrumentally rational for humans to go around constantly calculated expected utilities."
fixed now, thanks!
Would the following scenarios affect what is naive and what is a wisely principled procedure?
1) Suppose we place the same moral weight on ants and people (in some sense).
2) What if society becomes majority utilitarian? [Of course, utilitarians will still be selfish, but lying won't necessarily be frowned upon if it's for the greater good.]
* Note there are 2 million ants for every person.
Can you expand upon what you have in mind? Like, a concrete example of something that's currently naive but might, in the specified scenarios, instead qualify as wisely principled?
(My default assumption is that the answer is 'no'. Social norms and expectations might change what even qualifies as (deceitful) *lying*, as opposed to mere "white lie"-style expected untruths. But I think rational utilitarians should generally value truthfulness *at least* as highly as most people do -- maybe more.)
Focus on the ants scenario.
Suppose you're a smart high schooler from a third world country with very limited opportunities. Should you cheat slightly on the SAT (if you could without getting caught) to win a scholarship to a good university, so that you can put yourself in a position of more career power to better help ants?
Or suppose you're a middle schooler from a village. Should you cheat to get into a high school in the city? If you don't, you'll probably be stuck with a poor education because the school in your village is bad.
The kids you're displacing don't care about ants.
Seems broadly similar to a normal case involving ordinary altruism rather than ants? I guess the thought is meant to be that the stakes are much higher, in part because your values are so much rarer / more neglected. But then the higher stakes make it all the more important that you not get caught cheating (in real life you can't *stipulate* future results, e.g. that you won't get caught, or that "cheating slightly" is necessary or would make any difference to whether you get a good scholarship).
Just as in the ordinary case, there might be special circumstances in which you'd do well to cheat, but it doesn't seem generally likely to be good career advice for smart high-schoolers (incl. from third world countries with very limited opportunities). And again, the higher stakes just make it all the more important that they follow actually-good career advice.
" the stakes are much higher, in part because your values are so much rarer / more neglected." Yes.
"And again, the higher stakes just make it all the more important that they follow actually-good career advice."
Not sure about that. The reward function may have a threshold shape, or at least be nonlinear. If the only way to effect change for ants is to become super successful at something, then a non-stellar level of career success (which would result from not cheating) is perhaps of little use. If you get caught, you're not much worse off than if you hadn't tried (relative to the stakes), but if you succeed, the payoff could be huge. Anyway, that's what a good scenario would be cooked up to show.
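To see the shape of this argument, here's a toy expected-value calculation. Every number below is a made-up assumption chosen to produce the asymmetry being described, not an estimate of real-world probabilities or payoffs:

```python
# Toy comparison of cheating vs. not cheating under a threshold-shaped
# reward function. All numbers are illustrative assumptions only.

p_caught = 0.10          # assumed probability the cheating is detected
payoff_honest = 1.0      # non-stellar career: modest but near-certain
payoff_caught = 0.8      # caught: only slightly below the honest baseline
payoff_success = 1000.0  # clearing the "super successful" threshold

ev_honest = payoff_honest
ev_cheat = p_caught * payoff_caught + (1 - p_caught) * payoff_success

print(f"EV(honest) = {ev_honest:.2f}")  # 1.00
print(f"EV(cheat)  = {ev_cheat:.2f}")   # 900.08
```

On these stipulated numbers cheating dominates; as the reply below suggests, the crux is whether real situations ever have payoffs this asymmetric, and whether the detection probability and downside are really as small as stipulated.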
Yeah, that's possible. I think it's unusual to be in such an asymmetric (high upside / low downside from cheating) situation, and it's important to pay attention to possible downsides (just look at all the broader reputational harm caused by SBF's fraud). But I'm not claiming that norm-breaking could *never* be rationally justified. Just that I think we should start from a very strong pro-social/co-operative disposition, and not be *easily* swayed into norm-breaking defection based just on very speculative/unreliable reasons for thinking oneself to be the rare exception to the rule.
I tend to agree. But my point is that how often you ought to be swayed may depend on whether you adopt radical hedonistic utilitarian values, such as extreme concern for animals or the exchangeability of current people with "merely possible" people.
If you say being a "principled" defender of X only commits you to having some principle Y which causes you to defend X, then how could someone even be a non-principled defender of X? All defenders of X either defend X because of the principle X, or because of a different principle or set of principles Y. Clearly, if you defend X because of the principle X, you are said to defend X "in principle," while if you defend X because of an alternative principle Y, you are said to defend X "not in principle."
An objection to this would be that someone might defend X because of Y where Y is self-interest or flipping a coin. So a "principled" defender of X commits you to having some non-random moral principle, X or Y, on which you defend X. But I think my description above better matches common usage.
For example, I think it is totally fair to say utilitarians have "no principled objection" to slavery. I suppose they do have *a* principled objection, but the principle on which they object is not slavery itself, which is obviously what the statement, "you have no principled objection to slavery," is actually supposed to convey.
Maybe the semantics are boring or irrelevant, but the implications are not. When you say "there’s little reason for non-theorists to care about this further, purely theoretical matter," I think this is incorrect. Using the given example of free speech, most people on earth do not, in fact, think that free speech norms are "more conducive to moral progress and overall well-being than any realistic alternative." So whether you support free speech on utilitarian or deontological grounds will alter your support of free speech, should you become convinced of the majority-held view.
In general, I think saying it doesn't matter whether you support X on instrumental or non-instrumental grounds is incorrect when the instrumental value of X is controversial.
It's very common for people to make unprincipled appeals to free speech, i.e. just when it is advantageous to "their side". That they are "unprincipled" about it is revealed by the fact that they won't defend the free speech of people they disagree with.
On the further implications: it's true that the grounds of one's support for X will influence *what you'd have to change your mind about* in order to cease supporting X. But I don't know that there's any particular reason to expect instrumentally-based support to be noticeably less robust (or more likely to be subsequently abandoned) than non-instrumentally-based support.
Compare: most people on earth do not, in fact, think that there are non-instrumental reasons to support free speech. So whether you support free speech on instrumental or non-instrumental grounds will alter your support of free speech, should you become convinced of the majority-held view *about the lack of an adequate non-instrumental basis*.
I guess if you thought it was normatively overdetermined -- that there were *both* instrumental and non-instrumental reasons -- that would be the *most* robust position, least susceptible to being overturned by later changes of mind. But I still don't think there's *much* reason for most people to care about any of this, because I expect *either* basis is sufficiently robust in practice. It seems very rare (as far as I can tell) for people to change their minds about these sorts of things.
Judging by recent Rasmussen poll results, naive instrumentalism is far more prevalent among the politically-engaged elite than in the adult US population at large.
Asked whether they would prefer for the political candidates they favor to win election by cheating rather than lose, 35% of "the elite one percent" -- i.e., people with postgraduate degrees and annual incomes of more than $150,000 living in densely-populated urban areas -- answered "yes," versus only 7% of other poll respondents. And among a subset of the elite one percent who said that they talk about politics every day (a question that only 8% of "non-elite" respondents answered in the affirmative), 69% said that they'd prefer for the candidates they favor to win through cheating rather than lose(!)
https://www.rmgresearch.com/wp-content/uploads/2024/01/Elite-One-Percent.pdf
https://twitter.com/RobertBluey/status/1770789411568910756
Interesting. Though if someone is politically disengaged or apathetic, any naive instrumentalist dispositions they have wouldn't show up on specifically political questions. You'd need to ask about norm-violating means of achieving something else that they cared about more.
A fair point. In light of which I'll modify my take on those poll results: they simply indicate that naive instrumentalism is rife among the politically-engaged elite in the US. Which is dismaying but not surprising.
I would simply prefer to doubt the survey methodology. I love slides 10 and 22. Re slide 25, it would be interesting to see what proportion of those responding to that particular question would agree that Trump was not re-elected because of cheating by his opponents.
All I know about the specific survey question at issue here -- aside from the fact that it was conducted by Scott Rasmussen, the founder and former president of Rasmussen Reports, a Ballotpedia editor-at-large, and FWIW a co-founder of ESPN -- is that responses were received from 1000 participants. What reason do you have to mistrust the result?
Hi William.
I don't want to get into a long discussion on Richard Chappell's website about this, as I see it as peripheral. I am quite familiar with surveys and statistical methods, but had not heard of Rasmussen previously. A quick search did find a couple of recent articles that further cemented my doubts:
https://www.washingtonpost.com/politics/2024/03/08/rasmussen-538-polling/
which claims
"A few weeks later, Rasmussen again published dubious poll results on behalf of a right-wing organization. This time, the findings alleged to have uncovered rampant fraud in 2020, including that 1 in 12 Americans had been offered “pay” or a “reward” for their vote. Trump and his allies celebrated the poll; again, the results do not comport with the reality of there being no demonstrable wide-scale vote-buying scheme at the state or national level."
This suggests to me that there are some problems in either the wording of questions or the sampling Rasmussen uses.
Nate Silver, in his blog comments,
https://www.natesilver.net/p/polling-averages-shouldnt-be-political
suggests only that "Rasmussen has indeed had strongly Republican-leaning results relative to the consensus for many years. Despite that strong Republican house effect, however, they’ve had roughly average accuracy overall because polls have considerably understated Republican performance in several recent elections...", and argues that ABC not drop Rasmussen's polls from the 538 site's poll averaging. But he did criticize Rasmussen Reports ten years ago for their automated telephone sampling methods.
More generally, there is a moderate literature (mainly from business schools!) on the ethical behaviour of atheists, which reminded me of this presentation by Rasmussen.
Unlike you, I claim no expertise re public polling technique or statistical methods, but I've had some prior familiarity with Philip Bump's punditry, from which I've gathered that he's a partisan hack. His critique of Rasmussen's contentions about the prevalence of various sorts of hanky-panky by mail voters in 2020 struck me as essentially question-begging, and through hasty Googling I found a cogent rebuttal in this blogpost: https://ethicsalarms.com/2023/12/13/confirmation-bias-test-the-rasmussen-2020-voter-fraud-survey/