I’m generally in favour of intelligent people trying to think and communicate clearly about important topics. Sometimes they’ll make mistakes, in which case others can (ideally) explain where they go wrong. Occasional error is the price we pay for intellectual progress, and it’s one that is worth paying, especially when the stakes are high. As I previously put it:
Ideas are worth exploring, and if we sometimes get it wrong then that’s okay: an acceptable cost for more often getting things right when it really counts. And especially in any situation where the status quo risks are severe, we should be at least as concerned about mistakenly neglecting a good policy solution as we are about mistakenly advancing a bad policy.
That was from thinking about the issue first-personally, but similar reasoning applies third-personally, at least if we have any faith in the tendency of good arguments to eventually win out over bad ones (and if we don’t have such faith, what are we even doing in academia?).1 For example, critics of effective altruism risk immense harm,2 but I’d never dream of trying to censor them. Quite the opposite: I think EA orgs did well to run a criticism contest that explicitly invited and incentivized thoughtful critiques, because (i) intelligent critique can help us to test whether our ideas withstand scrutiny, and better triangulate on the truth,3 and (ii) the truth matters.
(That said, I do think the potential for harm is a good reason to think carefully about the issues, to really take care to try to avoid mistakes. I’ve been surprised by how some—even philosophers—express casually derogatory attitudes towards effective altruism, in ways that seem predictably harmful while lacking any obvious compensatory epistemic benefit.4)
The Precautionary Argument
In ‘Moral Philosophy’s Moral Risk’, Brennan & Freiman consider the following “precautionary” argument against moral philosophy:
P1: Pro tanto, if you are likely to commit a significant moral error when performing an activity, then you should avoid performing that activity.
P2: You are very likely to commit a significant moral error when you engage in moral inquiry, especially over sensitive and difficult topics.
C: Thus, you should avoid such moral inquiry.
P1 is false. I’ve argued previously that avoiding wrongdoing is not actually an appropriate moral goal. But even if the premise is fixed to refer instead to absolute (not merely comparative) harms, the revised premise would still be false, for it rests on a one-sided accounting of costs rather than a fully considered cost-benefit analysis. Don’t valorize the void: if we want to secure morally satisfactory outcomes, we need to consider how the potential moral upsides compare to the downsides. The chance to better promote justice, for example, cannot defensibly be regarded as inherently less significant than the risk of inadvertently promoting injustice. (Otherwise you should probably just prefer that nothing exist at all.)
Of course, we shouldn’t deliberately (or negligently) spread moral mistakes;5 as noted above, we should be seriously trying to get things right.
Ethics Review Boards for Controversial Ideas?
Some commenters in this epic Daily Nous thread suggested that potentially “harmful” work should be subject to “additional scrutiny”, and possibly even non-academic review by an ethics board. The latter suggestion strikes me as utterly terrifying: who could possibly be trusted to decide which moral opinions are “safe” to see the light of day? And who do you think will actually get to call the shots, and on what basis, given that the whole point of moral inquiry is that we don’t already know all the answers?6
Writing this from Florida, I find that threats to academic freedom from the right are at least as salient as those from the left. Those on the left who treat academic research as just another political arena for the powerful to enforce their opinions as orthodoxy are making DeSantis’ case for him: why shouldn’t a political arena be under political control? The only principled ground for resisting this, I’d think, is to insist that academic inquiry isn’t just politics by another means. Individual academics may have (and, in appropriate venues, advocate for) their own normative viewpoints, of course, but academic institutions ought to be more neutral than that.
Accordingly, when we evaluate academic work, we generally have a professional obligation to bracket considerations of its political valence or anticipated indirect consequences.7 The only relevant question is whether the work contributes to our collective understanding of the topic being addressed, and so helps to bring us closer to grasping the full truth. (Notably, even arguments for false conclusions can have this epistemic virtue—see again the Mill quote in footnote 3.)
Moral Priorities
None of this requires thinking that procedural justice matters more than substantive justice (despite naïve claims to the contrary). Rather, the crucial claim is that, in the long run, maintaining the integrity of neutral truth-seeking institutions will better secure substantive justice (and better results more generally) than naïve, unconstrained pursuit of that end would. Others may disagree, but recognizing this disagreement as ultimately empirical (concerning the long-run instrumental value of neutral procedures and institutions) will hopefully provoke less hostility than the imagined bad values that many are otherwise inclined to attribute to their opponents.
As with other procedural norms and values—against jury tampering, manipulating election results, etc.—it’s an interesting question when violations of these norms could realistically be justified. I’m not defending an absolutist view here: there are always conceivable exceptions to any rule. But I do think the naïve stance invites abuse, and we should, accordingly, be very strongly disposed to respect procedural obligations. And I think this is a big part of what constitutes “professional ethics” (and integrity), in academia as elsewhere.
Limits
The argument for free inquiry is limited in scope to those questions on which we should want to find true answers. Maybe there are some questions that are better not asked (e.g. how to make an omnicidal super-virus). If there are strong reasons to think that even true answers to the question would be socially devastating, censorship of an entire topic could be defensible. But this case has two noteworthy limitations: (i) It seemingly calls for a blanket ban on inquiry into a certain question, rather than (the more popular) partial censorship of some answers but not others. (ii) Since it remains important to form true beliefs about the meta-question of whether the first-order topic is in fact so dangerous, free inquiry into the meta-question must still be allowed (as per the Mill quote in fn 6).
A second limitation is that I’ve just been talking about truth-seeking institutions, like academia. My view is that academia should be a “safe space” for free inquiry, even when it takes forms that others don’t like. It doesn’t follow that the rest of society must be like academia in this regard. For example, Google’s firing of James Damore may have been unobjectionable, even though it would be a clear violation of academic freedom for a university to fire an academic for advancing such arguments. Arguably, academics are morally licensed to engage in free inquiry in a way that others may not be. (But nor have I argued that others shouldn’t share in this license; I simply haven’t addressed that further question here.)
Do you disagree?
If so, feel free to leave a comment (politely) explaining your reasoning! Even if you’re wrong, it might help us to better triangulate on the truth.
1. Cf. Scott Alexander on symmetric vs asymmetric weapons:
Logical debate has one advantage over narrative, rhetoric, and violence: it’s an asymmetric weapon. That is, it’s a weapon which is stronger in the hands of the good guys than in the hands of the bad guys. In ideal conditions (which may or may not ever happen in real life)—the kind of conditions where everyone is charitable and intelligent and wise—the good guys will be able to present stronger evidence, cite more experts, and invoke more compelling moral principles.
2. For example, if you convince just one person not to take a course of action (e.g. earning to give via a permissible, high-paying career) that would have led to their donating an extra ~$50k per year to GiveWell’s top charities, then you are causally responsible for ~10 people’s deaths per year. That’s really bad!
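The rough arithmetic behind that figure, assuming a ballpark cost of roughly $5,000 per life saved at GiveWell’s top charities (the figure implied by these numbers):

$$\frac{\$50{,}000 \text{ donated per year}}{\$5{,}000 \text{ per life saved}} \approx 10 \text{ lives per year}$$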
3. As J.S. Mill famously wrote in On Liberty:
[T]he peculiar evil of silencing the expression of an opinion is, that it is robbing the human race; posterity as well as the existing generation; those who dissent from the opinion, still more than those who hold it. If the opinion is right, they are deprived of the opportunity of exchanging error for truth: if wrong, they lose, what is almost as great a benefit, the clearer perception and livelier impression of truth, produced by its collision with error…
But there is a commoner case than either of these; when the conflicting doctrines, instead of being one true and the other false, share the truth between them; and the nonconforming opinion is needed to supply the remainder of the truth, of which the received doctrine embodies only a part. Popular opinions, on subjects not palpable to sense, are often true, but seldom or never the whole truth… When there are persons to be found, who form an exception to the apparent unanimity of the world on any subject, even if the world is in the right, it is always probable that dissentients have something worth hearing to say for themselves, and that truth would lose something by their silence.
4. Perhaps the mere fact that some intelligent people are dismissive of EA is some kind of higher-order evidence that it’s actually bad? But given the obvious potential for bias here (it isn’t in their interests for EA to become more widely accepted), this seems very weak evidence. It would seem overall better if there were more of a social expectation that such morally risky derogation of others’ attempts to do good be accompanied by good supporting reasons, or at least a serious attempt at such.
5. At least, not if morality is agent-neutral. Agent-relative views might have different implications, however. On those views, it might be morally bad (from your perspective) for other agents to have, and to rationally act upon, true moral beliefs. Depending on the details of the view, you might easily have most moral reason to try to deceive them. (See: Is Non-Consequentialism Self-Effacing?)
6. As J.S. Mill (again) wrote, “The usefulness of an opinion is itself matter of opinion: as disputable, as open to discussion, and requiring discussion as much, as the opinion itself.” Sensitive debates in ethics often concern precisely the contours of appropriate moral concern, and how competing considerations should be weighed against one another. It will typically not be possible to venture an opinion as to the practical value of a work of moral philosophy without first settling (or, more realistically, presupposing) whether the views it argues for are correct or not. But if we already knew the answers, we wouldn’t need academic research in the first place.
7. Compare the obligations of medical professionals to bracket judgments regarding the instrumental value of their patients’ lives:
We do not want emergency room doctors to pass judgment on the social value of their patients before deciding who to save, for example. And there are good utilitarian reasons for this: such judgments are apt to be unreliable, distorted by all sorts of biases regarding privilege and social status, and institutionalizing them could send a harmful stigmatizing message that undermines social solidarity. Realistically, it seems unlikely that the minor instrumental benefits to be gained from such a policy would outweigh these significant harms. So utilitarians may endorse standard rules of medical ethics that disallow medical providers from considering social value in triage or when making medical allocation decisions. But this practical point is very different from claiming that, as a matter of principle, [indirect effects matter less].
I think that there should be IRB approval before you write your blog posts. What if we should valorize the void, and failing to do so is dangerous?
Point 5 on agent-neutrality vs. agent-relativity is interesting. Reminds me of Donald Regan's contention that "Evaluator-relative theories do not allow agents to give sincere moral advice" ("Against evaluator relativity", p. 107). I imagine this point is contested by evaluator-relative theorists. I'd be interested to see what they say. Does anyone have a reference for that?