A couple of recent posts by other academics put me in mind of my old take on reactive vs goal-directed ethics. First, Setiya writes, in On Being Reactive:
Philosophers often write as if means-end reason were the factory setting for human agency… It’s not my experience and I doubt it’s yours… [Arational action] pervades our interaction with others. We are often guided by emotion, not beliefs about the best means to our ends. Instrumental reason is not a default possession but a hard-won aspiration.
I think this is at least as true of much moral action as it is of the rest of our lives. The perennial complaint motivating effective altruism is that most people don’t bother to think enough about how to do good. Many give to a charity when asked, without any apparent concern for whether a better alternative was available. (And many others, of course, aren’t willing to donate at all—even as they claim to care about the bad outcomes they could easily avert.)
Being at all strategic or goal-directed in one’s moral efforts seems incredibly rare, which is part of what makes effective altruism so non-trivial (alongside how unusual it is to be open to any non-trivial degree of genuinely impartial concern—extending even to non-human animals and to distant future generations). Many moralists have lamented others’ lack of altruism. The distinctive lament of EAs is that good intentions are not enough—most people are also missing instrumental rationality.
This brings me to Robin Hanson’s question, Why Don’t Gamers Win at Life?:
We humans inherit many unconscious habits and strategies, from both DNA and culture. We have many (often “sacred”) norms saying to execute these habits “authentically”, without much conscious or strategic reflection. (“Feel the force, Luke.”) Having rules be implicit makes it easier to follow these norms, and typical life social relations are complex and opaque enough to also make this easier.
Good gamers then have two options: defy these norms to consciously calculate life as a game, or follow the usual norm to not play life as a game.
This suggests a novel explanation of why some people hate effective altruism. EA is all about making ethics explicit, insofar as is possible. (I don’t think it’s always possible. Longtermist longshots obviously depend on judgment calls and not just simple calculations. Even GiveWell just use their cost-effectiveness models as one consideration among many. That’s all good and reasonable. Both still differ strikingly from folks who refuse to consider numbers at all.)
Notoriously, EA appeals disproportionately to nerdy analytic thinkers—i.e., the sorts of people who are good at board games. Others may be generally suspicious of this style of thinking, or specifically hostile to replacing implicit norms with explicit ones. One can hypothesize obvious cynical reasons that could motivate such hostility. What I’m curious to consider now is: do you think there are principled reasons to think that the more “explicit” ethics of effective altruists is actually a bad thing? Or should we take this causal explanation to be, in effect, a debunking explanation of why many people are unreasonably opposed to EA (and to goal-directed ethics more generally)?
Thoughts welcome.
There are definitely a lot of skills where explicitly thinking about the thing makes people do worse, until they've developed a deep enough understanding to eventually do better. Stereotypically, it's hard to ride a bike effectively while you're thinking about how you're doing it, and most people never put in the work to become the kind of expert cyclist who actually rides better as a result of their thinking. In lots of intro classes, students learn the basics of some calculation but make enough mistakes that their answers come out far worse than gut estimates would have, though with a few years of coursework they get better.
I take it that a lot of critiques of the Enlightenment and of modernism turn on the ways that thinking explicitly about things has led people to miss important factors entirely, because those factors are hard to make explicit. Everything from Chesterton's Fence to James C. Scott's "Seeing Like a State" tells some version of that story.
"What I’m curious to consider now is: do you think there are principled reasons to think that the more “explicit” ethics of effective altruists is actually a bad thing?"
I think the answer is no. But I also partly understand the instinct.
I think that, generally, we should applaud people who use numbers in a sensible fashion, but we should be very careful about imposing numerical requirements or constraints on everyone.
Take criminal jury trials in the United States, and consider two proposed statements:
"Lots of innocent people get convicted at trials."
"Six percent of jury trial convictions are of innocent people."
I would expect the first statement to be followed by anecdotes, and the second to be followed by some sort of calculation that allows people to determine the reliability of that claim. Both could be valid or nonsense, for sure, but the latter statement seems to me to give us a better start.
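To make the contrast concrete, here's a minimal sketch of the kind of reliability check the second statement invites and the first doesn't. Every number in it is a hypothetical placeholder rather than a real statistic; the point is just that an explicit figure can be cross-checked against other figures, while anecdotes can't.

```python
# A back-of-the-envelope check on an explicit error-rate claim.
# All figures are hypothetical placeholders, not real statistics.

claimed_error_rate = 0.06          # "6% of jury trial convictions are of innocent people"
annual_jury_convictions = 80_000   # hypothetical: US jury-trial convictions per year
observed_exonerations = 250        # hypothetical: known exonerations per year
detection_rate = 0.05              # hypothetical: share of wrongful convictions ever detected

implied_wrongful = claimed_error_rate * annual_jury_convictions
expected_exonerations = implied_wrongful * detection_rate

print(f"The claim implies ~{implied_wrongful:,.0f} wrongful convictions per year.")
print(f"At a {detection_rate:.0%} detection rate, we'd expect "
      f"~{expected_exonerations:,.0f} exonerations per year (vs. {observed_exonerations} observed).")
```

If the implied and observed figures came out wildly inconsistent, we'd have grounds to doubt the 6% claim; no comparable test is available for "lots."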
But take these potential jury instructions:
1. To find the defendant guilty, you must find that the charge has been proved beyond a reasonable doubt. Proof beyond a reasonable doubt is proof that leaves you with an abiding conviction that the charge is true. The evidence need not eliminate all possible doubt because everything in life is open to some possible doubt.
2. To find the defendant guilty, you must find that the chance she is innocent is less than one in 200.
I think the former instruction will lead to much more accurate results. And I think some people have internalized that lesson and then (incorrectly) carry it over to EA, assuming that making the numbers explicit must degrade moral judgment in the same way.
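One way to make that worry concrete: jurors don't produce calibrated probability estimates, so a literal numeric threshold gets applied to noisy inputs. Here's a toy simulation of that mechanism; the noise model and every parameter are illustrative assumptions, not empirical claims about real jurors.

```python
# Toy model: applying instruction 2's explicit threshold to miscalibrated
# probability estimates. All parameters are illustrative assumptions.
import math
import random

random.seed(0)

THRESHOLD = 1 / 200   # instruction 2's explicit bar on P(innocent)
NOISE = 2.0           # hypothetical: estimates off by up to a factor of e**2

convictions = 0
expected_innocent = 0.0
for _ in range(100_000):
    p_innocent = random.uniform(0.001, 0.30)  # true chance the defendant is innocent
    # The juror's estimate distorts the truth by multiplicative noise.
    estimate = p_innocent * math.exp(random.uniform(-NOISE, NOISE))
    if estimate < THRESHOLD:                  # "convict" under instruction 2
        convictions += 1
        expected_innocent += p_innocent       # tally expected innocents convicted

print(f"Convictions: {convictions:,}")
print(f"Expected innocent among them: {expected_innocent:.0f} "
      f"({expected_innocent / convictions:.1%} vs. the intended {THRESHOLD:.1%})")
```

Under these (made-up) assumptions, the explicit rule convicts innocent defendants at roughly double its intended rate: the hard threshold amplifies estimation error, where a holistic standard can absorb some of it.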
I agree with those who view "trying is better than not trying" as obviously correct. I think critiques of EA as too mathy are... aggravating. But that doesn't mean that spreading good vibes isn't good, right?