12 Comments
Aug 20 · Liked by Richard Y Chappell

There are definitely a lot of skills where explicitly thinking about the thing makes people do worse, until they’ve developed a deep enough understanding to eventually do better - stereotypically, it’s hard to ride a bike effectively while you’re thinking about how you’re doing it, and most people don’t ever put in the work to become the kind of expert cyclist who actually does better as a result of their thinking. In lots of intro classes, students learn the basics of some calculations, but end up making enough mistakes that their answers are far worse than the ones they would have come up with through gut estimating, though with a few years of coursework they get better.

I take it that a lot of critiques of enlightenment and modernism are based on ways that thinking explicitly about things has often led people to miss important factors entirely because they are hard to make explicit. Everything from Chesterton’s Fence to James C Scott’s “Seeing like a State” is telling some version of that story.

Aug 20 · Liked by Richard Y Chappell

"What I’m curious to consider now is: do you think there are principled reasons to think that the more “explicit” ethics of effective altruists is actually a bad thing?"

I think the answer is no. But I also understand the instinct in part.

I think that, generally, we should applaud people who use numbers in a sensible fashion, but we should be very careful about imposing numerical requirements or constraints on everyone.

Take criminal jury trials in the United States. Consider two proposed statements:

"Lots of innocent people get convicted at trials."

"Six percent of jury trial convictions are of innocent people."

I would expect the first statement to be followed by anecdotes, and the second to be followed by some sort of calculation that allows people to determine the reliability of that claim. Both could be valid or nonsense, for sure, but the latter statement seems to me to give us a better start.

But take these potential jury instructions:

1. To find the defendant guilty, you must find that the charge has been proved beyond a reasonable doubt. Proof beyond a reasonable doubt is proof that leaves you with an abiding conviction that the charge is true. The evidence need not eliminate all possible doubt because everything in life is open to some possible doubt.

2. To find the defendant guilty, you must find that the chance she is innocent is less than one in 200.

I think the former instruction will lead to much more accurate results. I think some people have internalized that and then (incorrectly) apply the latter principle to EA.
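
(As an aside, the second instruction's explicit threshold is easy to unpack; here's a toy Python illustration with made-up numbers, not a model of actual juries:)

```python
# Toy arithmetic unpacking the explicit 1-in-200 instruction.
# Purely illustrative; not a model of real jury behavior.

threshold = 1 / 200       # maximum tolerated probability of innocence
convictions = 1000        # hypothetical convictions, all decided right at the threshold

expected_innocent = convictions * threshold
print(expected_innocent)  # 5.0 wrongful convictions per 1000 threshold-level convictions
```

Whether making that error budget explicit helps or hurts accuracy is exactly the question at issue.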

I agree with those who view "trying is better than not trying" as obviously correct. I think critiques of EA as too mathy are... aggravating. But that doesn't mean that spreading good vibes isn't good, right?

Aug 20 · edited Aug 20 · Liked by Richard Y Chappell

Many non-utilitarians are moral particularists, so they believe being explicit is impossible. But even ones who aspire to greater generality still rely heavily on concepts that resist precise conceptual analysis: doing/allowing, causation, rights, desert, exploitation, reasonable expectations.

author

Though it's worth noting that most would allow at least *some* room for the impartial good mattering, so I do find it a bit surprising that more non-utilitarians aren't excited about EA as one project among many.

Expand full comment
Aug 21 · Liked by Richard Y Chappell

Yeah, I think that's a nice point for EAs to emphasize.

Another route is to focus more on effectiveness, while leaving folks to decide what pro-social goals they want to pursue effectively. For example, if you want to help your own community (but not others'), you can still think about the most effective way to do that. [Hopefully it's not zero-sum with other communities.]

I never read Pummer's "Effective Justice" paper, but I hope it gets more attention from the justice crowd.

Aug 26 · Liked by Richard Y Chappell

My half-formed thought on this: from Abram Demski (https://www.lesswrong.com/posts/xJyY5QkQvNJpZLJRo/radical-probabilism-1) I learned that if you announce a rule for updating on evidence ahead of time, then the only rule that obeys certain nice coherence properties is the usual Bayesian update rule. But there is a loophole: if you don't announce an actual rule, and just make sure your updates obey the coherence properties, a wider range of potential updates can be defended.

Demski glosses this as having trust in some kind of update process while not being able to explicate the process.
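
For concreteness, here's a minimal sketch (in Python, with invented numbers) of the standard Bayesian update rule that the coherence argument singles out:

```python
# Minimal sketch of the standard Bayesian update rule (the "announced rule"
# that the coherence theorems single out). All numbers are invented.

def bayes_update(prior: dict, likelihood: dict) -> dict:
    """Posterior P(h | e), proportional to P(e | h) * P(h)."""
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

prior = {"H": 0.5, "not-H": 0.5}
likelihood = {"H": 0.8, "not-H": 0.2}   # the evidence is 4x likelier under H

print(bayes_update(prior, likelihood))  # -> {'H': 0.8, 'not-H': 0.2}
```

Radical probabilism's loophole is that you needn't commit to this formula in advance; you only need your sequence of updates to satisfy the coherence properties.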

I wonder if it's possible to have a similar non-explicit utilitarianism, and how different it could look.

Anyway, I think the main concern about being explicit is it opens you up to being gamed: unless you're convinced that your explicit set of rules covers all the cases you will ever be interested in, it incentivizes others to find the edge cases.

Aug 21 · edited Aug 24

"Notoriously, EA appeals disproportionately to nerdy analytic thinkers—i.e., the sorts of people who are good at board games."

Maybe, but there's no shortage of anti-EA (or at least anti-utilitarian) math PhDs who would bristle at the suggestion that their ideology is less analytic(al).


'Something very distinctive about EA is that it prompts us to *explicitly consider trade-offs*'

That's not at all 'distinctive' to EA. Considering trade-offs is the basis of all ethical decision-making that any of us might face on any given day, in every profession, and certainly in every governmental arrangement and decision. It is at the foundation of every iteration of moral philosophy.

To assert EA holds a distinctive focus on trade-offs suggests either not realizing what ethics is actually about, or having no awareness of how people around the world have been addressing societal problems all along. To declare it is different does not make it so.

'Do you have some alternative suggestion for how to navigate it, that doesn't involve trying to work out which makes the greater total impact?'

Consistent with what I stated above, the focus on 'greatest total impact' is neither novel nor recent. Outcome research has been a staple of social services in the US since at least the 1930s.

It's odd to me that disciples of EA seem unaware of this, or feel the need to credit themselves with the discovery of managing ethical trade-offs.

Which is to say there is no objection to trying to be ethical, or considering trade-offs, in the efforts we make to remediate problems in the world. Pretty sure no one, ever, has 'objected' to any of this.

Instead, the concerns with EA are about its premises: that it has cracked the code, or perhaps concocted the algorithm, for determining how best to do good.

As I suggested before, the ethical premises are no more explicit and well-defined with EA than with any other model or mode of addressing societal ills, and in some quite specific ways, EA disciples fail to acknowledge assumptions that are deeply fraught and contentious. Utilitarianism itself can be, and has been, forcefully critiqued since Bentham first promulgated it.

With that in mind, you might not want to suggest that anyone else is 'ignoring problems', since there seem to be some prodigious gaps and blind spots in your attempt at defending EA.

In summary: (a) EA has no claim to operating with more explicit ethical principles than any other approach to addressing societal concerns; (b) it cannot claim to have focused more on 'trade-offs' than any other approach; (c) its disciples appear not to display basic self-awareness of its assumptions, or of its arbitrary assignment of relative moral value from culturally derived and limited premises; and (d) it cannot demonstrate superior outcomes to other approaches except under those same problematic assumptions, culturally laden values, and premises (which is to say that, in purely logical terms, it employs tautological arguments to 'prove' its worth).

Hope that clarifies my views.

author

Ugh, we could waste all day trading insults about how "unaware" the other person is of obvious facts, but it's not the kind of conversation I enjoy. Please be more respectful if you wish to comment on my blog again. In particular, bear in mind that when an amateur thinks that an academic expert is missing something obvious in their area of specialization, while that is always possible, it's objectively far more likely that the amateur is failing to understand something. Pompous assertions like "To declare it is different does not make it so" are (i) obnoxious, and (ii) irrational, as they suggest you don't realize that (as an academic expert) I have reasons for my views, which I could explain further if you asked politely. (Feel free to google my CV if you have any doubts about my academic credentials; it might help you to better calibrate how you should approach the discussion.)

For anyone else following along, I'll just note two points, and leave it at that:

(1) People routinely neglect trade-offs, for example by failing to consider opportunity costs. This comes through very clearly in many critiques of EA: e.g., Crary et al. complain that EA funders decline to fund their preferred interventions (e.g. animal sanctuaries), without making any attempt whatsoever to establish that animal sanctuaries are a better use of funds than what is being funded in their place. In general, there's a bit of a taboo against ranking important values (see Robin Hanson on the psychology of the "sacred"), and it results in many decisions being made non-strategically, based on arbitrary considerations like salience or personal preference (this is famously how many people make decisions about charitable giving, which EA is trying to change).

(2) Obviously EA didn't "discover" this; it's mostly just a matter of applying economic thinking to impartial altruistic goals. Still, it's distinctive because so few moral activists or movements are so receptive to economic thinking. I don't particularly care whether people self-identify as EA or "credit" this style of thinking to the movement that's attempting to popularize it amongst the morally motivated. But I would like more people to *think this way*, whatever they want to call it.


":do you think there are principled reasons to think that the more “explicit” ethics of effective altruists is actually a bad thing?"

The ethics of EA aren't per se more explicit than any other ethical framework, and that's not the basis for any objection to how EA is practiced by its proponents.

EA employs calculations that presumably establish the relative benefits of interventions to enhance the quality of life of individuals who experience hardship and suffering to some degree. The basic notion is to maximize the purported benefits of any particular intervention. More people get helped to a greater degree. Better use of limited resources, so it is claimed.

Some criticisms of EA might be empirical: does EA produce demonstrably better results at improving conditions for more people in the world than methods and models of intervention that are deemed 'not EA'? Well, there's a catch. There's nothing novel about EA's focus on maximizing the benefits of interventions (except the branding). Sociologist Lyman Stone makes this point here: https://medium.com/@lymanstone/why-effective-altruism-is-bad-80dfbccc7a68

So, what's the comparison standard, if EA isn't doing anything substantially different than any other approach to doing good?

But the whole notion of a comparison between putatively differing models is problematic (to put it mildly), because EA entails assumptions of blurry, ill-defined, or even (ironically) *implicit* value judgments.

Is helping 100 individuals experience a 20% improvement in some facet of their life morally preferable to helping 1 person experience an 80% improvement in some other facet, as long as the same total dollars are spent? Disciples of EA take this to be axiomatic. It is, of course, old Utilitarian wine in a shiny new bottle. A quick glance at the Wikipedia entry on Jeremy Bentham will make this plain.

Of course, assigning greater value to one aspect of human life than another is entirely arbitrary and culturally determined. It can very quickly devolve into valuing one sort of human life over another (and I would not be the first to point this out). The characteristics of the very narrow demographic that embraces EA should give us pause in this regard. Is this group prioritizing a vision of life within a society that suits them best, at the expense of other moral frameworks and social arrangements? It would appear so, while wrapping themselves in the mantle of virtue. From a purely moral perspective, this is suspect on its face.

All of which is decidedly not explicit, and is crucial to assessing both the 'effectiveness' and 'altruism' of EA.

author

Something very distinctive about EA is that it prompts us to *explicitly consider trade-offs* like the one you describe (helping 100 individuals experience 20% improvement in some facet of their life vs helping 1 person experience 80% improvement in some other facet).

I don't think it is "axiomatic" that the former of the two options is automatically better -- it obviously depends on the substantive details (some facets of life may be orders of magnitude more important than others; if the former is "optimizing your choice of breakfast cereal" and the latter is "optimizing career choice", then I think EA principles would pretty clearly lead us to prioritize the latter!).
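
To make that dependence concrete, here's a toy calculation; the importance weights are invented purely for illustration and carry no claim about real interventions:

```python
# Toy comparison of the two options discussed above.
# The "importance" weights are made up purely for illustration.

def total_impact(people: int, improvement: float, importance: float) -> float:
    """Aggregate benefit = number helped * size of improvement * importance of the facet."""
    return people * improvement * importance

# If both facets of life matter equally, breadth wins:
print(total_impact(100, 0.20, importance=1.0))  # 20.0
print(total_impact(1, 0.80, importance=1.0))    # 0.8

# If the single person's facet is ~50x more important
# (career choice vs. breakfast cereal, say), depth wins:
print(total_impact(1, 0.80, importance=50.0))   # 40.0
```

The ranking flips with the weights, which is why the trade-off has to be considered explicitly rather than settled by fiat in either direction.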

I'm actually not sure what your objection is to this. Do you think it is better to *not even consider* the trade-off? Do you have some alternative suggestion for how to navigate it, that doesn't involve trying to work out which makes the greater total impact? Your comment seems to exemplify a common anti-EA mode of thought, which I would summarize as, "EA makes salient that we need to think about X, and tries to suggest ways to do this. But it isn't obvious how we should think about X. [Implicit conclusion: so we shouldn't think about X at all, and so EA is bad.]" I think this is bad reasoning.

For more on how ignoring problems isn't a solution, see: 'Puzzles for Everyone': https://www.goodthoughts.blog/p/puzzles-for-everyone


Problems will arise when executing on a model that was theoretically calculated to lead to some expected outcome ends up making things worse. Because models are not reality, the outcome embedded in the model needs to be objectively better than doing nothing. That's hard to establish without trials, but I expect the vast majority of such moral models can be rejected before they even reach the point of needing a trial.

Trying can sometimes be worse than not trying.
