Inspired by Bentham’s Bulldog, I recently donated $1000 to the Shrimp Welfare Project. I don’t know that it’s literally “the best charity”—longtermist interventions presumably have greater expected value—but I find it psychologically comforting to “diversify” my giving,1 and the prospect of averting ~500 hours2 of severe suffering per dollar seems hard to pass up. If you have some funds available that aren’t otherwise going to an even more promising cause, consider getting in on the #shrimpact!
The fact that most people would unreflectively dismiss shrimp welfare as a charitable cause shows why effective altruism is no “truism”. Relatively few people are genuinely open to promoting the good (and reducing suffering) in a truly cause-neutral, impartial way. For those who are, we should expect the lowest-hanging fruit to be causes that sound unappealing. As a result, if someone gives exclusively to conventionally appealing causes, that’s strong evidence that they aren’t seriously trying to do the most impartial good. If you’re serious about doing more good rather than less, then you should be open to at least some weird-sounding stuff.3
And you should, of course, seriously try to do more good rather than less, at least some of the time, with some of your resources. (There are tricky questions about just how much of your time and resources should go towards optimizing impartial beneficence. But the correct answer sure ain’t zero.)4
A bad objection
In the remainder of this post, I want to discuss a terrible objection that people commonly appeal to when trying to rationalize their knee-jerk opposition to “weird” EA causes (like shrimp welfare or longtermism).
“Different things can’t be precisely quantified or compared”
This has got to be one of the most common objections to EA-style cost-effectiveness analyses, and it is so deeply confused. Oddly, I can’t recall seeing anyone else explain why it’s so confused. (Quick answer: rough estimates are better than no estimates.)
The problem, in a nutshell, is that quantification enables large-scale comparison, and such comparison is needed in order to make high-stakes tradeoffs in an informed way. Tradeoffs, in turn, are essential to practical rationality. We can’t avoid them: different values are in conflict, and can’t all be jointly satisfied. We have to choose, or “trade off”, between them. The only question is how. We can do so openly and honestly, by seriously trying to assess their comparative value or importance. Or we can do so dishonestly, with our heads in the sand, pretending that one of the values doesn’t have to be counted at all.
Now, when people complain that EA quantifies things (like cross-species suffering) that allegedly “can’t be precisely quantified,” what they’re effectively doing is refusing to consider that thing at all. Because the realistic alternative to EA-style quantitative analysis is vibes-based analysis: just blindly going with what’s emotionally appealing at a gut level. And many things that are difficult to precisely quantify (like the suffering of non-cute animals) lack emotional appeal. They’ll be completely neglected in a vibes-based analysis. That is, in effect, to give them precisely zero weight.
To address the objection, consider the datum:
(Less Wrong): It’s better to be slightly wrong than to be very wrong about moral weights and priorities.
Something I find frustrating is that many people seem to instead endorse:
(Ostrich Thinking): It’s better to ignore a question than to answer it imperfectly.
Ostrich Thinking is very unwise, because your unreflective assumptions could easily be even further from the truth than the imperfect answers you would reach by giving serious thought to a problem. Compared to ignoring numbers, even the roughest quantitative model or “back of the envelope” calculation can help us to be vastly less wrong.
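To see how little machinery this takes, here’s a minimal back-of-the-envelope sketch in Python. The ~500 hours/dollar figure is the one cited above; the sentience credence and moral-weight numbers are placeholders I’ve made up purely for illustration, not estimates from the Shrimp Welfare Project or anyone else:

```python
# A deliberately crude back-of-the-envelope expected-value model.
# Every input is an explicit, contestable assumption rather than an implicit zero.

hours_averted_per_dollar = 500    # rough figure cited earlier in the post
credence_shrimp_sentient = 0.3    # placeholder: probability that shrimp can suffer at all
moral_weight_per_hour = 0.05      # placeholder: weight of an hour of shrimp suffering,
                                  # relative to an hour of severe human suffering

expected_value_per_dollar = (
    hours_averted_per_dollar * credence_shrimp_sentient * moral_weight_per_hour
)

print(f"Expected human-equivalent hours of severe suffering averted per dollar: "
      f"{expected_value_per_dollar:.1f}")   # ~7.5 with these made-up inputs
```

Disagree with a number? Swap in your own and rerun. That’s the whole advantage: every factor is on the table to be argued about, whereas the vibes-based alternative quietly sets one of them to zero.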
“Your analysis requires a lot of assumptions…”
An especially popular form of Ostrich Thinking combines:
Rational satisficing: the crazy view that there’s no reason to do more good once you’ve identified a “good enough” option; and
Certainty bias: preferring the near-certainty of some positive impact over an uncertain prospect with much greater expected value.
Combining these two bad views yields the result that you should definitely donate to a “safe” option like GiveWell-recommended charities, rather than longtermist or animal welfare causes that involve a lot more uncertainty.5 This view might be expressed by saying something like, “Prioritizing X is awfully speculative / depends on a lot of questionable assumptions…” But it’s important to understand that this actually gets things backwards.
Firstly, note that we should not simply be aiming to do a little good with certainty. We should always prefer to do more good than less, all else equal; and we should tolerate some uncertainty for the sake of greater expected benefits. (Both rational satisficing and certainty bias are deeply unreasonable.) So, the question that properly guides our philanthropic deliberations is not “How can I be sure to do some good?” but rather, “How can I (permissibly) do the most (expected) good?”
You cannot offer an informed answer to this question without forming judgments on “speculative” matters (from AI safety to insect sentience). This renders these topics puzzles for everyone. In order to be confident that global health charities are a better bet than AI safety or shrimp welfare, you need to assign negligible credence to the assumptions and models on which these other causes turn out to be orders of magnitude more cost-effective. That’s a big assumption! It’s actually much more epistemically modest to say, “I split my credence across a wide range of possibilities, some of which involve so much potential upside that even moderate credence in them suffices to make speculative cause X win out.”
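A toy comparison makes the point vivid. The numbers below are invented purely for illustration: a “safe” option with a certain, modest payoff versus a speculative option whose value hinges on contested assumptions:

```python
# Illustrative only: certain modest impact vs. an uncertain high-upside prospect.

safe_value = 1.0    # the "safe" option: a certain benefit, normalized to 1 unit

# Credence split across possibilities for the speculative option:
speculative_scenarios = [
    (0.90, 0.0),       # 90%: the key assumptions fail, so no benefit at all
    (0.09, 5.0),       # 9%: the assumptions partly hold, modest benefit
    (0.01, 1_000.0),   # 1%: the assumptions fully hold, enormous benefit
]

speculative_ev = sum(p * v for p, v in speculative_scenarios)

print(f"Safe option expected value:        {safe_value:.2f}")      # 1.00
print(f"Speculative option expected value: {speculative_ev:.2f}")  # 10.45
```

With these made-up inputs, the speculative option wins by an order of magnitude even though it fails 90% of the time. To flip the verdict you would have to push that 1% scenario down toward zero, and that is the epistemically immodest move.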
Conventional Dogmatism
It’s worth reiterating this point, because even smart people often seem to miss it. It’s very conventional to think, “Prioritizing global health is epistemically safe; you really have to go out on a limb, and adopt some extreme views, in order to prioritize the other EA stuff.” This conventional thought is false. The truth is the opposite. You need to have some really extreme (near-zero) credence levels in order to prevent ultra-high-impact prospects from swamping more ordinary forms of do-gooding. As I previously explained:
It’s essentially fallacious to think that “plausibly incorrect modeling assumptions” undermine expected value reasoning. High expected value can still result from regions of probability space that are epistemically unlikely (or reflect “plausibly incorrect” conditions or assumptions). If there’s even a 1% chance that the relevant assumptions hold, just discount the output value accordingly. Astronomical stakes are not going to be undermined by lopping off the last two zeros.
Tarsney’s Epistemic Challenge to Longtermism is so much better at this [than Thorstad]. As he aptly notes, as long as you’re on board with orthodox decision theory (and so don’t disproportionately discount or neglect low-probability possibilities), and not completely dogmatic in refusing to give any credence at all to the longtermist-friendly assumptions (robust existential security after time of perils, etc.), reasonable epistemic worries ultimately aren’t capable of undermining the expected value argument for longtermism.
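To spell out the “lopping off the last two zeros” arithmetic, with a purely hypothetical figure standing in for the undiscounted stakes (it is not Tarsney’s number or anyone else’s):

$$\underbrace{10^{15}}_{\substack{\text{hypothetical value,} \\ \text{if the assumptions hold}}} \times \underbrace{0.01}_{\substack{\text{credence in} \\ \text{those assumptions}}} \;=\; 10^{13}.$$

Discounting by 99% shrinks the expected value; it doesn’t bring it anywhere near the scale of ordinary do-gooding.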
The case for shrimp welfare isn’t quite so astronomical, but the numbers are nonetheless large enough to accommodate plenty of uncertainty before the expected value dips below that of more typical charities. So it would seem similarly epistemically reckless to dismiss it as a cause area (compared to typical charities) without careful analysis.6
Conclusion
Strive for good judgment with numbers. Be wary of misleading appeals to complexity. Like the intellectual charlatans who use big words to hide their lack of ideas, moral charlatans send false signals of moral depth with their dismissive talk of “oversimplified quantitative models”—as though they had a more sophisticated alternative in their back pocket. But they don’t. Their alternative is unreflective vibes and Ostrich Thinking. They imagine that ignoring key factors—implicitly counting them for zero—is somehow more “sophisticated” or epistemically virtuous than a fallible estimate. Don’t fall for it. Better yet, share this corrective the next time you see such Ostrich Thinking in the wild: refusing to quantify is refusing to think.
While you’re at it, take care to avoid the conventional dogmatism that regards ultra-high-impact as impossible. Certainty bias can feel like you’re “playing it safe”—you’re minimizing the risk of failing to make any difference—but is that really the most important kind of risk? Be aware of other respects in which it can be wildly reckless to pass up better opportunities. For example, it can be morally reckless to ignore risks of extremely bad outcomes (e.g. extinction or long-term dystopias). And, as I’ve explained in this post, it can be epistemically “reckless”—really going out on a limb!—to assign extreme (near-zero) credence to plausible possibilities involving ultra-high impact. As long as you’re broadly open to expected value reasoning (as you plainly should be), even a fairly small chance of ultra-high impact can be well worth pursuing.
1. I think it’s easier to give to high-EV “longshots” if you don’t feel like all your eggs are in one basket, even if the “one basket” approach technically has greater expected value. But YMMV.
2. Or maybe it’s more like ~5000 hours, if the stunners are used for 10 years?
3. Again, balance it out with a well-rounded charity portfolio if you need to. Whatever helps you to get higher expected impact than you otherwise would.
4. If anyone’s aware of an argument to the contrary (that zero is better than even just, say, 1% optimizing impartial beneficence), I’d love to hear it. Many criticisms of EA rely upon the ‘all or nothing’ fallacy, and simply argue that utilitarianism (the most totalizing, extreme form that EA could conceivably take) is unappealing, as if that would somehow entail the wholesale rejection of optimizing impartial beneficence.
5. To be clear, I’m a big fan of GiveWell and the charities it recommends! What I’m objecting to here is rather a particular pattern of reasoning that could lead one to mistakenly believe that GiveWell charities are clearly superior to animal welfare and longtermist alternatives. It’s fine to personally prefer GiveWell charities, but any minimally intelligent and reflective person should appreciate that there are difficult open questions surrounding cause prioritization, and good grounds for judging some alternatives to be even more promising. So I think it’s very unreasonable to be dismissive of any of the major EA cause areas.
6. It’s not necessarily a problem to have extreme credences—some claims are very implausible, and should be assigned near-zero probability! But you should probably reflect carefully before forming such extreme views, especially when they’re wildly at odds with the views of many experts who have looked more closely into the matter.
Because I'm that kind of person, I can't help but point out that we sometimes pay real social and emotional costs from quantifying. It's why we don't like to quantify the economic value of sex in a relationship, or to compare the utilities of saving different lives (e.g. those with a disability or disease and those without). The problem is that there are many situations where even acknowledging the trade-off sends a strong social/emotional signal that one isn't caring or reliable.
I don't think that's a major factor in the usual EA situation, but it isn't non-existent. I think there's a range of highly local or social charities (the kind you don't donate much to) that are probably better left unquantified. Not because you wouldn't recognize the social value of donating in ways that increase social bonds, but because it's really damn hard to quantify without affecting that impact to some degree.
I mean, I bite all the utilitarian bullets -- even the repugnant conclusion -- but I still can't shake the emotional feeling that if I start consciously quantifying in ways that touch on things like community, love, and friendship, there are some negative effects.
I totally agree about the EA stuff. But I think that in interpersonal life, "common sense" probably does better than trying to quantify. That's because the benefits we get from holding certain dispositions over time are really hard to quantify on an individual action basis. E.g. it is probably impossible to weigh a positive concrete impact of a lie against being a less honest person in general. But our rules of thumb for interpersonal relationships don't rely on any shaky calculations -- we have them just because they work.