61 Comments

Because I'm that kind of person, I can't help but point out that we sometimes pay real social and emotional costs from quantifying. It's why we don't like to quantify the economic value of sex in a relationship or compare the utilities of saving lives (e.g., those with a disability or disease and those without). The problem is that there are many situations where even acknowledging the trade-off has a strong social/emotional meaning of not being caring/reliable.

I don't think that's a major factor in the usual EA situation, but it isn't non-existent. I think there is a range of highly local or social charities, to which you don't donate much, that it is probably better not to quantify. Not because you wouldn't recognize the social value in donating in ways that increase social bonds, but because it's really damn hard to quantify without producing that negative effect to some degree.

I mean, I bite all the utilitarian bullets -- even the repugnant conclusion -- but I still can't shake the emotional feeling that if I start consciously quantifying in ways that touch on things like community, love, and friendship, there are some negative effects.


But yes, I 100% agree in every sense that matters; I just can't help quibbling.


Actually, is not the opposite true? A lot of unhappiness in relationships, love, sex, and friendships comes from the feeling one party has that they are doing too much and not receiving enough back, and quantifying this 'too much' is precisely the first step to recognising the problem and how it can be solved.


This is obviously subjective, but I would definitely say that the opposite is not true in many cultures (e.g., Middle Eastern or African). The idea is that, if a person in a relationship is precisely “keeping track” of all the things they’re doing and what value they’re “getting from the relationship”, then this is viewed basically as a business agreement.

If you treat your partner the way you treat a customer, then many cultures would view it as legitimate to say that you are the problem. In short, your irrationality and willingness to forgive, rather than spreadsheet-attack every detail of your day-to-day, is viewed as proof of actual honest care. You treat that person differently than everyone else, so it means something. That’s the reality of dealing with emotional beings. It allows you to get more information through means other than simple discussions, where dishonesty and manipulation can creep in.


I totally agree about the EA stuff. But I think that in interpersonal life, "common sense" probably does better than trying to quantify. That's because the benefits we get from holding certain dispositions over time are really hard to quantify on an individual action basis. E.g. it is probably impossible to weigh a positive concrete impact of a lie against being a less honest person in general. But our rules of thumb for interpersonal relationships don't rely on any shaky calculations -- we have them just because they work.


Yes, agreed!


As a general rule, there are many situations in which no numbers are better than bad numbers. This is because people tend to (a) ascribe greater credibility to something with a number on it, and, (b) underrate the way that errors propagate when you perform mathematical operations on uncertain numbers. One is tempted to think that a garbage number times a garbage number is merely another garbage number, when in fact it may be more of a toxic waste number, if you see what I mean.

Overreliance on numbers can be just as much of a “refusal to think” as avoiding numbers entirely. Any applied scientist worth their salt knows that you have to think about your calculations instead of following them blindly. And just as it can be better not to give a number up to 20 decimal places when only the first couple of digits hold any accuracy, so also sometimes it can be better not to have a number at all than to have a number with minimal correspondence to reality.
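
To make the "toxic waste number" point concrete, here's a minimal sketch of how uncertainty compounds under multiplication; the factor-of-3 uncertainty and all other numbers are invented purely for illustration:

```python
import random

# Two "garbage" inputs: each known only to within a factor of ~3,
# modeled by sampling the exponent uniformly in log space.
# All numbers here are invented for illustration.
def garbage_number(point_estimate):
    return point_estimate * 3 ** random.uniform(-1, 1)

random.seed(0)
products = [garbage_number(10) * garbage_number(10) for _ in range(100_000)]

# Each factor can be off by up to 3x, so their product can be off by
# up to 9x in either direction: the errors compound rather than cancel.
print(f"min: {min(products):.1f}, max: {max(products):.1f}")   # roughly 11 to 890
print(f"spread: {max(products) / min(products):.0f}x")         # roughly 80x
```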


Right, see the "Don't Blindly Trust Numbers" section of "Good Judgment with Numbers":

https://www.goodthoughts.blog/p/good-judgment-with-numbers#%C2%A7dont-blindly-trust-numbers

That said, more people should be aware that (i) *ideal* agents could and would quantify values; and (ii) if we're going to refrain from doing so in a particular context, we should have some *grounds* for believing that our commonsense heuristics are more likely to approximate ideal guidance than would our non-ideal attempts at quantification.

Compare: https://latecomermag.com/article/commonsense-clues-a-defense-of-longtermism/#Ideal-vs-Approximate-Guidance


I disagree with your claim that the ideal moral agent could and would quantify values. I would say that the ideal moral agent could and would sympathise perfectly with all parties relevant to a given situation in a simultaneous fashion, and would be able to subjectively judge the correct course accordingly. Numbers are what we use when our sympathy is made inadequate by our human limitations; they are generally a kludge but sometimes a helpful one.

If you are only trying to speak to a utilitarian or utilitarian-adjacent audience, then insisting on numbers as the ideal moral articulation may make sense. But if you want a broader audience then you will lose people if you assume that they will readily see things this way.


I agree that "the ideal moral agent could and would sympathise perfectly with all parties relevant to a given situation in a simultaneous fashion, and would be able to subjectively judge the correct course accordingly."

I'm just suggesting that they could comfortably describe this process by quantifying the various competing interests. Indeed, some such cognitive process must underlie their "subjective judgment" if it is not to be a random miracle that they hit upon the correct answer. (Think about what subconscious processing must occur here, rather than treating moral judgment as a "black box" that functions by magic.)


On the one hand, I would like to insist on the validity — and, on some topics, the primacy — of subjective judgment that does not proceed via numerical explanation. Moral and aesthetic judgments, in particular, do not strike me as fundamentally numerical in their underlying nature.

On the other hand, it’s not crazy to think that complete sympathy with everyone would be a miracle. It certainly sounds like one. Simone Weil thought that all truly successful sympathy with another person is a miraculous act of God. Since I probably have to count her as part of my philosophical lineage, perhaps I should be careful of dismissing your “miracle” description!

However, speaking for myself, I hesitate to go as far as Weil does. I can see non-miraculous reasons why human beings would have evolved some capacity to perceive something of the internal state of others, and as an agnostic I cannot actually attribute anything to God with any certainty. (Moreover, I often see people understandably interpreting statements like this as indications of the moral superiority of theists. Weil herself believed you could be a saint and an atheist, so she is innocent of this.)

I definitely think that our sympathy is always imperfect, and that our understanding of other people is in some respects always mysterious and uncertain. Acknowledging this is not a flaw. To me, the notion that there would therefore indeed be something mysterious about the workings of an ideal moral agent naturally follows.

It’s not that I don’t see how a person could come to the conclusion that this mystery consists in some kind of poorly understood mathematics. This seems to be true of physics, after all. Transferring this viewpoint to the context of morality is an understandable move, but not one that I necessarily agree with. I hope you can appreciate the differences in our points of view.


I think you may have misunderstood me. Let's grant that our ideal agent empathetically perceives and understands all the internal states of all others. What then? By what cognitive process do they move from these multitudinous inputs to a moral verdict as to what should be done? It's a function, of some sort, and different moral theories correspond to different functions from inputs to verdicts. You can argue about what the function should be, in what ways it should or shouldn't be aggregative, etc. But there's no alternative to there being a function from moral inputs to moral verdicts. The agent's subjective moral judgment must be guided by *something* (and, being ideal, these underlying cognitive processes will be perfectly transparent to our imagined agent). I'm just talking about making the underlying algorithm explicit.

More specifically, I'd claim that the ideal agent will take some amount of shrimp suffering to outweigh some amount of human suffering, and would hardly be at a loss for words if we asked them to represent the trade-offs numerically. They certainly would not say anything like, "You just can't quantify or compare cross-species suffering."


The process of moral judgment is a function in the sense that it has inputs and outputs, sure. But are those inputs and outputs best conceived of as numbers? I think not. Even if the ideal moral agent could give a rough numerical scheme that might track their subjective judgment, they would certainly emphasise the limitations and occlusions created by describing their judgment in such a way.

A number is not empathy; a number does not in itself contain the entire quality necessary for ideal moral judgment. An ideal moral agent would have the ability to subjectively compare cross-species suffering in a qualitative as well as quantitative sense. Rendering this merely quantitative would lose information.

I think the numbers start to feel more important when dealing with shrimp precisely because it is harder to have direct empathy for a shrimp. We turn to mathematics in the hope that it could compensate for a flaw in ourselves. Such a move is not obviously wrong under the circumstances, but it does not indicate that the numerical view is the ideal one; quite the contrary.


How does your ideal agent know how much to sympathize with different people? How do they translate from correct distribution of sympathy into the correct action, if say, one action helps 5 people, and another 7? How do they deal with choosing between an action *certain* to help some people and another action with only a 50/50 chance of helping some other people?

I *suspect*, though I don't have a hard argument, that once you start thinking about this, number-y claims will start creeping back in, or at least a pattern of claims about preferability that *could* be translated into number talk. (Number talk just summarizes facts that *could* in principle be written out at tedious length without numbers by talking about orderings in terms of preferability instead.)
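
A toy sketch of that translation (the ordering itself is made up, purely to show the mechanics):

```python
# Any ordering of outcomes by preferability can be summarized numerically
# by assigning each outcome its rank. The numbers say nothing beyond the
# ordering itself; they just state it compactly. (Toy ordering, invented.)
outcomes_worst_to_best = [
    "help no one",
    "50/50 chance of helping 7 people",
    "certainly help 5 people",
    "certainly help 7 people",
]
utility = {o: rank for rank, o in enumerate(outcomes_worst_to_best)}

# "utility[a] > utility[b]" is exactly the claim "a is preferable to b",
# written as a number comparison instead of at tedious length.
assert utility["certainly help 5 people"] > utility["help no one"]
```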


Sympathy, in itself, is not a limited quantity. An ideal agent could sympathise with everyone completely.

It is possible — no, it is almost guaranteed — that correct actions could be specified mathematically. In particular, if we allow that there is always a correct action then we assign 1 to this action and 0 to all other actions, and, bingo! Morality has now been specified numerically.

Of course, this is not the kind of numerical reasoning that you are hoping for. Rather, you are asking if there is some sort of numerical law involving continuous quantities. I am not convinced that such a law is what morality “ultimately is,” and would instead view all such numericizations as mere imperfect models that need not reflect the “ground truth” of morality in a precise way.


Thought-provoking piece, but I’m going to go against the grain here and say that donating to shrimp welfare programs that aim to alleviate shrimp pain is not a good use of funds. Why? Because the best arguments strongly suggest we should not believe shrimp feel pain. There is no evidence for anything in the fish/invertebrate brain being capable of such a complex experience. I disagree with EA in part because I reject the idea that we have to accept some seemingly arbitrary possibility that this is true when we have no reason to believe it.

I’ve been an observer of the debate about fish and invertebrate pain for some time. I really recommend anyone curious about this debate read Brian Key’s work on fish pain. His arguments are fairly simple: in a nutshell, the experience of pain requires advanced neural circuitry that fish (and simpler creatures, like insects or invertebrates) simply do not possess. The fish brain is well understood, and no part of the fish brain possesses the sorts of feedback circuitry that are necessary for pain experience (and this is true for insects and invertebrates as well). Brian Key also rejects the positive arguments for fish/invertebrate pain. He and other skeptics of fish/invertebrate pain point out that the behaviors pain proponents think demonstrate consciousness can also be accomplished unconsciously. (It’s a fascinating debate, and I think anyone interested should read about it - I came away thinking Brian Key’s arguments were devastating.)

https://www.wellbeingintlstudiesrepository.org/animsent/vol1/

In short, I look at “shrimp pain charities” the way I look at Christian missionaries. If the missionaries are right, we should invest all our money into supporting them because they are saving souls from an eternity of suffering. But there’s no reason to think that they’re right, and so there’s no obligation to donate to them. Even if you want to argue that there’s a .0000001% chance they are right, I still don’t see how there’s an obligation to donate to them. Likewise, if the shrimp pain advocates are right, we should spend money saving shrimp from painful deaths. But we have no reason to believe in shrimp pain, and we have good reason to think these advocates are wrong. I don’t see how there’s any obligation to donate, just as there isn’t for the missionary case. So I would suggest saving that money, or investing it in a better cause.

(Needless to say, if shrimp don’t feel pain, that is great news for the universe! All proponents of animal welfare should be glad to read Brian Key’s work. If he is right, there is vastly less suffering in the universe than we imagine.)


I will admit I don't know much about the works of Key, and am no expert in this, but from the little I've seen, many other experts on fish pain seem to disagree with him.

For instance, following his 2016 paper "Why fish do not feel pain", more than 40 other scientists responded, almost all of whom reject his conclusions (https://www.smithsonianmag.com/science-nature/fish-feel-pain-180967764/). Apparently, they acknowledge that while some criticisms he makes are valid, having a brain as complex as ours is not necessary for having subjective experiences of the world. Brains come in all kinds of forms and can obtain similar results even with different structures - so looking at whether another animal has a brain part similar to ours is not enough.

More recently, the New York Declaration on Animal Consciousness, signed by 250 scientists, also acknowledged a realistic possibility of consciousness in fish and invertebrates (https://sites.google.com/nyu.edu/nydeclaration/declaration?authuser=0). They cite, for instance, that crayfish display “anxiety-like” states, which are altered by anti-anxiety drugs.

Rethink Priorities' Moral Weight Project also looked into this. They cite that invertebrates such as fruit flies are used as a model of depression in humans, since they have similar neurology and display depression-like behaviour when starved (https://80000hours.org/podcast/episodes/bob-fischer-comparing-animal-welfare-moral-weight/).

Now, maybe Key is right, but your claim that "the best arguments strongly suggest we should not believe shrimp feel pain" seems too confident. At a bare minimum, there's ongoing debate and disagreement about what is required for pain and consciousness, which would justify a cautious stance.


Well, as an observer, I have to note that leaders in the debate recently released a comprehensive and extremely damning report complaining of bad practices and ideological bias on the "pro-pain" side. There appears to be strong reason to think that many of the fish-pain defenders might be emotionally invested in the topic (as well as in related ethical/political positions like veganism and EA) and prone to anthropomorphic bias, and that this colors a lot of the debate.

I linked to a comprehensive review of these bad practices here, and I think it's well worth reading. But this sort of thing is why I'm less convinced by arguments along the lines of "Well, there's a lot of academic controversy, and so outsiders shouldn't pass judgment one way or another." I would argue the bad practices documented here are pretty damning of the "pro-pain" side; if the case for fish/crustacean pain were as clear-cut as those "pain advocates" suggest, there wouldn't be any need to use misleading evidence to make their case or engage in ad hominem attacks on skeptics.

https://www.tandfonline.com/doi/full/10.1080/23308249.2023.2257802#d1e441

While in general I think agnosticism is a good idea in hot-button debates, the facts that 1) Brian Key's arguments and the debates I've seen seem really convincing to me, and 2) there appear to be enormous methodological issues and logical fallacies on the pro-pain side, along with ad hominem attacks and "cancel culture"-type practices, make me more confident that Key's position is on the right track. I do think it's important to defend researchers who take unpopular positions in defense of good ideas and are unjustly attacked for it, and that's partly why I am emphatic in my support of Key.


This is interesting. I shall look at this paper, thanks for sharing it.

I hope it has an answer to the question I often ask myself, which is "Why would evolution not include something as useful as pain in other animals?"


I find Key's arguments quite lame (https://benthams.substack.com/p/betting-on-ubiquitous-pain), as, it seems, do most of the relevant experts. Yes, the fish brain regions responsible for pain aren't the same as the human brain regions responsible for pain, but that doesn't mean that fish don't feel pain. Fish eyes are also different from human eyes, and fish methods of locomotion are different from human methods of locomotion, but that doesn't mean fish can't see or move. Additionally, we can generalize the other way: in every known case of a creature with a brain and nociceptors, it feels pain. So one can conclude, by reasoning similar to Key's, that all you need for pain is nociceptors and a brain that processes information from them, which fish and shrimp have.

There's an abundance of evidence that fish and shrimp are conscious that I lay out in the above post. Fish respond to painkillers, are distracted by pain so that they don't focus on other things, remember and avoid places where they were hurt, actively seek out painkillers, respond behaviorally as if they were in pain by exhibiting distress and rubbing the injured area, have physiological responses characteristic of pain, avoid using injured areas, and make complex tradeoffs between pain and gain.

As for shrimp, they:

* Make tradeoffs between pain and gain.
* Nurse their wounds.
* Respond to anesthetic.
* Self-administer drugs.
* Prioritize pain response over other behavior.
* Have physiological responses to pain, like increased heart rate.
* React to pain, trying to get away.
* Remember crabs they fought for up to four days.
* Remember and avoid areas where they suffered pain.
* Integrate lots of different kinds of information into one broad picture, the way consciousness does.
* Engage in future planning.
* Display stress and anxiety.
* Have individual personality differences.
* Respond to anti-anxiety medication.
* Seem to become exhausted.

All this seems pretty decisive, especially when the alternative is that there was a radical shift in consciousness partway through the vertebrate lineage. As Culum Brown notes:

"However, this position misses the fundamental argument for fish feeling pain, which is founded in part on the conservative nature of vertebrate evolution. If the rest of the vertebrates feel pain, then the most parsimonious hypothesis is that they do so because pain evolved deep in the evolutionary history of vertebrates (perhaps even before teleosts). Rather than to suppose that pain spontaneously arose somewhere else in the vertebrate lineage (e.g., between amphibians and reptiles), it is more parsimonious to infer that fish feel pain for the same reasons the rest of the vertebrates do. (By the way, from a phylogenetic perspective, all tetrapods are bony fishes!)"

https://www.wellbeingintlstudiesrepository.org/cgi/viewcontent.cgi?article=1029&context=animsent


Since you're linking to the debate thread, I assume you're familiar with the response from Key, which is to say that it's a mischaracterization of his argument to say that he believes fish must have a human cortex to feel pain. Rather, his argument is that a system must have some form of self-monitoring/feedback in order to be aware of itself/aware of injuries. Humans are simply used as a model system to understand how such a self-monitoring system could work. Or as he puts it:

"However, my thesis was never this simple. I clearly sought to define the neural substrates (both anatomical and physiological) that are prerequisites for the feeling of pain in humans. At no time did I say that either these structures or physiological processes were sufficient for pain, and present only in humans. It seems that the field was unprepared for the use of humans as a model system to inform on function in other animals. Interestingly, Mather warned against relying on what humans report when studying humans. Seth claimed that my thesis was “easily challenged by a wealth of evidence from non-mammalian species like birds.” This evidence related to the complex behaviour displayed by these animals. While this statement, by itself, is not very compelling, it does relate to points raised by other commentators who suggested that birds may possess cortex-like neural architecture (Brown; Striedter; Dinets). Even though these commentators didn’t realise it, this is the very point I was arguing for in my paper. I did clearly articulate “only vertebrate nervous systems possessing all of the following neuroanatomical features are capable of feeling pain.” I proposed that within vertebrates, only those animals with such features are capable — at least — of feeling pain (i.e., those features are necessary but not sufficient). "

Re: behaviors, I have to note two things. One, many of the references you cite in your post do not seem to be reliable. For example, Lynne Sneddon's work on trout has failed to replicate, and the fact that pro-pain researchers consistently fail to make note of this and other failures of replication casts doubt on their project. See this link and search "Sneddon" for evidence of this. This is partly why I find claims that the majority of researchers in this controversial area agree on fish pain to be an unconvincing reason to support the pro-pain side. There are clearly a lot of researchers who, for whatever reason, are not using best practices in support of this idea.

https://www.tandfonline.com/doi/full/10.1080/23308249.2023.2257802

Second, the behavioral criteria cited by Jonathan Birch and others prove too much, in the sense that if someone accepted the behavioral criteria for sentience they would likely have to extend sentience to virtually all organisms. For example, slime molds and protozoa show nociceptive responses and primitive forms of "learning" as well. Perhaps you don't find it a stretch to think those organisms are also sentient, but you can understand why most people would think they aren't, and why most people would probably think money sent to charities to alleviate pain in, say, sea lice or slime molds is not a good use of funds.


The fact that one result by Sneddon failed to replicate doesn't mean her stuff is broadly unreliable, and most of the behavioral things that I'm citing have been verified on lots of different occasions, not just from one lone study.

Second, it isn't true that the Birch criteria prove too much. The kind of aversive learning to avoid particular areas, integrating pain signals into a brain which detects nociceptive activity, response to analgesia, and so on aren't met by sea lice.

There are no serious and credible people who think sea lice are conscious, while the same isn't true of shrimp and fish.

Regarding the point in the debate thread, I have a few things to say about it:

1) It's not at all obvious whether we can identify the sorts of functional characteristics needed for consciousness. We should be very dubious about arguments of the form "X is needed for consciousness in humans; the sorts of things needed for consciousness in other creatures are probably broadly similar; they are similar in such-and-such ways; and animals don't possess them."

2) It proves too much. If right, it would mean rays, which pass the mirror test, and octopi aren't conscious.

3) It doesn't address the case of rats and people with their neocortex removed. In the case of rats, they continue to behave mostly normally.


That was just one example from Sneddon, but of course there are many other failures documented in the link I sent.

As to the Birch criteria, I'm simply referencing the work of his critics who point out that Birch arbitrarily applies his criteria to some animal life but not others. I think everybody will have a category of "obviously not conscious", but it's problematic that the criteria he's using could render sea lice conscious. Again, I'll just quote the article I sent earlier.

"The criteria advocated by Birch et al. (Citation2021) to ascribe sentience to animals are being applied arbitrarily. For example, Birch et al. (Citation2021) argue, based on their criteria, that all cephalopods and decapod crustaceans should now be considered “sentient beings”, yet within the Mollusca they do not extend their analyses to other groups such as bivalves (e.g., scallops, oysters) and gastropods (e.g., abalone, snails). These taxa also react in response to visual, chemical, noxious and environmental cues (e.g., Barnhart et al. Citation2008; Wesołowska and Wesołowski Citation2014; Siemann et al. Citation2015; Hochner and Glanzman Citation2016; Walters Citation2018b, Citation2022) including alleged “avoidance learning” (Selbach et al. Citation2022), and share similar neuroanatomical networks to the Cephalopoda. Moreover, within the Crustacea, members of the Copepoda have similar physiology and neurological networks to the Decapoda and also react in response to visual, chemical, noxious and environmental cues. Based on the criteria of Birch et al. (Citation2021), copepods, bivalves, and gastropods would appear to satisfy at least three or four of their eight criteria with reasonably high certainty, leading to a potentially erroneous conclusion of “some evidence” or “substantial evidence” of sentience in these groups (Walters Citation2022).

Perhaps these criteria are being applied arbitrarily because taking their consistent application to its logical conclusion would be extremely problematic. For example, sea lice (Lepeophtheirus spp., Caligus spp.) are ectoparasitic copepods that cost the global salmon farming industry hundreds of millions of dollars annually to control (Abolofia 2017; Stene et al. Citation2022). This cost is incurred in large part to satisfy animal welfare concerns over the impact of lice infestation on the welfare of wild and cultured salmon (Macaulay et al. Citation2022), but without any regard for the impact of the treatments on the welfare of the sea lice (Moccia et al. Citation2020). Similarly, extending the same sentience analysis to bivalve molluscs could result in bans on the consumption of fresh, live oysters."

Re: people with cortices removed, rats, etc: I'll note that these cases have been addressed in the literature as well, and it's an open question as to whether rays are conscious. We can't simply assume that they are if whether or not they are is part of the debate.


Oh, one thing I forgot to mention: when Key broadens the criteria by saying merely that creatures need to be able to detect pain from some bit of their body, that probably includes fish and shrimp, who rub the injured areas.

Merely responding aversively isn't enough. Creatures like sea lice and slime molds don't have brains connected to nociceptors (or brains at all), they don't respond to anesthetic, they don't remember areas where they were harmed and then avoid them in the future, and so on.

I'm way more confident that things passing the mirror test and being very smart are conscious than I am about Key's stuff. Like, the reason to think dogs are conscious isn't mostly that we've looked at their brains, but rather their behavior. But octopi and rays are similar.

Seems like there isn't anything good to say about people and rats with cortices removed other than just biting the bullet and saying they're not conscious.


If it *requires* "advanced neural circuitry" does that mean we *already* know right now that intelligent aliens with quite different biology from humans couldn't feel pain? That seems a bit suspicious to me. Or is there some way Key avoids that conclusion?

I'd also say that "I should assign non-extreme credences to both sides of a scientific dispute when serious scientists disagree and I'm not an expert" seems quite different to me than "I should care about the minuscule chance missionaries are right". One involves deference to real expertise and the other doesn't.


Of course, there are different neural structures that could support pain experience. But Key's point is that we know pain experience in humans and mammals requires some sort of feedback/monitoring system, and the fish/invertebrate brain simply doesn't have that feature in any form. You can always posit that panpsychism is true or there's some mysterious third thing that supports pain experience, but that is purely speculative and I don't think that's a strong reason to actually believe in fish pain. Absence of evidence is evidence of absence. Re: missionaries, I am sure there are very intelligent Christian apologists, and I'm not an expert in philosophy of religion. It's certainly an active debate. But I don't think that means I have to be agnostic about whether the missionaries are right. I shouldn't feel obligated to support them on the off chance I'm wrong about my atheist beliefs.


A couple of disanalogies that seem relevant to me:

* Christian apologists seem to be engaged in motivated reasoning; I don't see any such reason to disregard the expertise of scientists and philosophers of mind who believe in fish pain.

* The "missionaries are right" prospect would radically upend my entire worldview, whereas the choice between "fish feel pain" vs "fish can't feel pain" seems pretty independent of my other beliefs: from my current perspective, either could easily turn out to be true.

That said, I hope you're right - as you say, it would be great news for the universe!


I think that's fair, but I'd also question point 1. I don't know if it's true that the debate about fish/invertebrate pain is entirely objective and impersonal for the people engaged in it. I do get the sense there's a lot of emotional investment in these ideas (to be fair, on both sides!). But that's just my personal impression - it's surprising how heated the debate can be.


Fair enough; if the argument is based on *functional* details, it may well escape objections about aliens. (That said, I *personally* think that multiple realizability arguments involving aliens also have bite against views on which the fine-grained, detailed functional organization of the human brain is necessary for consciousness, but I recognize lots of sensible people don't believe that.)

However, I agree with Richard that there is a big difference between hard scientists and religious apologists in terms of how much we should defer. I am less keen on deferring to philosophers, though it is probably often better than not doing so. In addition to the worries about motivated reasoning that Richard raises, science just has a much better track record at producing consensus and new knowledge than philosophy of religion or philosophy in general. Though I guess you could say that "science" is too broad and heterogeneous a category to work with here, and that it's only "hard" science that is usually very reliable, and that does not include stuff about consciousness.

I also find it implausible that the scientists who disagree with Key about fish pain are primarily motivated by niche philosophical theories like panpsychism*. When I googled "can fish feel pain", one of the first things I found was a link on Wikipedia to a paper simply disputing whether Key is correct that fish brains lack the relevant structures. Now, I don't think you should just completely defer to what scientists believe here. If you've read the literature and think Key has much the better argument, I think it's fine to move your credence in "fish have pain" somewhat in the *direction* of 0, relative to where it'd be if you just looked at how many experts go which way. But I do think it is probably unreasonable to be more than, say, 90-95% confident that Key is right. And I think even that level of doubt *might* make donating to help fish look good in expected-value terms. (Or it might not; obviously you'd have to crunch the numbers.) I don't think we should stop using our best theory of reasoning under uncertainty here just because there are some weird philosophical thought experiments involving extremely implausible claims and unimaginably vast pay-offs where it seems wrong to pick the option with the highest expected value. "Key is just wrong about either what functions are required for conscious pain or what functions fish brains can perform" doesn't seem anything like the sort of out-there hypothesis that shows up in Pascal's Wager-type thought experiments (https://nickbostrom.com/papers/pascal.pdf).

I've no *personal* opinion on whether Key is right, by the way; I've never read the literature on this.

*For what it's worth, I have little personal sympathy with such theories. Indeed, I actually find it hard to write anything about consciousness that doesn't assume physicalism, even when I feel I *ought* to be giving some non-negligible weight to dualism being true, given that a significant minority of philosophers of mind endorse it. I'm a functionalist reductionist realist about consciousness. I also don't think it's *crazy* to think that almost all non-human animals lack consciousness altogether, because they can't introspect, though that isn't ultimately my view. But I don't think you should be more than like 95% certain in any view here.
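
For concreteness, the "crunch the numbers" step mentioned above might look like the sketch below; every input is a placeholder I invented for illustration, not an estimate anyone has defended:

```python
# Back-of-envelope expected-value check. Every number below is an
# invented placeholder, not a real estimate from any source.
p_fish_feel_pain = 0.10                 # residual credence that Key is wrong
pain_hours_averted_per_dollar = 1_000   # hypothetical charity effectiveness
moral_weight_vs_human = 0.01            # hypothetical discount for fish experience

expected_value = p_fish_feel_pain * pain_hours_averted_per_dollar * moral_weight_vs_human
print(expected_value)  # 1.0 "human-equivalent" pain-hour averted per dollar
```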


I think that's fair. My experience here is colored by my interest in contemporary feminist debates about (for instance) whether it's appropriate to give children puberty blockers if they identify as transgender, or whether early fetuses are conscious. These sorts of debates are all scientific ones, but wherever you land on these issues, you probably agree that at least some scientists engaged in these debates are engaged in motivated reasoning.

Because of that background, I'm inclined to think that scientists engaged in controversial debates are less objective and more emotionally invested than they might think themselves to be. I was very interested in Koko (the ape that could supposedly use sign language) when I was a kid, and I was stunned when I later learned that the scientists invested in Koko had simply deceived themselves and others into thinking she was actually using language. That was my first inkling that scientists who care deeply about the creatures they study might become emotionally invested in a false anthropomorphism about those animals. I admit that when I started reading the controversy over fish pain, I got the impression some of the defenders of fish pain were similarly emotionally invested in the topic. The case Key made in his essays was so straightforward and convincing, and the attempts to undermine his arguments just didn't land for me. Yet the debate was so heated and the attacks on Key so ferocious that it began to feel like something other than pure academic disagreement was motivating some of the researchers. I wouldn't say that the researchers were motivated by a false belief in panpsychism or some other niche theory, but I admit I do wonder if anthropomorphism was playing a role.


I'd add that when I glanced at Key's paper, it had a gratuitously inflammatory comparison of people who urge caution in the face of uncertainty about fish pain to anti-vaxxers who think vaccines cause autism. That says nothing about the quality of his argument, but it's certainly suggestive of bias:

"he idea that it is more benevolent to assume that fish feel pain, rather than not

feel pain, has emerged as one position of compromise in the debate on fish consciousness.

However, accepting such an assumption at “face value” in biology can lead to devastating

consequences. I would like to highlight this concept using the recent example of how a

scientific research article was published that purportedly linked measles-mumps-rubella

(MMR) vaccination causally to autism. Although this link was subsequently disproven, many people continued to accept at “face value” the causal association between MMR

vaccination and the development of autism in children (Brown et al., 2012). This caused

parents not to have their children vaccinated, and it subsequently led to a public health

crisis (Flaherty, 2011). Thus, while initially accepting the idea that MMR vaccination

causes autism may be considered a safe way to proceed (even if it is not true), it can cause

catastrophic effects."


Having skimmed Key's paper a bit more, I think it also probably faces at least some multiple realizability issues. He seems to be suggesting that "recurrent processing", where information moves "backward" as well as "forward" in a network, is needed for consciousness, simply because it is characteristic of conscious experiences in humans. Tentatively, I doubt such processing is literally nomologically necessary for a system to be human-level intelligent. And I think if we met aliens that were smart and behaved roughly like us but had only feed-forward processing, we'd think they were conscious. Ditto if they behaved like dogs, pigs, whales, etc., and in that case I'm even more skeptical feedback is nomologically necessary. I am told by a work colleague who is an ML PhD that current neural nets in AI are feedforward only, and they are already reasonably smart in some (not all) ways, and it's not clear that lack of feedback is a key limitation that prevents them from getting much smarter.

But to be fair, this is not the only thing Key claims is necessary for pain that fish brains lack, and he only has to be right once. The argument that they lack enough integration for conscious pain seems more plausible to me.


That sounds reasonable. Bias *can* run the other way though: sometimes people think being anti-sentimental is what makes them a real, proper scientist (this bias is often connected to a preference for "masculine" over "feminine" things also, I think, although I am pretty skeptical when people lean too much on that to dismiss "masculine" thinking styles.)


That's a good point. I too have an interest here: I have tropical fish, and I would much rather Key be correct than his opponents! I much prefer the idea that my fish are elegant pieces of natural machinery to the idea that trillions of fish just like mine have brief lives of unreflective agony. That would make the universe, and life in general, seem far worse.

That said, I will reiterate that I think anthropomorphic bias typically goes the other way, with people preferring to believe that animals they care about are *more* similar to humans in their mental states, not less.


"Quick answer: rough estimates are better than no estimates."

And Fermi-estimated sensitivity analysis applied to those rough estimates is even better :)

I suspect that refusing to quantify might be an instance of the more general phenomenon of refusing to feel uncomfortable, but until I have a data set that could allow me to reject the null hypothesis you should ignore my suspicion.
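
A minimal sketch of what that could look like: vary each rough input over a plausible range and see which one actually drives the bottom line. All ranges below are invented placeholders:

```python
# One-at-a-time sensitivity analysis on a Fermi estimate.
# All ranges below are invented placeholders, not real figures.
inputs = {                               # (low, best guess, high)
    "animals_helped_per_dollar": (100, 1_000, 10_000),
    "p_sentient":                (0.05, 0.2, 0.5),
    "suffering_averted_each":    (0.1, 1.0, 5.0),
}

def estimate(values):
    result = 1.0
    for v in values.values():
        result *= v
    return result

best = estimate({name: mid for name, (low, mid, high) in inputs.items()})
for name, (low, mid, high) in inputs.items():
    for bound in (low, high):
        trial = {n: m for n, (l, m, h) in inputs.items()}
        trial[name] = bound
        print(f"{name}={bound}: {estimate(trial):.0f} (best guess {best:.0f})")
```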


1. The use of numbers isn't, by itself, a substantive ethical commitment. Numbers are useful for succinctly describing a complete view of exactly which tradeoffs to make. When I say I favor saving the greater number in rescue cases... this is just a summary of my rescue dispositions. (I'd rescue A and B over C, A and C over B, B and C over A, and so on.) It implies nothing beyond that.

2. In some cases, the structure of the *real numbers* may be limiting. It may prevent you from summarizing your view. In the universally beloved book Winning Ways for Your Mathematical Plays, the authors invent a mathematical system of "nimbers" (after the game Nim), which includes nimbers like ∗ and objects like ↑, which describe the value of different game positions. The nimbers interact in ways that are not possible only using standard numbers/operations. I offer this as inspiration to those who feel unable to express their ethical ideology in a numerical framework. Rather than call your opponents "number fetishists", why not devise a framework that succinctly summarizes your view?
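
As a toy sketch of the "numbers summarize dispositions" point (my own construction, purely illustrative): the one-line rule "rescue the larger group" compresses the whole table of pairwise rescue choices.

```python
from itertools import combinations

# Rescue dispositions over groups drawn from {A, B, C}: for any two
# disjoint groups, rescue the larger. Writing out every pairwise choice
# is tedious; "maximize the count" states the same table in one rule.
people = ["A", "B", "C"]
groups = [set(c) for size in (1, 2) for c in combinations(people, size)]

for g1 in groups:
    for g2 in groups:
        if g1 & g2:
            continue  # only disjoint groups are genuine alternatives
        if len(g1) > len(g2):
            print(f"rescue {sorted(g1)} over {sorted(g2)}")
```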


The surreal numbers that Conway and Guy discuss in Winning Ways are very flexible, but that also causes some problems for their use. With the real numbers, there are some physical scenarios that we can represent meaningfully on a ratio scale, using just one arbitrary choice of unit, and others that we can represent meaningfully on an interval scale, with two arbitrary choices of unit and zero point. There are also situations we can represent on an ordinal scale, where there are infinitely many arbitrary choices we make.

With the hyperreals or surreals, there are always uncountably many arbitrary choices, so there is a limit to how well meaning can be expressed with them.

But I do think it’s valuable to consider more ways of possibly representing the kinds of tradeoffs and considerations that matter - no need to force things into the real numbers if that isn’t the nature of the things, and we should be careful about which aspects we read off from the numerical representation. But none of that is an excuse to avoid thinking about what the real tradeoffs are that we want to consider - it’s just a question of how to most effectively think about them.
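
To make the scale-type distinction concrete, here is a small sketch (standard measurement-theory fare, with made-up quantities): a claim is meaningful on a scale only if it survives the scale's admissible transformations.

```python
# A numerical claim is meaningful on a scale only if it survives the
# scale's admissible transformations. Toy quantities, invented.

# Mass is on a ratio scale (admissible: x -> k*x), so ratios survive:
a_kg, b_kg = 2.0, 4.0
a_lb, b_lb = a_kg * 2.2046, b_kg * 2.2046
assert b_kg / a_kg == b_lb / a_lb       # "b is twice a" is meaningful

# Temperature is on an interval scale (admissible: x -> a*x + b),
# so ratios do NOT survive a change of units:
t1_c, t2_c = 10.0, 20.0
t1_f, t2_f = t1_c * 9 / 5 + 32, t2_c * 9 / 5 + 32
assert t2_c / t1_c != t2_f / t1_f       # "t2 is twice t1" is not meaningful
```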


Maybe "universally beloved" is too strong? Or maybe you're just saying that hyperreals or surreal numbers aren't good for describing a determinate and unambiguous ethical theory. But it sounds like you agree with the broader point?

edit: Oh, I see. I didn't mean to suggest that the Winning Ways framework in particular would actually be helpful for thinking about ethics. Just thought it was cool that they devised a framework which was helpful for modeling what they cared about, even though it required some innovation... and hope that this general attitude could be replicated by others.


My post on Shrimp consciousness, answering Matthew:

https://forum.effectivealtruism.org/posts/3nLDxEhJwqBEtgwJc/arthropod-non-sentience


Great points. Vox's Future Perfect was annoyingly guilty of the "can't compare different types of doing good so go with vibes" fallacy in an advice column a few months ago, and I wrote a rebuttal here along similar lines: https://open.substack.com/pub/exasperatedalien/p/optimization-is-integrity?utm_source=share&utm_medium=android&r=ksl93


Matthew and I had this conversation about shrimp consciousness:

https://benthams.substack.com/p/the-best-charity-isnt-what-you-think/comment/77329731?r=biy76&utm_medium=ios



I largely like this article and agree with it. Devil's advocate time:

One contention (on only some of this, not all of it) would be that these people might just reasonably be doing moral philosophy very differently than you. While you might want to focus on broad and general intuitions (pain is bad, pleasure is good) other people might want to be more particularistic (in a certain case, x feels like it’s more pressing, and I put less weight on the general intuition).

I'm not sure that there is a good reason to think that either the generalist or particularist method is better (both succumb to EDAs, for example).

I will certainly agree that the conclusion should be something more like “let’s hedge against these types of intuitions vs these - perhaps a particular case being very unintuitive is a cost (idk how to operationalize this, but I can take my best guess)” instead of “you can’t PROVE your point” or “that one feels bad, so I’m just gonna go with my gut always.”


Also, while you call it uncertainty bias, one can just say that they are risk-averse in ethics. The EV calculation therefore fails to account for a risk assessment that can reasonably impact one's decisions (as many economists and philosophers would agree). I'm not sure that this is at all irrational.


Uncertainty bias (minimizing the very specific "risk" of wasting one's efforts/resources) is very different from "risk aversion" in the sense of being especially averse to *bad* outcomes.

Example: risk-averse people buy insurance to hedge against the risk of worst-case outcomes, but it's quite likely that you'll never need the insurance, making the act one that has a high "risk" of being wasteful (making no actual difference).

I take this to show that we shouldn't be specifically averse to the "risk" of wasted effort/resources. If you're going to be risk-averse, it should be in the sense of being averse to especially bad outcomes. And that should probably make one especially motivated to donate to "speculative" causes, since if hundreds of billions of shrimp are massively suffering (or AI wipes out humanity), that's an extremely bad outcome.
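
A minimal numeric sketch of that distinction (all figures invented): insurance is very likely to be "wasted" and can still be the better choice for a risk-averse agent.

```python
import math

# Invented figures: $100 premium against a 1% chance of a $9,000 loss,
# starting from $10,000 wealth. Risk aversion modeled with log utility.
wealth, premium, p_loss, loss = 10_000, 100, 0.01, 9_000

ev_insure = wealth - premium                    # $9,900 for sure
ev_skip = wealth - p_loss * loss                # $9,910: skipping has higher EV
eu_insure = math.log(wealth - premium)
eu_skip = (1 - p_loss) * math.log(wealth) + p_loss * math.log(wealth - loss)

print(ev_skip > ev_insure)    # True: the premium is "wasted" 99% of the time
print(eu_insure > eu_skip)    # True: the risk-averse agent still buys insurance
```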


Example:

Certainty of $100, or a 50% chance of $0 and a 50% chance of $201. I think risk-averse people will often choose the first (and this is arguably rational) despite it not maximizing EV. It does seem like they are being genuinely risk-averse - though maybe you can reasonably argue that there is some implicit number for how risk-averse they are, and they are not being up front about it (which I would also agree with).
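
Concretely, with a square-root utility function chosen purely for illustration:

```python
# Sure $100 vs. a 50/50 gamble on $0 or $201, with sqrt utility
# (a stand-in for any concave, risk-averse utility function).
ev_sure, ev_gamble = 100, 0.5 * 0 + 0.5 * 201            # 100 vs 100.5
eu_sure = 100 ** 0.5                                     # 10.0
eu_gamble = 0.5 * 0 ** 0.5 + 0.5 * 201 ** 0.5            # about 7.09
print(ev_gamble > ev_sure, eu_sure > eu_gamble)          # True True
```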


Sure, there are some cases where risk aversion and certainty bias coincide. I'm just pointing out that aiming for certain impact and aiming to avoid worst-case outcomes are very different aims, as shown by the fact that they *can* come apart.


I have no problem with risk aversion (as long as it's quantified :) ). At the metacognitive level it can provide a useful counter-heuristic to various forms of shiny-object syndrome ("I don't like *this*, therefore something else will be better").

YMMV, but my broad sense is that uncertainty avoidance (people's discomfort with uncertainty, where the uncertainty is still there regardless of their discomfort :), and distortion of their thinking processes as a result) carries a pretty high cost in terms of *highly* nonoptimal decisionmaking. My sample set might well be biased.


As a follow-up, I wanted to bring attention to this article by major figures in the debate over fish pain. (You can google the authors and see that they aren't quacks, but are highly respected figures in the field!)

https://www.tandfonline.com/doi/full/10.1080/23308249.2023.2257802

They point out, first, that the debate over fish pain is not as objective and fair-minded as it may seem. They bring attention to noxious practices in the debate - there are plenty of ad hominem attacks and cancel-culture type practices going on, as well as generally bad research practices. This is partly why I'm not as moved as others by the statement that widescale debate in the field means that outsiders should not draw conclusions over which side is likely to be correct.

Perhaps more significantly, they bring up that many advocates of shrimp/fish pain push practices that would have significant negative impacts on humans and large-scale ecosystems. Yet these negative consequences are rarely brought up by the "pain proponents". For example, consider the move to ban eyestalk ablation in shrimp. The authors point out:

"Eyestalk ablation to boost larval production from penaeid shrimp broodstock has also emerged as a

welfare issue in recent years. Diarte-Plata et al. (2012) suggested that ablation was “painful” based on tail flicking and leg or antennal rubbing as welfare indicators. Neither tail flicking nor rubbing are validated or reliable pain indicators in crustaceans, however, as shown by Puri and Faulkes (2010) in the case of rubbing and Weineck et al. (2018) who demonstrated that tail flicking is a reflex that also occurs in transected shrimp abdomens separated from the head. Nevertheless, eyestalk ablation affects several other easily quantifiable functional welfare metrics such as broodstock survival and larval quality; hence alternatives to ablation are used by the shrimp aquaculture industry when they are available (Magaña-Gallegos et al. 2018).

...

Demands by certain interest groups to ban eyestalk ablation in all shrimp farming would result in the use of ten to twenty times more P. monodon broodstock to meet industry needs for post larvae. This would immediately have the unintended consequence of requiring many more P. monodon broodstock, another “lose-lose” situation as it conflicts with one of the basic 3Rs welfare principles of reduction of numbers of animals used. Such a move would also increase fishing pressure on wild stocks, while the lack of reliable larval supply would threaten entire aquaculture industries in countries where P.Rv n F Sn & Aquuu vannamei is not available, threatening livelihoods and regional and/or global food security."

So it's really worth carefully examining the claims of the "pain proponents", as the practices they suggest might have really major terrible consequences for the world as a whole, consequences that are not often brought up by these advocates.


How about this for quantification: spending billions of dollars to prevent shrimp suffering in the manner discussed here will have ~0 impact on the likelihood of survival of my family or the human race or earth-based biological life or intelligent life or many other relevant categories you could imagine.


I think they have a fairly low cap on how much extra funding they need (certainly not billions). But I take your point that one could reasonably prioritize x-risk over mere suffering-reduction (regardless of species).


I still can't quite buy the aggregationism necessary to make these assertions work. If a billion people stub their toes, there is no entity that feels the pain of the sum total of a billion stubbed toes. There is only one toe being stubbed a billion separate times. The vantage point that imagines the aggregate suffering produced by this is...what exactly?

How is it not just an imaginative leap - a result of a single human mind trying to *imagine* the agony of aggregated suffering - (a cognitive invention, since again, there is no consciousness experiencing the aggregated effect) - necessarily processed as a subjective emotion of empathic horror, and thus taking on the correlating extreme moral weight?

Claude helped break it down thusly:

When we talk about "a billion people stubbing their toes," what actually exists?

* There are a billion individual experiences.
* There is no single entity that experiences the combined pain.
* It's just the same type of minor event happening separately many times.

What exactly is the perspective from which we're summing up this suffering? When we say "the total suffering is X," what are we really referring to?

The "total suffering" might just be:

* A psychological construct created by one person trying to imagine it all at once.
* An emotional reaction of horror at trying to comprehend the scale.
* Not something that exists as an actual experience anywhere in reality.

If no consciousness ever experiences the "total suffering," why should we treat it as morally different from a single toe stub? What makes the billion separate incidents combine into something morally weightier?

I'm open to the possibility that I'm seriously misunderstanding something here.


The fallacy is supposing that we could never have reason to care about more than a "single entity". But of course we should care about many more than just one. Genocide is worse than murder, even though no single entity is murdered more than once. However bad one murder is, two murders are twice as bad. Similarly, painfully suffocating a billion animals to death is a billion times worse than so mistreating just one.


It seems like the same argument, if it works, can be used to prove that a trillion people being tortured is no worse than one. Seems implausible to me.

[Comment deleted, Nov 19]

I agree with it. (I take as a premise that pain and suffering are bad. I don't take this normative truth to logically follow from any merely descriptive truths.)
