22 Comments
May 20 · Liked by Richard Y Chappell

I think my main disagreement with Singer is some of the stuff he has said about Israel-Palestine, mostly recent stuff, and some of his comments on capitalism/socialism. It was very disappointing to see the awful "human shields" argument from him. But overall, he, along with Chomsky, has probably been the most influential thinker in how I think about the world. He's the first philosopher I ever read, way back in HS. It's weird to see him retire, but at least he's got a podcast and Substack.

On your third disagreement with him, it seems like personal consumption choices here matter a lot - if you eat less meat, on average, fewer animals are tortured and killed. A lot more should be done to fight animal suffering, but at the very least you shouldn't pay for something to experience extreme suffering and death. And, for most people, I would suspect this is actually the easiest thing they could do. If you're living paycheck to paycheck, it's rather hard to donate. If you do donate your income, you can donate to human charities instead. You could also become an activist, but this requires lots of effort and time. Going vegan might require some initial effort, but overall it's almost trivial. Sometimes I have to quickly glance at ingredients on food labels, but you probably should be doing that anyway. It seems like the absolute bare minimum anyone should do, not some major effort or commitment.

I also think one important aspect and benefit of veganism is the social effects. Most people get their ethics from social norms, not through careful reflection and reasoning. We're never going to achieve the desired goals unless non-vegan behavior becomes socially unacceptable. And vegan social norms are rather hard to encourage if you're not a vegan yourself.

Author · May 24 (edited)

Fair point re: veganism and social effects!

What's awful about the "human shields" argument? It seems like there are complex issues around rewarding hostage-taking. E.g., it seems pretty straightforward that we'd all be better off (ex ante) with a strict policy of never paying ransoms to kidnappers. Why is the "human shields" argument so much worse than the "don't pay ransoms to kidnappers" argument?

May 28 (edited) · Liked by Richard Y Chappell

Well, for one, I think it's empirically confused. Past investigations of human shields have found the accusation to be mostly meritless. More importantly, it's contrary to the current reality of how targets are selected:

https://www.972mag.com/mass-assassination-factory-israel-calculated-bombing-gaza/

https://www.972mag.com/lavender-ai-israeli-army-gaza/

The current massive death toll of civilians has little to do with human shields, and more to do with killing "operatives" while they're in private homes, along with their families and others. They target with AI, and even for low-level operatives they allow 15-20 civilian deaths. For higher-level operatives, it's up to at least 300.

The typical accusation, and the one Singer used, is that "Hamas locates its military sites in residential areas," rather than typical hostage-taking. But this isn't unusual. Both Israel and the U.S. have military bases near civilians. Gaza is far denser, and it would be much harder for Hamas to avoid this. The Kirya base in Israel is surrounded by civilian infrastructure, with military personnel frequenting much of it. Would it be legitimate to blow up a nearby bus or mall that contains some military personnel? If Hamas blew up the base and surrounding area, killing 20x as many civilians as combatants, most people would rightly view it as horrific.

So, I don't think the hostage-taking analogy holds. But even if we accept the premise that Hamas is doing the equivalent of hostage-taking, Israel's actions would still be abhorrent. If some criminals take hostages, say in a hospital or a school, whatever else you think about the correct action, it's clear that you shouldn't just start blasting and killing the hostages and criminals. If a criminal takes a hostage, it's bizarre to claim killing the hostage is worth it if it means taking out the hostage-taker. Typically, it's not police policy to just shoot through a civilian to get the bad guy. You also shouldn't drop a bomb on the hospital or school, killing everyone inside. I think most people would be bewildered if the response in Die Hard was to flatten Nakatomi Plaza with missiles. Your life shouldn't be forfeit because you've been taken hostage, nor should it be forfeit because a hostage was taken near you. Israel isn't just refusing to pay a ransom - they're deciding to just kill everyone: hostage, nearby civilians, and criminals.

In any case, although I disagree with him using the human shields argument, and much of what he said initially in an article and some tweets, Singer does at least recognize what is happening in Gaza is not justifiable: https://www.project-syndicate.org/commentary/hamas-attack-and-gaza-death-toll-unjustified-by-peter-singer-2024-01

Author

Thanks for the explanation!


Singer's recent remarks about the genocide in Gaza are truly horrific. He is very dismissive and seems like he can't be bothered to grapple with the enormity of it. Appealing to human shields is plain lazy and irresponsible. The unavoidable upshot is that he doesn't appear to take the wellbeing of Palestinians very seriously at all.

May 20 · Liked by Richard Y Chappell

Regarding the .01% AGI argument, I think you are making some contentious assumptions there. Basically, I'd argue that argument is wrong for the same reasons Pascal's wager fails.

I mean, if the world is filled with those kinds of risks (be it nuclear war, bioweapons, secular decline, etc etc) it becomes much less clear that attention to AGI doesn't take away from efforts to reduce those other risks.

Also, for the AGI argument to have force you need to think that working on AGI risk is relatively likely to reduce rather than increase that risk and that it won't increase other risks.

For instance, my take on AGI is basically somewhat like my take on law enforcement use of facial recognition. It was always going to happen (if technically possible), and the choice we had was whether to handwring about it so that it was sold by the least responsible company (Clearview) or to encourage somewhat more responsible and technically proficient companies (Amazon/Google) to offer it.

Basically, I don't think you can avoid the fact that public concern about AGI will create pressure for western countries to regulate and for prestigious computer scientists not to work on it and that seems like a very bad thing. So even if there is a serious risk there we may want to STFU about it if that makes the outcome plausibly worse.

Also, I fear AGI concerns trade off against taking other concerns about AI seriously.

Author

That seems reasonable! I was more just puzzled b/c Singer's take seemed to be more along the lines of thinking that decades-distant problems should automatically be deprioritized compared to traditional global health charities.

May 20 · Liked by Richard Y Chappell

As a theoretical matter I agree, but as a practical matter I have a lot of sympathy for Singer's attitude here.

In particular, I fear that there is sufficient play in the joints of longtermist or AGI concerns to let people tell themselves they are doing something really altruistic while getting to do whatever it is they find glorifying or pleasurable.

Not that it's not good for people to enjoy altruism, it clearly is, but I feel like a lot of the stuff around longtermism and AGI ends up kind of being: see, the things that I think are cool and which make me seem really important are actually super altruistically important.

This isn't to say that there isn't value in these things, but we are ultimately allocating limited social approval/concern to incentivize certain activities and we don't want to waste it on things that people would be doing anyway or where the possibilities are so large that it's wasteful.

Author

I don't know. I don't personally have anything to gain from a greater focus on AI (it's not my area), but it does seem to me sufficiently transformative and risky-seeming that I definitely want more safety-oriented people to be thinking carefully about it!

Global catastrophic risks in general seem severely under-attended to. (Pandemic prevention may be the most obvious. But I also have more of a professional interest in that.) So I think longtermism offers a helpful corrective here.

I take the point that, because these matters involve tough judgment calls, that introduces more room for bias. (Similar things can of course be said of non-utilitarian priorities -- esp. those of the "systemic change" critique of EA -- but for some reason rarely are.) It's worth being aware of that, and attempting to counteract it. But I don't think wholesale dismissal of longtermist concerns (and everything else that requires tough judgment calls?) is the right way to go.

May 20 · Liked by Richard Y Chappell

I didn't necessarily mean people like you (or actually philosophers at all), but I think a lot of the Yudkowsky-aligned concern about AI, for both the people doing the safety research in that style and the donors, has more to do with the appeal of a narrative where intelligence (and particularly a certain kind of STEM intelligence) is the most powerful thing in the world and which centers the areas they are interested in as the most important things in the world.

Indeed, one thing I fear here is that this focus on AGI as somehow uniquely dangerous tends to distract from the very real but less sexy kinds of risks that are better thought of as simple mistakes/errors in complex systems than as rogue AI. For instance, looking at people makes me think we should be more worried about what might be thought of as mental illness in an AI than about alignment or a superintelligence surreptitiously pursuing some kill-all-humans scenario.

Regarding the major change that AI represents and the dangers there: I 100% agree those are real and worth considering, but I'd also argue that (as with almost all transformative technologies) we are actually far oversupplied with such worries. Indeed, for some of the same reasons you raise about how we don't weight the potential benefits of novel drugs highly enough, I fear that a similar issue happens here.

It doesn't mean that there are no dangers, only that there isn't a need to encourage people to pay more attention to them.

I dunno if you've looked at PhilJobs lately, but some crazy huge fraction of the job openings mention AI, and I tend to fear that we are going to see the same thing with AI as we do with bioethics -- a strong incentive for philosophers to come up with intellectual justifications that give intellectual seriousness to the kind of anxiety that this new tech raises in people.

Doesn't mean it's all wrong or anything, I just fear it will be oversupplied relative to pushback against it (just because the benefits tend to involve less interesting novel theories).

May 20 · Liked by Richard Y Chappell

So I agree it would have been more satisfying to give a theoretical account of why such approaches are less beneficial, but I can also understand that, at a fully practical level, one might fear that even making the case just further encourages people to put effort into the debate and raises the profile of these efforts.

If there is a strong human bias towards not appreciating the true scale of combinatorial possibilities when considering how our present actions influence the far future, as I would suggest, then the problem is that you might worry that even making your case against these approaches will have the effect of saying "this is a totally valid altruistic endeavor unless you believe in this view I have which, while correct, you will likely not accept."

May 21 · Liked by Richard Y Chappell

It’s definitely notable that OpenAI, which was originally a poster child for people taking existential risks seriously and working to mitigate them, has potentially become the biggest source of such risk, and that Anthropic, founded precisely to avoid the pitfalls OpenAI was encountering, isn’t obviously doing better.

Author · May 22 (edited)

I'm not an expert in this stuff, but I think Anthropic is doing some promising work? E.g. https://www.anthropic.com/research/mapping-mind-language-model


True, but I still tend to take the position that it's inevitable and that the least bad outcome is if the people likely to take the worry most seriously develop it first -- and that requires going fast rather than going cautiously.


"So even if there is a serious risk there we may want to STFU about it if that makes the outcome plausibly worse."

Why use such strong language? Frankly, you sound like Elon Musk, and that is not a compliment, sorry.

Why is it "a very bad thing" if development of A(G)I is slowed down a bit? I could understand this if AI were our generation's "green revolution", or oil revolution, both enablers of massive population growth (and thriving). Is that what you mean? Do we need AI in the sense that we now need fossil fuels (or their full and sustainable replacement) and fertilizer? I don't see this. I see precision laser weed killing, already existing, like golden rice.

Isn't AGI more like fundamental research (can we create a mind?) as opposed to AI that will save countless lives (protein folding, drug invention, X-ray interpretation, etc.)?


The point is that there isn't really a slow it down option. The choice we have is just who gets there first. Even if you decide to slow down it's likely there will be people in other countries (whether publicly or in classified programs) who won't.

It's no different than atomic weapons. Once the world knew they were possible, they would be built; the only question was who got them first. Would it have been better if no one built nukes? Maybe, but that wasn't a choice we had; we had the choice of whether we built them first or someone else did. I tend to think the AGI risks are lower if it is built first by, say, OpenAI than by whoever is likely to do so if they, or even the West generally, impose a pause.

May 21 · Liked by Richard Y Chappell

"Unlike many bioethicists, he never relies on questionable intuitions ... "

Hmm, I don't know Singer's work much, but that sounds like an overstatement for any ethicist. Why am I wrong? Not sure exactly what is meant by "questionable", but I don't see how one could do ethics (including what you call "theory driven applied ethics") without relying heavily on intuitions.

[The term "intuition" is broad and may refer to multiple related things, so maybe that's the source of my confusion here?]

Author

Right, "questionable" is doing a lot of work here. Basically what I had in mind here was that he largely relies upon uncontroversial harms (e.g. suffering) to do the moral heavy-lifting. This is strikingly different from the "just seems wrong" style of bioethics often found elsewhere.

May 20 · Liked by Richard Y Chappell

I really like his new podcast. Having recently become more sympathetic to Utilitarianism, I feel very lucky that we have a place to hear him speak every week.


Is this the Farewell conference because after it he's sacrificing himself so his organs can save 1.2 lives in expectation (EV is +10 QALY!)? or is he just retiring like a normal person?


I have no prior knowledge of Peter Singer's philosophic oeuvre (and little of anyone else's), but I gather from what you've just said about it that you and he concur on these ethical principles: 1) people of ample means should contribute money and/or effort to improve the lot of those who are less well-to-do, particularly those languishing in poverty, and 2) insofar as possible without resorting to deception, illicit coercion, or violation of democratic norms, citizens of developed countries should do what they can -- through voting, political advocacy, and financial support and/or volunteer work for political candidates -- to impel their countries' governments to adopt and pursue egalitarian policies to minimize economic disparity, both domestically and worldwide.

I am not a student of paleoanthropology, either, but I assume you would agree that we are descended from small-brained hominids that gradually evolved, through a process of natural selection favoring reproduction by more intelligent individuals over those of less intelligence, into a species with much greater cranial capacity and median intelligence. That evolutionary trend has clearly been suspended in first-world countries; indeed, it has been flipped around completely. For various and fairly obvious reasons, childbearing in such countries is now inversely correlated with parental intelligence.

In 1994, using educational attainment as a proxy for intelligence, Charles Murray and Richard Herrnstein took account of median lifetime childbearing among five assorted groups of US women: those who don't finish high school; high-school grads who do not go on to college; those who go to college but do not earn degrees; those with baccalaureate degrees; and those with postgraduate or professional degrees. Of those five groups, they noted, the only one with median lifetime childbearing above replacement level (2.1) was the high-school dropouts. That of the high-school grads who did not go on to college was approximately at replacement level; that of those with some college but no degree was sub-replacement-level; that of those with baccalaureate degrees was lower still; and that of those with graduate or professional degrees lowest of all.

Other social scientists have maintained that this dysgenic trend has been in effect since as far back as the beginning of the 19th century, and there is evidence that median intelligence in the UK was higher in the Victorian era than now. https://www.sciencedirect.com/science/article/abs/pii/S0160289613000470

Is this trend of any concern to you? And is the predictable effect of a government policy or individual course of action on the median intelligence of future generations of any ethical importance?

Author

I'm not concerned to "minimize economic disparity" so much as to *maximize overall well-being / opportunity*. (I generally favor an "abundance agenda" when it comes to politics, more than traditional egalitarianism.)

In line with my general concern for absolute rather than comparative assessments, I'm perfectly happy for less-educated people to have kids, regardless of how this affects the statistical "median". (In the same way, I welcome immigrants who contribute productively, even if their presence reduces the artificial statistic of "*per capita* GDP".) I welcome future biotechnologies, e.g. gene editing, that will provide greater reproductive freedom to parents by allowing them to boost their children's intellectual capacities. But to protect against abuse, I think it's very important that such technologies be governed by liberal norms -- i.e., left to the parents' choice, not any kind of coercive eugenics imposed by state authorities.

You might like Julian Savulescu's work on procreative beneficence: https://pubmed.ncbi.nlm.nih.gov/12058767/

(The basic idea seems clearly right to me.)
