Discussion about this post

cinc

I think my main disagreement with Singer is with some of what he has said about Israel-Palestine, mostly recently, and some of his comments on capitalism/socialism. It was very disappointing to see the awful "human shields" argument from him. But overall, he and Chomsky have probably been the most influential thinkers in how I think about the world. He's the first philosopher I ever read, way back in HS. It's weird to see him retire, but at least he's got a podcast and a Substack.

On your third disagreement with him, personal consumption choices seem to matter a lot here: if you eat less meat, then, on average, fewer animals are tortured and killed. A lot more should be done to fight animal suffering, but at the very least you shouldn't pay for something to experience extreme suffering and death. And, for most people, I suspect this is actually the easiest thing they could do. If you're living paycheck to paycheck, it's rather hard to donate. If you do donate your income, you can donate to human charities instead. You could also become an activist, but that requires a lot of time and effort. Going vegan might require some initial effort, but overall it's almost trivial. Sometimes I have to glance quickly at the ingredients on food labels, but you probably should be doing that anyway. It seems like the absolute bare minimum anyone should do, not some major effort or commitment.

I also think one important benefit of veganism is its social effects. Most people get their ethics from social norms, not through careful reflection and reasoning. We're never going to achieve the desired goals unless non-vegan behavior becomes socially unacceptable, and vegan social norms are rather hard to encourage if you're not vegan yourself.

Peter Gerdes

Regarding the 0.01% AGI argument, I think you are making some contentious assumptions there. Basically, I'd argue that argument is wrong for the same reasons Pascal's wager fails.

I mean, if the world is filled with those kinds of risks (nuclear war, bioweapons, secular decline, etc.), it becomes much less clear that attention to AGI doesn't take resources away from efforts to reduce those other risks.

Also, for the AGI argument to have force, you need to think that working on AGI risk is relatively likely to reduce rather than increase that risk, and that it won't increase other risks.

For instance, my take on AGI is basically my take on law enforcement's use of facial recognition. It was always going to happen (if technically possible), and the choice we had was whether to hand-wring about it, so that it was sold by the least responsible company (Clearview), or to encourage somewhat more responsible and technically proficient companies (Amazon/Google) to offer it.

Basically, I don't think you can avoid the fact that public concern about AGI will create pressure for Western countries to regulate it and for prestigious computer scientists not to work on it, and that seems like a very bad thing. So even if there is a serious risk there, we may want to STFU about it if talking makes the outcome plausibly worse.

Also, I fear AGI concerns trade off against taking other concerns about AI seriously.
