Regarding the .01% AGI argument, I think you are making some contentious assumptions there. Basically, I'd argue that argument is wrong for the same reasons Pascal's wager fails.
I mean, if the world is filled with those kinds of risks (be they nuclear war, bioweapons, secular decline, and so on), it becomes much less clear that attention to AGI doesn't take away from efforts to reduce those other risks.
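To make that concrete, here's a deliberately toy calculation (the probabilities and the diminishing-returns curve are numbers I'm making up purely for illustration): once several low-probability catastrophes compete for a fixed attention budget, a Pascal-style "even a 0.01% chance justifies it" argument doesn't by itself tell you to put everything on AGI.

```python
# Toy sketch, not a real risk model: made-up probabilities and an assumed
# diminishing-returns curve, just to show why a bare "0.01% of doom" figure
# doesn't settle how a fixed attention budget should be allocated.

risks = {                      # hypothetical annual catastrophe probabilities
    "AGI": 0.0001,
    "nuclear war": 0.001,
    "engineered pandemic": 0.0005,
}
budget = 1.0                   # total attention/resources, arbitrary units

def averted(p, effort):
    """Assumed diminishing returns: effort e cuts risk p by p * e / (e + 1)."""
    return p * effort / (effort + 1.0)

# Strategy A: pour everything into AGI because of the 0.01% argument.
all_on_agi = averted(risks["AGI"], budget)

# Strategy B: spread the same budget evenly across all three risks.
spread = sum(averted(p, budget / len(risks)) for p in risks.values())

print(f"expected risk averted, all on AGI: {all_on_agi:.6f}")  # 0.000050
print(f"expected risk averted, spread:     {spread:.6f}")      # 0.000400
```

The point isn't these particular numbers; it's that once you grant a world full of comparable risks, the allocation question reopens and the wager-style argument loses most of its force.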
Also, for the AGI argument to have force, you need to think that working on AGI risk is relatively likely to reduce rather than increase that risk, and that it won't increase other risks.
For instance, my take on AGI is basically like my take on law enforcement's use of facial recognition. It was always going to happen (if technically possible), and the choice we had was whether to hand-wring about it so that it was sold by the least responsible company (Clearview) or to encourage somewhat more responsible and technically proficient companies (Amazon/Google) to offer it.
Basically, I don't think you can avoid the fact that public concern about AGI will create pressure for Western countries to regulate and for prestigious computer scientists not to work on it, and that seems like a very bad thing. So even if there is a serious risk there we may want to STFU about it if that makes the outcome plausibly worse.
Also, I fear AGI concerns trade off against taking other concerns about AI seriously.
That seems reasonable! I was more just puzzled because Singer's take seemed to be more along the lines of thinking that decades-distant problems should automatically be deprioritized compared to traditional global health charities.
As a theoretical matter I agree, but as a practical matter I have a lot of sympathy for Singer's attitude here.
In particular, I fear that there is sufficient play in the joints of longtermist or AGI concerns to let people tell themselves they are doing something really altruistic while getting to do whatever it is they find glorifying or pleasurable.
Not that it's not good for people to enjoy altruism, it clearly is, but I feel like a lot of the stuff around longtermism and AGI ends up kind of being: see, the things that I think are cool and which make me seem really important are actually super altruistically important.
This isn't to say there's no value in these things, but we are ultimately allocating limited social approval/concern to incentivize certain activities, and we don't want to waste it on things people would be doing anyway, or where the possibilities are so vast that spending it there is wasteful.
I don't know. I don't personally have anything to gain from a greater focus on AI (it's not my area), but it does seem to me sufficiently transformative and risky-seeming that I definitely want more safety-oriented people to be thinking carefully about it!
Global catastrophic risks in general seem severely under-attended to. (Pandemic prevention may be the most obvious. But I also have more of a professional interest in that.) So I think longtermism offers a helpful corrective here.
I take the point that, because these matters involve tough judgment calls, there's more room for bias. (Similar things can of course be said of non-utilitarian priorities -- esp. those of the "systemic change" critique of EA -- but for some reason rarely are.) It's worth being aware of that, and attempting to counteract it. But I don't think wholesale dismissal of longtermist concerns (and everything else that requires tough judgment calls?) is the right way to go.
I didn't necessarily mean people like you (or philosophers at all, actually), but I think a lot of the Yudkowsky-aligned concern about AI -- for both the people doing safety research in that style and the donors -- has more to do with the appeal of a narrative in which intelligence (and particularly a certain kind of STEM intelligence) is the most powerful thing in the world, and which casts the areas they are interested in as the most important things in the world.
Indeed, one thing I fear here is that this focus on AGI as somehow uniquely dangerous tends to distract from the very real but less sexy kinds of risks that are better thought of as simple mistakes/errors in complex systems than as rogue AI. For instance, looking at people makes me think we should be more worried about what might be thought of as mental illness in an AI than about alignment or a superintelligence surreptitiously pursuing some kill-all-humans scenario.
Regarding the major change that AI represents and the dangers there: I 100% agree those are real and worth considering, but I'd also argue that (as with almost all transformative technologies) we are actually far oversupplied with such worries. Indeed, for some of the same reasons you raise about how we don't weight the potential benefits of novel drugs highly enough, I fear a similar issue arises here.
It doesn't mean that there are no dangers, only that there isn't a need to encourage people to pay more attention to them.
I dunno if you've looked at PhilJobs lately, but some crazy huge fraction of the job openings mention AI, and I tend to fear we are going to see the same thing with AI as we do with bioethics -- a strong incentive for philosophers to come up with justifications that lend intellectual seriousness to the kind of anxiety this new tech raises in people.
That doesn't mean it's all wrong or anything; I just fear it will be oversupplied relative to pushback against it (just because making the case for the benefits tends to involve less interesting novel theorizing).
So I agree it would have been more satisfying to give a theoretical account of why such approaches are less beneficial, but I can also understand that, at a purely practical level, one might fear that even making the case further encourages people to put effort into the debate and raises the profile of these efforts.
If, as I would suggest, there is a strong human bias towards not appreciating the true scale of combinatorial possibilities when considering how our present actions influence the far future, then the problem is that you might worry that even making your case against these approaches will have the effect of saying: "this is a totally valid altruistic endeavor unless you believe in this view I have which, while correct, you will likely not accept."
It’s definitely notable that OpenAI, which was originally a poster child for people taking existential risks seriously and working to mitigate them, has potentially become the biggest source of such risk, and that Anthropic, founded precisely to avoid the pitfalls OpenAI was encountering, isn’t obviously doing better.
I'm not an expert in this stuff, but I think Anthropic is doing some promising work? E.g. https://www.anthropic.com/research/mapping-mind-language-model
True, but I still tend to take the position that it's inevitable and that the least bad outcome is if the people likely to take the worry most seriously develop it first -- and that requires going fast rather than going cautiously.
"So even if there is a serious risk there we may want to STFU about it if that makes the outcome plausibly worse."
Why use such strong language? Frankly, you sound like Elon Musk, and that is not a compliment, sorry.
Why is it "a very bad thing" if development of A(G)I is slowed down a bit? I could understand this if AI were our generation's "green revolution", or oil revolution, both enablers of massive population growth (and thriving). Is that what you mean? Do we need AI in the sense we now need fossil fuels (or their full and sustainable replacement) and fertilizer. I don't see this. I see precision laser weed killing, already existing, like golden rice.
Isn't AGI more like fundamental research (can we create a mind?) as opposed to the AI that will save countless lives (protein folding, drug invention, X-ray interpretation, etc.)?
The point is that there isn't really a slow-it-down option. The choice we have is just who gets there first. Even if you decide to slow down, it's likely there will be people in other countries (whether publicly or in classified programs) who won't.
It's no different from atomic weapons. Once the world knew they were possible, they would be built; the only question was who got them first. Would it have been better if no one built nukes? Maybe, but that wasn't a choice we had; we had the choice of whether we built them first or someone else did. I tend to think the AGI risks are lower if it's built first by, say, OpenAI than by whoever is likely to build it if they, or even the West generally, impose a pause.