My read on the current models (I have Plus-tier ChatGPT) is that they'll do anything a competent undergrad can. That's not Amodei's "18 months to paradise," but a competent 20-year-old research assistant is nothing to sneeze at.
Yeah I recently had a very bright high school student work as a research assistant for me in developing a syllabus for next year (basically, I'd point him to some things to read and summarize for me, with an eye to whether they touched on certain themes and arguments I wanted to make sure we cover in the class). He did a great job, but it was pretty clear to me that this is the kind of thing that it's actually much more sensible to use AI for. (In this case, the arrangement was as much for his sake as for mine.)
I think that philosophers should not only learn more about AI but, even more importantly, have a responsibility to contribute to AI Safety and AI Alignment. There's a non-negligible risk of extinction from AI this century (see pages 23-25 of this report: https://forecastingresearch.org/xpt). Every expert survey UNANIMOUSLY holds that the risk of extinction from AI OUTRANKS the risk of extinction from pandemics and nuclear war (and also from climate change and from non-anthropogenic risks like asteroids). Even if the risk of extinction from AI averages around 8%, it's morally urgent and important to help reduce it to, say, 0.0000001%. And the risks are held to be on a par by many, many reputable folks here: https://www.safe.ai/work/statement-on-ai-risk.
So making AI safe and aligned is not only important from a Longtermist viewpoint; it's also important because you and your children are personally threatened by this risk (several leading thinkers even think that AGI could come in the next 3 to 5 years).
I think the intersection of moral philosophy and AI alignment/governance is very important and sadly neglected. There needs to be more work in "philosophico-technical AI safety". As for me, I have just completed the AI Alignment course and "Writing Intensive" course on the AISafetyFundamentals website. I'm currently writing a paper on the topic "Can AI be made Morally Flexible and Open-Minded? Can it help reduce existential risk from power-seeking and value lock-in? What AI Researchers need to know about Building a Self-Correcting Value System".
These links might be a good place to start for AI Safety: www.aisafety.info, aisafety.com, aisafetybook.com (a free comprehensive online textbook), aisafetyfundamentals.com
Considering how much you discuss morality, I was pretty disappointed by how quickly this article became a series of life pro-tips on how to make my life more efficient, instead of discussing whether it's even right to use AI at all, or how AI could be detrimental to human thinking processes in the long term.
You're welcome to expand on either of those issues if that interests you more. But I think it's an interesting and important question how we can make the most of these tools, so that's also something I'm keen to invite further discussion of.
fwiw, I think it's obviously fine to use a tool in good ways, even if other people might use the same tool in bad ways. So the "is it even OK to use?" question doesn't seem serious to me. Compare "Utopian Enemies of the Better" for a sense of my underlying perspective:
https://www.goodthoughts.blog/p/utopian-enemies-of-the-better
As noted in the OP, I also very much want to encourage more research into how humanity can best manage the promise and peril of AI over the coming years. I don't currently have the expertise to say anything worth reading on that topic. But as a meta point, I would see it as more worthwhile to focus on a practical question like "How can we govern AI use and development in order to secure better outcomes?" rather than a purely axiological question like "Will AI be detrimental in such-and-such respect?" (Even if the answer to the latter is 'Yes', that doesn't yet establish what - if anything - we can or should do about it.)
> Active engagement is better than passive.
This sits incongruously in an article about getting AI to do the work for you.
> “Tap a paragraph to see questions. Tap a question to see answers. No typing. No switching apps. Just knowledge on tap.”
Like that. Couch-potatoing for the mind.
> Remember that you want AI to support and scaffold, not substitute for, your thinking.
I’m not clear where in your vision the actual thinking happens.
You can support and scaffold an aircraft all you like; at some point it has to fly by itself.
> "getting AI to do the work for you."
Replace "the work" (in that sentence) with "some work", and notice how you then have more time (and background info at hand) with which you can do the real work.
Also, quite a lot of the post is about getting AI to do *new* work for you: things that an individual can't ordinarily do on their own at all.
> "I’m not clear where in your vision the actual thinking happens."
It sounds like you haven't thought enough about what I'm saying. But if you're really stuck, I bet Claude could help you brainstorm more charitable interpretations! :-)