
At the start of the pandemic, Peter Singer and I argued that our top priority should be to learn more, fast. I feel similarly about AI, today. I’m far from an expert on the topic, so the main things I want to do in this post are to (i) share some resources that I’ve found helpful as a novice starting to learn more about the topic over the past couple months, and (ii) invite others to do likewise!
There’s a lot of uncertainty about the future of AI: some chance that capabilities will continue to skyrocket (with either utopian or dystopian results); some chance that they’ll soon plateau (at least temporarily). If you’re not an expert, you should presumably distribute your credence widely, to be ready for anything. Even if AI capabilities were to stagnate at precisely today’s levels (which no informed person could seriously believe), today’s capabilities already suffice to cause massive upheavals once people and institutions fully adjust.
My general sense is that almost everyone should be thinking more about AI. At a minimum: think (and learn) more about how you can better make use of AI in your everyday life (whether that’s to boost your productivity, creativity, or sheer fun). If you think it offers no value to you, you’re making a big mistake. If you value your time, and sometimes engage with information of any kind, then your life can be improved by judicious use of AI. You just need to work out how.
For academics and public intellectuals, the clear-cut #1 priority right now is to understand how humanity can best address the distinctive risks and potential of this new technology. As with the pandemic, there’s immense uncertainty and time pressure, and the stakes here are even higher. That’s a recipe for “needs more research attention (pronto).”1
What AI can already do
For a great up-to-date general intro, check out Andrej Karpathy’s “How I use LLMs”:
Things I personally do (and recommend that you try):
Before watching a YouTube video (including the above!), plug it into NotebookLM for a “briefing doc” summary. Ask any initial questions you have about the content. Get a sense of whether it’s worth your time to actually watch the whole thing.
Use X/Twitter’s “explain this post” Grok button whenever you don’t immediately understand a tweet. (It can be great for providing missing cultural context, etc.)
Ask a cheap “Deep Search” model (e.g. Gemini or Grok) to do basic consumer research (product comparisons and the like) for you.
Pay for a Claude subscription2 and:
Use it as an all-purpose personal research assistant, whenever you have a random question about… basically anything in life. (Don’t blindly trust answers; but you may develop a sense of which kinds of questions it can and can’t answer reliably.)
Ask it to draft boilerplate emails—anything in corporate-HR voice should never be written by humans again. We have better things to do.
Upload a paper (or a talk/podcast transcript) and ask Claude to summarize its central ideas and arguments before you (decide whether to) read it yourself. (The summary is often more informative than the paper’s own introduction.)
You can further ask Claude to suggest insightful objections, or—better yet—explain how the author might best respond to whatever thoughts you have. (Active engagement is better than passive. Remember that you want AI to support and scaffold, not substitute for, your thinking. Aim for a back-and-forth, not one-and-done answers.)
Literature review / synthesis: feed Claude a series of links to online resources and ask it to synthesize them into a 2000-3000 word paper explaining xyz in a way that is intelligible to a philosophy professor (or whoever you are). You might, for example, ask: “Please rewrite to explain all technical material in ordinary English, e.g. explaining the meaning of a formula, rather than giving the formula.”
Ask it to create diagrams for you.
Claude made this illustration for me in moments. All I had to do was ask: “draw a 5x2 grid, with each square numbered from 1 - 10, and a small group of people (labelled G1 - G10) drawn in each square. Then add curved arrows indicating that groups G2 - G10 are all going to move into square 1.” (I’m still delighted by the fact that I can generate diagrams on demand so easily! For the curious, a rough hand-written equivalent appears after this list.)
Ask Claude to spec and then create a simple web app (“artefact”)
For example, it made this little “Moloch game” (using the classic example of overfishing to illustrate the tragedy of the commons and “race to the bottom” dynamics) when I asked it to create an educational game for my son that would teach themes from Scott Alexander’s Meditations on Moloch. (It also suggested a range of more ambitious game design ideas, but I suspect anything much more complex wouldn’t fit into a Claude artefact. Worth experimenting, though!) For a bare-bones sense of the dynamic the game teaches, see the simulation sketch after this list.
Next, I’m looking forward to creating a “choose your own adventure”-style time-travel game to supplement our history lessons.
I think there’s a huge opportunity here for parents to easily create educational games customized to their child’s interests. (If Claude artefacts prove too limited, I may eventually need to explore “vibe coding” larger projects; see below.)4
I’ve recently liked Adobe Firefly for stylized image generation, but it’s unreliable at following complex instructions. Better suggestions welcome!
Just for fun: play around with making custom songs with Suno (first using Claude for lyrics). The results will sound like “AI slop” to everyone else, but there’s something magical about being able to create custom songs for (and with) your kid, or giving voice to a family joke, or a theme song for your philosophy paper, etc.
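Two quick sketches to make the above concrete. First, the diagram: Claude wrote and rendered the grid illustration for me, but there’s no magic in it. Here’s my rough reconstruction of what an equivalent Python/matplotlib script might look like (a guess at the kind of code involved, not Claude’s actual output):

```python
import matplotlib.pyplot as plt
from matplotlib.patches import FancyArrowPatch, Rectangle

fig, ax = plt.subplots(figsize=(10, 5))

# Draw a 5x2 grid of squares numbered 1-10, each containing a group label.
for i in range(10):
    col, row = i % 5, i // 5                 # squares 1-5 on top, 6-10 below
    x, y = col * 2.0, (1 - row) * 2.0
    ax.add_patch(Rectangle((x, y), 1.8, 1.8, fill=False))
    ax.text(x + 0.15, y + 1.55, str(i + 1), fontsize=12, weight="bold")
    ax.text(x + 0.9, y + 0.7, f"G{i + 1}", fontsize=11, ha="center")

# Curved arrows showing groups G2-G10 all moving into square 1.
for i in range(1, 10):
    col, row = i % 5, i // 5
    start = (col * 2.0 + 0.9, (1 - row) * 2.0 + 0.9)
    ax.add_patch(FancyArrowPatch(start, (0.9, 2.9), arrowstyle="->",
                                 connectionstyle="arc3,rad=0.3",
                                 mutation_scale=15))

ax.set_xlim(-0.5, 10.0)
ax.set_ylim(-0.5, 4.5)
ax.set_aspect("equal")
ax.axis("off")
plt.savefig("grid_diagram.png", dpi=150)
```

(The point, of course, is that you don’t have to write any of this yourself: describing the picture in English is enough.)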
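Second, the Moloch game. The game itself is an interactive web artefact, but the core dynamic it teaches fits in a few lines. Here’s a minimal, illustrative Python simulation of the shared fishery (all parameter values are invented for illustration; this isn’t code from the actual game):

```python
# Minimal tragedy-of-the-commons simulation. Illustrative sketch only:
# the parameter values are made up, not taken from the game Claude built.

GROWTH_RATE = 0.3   # fractional logistic regrowth of the fish stock per season
CAPACITY = 1000.0   # carrying capacity of the fishery
SEASONS = 20
N_GROUPS = 10       # ten fishing groups sharing one fishery

def simulate(catch_per_group: float) -> list[float]:
    """Return the stock level at the end of each season."""
    stock = CAPACITY
    history = []
    for _ in range(SEASONS):
        stock = max(stock - N_GROUPS * catch_per_group, 0.0)   # everyone takes their catch
        stock += GROWTH_RATE * stock * (1 - stock / CAPACITY)  # stock regrows logistically
        history.append(round(stock, 1))
    return history

# Restrained fishing: the stock settles at a sustainable level.
print("restrained:", simulate(catch_per_group=5.0))

# Everyone grabs a bit more: the fishery collapses within a few seasons.
print("greedy:    ", simulate(catch_per_group=12.0))
```

Modest restraint is sustainable indefinitely, while each group’s individually rational decision to take a bit more drives the stock to zero: the “race to the bottom” in miniature.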
Things I’ve yet to try but some people might find useful:
OpenAI’s Deep Research (requires paid tier)
WhisperTyping voice dictation and AI integration with your PC.
Co-Reader (AI embedded in a reading app: “Tap a paragraph to see questions. Tap a question to see answers. No typing. No switching apps. Just knowledge on tap.”)
Replit for “vibe coding”; or websim for (code-free) prompt-engineered miniapps (example)
OvationVR - practice public speaking (job talks, interviews, etc.) in VR, with an AI-generated audience / interviewer that you can prompt to behave as desired.
Sway AI moderator for class discussion of controversial issues. (Discussed here.)
I’d love to hear more suggestions of novel uses you’ve found for LLMs that others might find valuable too. (Not of the “write a fake term paper” variety.)
What’s coming next
Suggested readings:
Ezra Klein, The Government Knows A.G.I. Is Coming (in case you’re skeptical). See also Peter Wildeford’s summary and related forecasts. And Kevin Roose, Why I’m Feeling the A.G.I.
Dario Amodei’s Machines of Loving Grace (an optimistic vision of where AI may be going over the next few years).
Zvi’s substack - I’ve learned a lot from this. See, e.g., The Most Forbidden Technique for a fascinating discussion of alignment challenges relating to chain-of-thought architectures. The weekly AI news / links roundups are eye-opening.
Test out Sesame’s conversational voice demo. It’s pretty wild!
Taking AI Welfare Seriously by Rob Long et al. (lots of big names in the field!)
Forethought (MacAskill’s new AI macrostrategy group), and their launch paper: ‘Preparing for the Intelligence Explosion’. (Related 80,000 Hours podcast, almost as long as the name suggests.)
What else would you suggest? Please share useful links in the comments!
1. That’s not, of course, to say that every academic needs to drop everything and retool into AI ethics. But we need enough of our top thinkers to do so, and, especially, to do so with an eye to the important questions.
2. Claude seems to be the best at philosophical reasoning, from what I can tell. (See related chat.)
3. If anyone familiar with these topics is reading this, I’d be curious to hear whether you think it’s a decently accurate summary or not!
4. I taught myself (BASIC and C/C++) programming as a kid, so the prospect of getting (back) into coding sounds kinda fun to me. YMMV! For an interesting-looking article from an expert, see: Using LLMs to Code.
My read on the current models (I have Plus-tier ChatGPT) is that they’ll do anything a competent undergrad can. That’s not Amodei’s “18 months to paradise,” but a competent 20-year-old research assistant is nothing to sneeze at.
I think that philosophers should not only learn more about AI but, even more importantly, have a responsibility to contribute to AI safety and AI alignment. There’s a non-negligible risk of extinction from AI this century (see pages 23-25 of this report: https://forecastingresearch.org/xpt). Expert surveys consistently hold that the risk of extinction from AI outranks the risks of extinction from pandemics, nuclear war, climate change, and non-anthropogenic hazards like asteroids. Even if the risk of extinction from AI averages around 8%, it’s morally urgent and important to help reduce it to, say, 0.0000001%. And many reputable figures treat these risks as being on a par (https://www.safe.ai/work/statement-on-ai-risk).
So making AI safe and aligned isn’t only important from a longtermist viewpoint; it also matters because you and your children are personally threatened by this risk (several leading thinkers even think that AGI could arrive in the next 3 to 5 years).
I think the intersection of moral philosophy and AI alignment/governance is very important and sadly neglected. There needs to be more work in “philosophico-technical AI safety.” As for me, I have just completed the AI Alignment and “Writing Intensive” courses on the AISafetyFundamentals website. I’m currently writing a paper on the topic “Can AI Be Made Morally Flexible and Open-Minded? Can It Help Reduce Existential Risk from Power-Seeking and Value Lock-In? What AI Researchers Need to Know about Building a Self-Correcting Value System.”
These links might be a good place to start for AI safety: www.aisafety.info, aisafety.com, aisafetybook.com (a free, comprehensive online textbook), aisafetyfundamentals.com.