Will MacAskill’s What We Owe the Future releases tomorrow! I was lucky enough to read an advance copy. Here are my thoughts on it.
Introduction
WWOTF defends longtermism: “the idea that positively influencing the longterm future is a key moral priority of our time.” How could we hope to influence the longterm future? The book focuses on two broad possibilities: (i) improving values, including via liberal institutions that leave space for continued moral development rather than premature “value lock-in”, and (ii) reducing the risk of premature human extinction. I’ll discuss each approach in more detail below.
First, a note on the book’s aim. WWOTF is to longtermism what Animal Liberation was to anti-speciesism. Targeted at a general audience, it advocates for an important kind of moral circle expansion—urging us to take more fully into account morally significant interests or considerations that we otherwise tend to unduly neglect. (Like much of the best applied ethics, it clearly draws inspiration from utilitarian moral theory, without actually committing the reader to anything stronger than the importance of beneficence.) It’s interesting and engaging to read, seamlessly combining vivid thought experiments and philosophical reasoning with empirical research and moral lessons from history to yield revisionary (yet compelling) conclusions about how we ought to live.
Note that, because it’s targeted at a general audience, the book doesn’t probe at edge cases or test longtermist principles in extremis in the way that academic philosophers might prefer. So, don’t expect discussion of whether it would (in principle) be worth torturing everyone alive today in order to reduce extinction risk this century by some tiny fraction of a percentage point. (If you’re after this sort of pure philosophy, see ‘The Case for Strong Longtermism’, Bostrom’s ‘Astronomical Waste’, or Beckstead & Thomas’s ‘A paradox for tiny probabilities and enormous values’.) Instead, this book sensibly focuses on the urgent and undeniable point that we really ought to take greater care not to wipe ourselves out (or otherwise neglectfully slip into a bad long-term trajectory).
Like many of the most important claims in practical ethics, this shouldn’t be controversial once it’s drawn to our attention. But it does involve a major change in mindset. So the real value of the book, as I see it, lies in drawing this basic moral insight to our attention and guiding us in applying that lens to see familiar questions in a new light.
Future Generations Matter
The animating moral principle behind the book is that future people matter too. (Note how moderate a claim this is: “I’m not claiming that the interests of present and future people should always and everywhere be given equal weight. I’m just claiming that future people matter significantly.” Beware of critics who dismiss longtermism on the basis of conflating it with more extreme claims, such as total utilitarianism.)
While the practical recommendations of the book could plausibly be justified even for the sake of young people already alive today, the stakes just explode when you consider the scale of humanity’s potential future. As Will explains in ‘The Beginning of History’:
The fact that humanity is only in its infancy highlights what a tragedy its untimely death would be. There is so much life left to live, but in our youth, our attention flits quickly from one thing to the next, and we stumble around not realizing that some of our actions place us at serious risk. Our powers increase by the day, but our self-awareness and wisdom lag behind. Our story might end before it has truly begun.
Just as some philosophers cast about for inane excuses to deny moral status to non-human animals, so some may do the same for future generations. And it can be interesting to discuss those proposals in the philosophy seminar room. But I think the basic point shouldn’t really be controversial. As Will puts it, if broken glass left on a hiking trail would later cut a child, you don’t need to know when the cut would occur (whether next week or next century) in order to know that it is worth preventing.
A complication: as Derek Parfit famously noted, we may initially be drawn to (“person-affecting”) principles that threaten this moral datum. The slightest changes can affect who ends up being conceived in future. As a result, if we (say) burn fossil fuels now, that does not make future individuals worse off than they otherwise would have been, because these individuals would never have existed under a different policy. Different individuals would exist in their place (with higher well-being).
Of course, no sane person concludes from this that burning fossil fuels is great after all (even on the contrary-to-fact supposition that it benefits existing individuals on net). You either find some way to “widen” person-affecting principles so that you can still account for our strong moral reasons to improve the well-being of future generations—that is, you solve what Parfit calls the “non-identity problem” (e.g. by granting that coming into (happy) existence benefits that new person in a morally significant way)—or you adopt an impersonal conception of beneficence instead.
(I guess one does see some anti-EA folks on Twitter boasting about their disregard for “hypothetical future people”, but that just seems so blatantly indecent that I don’t know what to make of it. Would they really be okay with, say, burying landmines under a children’s playground on the condition that the mines are set to be inert for a century?)
I think the biggest challenge to longtermism is more practical: granting that of course the long-term future matters in principle, what could we hope to do about it?
Improving Values (and Institutions)
To address this challenge, MacAskill argues that there are periods of plasticity
…where ideas or events or institutions can take one of many forms, followed by a period of rigidity or ossification. The dynamic is like that of glassblowing: In one period, the glass is still molten and malleable; it can be blown into one of many shapes. After it cools, it becomes rigid, and further change is impossible without remelting.
An obvious example is the founding charter or constitution for new nations or institutions. The flaws of the US constitution, for example, have become increasingly clear in recent years. If any sort of World Government is a possibility in the coming century, it will be vitally important that it be crafted well—something that political philosophers could begin preparing for now.
One thing MacAskill emphasizes is that it often helps to catch problems early, when a wider range of solutions tend to be possible. This is most obviously the case for climate change, which serves as a powerful response to critics of longtermism who recommend ignoring future problems until they become present emergencies.
Other historical lessons are more surprising. For example, chapter 3 on ‘Moral Change’ discusses work from historians suggesting that the abolition of slavery was a much more contingent process than we tend to assume (and a particular Quaker, Benjamin Lay, seems to have played an outsized historical role in provoking this vital improvement to humanity’s values). Chapter 4, on ‘Value Lock-In’, discusses the Hundred Schools of Thought in early Chinese philosophy, and how a mix of violent purges, skillful politicking, and luck led to the eventual supremacy of Confucian thought. How different would history have been if the Mohists had ended up shaping Chinese culture instead?
Closer to home, I confess I find myself more than a little awestruck when I reflect on what philosophers like Peter Singer, Toby Ord, and (yes) Will MacAskill have already achieved, e.g. for advancing the interests of non-human animals and the global poor. It’s hard to know how long their influence will persist (and whether others might have filled their roles in their absence), but it doesn’t seem crazy to think there’s a non-trivial chance that they, too, have helped to shift our values on to a better track. And of course there’s more we can do to try to bolster their odds of success in the meantime, e.g. by engaging with “effective altruist” ideas, improving them where needed, and warding off potential misunderstandings or misuses.
None of this is certain to make a difference, of course. But when the stakes are this high, even small (yet non-trivial) chances of helping can be very well worth pursuing. (This is a principle that all social reformers and activists plausibly rely upon. To require certainty would be paralyzing. We all must act on hope, to some extent. The tricky question, I guess, is which hopes offer the best ratio of importance to feasibility, to motivate moral action.)
One thing I especially like about MacAskill’s discussion is his Millian emphasis on building a morally exploratory world. We know that past values involved terrible mistakes and oversights; we should not expect present values to be perfect either. As a result, we should be especially wary of threats to diversity of thought, whether they come from increasing pressures towards cultural conformity, stifling orthodoxies, or AI takeover. And there’s reason to worry that we’ve already gone astray to some extent:
One way of gauging the current diversity of cultures is to consider the range of responses countries made to the COVID-19 pandemic. There was, of course, some diversity, from the ultrastrict lockdowns in China to the more moderate response in Sweden. But the range of responses was far more limited than it could have been. For example, both the Moderna and the Pfizer-BioNTech vaccines were designed by mid-January 2020… Not a single country allowed human challenge trials of the many vaccines developed in 2020… Not a single country allowed the vaccine to be bought on the free market, prior to testing, by those who understood the risks, even on the condition that they report whether they were subsequently infected.
As with Millian “experiments in living”, greater policy diversity (in situations of high uncertainty) seems valuable because of the high value of information. Things that work well may be adopted by others; things that go badly can be learned from and avoided in future.
Existential Risk
While it plausibly isn’t the most likely outcome, it would seem difficult to deny that there is a non-trivial chance that humanity wipes itself out this century, e.g. through global nuclear war, bio-engineered pandemics, or unaligned Artificial Intelligence. This would be about the worst thing realistically imaginable (unless one thinks there’s a realistic risk of worse-than-nothing dystopian future outcomes — in which case, don’t neglect those either!). As a result, it seems well worth investigating whether there’s anything we can feasibly do to reduce these risks—and if there is, then do it.
Critics like to mock AI risk, especially, as “sci fi fantasy”, which seems extraordinary hubris to me. Even if you think that transformative AI this century is most likely impossible (or whatever), I doubt anyone could be justified in assigning it negligible probability, when expert opinion is so split and recent advances (like DALL-E and GPT-3) also seem like things these critics would not have expected a mere decade ago. And if you grant it even 1% credence—or one tenth of that—that’s more than enough to warrant immense concern, on perfectly ordinary (non-fanatical) expectational grounds. (People aren’t usually so opposed to safety engineering, e.g. for aircraft or nuclear reactors, even when the risks in question are comparably low-probability—or lower!)
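To make the expectational point concrete, here’s a minimal back-of-the-envelope sketch with purely hypothetical numbers of my own choosing (not estimates from the book or from any expert survey): even a seemingly negligible 0.1% chance of a catastrophe that kills everyone alive today carries an expected toll in the millions, before counting future generations at all.

```python
# Back-of-the-envelope expected-value sketch with purely hypothetical numbers
# (my own illustration, not figures from WWOTF or any published risk estimate).
p_catastrophe = 0.001              # a "negligible-seeming" 0.1% chance this century
lives_at_stake = 8_000_000_000     # roughly the present world population alone

expected_lives_lost = p_catastrophe * lives_at_stake
print(f"Expected lives lost: {expected_lives_lost:,.0f}")  # prints 8,000,000

# Counting future generations would multiply the stakes many times over,
# which is why even small absolute reductions in risk can look so valuable.
```

Nothing here turns on exotic decision theory; it’s the same ordinary arithmetic we accept when paying for aircraft or reactor safety.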
I worry that some people just like to mock weird-seeming ideas. So I think a lot of the criticism here is not coming from a very thoughtful place. But if any readers know of a better case for why we should either (i) be >99.9% confident that transformative AI is impossible, or (ii) be fine with ignoring a non-trivial risk of this magnitude, please share it in the comments! My view is that playing Russian Roulette is unwise, and you should invest in removing bullets from the barrel before it’s time to pull the trigger—even if your friends say it’s weird and uncool to worry so.
Anyway, back to the book. WWOTF says much here that’s sensible and won’t be surprising to those who have read other work on the topic. So instead of repeating all that, let me just highlight a couple of points that may be more surprising:
Burning coal is obviously bad for numerous reasons (air pollution, carbon emissions). But one neglected consideration that Will highlights is that keeping coal in the ground may be essential for future people to recover and re-industrialize in the aftermath of civilizational collapse. So it would be a bad idea to burn through our remaining coal reserves even if advancing technology allowed us to do this in a “clean” way.
While it’s tempting to respond to technological risks by suggesting that we slow the advance of technology, technological stagnation (and the economic stagnation it would bring) creates its own risks, including moral backsliding and great power conflict. Just as industrial societies are not sustainable until they develop and deploy green tech, so we may worry that nuclear-armed humanity is not sustainable until we advance further—perhaps we positively need (value-aligned) AI in order to stabilize us and protect against other existential risks. It’s a tricky question.
Conclusion
As is probably clear from the above, I think this is a very important book! The practical upshot:
We can steer civilization onto a better trajectory by delaying the point of value lock-in or by improving the values that guide the future. And we can ensure that we have a future at all by reducing the risk of extinction, collapse, and technological stagnation.
If you’re already broadly sympathetic to EA principles, then the best indication of what you can expect to get from the book may be what Will himself learned:
I take historical contingency, and especially the contingency of values, much more seriously than I did a few years ago. I’m far more worried about the longterm impacts of technological stagnation than I was even last year. Over time, I became reassured about civilization’s resilience in the face of major catastrophes and then disheartened by the possibility that we might deplete easily accessible fossil fuels in the future, which could make civilizational recovery more difficult.
If you’re not the slightest bit sympathetic to EA, then I don’t know what to say. Hopefully you’ll at least find the book thought-provoking? (Write a reasonable critique and maybe you’ll win $20k!) Reading this book should at least provide one with a much clearer understanding of what longtermism looks like in practice.
The book wraps up with three rules of thumb for improving the future in the face of uncertainty:
First, take actions that we can be comparatively confident are good [e.g. general capacity-building]…
Second, try to increase the number of options open to us…
Third, try to learn more.
Seems like good advice! General capacity-building might flow from direct work, well-targeted donations, or “political activism, spreading good ideas, and having children.”
Re: learning more, maybe start with reading this book!
Overall, I highly recommend it. It’s more sensible and down-to-earth than the most provocative academic papers on the topic, which may be viewed as good or bad depending on what you’re looking for. I expect it’d be a lot of fun to base an undergraduate class around. (In my experience, students love how accessible MacAskill’s popular writing is. And I think this one has more depth than Doing Good Better.) Supplement with some of the papers linked above, to test how far the ideas can be pushed. But don’t forget that you needn’t go all the way to total utilitarianism in order to accept the basic moral insight that future generations matter, too.
Disclosure: I’ve been involved in EA a long time, and more recently have received grant funding from EA orgs for my work on utilitarianism. Utilitarianism.net—a project I’m now very invested in—was originally created by Will, and he recruited me to take over as lead editor in 2021.
Re "Would they really be okay with, say, burying landmines under a children’s playground on the condition that the mines are set to be inert for a century?"
No, but in doing so you would kill those children. And people against longtermism are obviously against killing children. (“Blatantly indecent,” indeed.)
And there's no obvious step from that prohibition to longtermism, to neutrality, etc. I'm not sure there's even an unobvious step.
I look forward to reading MacAskill's whole book.
Here is a critique. I hope it comes across as constructive; that’s what I’m aiming for.
Your post ignores the perspective of suffering-focused ethics (a range of views from “reducing suffering matters somewhat more” to “reducing suffering is all that matters”). Ignoring or short-changing that perspective seems to be a pattern in your writing here and on utilitarianism.net. It also seems pervasive among many others in the EA top tier, by which I mean people with the highest status in the EA community and/or people who hold positions of power in the most established and well-funded EA organizations like Open Phil and 80,000 Hours.
I find Magnus Vinding’s argument on this very revealing and convincing:
https://magnusvinding.com/2022/06/17/dismal-dismissal/
It seems MacAskill’s book continues that problematic pattern too, at least judging from my skim of his chapter 8 on population ethics. There appears to be no mention of Boonin, Vinding, or other suffering-focused writers in the book.
It seems to me that you and people in the EA top tier can take a step toward "building a morally exploratory world" by giving suffering focused ethics more space and resources.
As a final note, it appears that you yourself mostly appeal in this text to intuitions about preventing harms or suffering, rather than about creating beings with positive wellbeing or improving already positive states. See, for example, your discussion of climate change, “broken glass left on a hiking trail”, and “burying landmines”.