I recently attended Princeton’s wonderful Farewell Conference for Peter Singer. It was a lovely conference, showcasing both the breadth of Peter’s interests in normative and practical ethics, and the esteem in which he’s held even by many who strongly disagree with some of his views. I wasn’t a speaker at the event, but the nice thing about having a blog is that I can always just share my thoughts here! So here goes…
Peter Singer’s Greatest Contributions
Peter Singer’s work has had more practical impact than that of any other philosopher alive today. Consider: (i) Animal Liberation inspired much of the animal rights movement, including the founding of PETA; and (ii) Famine, Affluence, and Morality inspired many to redirect a portion of their personal financial resources toward helping the global poor (including via the Effective Altruism movement). Either one alone would be an extraordinary achievement: far beyond what most philosophers could realistically hope for. The combination is… kind of mind-blowing, when you think about it.
As his work in both animal ethics and global poverty demonstrates, Peter was a master of what I call theory-driven applied ethics—using neglected insights from moral theory to formulate “mid-level principles” that can speak to a much wider audience, and even seem “obvious” in retrospect:
[W]hile Singer in no way assumes utilitarianism in his famous argument for duties of beneficence, I don’t think it’s a coincidence that the originator of this argument was a utilitarian. Different moral theories shape our moral perspectives in ways that make different factors more or less salient to us. (Beneficence is much more central to utilitarianism, even if other theories ought to be on board with it too.) So one fruitful way to do theory-driven applied ethics is to think about what important moral insights tend to be overlooked by conventional morality…
Shifting to a more philosophical assessment, I might vote for Practical Ethics as his best book. (It certainly had a huge impact on me as an undergraduate.)1 It so nicely demonstrates what practical ethics (at its best) can be: providing thought-provoking, systematic, rational arguments that illuminate important ethical issues. Peter’s writing, here as elsewhere, is clear and laser-focused on important considerations. His conclusions are always supported by good reasons. Unlike many bioethicists, he never relies on questionable intuitions or social norms—no mere sense that “this sort of thing just seems wrong”—as though enculturation alone were sufficient reason to maintain a moral norm that lacks any deeper justification. Reading Practical Ethics, one really gets a sense of how to approach ethics as a form of rational inquiry. Whether or not you agree with the book’s conclusions, I think it has incredible philosophical value as a paradigm of how to do practical ethics well.
More generally, I feel like Peter himself is an exemplary role model for moral philosophers.2 In addition to his deep concern for improving the world, I also can’t stress enough how much I appreciate Peter’s unimpeachable intellectual integrity and abiding faith in the value of inquiry. He is one of philosophy’s staunchest defenders of academic freedom, at a time when too many of our colleagues engage in short-sighted politicking to try to silence or suppress viewpoints they regard as harmful. One speaker at the conference described Peter’s work as containing a “missionary” element—he hopes to convince people of his views, and thereby bring about real improvements in the world. But this description could be misleading, because Peter is always a philosopher, never a mere activist, even in his most practical endeavors. He welcomes reasoned disagreement, wants the strongest arguments on each side to be aired, and simply trusts that the truth will win out in the end. I love that. I think that’s exactly how philosophers should be.
Where we disagree
Many speakers at the conference took pains to distance themselves from Peter’s utilitarianism, even while praising his ecumenical applied work and general beneficence. I guess I’m one of the few who thinks Peter also got a lot right about fundamental ethical theory! But disagreements are more interesting to explore, so here are some of mine:
(1) Theories of well-being — I reject Peter’s hedonism. Not all possible ways of securing some fixed amount of lifetime happiness would be equally good for you. At a minimum, relationships and achievements matter (non-fungibly) too.
(2) Peter seems very dismissive of longtermism, and AI risk in particular. That seems unjustifiably overconfident to me: even a 0.1% chance of AGI within a decade or two would surely justify significant attention to safety/alignment issues now (even just considering the value of protecting the existing global population, let alone future generations). It makes me wonder whether he may have committed the fallacy of maximizing probable value rather than expected value; I’m not really sure how else to make sense of it. (I sketch the relevant expected-value arithmetic below, after point (4).)
(3) I don’t know that this is a disagreement, exactly, but I find it curious that Peter focuses so much of his animal advocacy on individual consumption choices (i.e. veganism) rather than effective donations. I’d expect the latter to matter more, and also to prove easier for many people (that’s why I’m a “cheeseburger ethicist”).
(4) I’m also unsure about the extent of our theoretical disagreement re: maximizing. I’ve argued that there’s no substantive difference between Peter’s (Sidgwickian) maximizing consequentialism and Norcross’ scalar consequentialism. Both allow for an “ought of most reason” to pick out the ideal action; neither takes this to indicate a sense of “obligation” that it would be blameworthy to violate. (I also argue that it may be misleading to use ‘obligation’ talk to merely pick out what one has most reason to do, if there isn’t any further sense in which it is required.) My impression is that Peter isn’t entirely convinced of my take here, but I don’t have a clear sense of where he disagrees, or why.
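To make the expected-value point in (2) concrete, here is a back-of-the-envelope sketch, using purely illustrative numbers of my own (not estimates anyone at the conference defended). Suppose the probability of an AGI-driven catastrophe within a decade or two is p = 0.001 (i.e., 0.1%), and that such a catastrophe would cost the lives of the existing global population, N ≈ 8 billion. Then the expected loss is

\[ \mathbb{E}[\text{lives lost}] = p \times N = 0.001 \times 8{,}000{,}000{,}000 = 8{,}000{,}000. \]

An expected cost of eight million lives would plainly warrant serious precautionary investment, even before counting future generations. A “maximize probable value” heuristic, by contrast, attends only to the 99.9%-likely outcome in which nothing bad happens, and so assigns the risk no weight at all.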
More generally, there’s a big difference in “vibes” between our respective approaches to ethical theory. I appreciate Peter’s relentless focus on promoting well-being. But, perhaps as a result, he presents utilitarianism in a way that can seem stark and alienating to many. I’m often interested (for both theoretical and practical/dialectical reasons) in how we can supplement our core consequentialism with additional plausible claims about, for example, virtue and fitting attitudes. I see no downside to this conceptual expansion, and think it’s actually essential for adequately addressing many objections, such as the (mistaken) worry that utilitarianism neglects the separateness of persons by treating individuals as fungible “mere means” to the aggregate good. My sense (though I could be wrong about this) is that Peter has less interest in such objections, perhaps because it’s just so clear to him that promoting well-being is what truly matters. (I obviously agree with that latter claim; I just think it’s worth making further claims in addition.)
A personal note
Several speakers emphasized Peter’s calm generosity as a mentor, and that certainly matches my own experience. After admiring his work from afar as an undergrad, I was fortunate to learn even more from Peter during grad school at Princeton—as a student in his Sidgwick seminar, as a TA for his Practical Ethics class, and as I developed my dissertation (on which he offered generous comments).
I also benefited immensely from later collaborating with Peter on pandemic ethics. We co-authored a Washington Post op-ed in April 2020 advocating challenge trials and other “experiments on human volunteers”, and later expanded it into an academic paper defending “risky research”. I was a bit stressed about the former—I was still “junior” (pre-tenure) at the time, and had been upset when a senior philosopher called me out on Facebook for blogging about pandemic ethics in a way he disapproved of;3 and I worried that medical experts would react even more harshly. But Peter was very calm; he reminded me that our arguments were clearly defensible, and that others’ turf-policing was a risk one just had to accept if one hoped to reach a wider audience. It was exactly what I needed to hear just then, and helped to put things back in perspective for me. So: thank you, Peter!
Doing Good from the Ivory Tower
Something that’s very striking about Peter’s career is just how unusual it is—not just in its extraordinary effects, but even in simply trying to do such good. Most academics, even most moral philosophers, do not seem to care all that deeply about the world at large. (Shockingly few have so much as signed the Giving What We Can pledge.) Even when society was struck by something as ethically novel and momentous as a pandemic, only a comparatively small number of moral philosophers seemed to think they had anything to contribute to our collective understanding of the emergency.4 I think that reflects poorly on our discipline, and we (both philosophy and society at large) need more practical philosophers like Peter Singer, who draw upon a deep understanding of systematic theorizing in order to productively critique common moral assumptions.
During the conference, it was mentioned how beneficial it was for the world that Princeton University hired Peter 25 years ago. This amplified his impact in at least two ways: the media were more interested in hearing from a prestigious Ivy League professor, and the students he taught (and, at least sometimes, influenced) were themselves disproportionately likely to go on to be wealthy and influential.
While this point wasn’t dwelled on, it seems kind of lamentable that—outside of specialist centers like Oxford’s Global Priorities Institute—there is no general internal impetus for elite universities to hire academics of Peter Singer’s ilk (i.e., whose research explicitly aims to impartially improve the world). Academic hiring depends on the idiosyncratic preferences of the hiring committees, and I imagine it would be unusual for them to give any consideration at all to such instrumental value (and that when they do, it’s apt to be highly politicized/ideological).
Now, I do tend to think that the core purpose of the University is to advance knowledge, not (directly) to try to improve the world. But there seems something a bit odd about never giving weight to impartial social value. Especially when one considers all the other non-rational influences on academic hiring, from ideological factors to social connections. At first glance, it seems kind of a shame that there aren’t at least a few endowed chairs at prestigious universities that are explicitly reserved for “world-improving” philosophy. (You could imagine generalizing this so that every department had at least one tenure line that was explicitly reserved for the most socially valuable work that their discipline could offer.) But on second thought, the appeal of this idea may be undermined by the risk of ideological capture.5 We all know what kind of work university administrators (and even many faculty) tend to view as “socially valuable”, and it doesn’t involve impartial regard for the general welfare.
It’s an interesting question whether there’s any way to better incentivize and support impartially valuable work within academia, while avoiding ideological capture. As I wrote in a comment on Daily Nous last month, we might well expect philanthropic funders to be guided by better priorities than those of typical academics. (Imagine if Doctors Without Borders funded medical research.) Maybe philanthropists could endow chairs in impartial beneficence, or global priorities research, or some such?6 I don’t know; I welcome others’ thoughts on what could feasibly improve academic incentives, to encourage more Singerian careers in future.
Doing Good from Outside the Ivory Tower
While Peter’s retirement from Princeton is a sad thing for the university, and in many ways for the larger world too, the good news is that he’ll hopefully have more time now for pursuing his public-philosophical work!
I’m very glad that that work is continuing, and I look forward to hearing more from Peter on his new podcast Lives Well Lived (co-hosted with Kasia de Lazari-Radek), as well as his Bold Reasoning Substack!
1. Along with Parfit’s Reasons and Persons, and R.M. Hare’s Moral Thinking.
2. The only other figure from history who springs to mind as so clearly exemplifying the two key virtues of the moral philosopher—deep and practical beneficentric concern, together with robust intellectual virtue—would be J.S. Mill.
3. I can’t recall the exact wording now, but it was something about how “practical philosophy is all well and good when the stakes are low, but in high-stakes situations we should really avoid rocking the boat, and just trust the establishment to know what’s best.” I felt at the time (and even more strongly now) that this was entirely backwards: high-stakes situations are precisely when it is most important to have philosophical “gadflies” critically examine the conventional wisdom for potential flaws or oversights. I guess I expected other philosophers to share my faith in the value of philosophical reasoning, and was shocked to discover during the pandemic that many do not. But Peter certainly does, and that’s something I deeply appreciate about him.
4. I think Peter Godfrey-Smith once assembled a list, but I can’t find it now.
5. It also seems important to avoid overly bureaucratized assessments of the social value of research. The (often farcical) “impact case studies” of UK academia’s “Research Excellence Framework” (REF) demonstrate the potential risks here. The last thing we need is for US academia to more closely resemble the UK’s awful system.
6. I imagine there could be a bit of an awkward dance between, on the one hand, the funder’s desire to specify sufficient details/constraints to avoid ideological capture, and, on the other hand, the university’s desire to maintain a free hand in who they hire. But I’d expect something in this vicinity could work out. “Global priorities research” is still very general and broadly inoffensive, after all, even if it does nudge one in a certain—salutary—direction; it’s not remotely like trying to endow a chair in “climate change denial” (which I agree universities should not be willing to cooperate with). It’s an interesting question just what the boundaries of acceptable external influence on academia should be. Can anyone recommend good work on the ethics of external funding?
Comments

I think my main disagreement with Singer is some of the stuff he has said about Israel-Palestine, mostly recent stuff, and some of his comments on capitalism/socialism. It was very disappointing to see the awful “human shields” argument from him. But overall, he and Chomsky have probably been the most influential thinkers on how I think about the world. He’s the first philosopher I ever read, way back in HS. Weird to see him retire, but at least he’s got a podcast and Substack.
On your third disagreement with him, it seems like personal consumption choices here matter a lot: if you eat less meat, on average, fewer animals are tortured and killed. A lot more should be done to fight animal suffering, but at the very least you shouldn’t pay for something to experience extreme suffering and death. And, for most people, I would suspect this is actually the easiest thing they could do. If you’re living paycheck to paycheck, it’s rather hard to donate. And if you do donate your income, you can donate to human charities instead. You could also become an activist, but that requires lots of effort and time. Going vegan might require some initial effort, but overall it’s almost trivial. Sometimes I have to quickly glance at ingredients on food labels, but you probably should be doing that anyway. It seems like the absolute bare minimum anyone should do, not some major effort or commitment.
I also think one important aspect and benefit of veganism is the social effects. Most people get their ethics from social norms, not through careful reflection and reasoning. We're never going to achieve the desired goals unless non-vegan behavior becomes socially unacceptable. And vegan social norms are rather hard to encourage if you're not a vegan yourself.
Regarding the 0.1% AGI argument, I think you are making some contentious assumptions there. Basically, I’d argue that the argument fails for the same reasons Pascal’s wager fails.
I mean, if the world is filled with those kinds of risks (be they nuclear war, bioweapons, secular decline, etc.), it becomes much less clear that attention to AGI doesn’t take away from efforts to reduce those other risks.
Also, for the AGI argument to have force, you need to think that working on AGI risk is relatively likely to reduce rather than increase that risk, and that it won’t increase other risks.
For instance, my take on AGI is basically somewhat like my take on law-enforcement use of facial recognition. It was always going to happen (if technically possible), and the choice we had was whether to wring our hands about it so that it ended up being sold by the least responsible company (Clearview), or to encourage somewhat more responsible and technically proficient companies (Amazon/Google) to offer it.
Basically, I don’t think you can avoid the fact that public concern about AGI will create pressure for Western countries to regulate, and for prestigious computer scientists not to work on it, and that seems like a very bad thing. So even if there is a serious risk there, we may want to STFU about it if that makes the outcome plausibly worse.
Also, I fear AGI concerns trade off against taking other concerns about AI seriously.