Without spending a lot of time reading academic journals (or well-curated reading lists), it can be hard to tell whose ideas you might find interesting. Even looking at an academic’s professional website, they’re likely—and I’m as guilty of this as anyone—to just list some broad areas of interest, and then list their papers. It’s hard to tell from this generic info just what their papers contribute, or how interesting their work actually is.
Thinking about the problem back in 2020, I attempted a summary of what makes my papers worth reading (and invited other philosophers to do likewise—a few took me up on it, and they were indeed very interesting!). But reflecting more on it, perhaps a paper-by-paper approach is still too granular, and what would be most helpful is for philosophers to summarize their main contributions & ideas, in a big-picture kind of way. So: here goes!
1. Beneficentrism
This idea may not be especially original (it’s basically just channeling what I take to be the implicit normative content behind Effective Altruism), but it is both important and widely neglected. People commonly dismiss reasons of beneficence as “utilitarian”, when they’re really much broader than that. It also influences how I do applied ethics:
One fruitful way to do theory-driven applied ethics is to think about what important moral insights tend to be overlooked by conventional morality. That was basically my approach to pandemic ethics: to those who think along broadly utilitarian lines, it’s predictable that people are going to be way too reluctant to approve superficially “risky” actions (like variolation or challenge trials) even when inaction would be riskier. And when these interventions are entirely voluntary—and the alternative of exposure to greater status quo risks is not—you can construct powerful theory-neutral arguments in their favour. These arguments don’t need to assume utilitarianism. Still, it’s not a coincidence that a utilitarian would notice the problem and come up with such arguments.
2. Telic Ethics and Moral Priorities
Most ethicists focus on deontic questions, seeking to identify what is permissible and impermissible. I think such questions are overrated, and we do better to focus moral inquiry on the telic question of what to prioritize (or take as our moral ends). Advantages of the telic question include:
Accommodating a wider range of ethical theories (making it more suitable as a neutral framework for the shared philosophical project of ethical theorizing)
Greater & more transparent normative authority (making it more suitable as the guiding question around which to frame one’s own moral views)
Encouraging appropriate moral ambition (in contrast to the passive negative goal of just “not acting wrongly”), better focusing our attention on the normative questions that matter most.
This might be the most important idea that I’m currently working on. It’s a big shift from how most philosophers think about ethics. But I also think it’s a clear and significant improvement, for all the above reasons.
3. The need for positivity in ethics
This is another distinctive angle that I think can generate a lot of new and neglected insights. Lots of ethics is currently framed negatively, identifying things to avoid (wrongdoing, injustice, suffering, harm, risk, etc.). But there’s a general problem here, namely that an empty universe maximally satisfies all such purely negative goals. If we’re to avoid valorizing the void,[1] we should instead identify a positive goal that the negative rule helps to secure or protect. For example:
Prioritize securing sufficiency (rather than avoiding harm). Note that standard harm/benefit asymmetries generally discount “pure benefits”, which absurdly implies discounting saving lives; I suggest instead discounting only “pure benefits above the sufficiency threshold”, i.e. luxury benefits.
Prefer QALYs gained over DALYs lost as a public health metric (though possibly with some discounting of QALYs gained through procreation).
Replace wrongdoing-aversion with enticement towards acting well (or simply towards morally preferable outcomes per se).
Epistemically: prioritize developing good arguments over avoiding bad ones (and using your best judgment over suspending judgment).
In general, we’re biased towards overweighting risks of commission and underweighting risks of omission. A lot of important progress in applied ethics could be made simply by correcting for this bias wherever it appears, and thinking carefully about what moral goals actually make sense in the context.
4. Valuing Concrete Individuals
Many philosophers think that utilitarianism violates the separateness of persons, regarding people as fungible mere means to aggregate value. But they’re wrong. As I explain in ‘Value Receptacles’, an axiology can be structured so as to assign fundamental value either to concrete particulars (making them non-fungible) or to disjunctive sets of them (making the particular disjuncts fungible). And there’s no barrier to utilitarians assigning separate, non-fungible value to each particular welfare subject.
(This undermines the principled basis for non-aggregative approaches to ethics; all they have left is intuitions about cases.)
In ‘The Right Wrong-Makers’, I generalize the point to (i) provide a novel response to the alienation objection, while (ii) rebutting Stocker’s charge that modern ethical theories commit us to a troubling “moral schizophrenia” (or disharmony between our motives and our normative reasons). The crucial idea is that the abstract features that appear in our theories’ criteria of rightness (e.g. maximizing value) are summary criteria, but the actual normative work is done by more concrete ways of meeting the general criteria. (To help grasp the intuitive distinction, consider that not all wrong acts are wrong for the same reason, though there’s something more general that all wrong acts have in common.)
More generally, in past work I’ve thought a lot about non-instrumental valuation. I summarize the main insights in last year’s post: Theses on Mattering. (Even other moral philosophers routinely make basic conceptual mistakes here. So it would be a big change if my theses here were more widely appreciated!)
5. Normative Disambiguation: Fittingness and Deontic Pluralism
As a general methodological stance, I think too many ethicists rely on treating ‘ought’ as a primitive, when it often needs disambiguation. I have several papers that seek to make progress by thinking through the implications of our normative claims (e.g., in terms of what sorts of claims on our agency they make—what attitudes or responses turn out to be warranted or “fitting” in light of the normative claims in question).
For example, one important upshot of my 2012 ‘Fittingness’ paper is that Global Consequentialism is best understood as a mere terminological variant of Act Consequentialism. My posts on Deontic Pluralism and Consequentialism Beyond Action further introduce my general approach here (including the theoretical importance of recognizing the concept of blameworthiness as distinct from that of mere expediency to blame; the way other consequentialists tend to collapse the two is extremely frustrating, and inimical to philosophical dialogue and progress). Mark Significance with Attitudes flags a significant upshot of my approach. And Why Belief is No Game explains the shortcomings of an influential “pragmatist” alternative approach to normativity.
One thing to flag here is that my interest in the abstract philosophy of normativity is more methodological than metaphysical. The most common misinterpretation of my ‘Fittingness’ paper is that people read me as an early representative of the metaphysical “Fittingness-First” camp (in opposition to reasons fundamentalists and others). But in the paper itself I take pains to emphasize that I don’t take myself to be in disagreement with reasons fundamentalists. I’m not committed to fittingness and reasons being metaphysically distinct at all (I’m inclined to think they’re not), in the way they’d need to be for one to be “prior” to the other. Rather, I’m interested in showing how attention to the concept of fittingness (or using the tool of fitting reasons) can illuminate and discipline our first-order normative theorizing.
Other ideas
The above “big five” ideas summarize some of the major themes of my work. If I had to unite them into one over-arching meta-theme, it would perhaps be that I’m most interested in the question of how we can do moral philosophy better.
But I also have a bunch of other interests that aren’t so easily categorized amongst the above. For example:
I’m currently working on a paper exploring diminishing marginal value as an improvement upon prioritarian ethics and a possible solution to double-or-nothing existence gambles.
I have unsettled views in population ethics, but am especially interested in exploring “commonsense” alternatives to totalism such as variable value theories (which effectively assign diminishing marginal value to additional lives) and hybrid (person-directed + undirected reasons) accounts (which effectively involve a kind of partiality towards antecedently existing individuals).
I’m also interested in exploring the parallels between metaethics and philosophy of mind. Helen & I will probably expand our 2013 ‘Mind-Body Meets Metaethics’ paper into a full book someday. I’m also very partial to my 2-D argument against metaethical naturalism, though referees uniformly hate it (and never for the same reason, it seems). I think the core parochialism objection gets at a deep problem for the current (synthetic naturalist) orthodoxy. And I’m excited to someday explore further the idea that substantive boundary disputes pose a fundamental challenge to metaphysically reductive views about both normativity and mentality.
I think my five fallacies of collective harm (and related work, linked therein) also shed a lot of light on problems with common thinking about inefficacy objections.
And my New Paradox of Deontology may be the strongest general objection to deontology on offer.
Your turn!
I worry that some academics are reticent to publicly explain their work because it seems too “self-promotional” or something. But I’d find more such explanations to be really helpful (and I’m sure I’m not the only one)! So, if anyone gives you flak for doing it, feel free to blame me for asking you. I’d just really like to see more academics writing these sorts of summaries. Feel free to post yours in the comments below. Or write it somewhere else—on your own blog, professional website, or public Facebook post—and link to it in the comments below so others can easily find it and see what interesting stuff they might learn from reading more of your work!
[1] Or, to take my own advice here—since one could avoid valorizing the void by simply not existing, and that doesn’t seem like the best way to do it!—this should really be read as saying something more like, “If we’re to appropriately appreciate the good in the world…”
A big idea:
It seems to me that once you get away from utilitarianism, it's almost inevitable that you're gonna end up being a moral particularist to some degree.
So far, moral-particularist theories have been basically intractable to analyze. But in principle, AI might eventually offer tools to explicitly represent (at least approximations to) ultra-complex moral-particularist theories. How would such models be trained? I guess using experimental philosophy questionnaires to elicit people's intuitions.
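To make the suggestion a bit more concrete, here's a purely illustrative toy sketch of what the most minimal version might look like: code vignettes as feature vectors, elicit verdicts from respondents, and answer new cases by analogy to stored cases rather than by extracting any general principle. The features, verdicts, and data below are all hypothetical placeholders, and a real attempt would need vastly richer case representations.

```python
# Toy sketch: a case-based "judgment function" fit to elicited intuitions.
# All features, labels, and data are hypothetical placeholders.

from math import dist

# Each vignette is coded as a feature vector, e.g.
# (harm_caused, benefit_produced, consent_given, bad_intent),
# paired with a verdict elicited from questionnaire respondents.
ELICITED = [
    ((0.9, 0.1, 0.0, 1.0), "impermissible"),
    ((0.1, 0.9, 1.0, 0.0), "permissible"),
    ((0.5, 0.8, 1.0, 0.0), "permissible"),
    ((0.8, 0.6, 0.0, 1.0), "impermissible"),
]

def judge(case, k=3):
    """Verdict by nearest neighbours: no explicit principle is ever
    extracted; each stored case bears on the new one holistically,
    which is the particularist-friendly part of the idea."""
    nearest = sorted(ELICITED, key=lambda cv: dist(cv[0], case))[:k]
    verdicts = [v for _, v in nearest]
    return max(set(verdicts), key=verdicts.count)

# A new case resembling the harmful, non-consensual stored cases:
print(judge((0.85, 0.2, 0.0, 0.9)))  # prints "impermissible"
```

Of course this is nothing like representing an "ultra-complex" particularist theory; it's just meant to show why the training-on-elicited-intuitions route doesn't require writing down any principles at all.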
The tech is not there yet, but could it ever get there? I'd like to hear from the Wittgenstein-Anscombe-inspired particularists.
e.g. https://academic.oup.com/edited-volume/43987/chapter-abstract/371424801?redirectedFrom=fulltext
Regarding 2 (as well as 3 and 5), I'd be interested to hear more about how your ideas relate to the "consequentialization" of moral theories and to "scalar ethics".