Social media notoriously directs us into bubbles and echo chambers. But the incentives to engage more with like-minded individuals are more general, and also found in academia. It seems rather rare to find cutting-edge work on “big picture” debates like consequentialism vs deontology, for example. Part of this may stem from hyper-specialization, as people work on the next epicycle of some highly specific sub-debate. But it also seems like we naturally cluster into different philosophical “camps” (utilitarians, Kantians, “common sense” moralists, etc.), to hunker down and work out the internal details of our preferred views. There are good reasons for that, of course—we often learn more from those who are philosophically “closer” to us, and sympathetic to our general approach, whereas trying to understand and explain ourselves to those more distant can be a frustrating experience, rife with mutual misunderstanding (and negative referee reports). But there’s a risk that cross-camp debates end up unduly neglected, which seems especially unfortunate when these are, after all, the bigger and more important questions.
Hence this post. After highlighting some challenges/arguments that I’d be especially keen to see my philosophical opponents address, I’ll invite anyone else to comment with challenges or objections to my views—anything you think I might be unduly neglecting or would benefit from grappling with more explicitly. But first…
A note on the limits of argumentation
Sometimes people assume that an argument they personally find unconvincing is thereby “question-begging” or otherwise worthless. This is a mistake. A determined opponent can always just reject a premise; that’s inevitable. Arguments can’t force people to change their minds, so that isn’t a realistic expectation.
We do better to think of arguments as highlighting neglected costs (of rejecting the conclusion), and inviting those who nonetheless reject our conclusions to (i) seriously consider which costs they’re willing to accept (i.e. which premises to reject), and (ii) suggest any counterarguments that mitigate the apparent cost of their preferred move (or perhaps even show it to be a “feature” rather than a “bug”). In a successful dialectic, everyone leaves with a clearer view of the costs and benefits of the competing views on offer.
A question-begging argument is one that offers no such illumination. The conclusion is so transparently contained within the premises that there is no conceivably “neglected” consideration there to highlight—nothing that might, for example, help to sway a “fence-sitter” who was as-yet-undecided about whether to accept the conclusion. Any such fence-sitter would necessarily be just as undecided about the question-begging premise.
In general, rather than just asking whether you could (or even do) reject this or that premise, it’s often more worthwhile to try to evaluate which claim (the premise or its negation) is overall more plausible. You could believe just about anything, after all. But it’s better to believe things that are more plausible rather than less so. Accordingly, a good objection to an argument involves explaining why it’s more plausible to reject a certain premise than to accept it. This is to make an essentially comparative claim.
Several of my recent posts have tried to explain my moral perspective in a way that I hope will seem plausible and compelling even to some who might not have been inclined to agree with me beforehand. For example, I suspect that at least some (esp. non-philosophers) may be drawn to deontology because they endorse the norms in practice, not because they necessarily believe them to be non-instrumentally justified. So simply highlighting this distinction could go some way towards convincing such individuals.
For another example, my claim that “Competing norms cannot plausibly claim to be more important, in principle, than people’s lives and well-being” is one that strikes me as having a lot of intrinsic credibility. Simply raising it to salience, and noting the conflict with deontological theory, might convince some fence-sitters to lean against the latter, since rejecting this claim seems a real cost of that theory.
I wouldn’t expect either of these points to sway a committed deontologist. (I don’t really expect anything to sway a committed deontologist, though you never know when someone might surprise you.) But I think it’s worth presenting such arguments nonetheless, because they could—especially in combination with further arguments—reasonably sway those who aren’t yet committed.
I’d like for there to be more such argumentative presentations—in both directions—since I don’t think we currently have a good collective grasp on what the balance of reasons in the consequentialism-deontology debate even looks like. (And presumably the same is true of many other important debates.) What would a deontologist say to try to convince a fence-sitter who was starting to feel swayed by my normativity objection and related arguments? Their view is sufficiently alien to me that I’m not able to predict this. But it would be good to know! Conversely, many non-consequentialists say things that lead me to think that they’ve internalized deep misconceptions about my view. (Indeed, I’d say about half of the most common objections to utilitarianism rest on outright misconceptions rather than legitimate differences of opinion. Though that still leaves plenty of room for the latter too, of course!)
Challenges I’d like to highlight
For non-consequentialists
‘Three Arguments for Consequentialism’ summarizes some of my favourite arguments here. Though—especially for the ‘Master Argument’—one would need to follow the supporting links to get the full picture.
As indicated above, I’m very curious what deontologists can say to try to make their fundamental principles sound more intrinsically credible, given their conflict with ex ante Pareto. What makes their interest-independent moralizing appreciably different from conservative moralizing, for example?
I’m also curious how they’d deal with my new paradox of deontology.
And, given how strongly most deontologists rely on intuitive “counterexamples”, I’d like to hear more about why the intuitive accommodations of deontic fictionalism or two-level consequentialism don’t defang these worries, at least to a significant extent. (I grant that some may just brutely intuit that there are additional non-instrumental reasons in these cases. That’s fine. But I’m wondering how confident they can be that this verdict is obvious or decisive, especially when more pure variants of their “counterexamples”, like Martian Harvest, don’t seem the slightest bit intuitively embarrassing to utilitarians—suggesting that there is no real “bullet” to bite here after all, and the strength of the original intuitions instead stems from confounding factors.)
For those who claim, more strongly, that there’s something objectionably “instrumental” or otherwise obviously wrong with the way that consequentialism values people, I’d especially urge engagement with my post, ‘Theses on Mattering’. I really think this is a case where the critics are flatly mistaken. But if I’m wrong about that, I’d love to hear the counterargument.
More generally, I’d love to see critics of utilitarianism engage with the strongest version of the view, rather than the straw-man caricature that’s in their heads.
For critics of Effective Altruism
Here I’m mostly curious about whether these folks actually reject scope-sensitive beneficentrism, and think there’s something inherently objectionable about the core idea of “doing good better” (which remains bizarrely rare, after all!), or if they secretly endorse this central plank of EA philosophy and are really just quibbling about strategy and edge cases (e.g. the expected value of anti-capitalist politics). Those latter disagreements are still important, of course, but they’re surely best thought of as internal questions about how best to pursue our shared goals, rather than justifying the wholesale opposition and hostility that one tends to actually find from EA’s most vocal critics.
For proponents of narrow person-affecting views
Without committing the Epicurean fallacy, what advantage does the narrow view offer over a principled hybrid that combines impersonal and person-affecting reasons? As I explain the challenge there:
We all agree that you can harm someone by bringing them into a miserable existence, so there’s no basis for denying that you can benefit someone by bringing them into a happy existence. It would be crazy to claim that there is literally no reason to do the latter. And there is no theoretical advantage to making this crazy claim. (As I explain in ‘Puzzles for Everyone’, it doesn’t solve the repugnant conclusion, because we need a solution that works for the intra-personal case — and whatever does the trick there will automatically carry over to the interpersonal version too.) So the narrow person-affecting view really does strike me as entirely unmotivated.
The challenge then carries over to population-ethics objections to longtermism:
[T]his very natural hybrid view still entails the basic longtermist claim that we’ve very strong moral reasons to care about the distant future (and strongly prefer flourishing civilization over extinction). So the notion that longtermism depends on stark totalism is simply a mistake.
It remains sadly common for philosophers to flippantly reject longtermism on the (false) presumption that it depends upon totalism. I’d encourage them to think more carefully on this.
What are the biggest outstanding challenges to my views?
Comments welcome! I’ll try to reply to serious suggestions as time allows.
Sorry to respond late, and at obnoxious length, but I've been busy and still wanted to share my thoughts.
On deontology:
I agree that utilitarianism has a much more convincing account of what matters, but I think deontic fictionalism and two-level consequentialism concede a lot more to deontology than you recognize. As commenters pointed out, it is plausible that truly committing to deontic fictionalism requires one to essentially become...a deontologist. For example, one might believe that no human beings will actually follow rules they think are only instrumental, and so, in order to actually establish useful rules as normative, we all have to affirm that the rules are justified not just by their instrumental value.
I think this is a bit extreme, but it points to a general instability in these sorts of ideas. To me, the key questions facing a deontic fictionalist or two-level consequentialist are: what rules should we adopt, should we allow exceptions, and if so, in what cases? These are the questions that ought to distinguish a two-level utilitarian from a true deontologist, even if the only rules that differ are rules of the form "you should affirm that these rules are normative for non-instrumental reasons".
Either we should basically follow the usual deontological rules but allow for exceptions, or (the same thing framed differently) we should have a set of rules that usually, but not always, reduces to the standard deontological rules. The difference between the rules a utilitarian will arrive at and the usual deontological rules can be thought of in two ways: they will be the standard deontological rules, but with a principled way to handle the objections a naive utilitarian would raise when it's clear that the first-order consequences of following a rule are bad; or they are the rules you are led to by considering how a naive utilitarian would account for increasingly higher-order consequences of their behaviour.
But this is a self-referential problem: if you start by considering higher-order consequences of your actions, those higher-order consequences depend on the set of rules that you expect people to follow. You want to find a fixed point: a set of rules that, if everyone acting under those rules considered all the higher-order consequences of actions performed under that set of rules, would give the best outcome. This makes it a very hard, plausibly intractable, problem to solve, at least in a satisfactory way. Deontology is a bad solution (it just imposes a set of rules by fiat), but it is at least a stable solution. Utilitarianism, to me, faces the choice of either picking arbitrary cut-offs for how many levels of consequences to follow, or basically endorsing some set of deontological rules but then allowing unprincipled exceptions whenever the lower-order consequences of following those rules seem bad enough.
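To make the fixed-point idea slightly more precise (this is just my own gloss, with the value function V as an assumed simplification, not anything from your posts): let R range over candidate rule-sets, and let V(R | R') be the expected value of everyone following R while expecting the rules R' to be in force. The target is then a rule-set satisfying

$$R^* \in \arg\max_{R} V(R \mid R^*),$$

i.e. the rules that come out best when evaluated against the very expectations they themselves create. Whether such a fixed point exists, is unique, or can actually be found in practice is exactly the tractability worry.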
That second option (deontological rules plus exceptions) is basically my attempt to characterize deontic fictionalism/two-level consequentialism, and I think the difficulty is that, until utilitarians have a truly competing set of rules, a realistic two-level consequentialism is always just going to look like either a set of unprincipled exceptions to deontology, or an endorsement of deontology for different reasons. In both cases, I think this concedes that deontology is right that a) moral decision-making should be guided mostly by following "common sense" rules of morality, and b) deviations from these rules will mostly rest on ad hoc reasoning, and will be difficult or impossible to expand into fully general principles.
I think the argument I've laid out above is a long-winded (sorry) way of saying that utilitarianism is, in theory, a better moral theory than deontology, but that this is hard to translate into a better account of moral practice: 98% of the time, utilitarianism will tell you "follow deontological rules", though it will give you better reasons for doing so. This is at least a little ironic, since utilitarianism, by its nature, ought to be more concerned with differences in moral practice. In your two-level consequentialism post, you note that:
Theories differ in the verdicts they yield about hypothetical cases (and certain kinds of “ex post” retrospective judgments). But it would be a mistake to take these as carrying over straightforwardly to real-life cases—or even to various “ex ante” judgments, including judgments of the quality of the agent’s intentions, character, or decision-making. Utilitarians can say much more commonsensical things about these sorts of judgments than most people realize.
But ex post retrospective judgements shouldn't really be that interesting to a utilitarian: subjective evaluations of events after the fact presumably make very little difference to actual outcomes for human beings unless they inform ex ante judgements in future cases. And if our ex ante judgements are more commonsensical, are we really adding much that's new?
In a sense, utilitarianism seems to me something like a scientific theory of, say, animal behaviour that is grounded in modern atomic physics, while deontology is like a theory of animal behaviour grounded in something like “élan vital”. The former theory is much better grounded theoretically, but the practical difficulties of applying it might mean that it is not actually a better guide to studying animal behaviour than the latter. “What is the vital force of this frog compelling it to do?” might be a better way to think about how frogs act than “What is the outcome of this completely uncomputable simulation of all the atoms in the frog?”, even if the former is basically completely wrong in its view of the world, and the latter is basically completely right.
Now, I've stated the most extreme version of the case; I think I can anticipate some of your objections, and I probably agree with them. First of all, deontologists do actually endorse some pretty bad rules: as you note, a lot of deontologists are not beneficentrists, even though in theory they could be. Maybe compared to a sufficiently good version of deontology, utilitarianism would be little more than a tweak; but without pressure from utilitarians, we end up with pretty crappy versions of deontology.
What's more, I framed everything I said above in the way most unflattering to utilitarianism. In fact, even when a problem is computationally intractable, using correct first principles to answer questions by imposing cut-offs can be a very powerful tool: no one would actually analyze frog behaviour by simulating a frog at the atomic level, but thinking about frogs as made of atoms is not fruitless! Frog behaviour is influenced by biochemistry, and biochemistry reduces to atoms.
So I don't actually endorse the point of view above. But I think it does capture something true about the difficulty of arriving at a practically useful utilitarianism, and about why theories like two-level consequentialism defang the deontologist critique precisely by ceding a lot of ground to it. That's obviously fine, but I think you sometimes write as if, having shown how the two theories are more compatible than one might think, deontologists should consider moving in a more utilitarian direction...but there are not-crazy reasons to argue that your synthesis is actually a bigger step in the direction of deontology!
On person-affecting views:
Having said a lot above, I'll try to be more concise here. I agree there are lots of problems with narrow person-affecting views, but I don't think the only solution is to adopt impersonal reasons and the idea that one can be benefited by being brought into existence; Michael St. Jules has some comments on the Epicurean fallacy post that I think point at other ways around at least some of those difficulties. I think all attempts to save the spirit of person-affecting views still fail to satisfy the Independence of Irrelevant Alternatives, for example, so I don't mean to say that these solutions are as satisfactory as adopting an impersonal view, much less that they have advantages over it. It's just that the procreative asymmetry really does feel intuitive to me, so I think it's worth keeping an open mind.
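(For concreteness, here is the standard statement of that axiom, on my own gloss rather than anything from Michael's comments: the ranking of two options should not depend on what other options happen to be available, i.e.

$$A \succ B \text{ from menu } \{A, B\} \iff A \succ B \text{ from any menu } M \supseteq \{A, B\}.$$

Person-affecting views tend to violate this because whether bringing a happy person into existence counts in favour of an option can depend on which alternative populations are on the menu.)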
The strongest objection to longtermism is skepticism about the extent of our knowledge. Would the world today be better if people centuries ago had been able and willing to shape our present? I'm inclined to say no: moral and scientific progress has made us, the people of today, better at guiding today's world than Genghis Khan, Queen Elizabeth, or whoever else would have been from their temporal position. Similarly, I suspect people hundreds of years from now will rightly think the same of us. As we are better qualified to shape our present than our distant ancestors were, so will our descendants centuries down the road be better qualified to shape their present.