
Sorry to respond late, and at an obnoxious length, but I've been busy, and still wanted to share my thoughts.

On deontology:

I agree that utilitarianism has a much more convincing account of what matters, but I think deontic fictionalism and two-level consequentialism concede a lot more to deontology than you recognize. As commenters pointed out, it is plausible that truly committing to deontic fictionalism requires one to essentially become...a deontologist. For example, one might believe that no human beings will actually follow rules they think are only instrumental, and so, in order to actually establish useful rules as normative, we all have to affirm that the rules are justified not just by their instrumental value.

I think this is a bit extreme, but it points to a general instability in these sorts of ideas. To me, the key questions facing a deontic fictionalist or two-level consequentialist are: what rules should we adopt, should we allow exceptions, and if so, in what cases? These are the questions that ought to distinguish a two-level utilitarian from a true deontologist, even if the only rules that differ are rules of the form "you should affirm that these rules are normative for non-instrumental reasons".

Either we should basically follow the usual deontological rules but allow for exceptions, or (the same thing framed differently) we should have a set of rules that in most cases, but not always, reduces to the standard deontological rules. The difference between the rules a utilitarian will arrive at and the usual deontological rules can be thought of in two ways: they will be the standard deontological rules, but with a principled way to handle the objections a naive utilitarian would raise when it's clear that the first-order consequences of following a rule are bad; or they are the rules you are led to by considering how a naive utilitarian would account for increasingly higher-order consequences of their behaviour.

But this is a self-referential problem: if you start by considering the higher-order consequences of your actions, those higher-order consequences depend on the set of rules you expect people to follow. In effect, you want to find a fixed point: a set of rules such that, if everyone acting under those rules considered all the higher-order consequences of actions performed under that set of rules, the outcome would be best. This makes it a very hard, plausibly intractable problem to solve, at least in a satisfactory way. Deontology is a bad solution--it just imposes a set of rules by fiat--but it is at least a stable solution. Utilitarianism, to me, faces the problem of either picking arbitrary cut-offs for how many levels of consequences to follow, or of basically endorsing some set of deontological rules but then allowing unprincipled exceptions when the lower-order consequences of following those rules seem bad enough.
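
To make the fixed-point worry a bit more concrete, here is a minimal toy sketch (entirely my own construction, not anything from the post or papers discussed): a candidate rule set counts as stable when re-optimising the rules, on the assumption that everyone is expected to follow them, hands back the same rules. The candidate rule sets, the payoff function, and the helper names are all hypothetical, and real moral "payoffs" are of course nowhere near this computable.

```python
# Toy sketch only: the rule sets, payoff, and function names are hypothetical.
def best_response_rules(expected_rules, candidate_rules, payoff):
    # Pick the candidate rule set with the best outcome, assuming everyone
    # is expected to follow expected_rules (i.e. pricing in higher-order effects).
    return max(candidate_rules, key=lambda r: payoff(r, expected_rules))

def find_fixed_point(candidate_rules, payoff, start, max_iters=100):
    # Iterate best responses; a fixed point is a rule set that is its own
    # best response, so further higher-order revision changes nothing.
    rules = start
    for _ in range(max_iters):
        new_rules = best_response_rules(rules, candidate_rules, payoff)
        if new_rules == rules:
            return rules
        rules = new_rules
    return None  # no stable rule set found within the iteration budget

# Stylised example: general rule-following beats ad hoc exception-making.
candidates = ["follow the rule", "make exceptions when first-order consequences look better"]
payoff = lambda rules, expected: 1.0 if rules == expected == "follow the rule" else 0.5
print(find_fixed_point(candidates, payoff, start="follow the rule"))  # -> "follow the rule"
```

Even in this toy form, nothing guarantees the iteration converges rather than cycling, which is the intractability worry above.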

This latter point of view is basically my attempt to characterize deontic fictionalism/two-level consequentialism, and I think the difficulty is that, until utilitarians have a truly competing set of rules, a realistic two-level consequentialism is always just going to look like either a set of unprincipled exceptions to deontology, or an endorsement of deontology but for different reasons. In both cases, I think this concedes that deontology is right that a) moral decision-making should be guided mostly by following "common sense" rules of morality and b) deviations from these rules will be mostly based on ad hoc reasoning, and will be difficult, if not impossible, to expand into fully general principles.

I think the argument I've laid out above is a long-winded (sorry) way of saying that utilitarianism is a better moral theory [i]in theory[/i] than deontology, but it is hard to translate that into a better account of [i]moral practice[/i]--98% of the time, utilitarianism will tell you "follow deontological rules", but it will give you better reasons. This is at least a little ironic, since utilitarianism, by its nature, ought to be more concerned with differences in moral practice. In your two-level consequentialism post, you note that

[quote]Theories differ in the verdicts they yield about hypothetical cases (and certain kinds of “ex post” retrospective judgments). But it would be a mistake to take these as carrying over straightforwardly to real-life cases—or even to various “ex ante” judgments, including judgments of the quality of the agent’s intentions, character, or decision-making. Utilitarians can say much more commonsensical things about these sorts of judgments than most people realize.[/quote]

But ex post retrospective judgements shouldn't really be that interesting to a utilitarian; subjective evaluations of events after the fact presumably make very little difference to actual outcomes for human beings unless they inform ex ante judgements in future cases; and if our ex ante judgements are more commonsensical, then are we really adding much that's new?

In a sense, utilitarianism seems to me something like a scientific theory of, say, animal behaviour that is founded in modern atomic physics and so forth, while deontology is like a theory of animal behaviour founded in, like, "élan vital". The former theory is much better grounded theoretically, but the practical difficulties of applying it may mean it is not actually a better guide to studying animal behaviour than the latter. "What is the vital force of this frog compelling it to do?" might be a better way to think about how frogs act than "What is the outcome of this completely uncomputable simulation of all the atoms in the frog?", even if the former is basically completely wrong in its view of the world, and the latter is basically completely right.

Now, I've stated the most extreme version of the case; I think I can anticipate some of your objections, and I probably agree with them. First of all, deontologists do actually endorse some pretty bad rules; as you note, a lot of deontologists are not beneficentrists, even though they theoretically could be. Maybe compared to a sufficiently good version of deontology, utilitarianism would be little more than a tweak, but without pressure from utilitarians, we end up with pretty crappy versions of deontology.

What's more, I framed everything I said above in the most unflattering way for utilitarianism: in fact, even when the correct first principles are computationally intractable, using them to answer questions by imposing cut-offs can be a very powerful tool; no one would actually analyze frog behaviour by simulating a frog at the atomic level, but thinking about frogs as made of atoms is not fruitless! Frog behaviour is influenced by biochemistry, and biochemistry reduces to atoms.

So I don't actually endorse the point of view above; but I think it does capture something true about the difficulty of having a practically useful utilitarianism, and about why theories like two-level consequentialism defang the deontologists' critique by actually ceding a lot of ground to them. That's obviously fine, but I think you sometimes write as if, having shown how the two theories are more compatible than one might think, deontologists should think about moving in a more utilitarian direction...but there are not-crazy reasons to argue that your synthesis is actually a bigger step in the direction of deontology!

Person Affecting views:

Having said a lot above, I'll try to be more concise here. I agree there are lots of problems with narrow person-affecting views, but I don't think the only solution is to adopt impersonal reasons and the idea that one can be benefited by being brought into existence--Michael St. Jules has some comments in the Epicurean Fallacy post that I think point at other ways to get around at least some of those difficulties. I think all attempts to save the spirit of person-affecting views still fail to satisfy the Independence of Irrelevant Alternatives, for example, so I don't mean to say that these solutions are as satisfactory as adopting an impersonal view, much less that they have advantages. It's just that the procreative asymmetry really does feel intuitive to me, so I think it's worth keeping an open mind.


The strongest objection to longtermism is skepticism about the extent of our knowledge. Would the world today be better if people centuries ago had been able and willing to shape our present? I'm inclined to say no; moral and scientific progress has made us, the people of today, better at guiding today's world than Genghis Khan, Queen Elizabeth, or whoever else would have been from their temporal position. Similarly, I suspect people hundreds of years from now will rightly think the same of us. As we are better qualified to shape our present than our distant ancestors, so will our descendants centuries down the road be better qualified to shape their present.


I agree that's a good reason to be skeptical of efforts to narrowly shape the far future. But I don't know of any longtermist projects that would fall afoul of that. The projects I'm aware of are either (i) broad efforts to advance (esp. moral) progress, or (ii) efforts narrowly targeted at the coming few years and decades, to protect against global catastrophic risks. (For a recent historical example: we'd presumably be in a much better place re: climate and pandemics now if my parents' generation had been more guided by longtermist lights!)

Those two classes of project strike me as very worthwhile, and not undermined by the limits of our knowledge. But I guess someone more skeptical than I am might instead try to implement longtermism by (literally) investing resources for future use, Ben Franklin style. So I don't really see any objection to longtermism *per se* here, as opposed to one narrowly-imagined implementation of it.


What reason, other than hubris, do we have to think our notion of what constitutes moral progress won't age like "The White Man's Burden," Manifest Destiny, or any of the other horribles of the last few centuries? As with a nature reserve, we should focus on not wrecking the future rather than on trying to garden it.


I'm not sure what that means. If an asteroid is on track to wipe us out, does deflecting it count as "gardening", or could sitting back and doing nothing count as a form of "wrecking"?

In any case, I think a sensible degree of epistemic humility doesn't entail full-blown moral skepticism (as if we should be unsure whether to bother saving innocent lives), but just calls for things like (i) avoiding value lock-in, (ii) encouraging Millian "experiments in living", and (iii) preferring *robustly* good options (e.g. increase human knowledge and capacities) over morally *risky* ones (e.g. trapping humanity in experience machines). These are all standard longtermist ideas: https://rychappell.substack.com/p/review-of-what-we-owe-the-future#%C2%A7improving-values-and-institutions


I find Nozick's experience machine intuitively powerful: I don't think I would want to be plugged in. This moves me somewhat away from hedonic utilitarianism towards preference utilitarianism, but preference utilitarianism has some other issues I am uncomfortable with (mainly to do with defining which preferences count). How do you think about the experience machine: would you plug in, and do you think it counts against hedonic utilitarianism?


Right, I've never understood the appeal of hedonism; its basic problem (which I think the experience machine highlights nicely) is that most of us *clearly* value more than just pleasure, and it seems unmotivated to claim that our ordinary forms of self-concern are in such drastic error as hedonism entails. As I put it on my old blog, hedonism (like egoism) places an *implausible restriction* on what we can reasonably value: https://www.philosophyetc.net/2020/11/hedonism-egoism-and-implausible.html

On the other hand, there are some genuine puzzles about how to best combine hedonic and non-hedonic values, esp. if we think that subjective happiness is *necessary* for an overall good life. I discuss the problem (and a possible, though imperfect, solution) here: https://rychappell.substack.com/p/a-multiplicative-model-of-value-pluralism

So that problem makes me slightly less staunchly anti-hedonistic than I used to be. But I still lean pretty strongly against the view (preferring some form of objective list theory instead): https://www.utilitarianism.net/theories-of-wellbeing/#objective-list-theories


An objection that moved me away from utilitarianism is a variant on the demandingness problem. I've often found versions of two-level utilitarianism to be persuasive in solving certain problems (like Railton's paper on personal relations and alienation). But it seems to me that even here one cannot get away from the demandingness problem: given the state of the world, it's not clear that it would be best if we formed many personal relationships that took up our time and resources, preventing us from doing good elsewhere. Attempts by utilitarians to square these two priorities often feel squirmy.

It's been my belief that one's meta-ethics matters for how seriously we should take the demandingness problem. And I've always found most meta-ethics associated with utilitarianism to be too subjectivist to persuade one on this point. It seems as though you'd need a more firmly objectivist meta-ethics if you're going to be able to justify the kinds of moral demands that a utilitarian outlook recommends.


Yeah, I'm very much a moral realist, so it seems perfectly plausible to me that *ideally*, we really should do vastly more to help others even at grave cost to ourselves. But of course none of us are morally perfect. And there's nothing in utilitarianism (properly understood) that says we should take perfection as the *baseline*, and feel bad whenever we fall short of it. We can simply accept that we're inevitably imperfect, and try to do better on the margins. I discuss the issue more here:

https://rychappell.substack.com/p/caplans-conscience-objection-to-utilitarianism

(That said, I'm also open to the possibility that some degree of partiality is actually intrinsically warranted. It's an issue I'm highly uncertain about, and certainly think people could reasonably go either way on it. I don't see a huge difference between traditional utilitarianism and agent-relative welfarist consequentialism, so even if one moves to the latter, it isn't too far to go!)


Re moral realism, I would be interested in some sort of dialogue/debate/discussion between you and Joe Carlsmith about metaethics!

[Comment deleted, Apr 30, 2023]

The objection just seems to be that consequentialism yields unintuitive verdicts about permissibility. But I think no such objection has any force, because consequentialism (properly understood) isn't a theory of permissibility at all, as explained here: https://rychappell.substack.com/p/bleeding-heart-consequentialism#%C2%A7conclusion

That said, if I were to play the permissibility game (without appealing to deontic fictionalism or the like), I think my willpower satisficing account - https://philpapers.org/rec/CHASBE-4 - escapes their objection.

From fn 10 of their paper: "If Billy[, who] is permitted to donate only a certain amount, $X, could make a donation of $X+Y which would save several more lives than a donation of $X, but instead makes a donation which saves these additional lives but also somehow kills his rival, this is clearly impermissible, but will still fall above the satisficing line (which was met by a donation of just $X)."

But now ask: could Billy, at no greater expense of effort/willpower, save the additional lives without killing anyone? There are two possibilities: if YES, then that's what he's required to do, on my account. If NO, and assuming that he isn't required to try any harder than it takes to donate the $X, then the relevant question is just which of these two not-excessively-demanding options we should prefer. And I think we clearly should prefer that he saves more lives, even at the cost of his rival's (one) life. It's not like his rival is more important, in principle, than multiple other innocent people who will otherwise die. So I think that's a perfectly comfortable conclusion for the willpower satisficing consequentialist to endorse.
