39 Comments
Feb 23, 2023 · Liked by Richard Y Chappell

Do you only endorse agreements behind the veil of ignorance that "markedly" improve people's prospects? Under veil of ignorance reasoning, shouldn't you also endorse killing one when there is, e.g., a 25% chance of saving five, since this would improve everyone's chances of survival ex ante (though not "markedly")?
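
For concreteness, here is a rough version of the ex ante comparison, assuming the standard six-person case in which each party is equally likely to occupy any position (an assumption added here, not stated in the comment):

\Pr(\text{I survive} \mid \text{no one is killed}) = \tfrac{1}{6} \approx 0.17
\Pr(\text{I survive} \mid \text{one is killed, five saved for certain}) = \tfrac{5}{6} \approx 0.83
\Pr(\text{I survive} \mid \text{one is killed, rescue succeeds with probability } 0.25) = \tfrac{5}{6} \cdot \tfrac{1}{4} = \tfrac{5}{24} \approx 0.21

So the 25% variant still improves everyone's prospects ex ante, but only marginally.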

author

I'd personally endorse both, but a less committal argument suffices to begin with.

There are some cases where I think VoI reasoning fails. E.g. suppose we had to choose between a 50% chance of total human extinction OR a certainty of 60% of the population dying off. Current individual interests favour the former, but I think we have clear moral reasons to prefer the latter. The problem is that the VoI doesn't work for "different number" population-ethical cases. Fortunately, that limitation isn't applicable in the basic cases discussed in the OP.
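
Making the ex ante arithmetic explicit (a minimal sketch, considering only the death risk to a currently existing individual):

\Pr(\text{I die} \mid \text{50\% chance of total extinction}) = 0.5
\Pr(\text{I die} \mid \text{60\% of the population certainly dies}) = 0.6

So each existing person's self-interest favours the extinction gamble, even though the impartial verdict plausibly goes the other way.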

I think that the VoI just doesn't have anything to say about population ethics. But once we agree that potential future beings are behind the veil, you get total utilitarianism. The alternative would be an averageist view, but that's obviously false.

Mar 24, 2023 · edited Mar 24, 2023 · Liked by Richard Y Chappell

I find veil of ignorance arguments against deontology to be problematic. First, when you say "From behind a veil of ignorance, everyone in the situation would rationally endorse killing one to save five", I'm going to assume that by "rational" you are just referring to what there is most self-interested reason to do. In that case, it's not clear why our moral obligations are determined by our self-interested reasons in this way. But more importantly, I think this kind of veil-of-ignorance-style reasoning implies a strict kind of utilitarianism, which you've objected to in the past.

For example, from behind the veil of ignorance, a self-interested agent would only want to maximize future prospects for well-being. He wouldn't care about whether the well-being was deserved or not. So from behind the veil of ignorance, self-interested parties would not select principles that give any _intrinsic_ weight to desert (of course, they might give some _instrumental_ weight to desert). But you've previously argued in favor of incorporating facts about desert in our moral reasoning https://www.philosophyetc.net/2021/03/three-dogmas-of-utilitarianism.html. E.g. you say that the interests of the non-innocent should be liable to be discounted. Why would purely self-interested parties care about desert from behind the veil of ignorance?

One answer might be that fully rational agents are also fully moral, and fully moral agents would care about desert because desert is morally relevant. In that case, it's not clear why a deontologist wouldn't also say that fully rational/moral agents would care about rights because rights are morally relevant.

For another example, I don't see why self-interested parties would distinguish between principles that kill persons vs principles that fail to create persons. From the perspective of the agent behind the veil of ignorance, failing to be created is just as much of a loss as being killed. Thus, I would imagine that the self-interested parties would be indifferent between the following two worlds:

* world A: N people live long enough to acquire X utility

* world B: N people live long enough to acquire X/2 utility before they are killed and replaced with another N people who live long enough to acquire X/2 utility.

You've argued elsewhere that the strongest objection to total utilitarianism is that it risks collapsing the distinction between killing and failing to create life. But why would self-interested parties from behind the veil of ignorance care about this distinction?

So while it is plausible that fully rational agents from behind the veil of ignorance would not care about rights, it is equally plausible that they would not care about desert, the distinction between killing vs failing to create, the distinction between person-directed vs undirected reasons, special obligations, etc. So it seems like veil of ignorance style reasoning leads to strict total utilitarianism.

author

You need to be careful how you use the Veil; I certainly don't think one should blindly defer to it in all cases.

The veil of ignorance only captures reasons that stem from the interests of the parties behind the veil. E.g. if one thinks that environmental or aesthetic value matters non-instrumentally, obviously appeals to the Veil shouldn't change one's mind. Desert strikes me as aesthetic-like in this respect.

Similarly, the Veil doesn't help with population ethics. Either you presuppose that those behind the veil *will* actually exist, which cheaply forces average utilitarianism, or you suppose that they may yet be merely possible people, which forces totalism. But neither assumption is obviously "justified" in any deep sense, or reveals actual reasons to prefer one approach to population ethics over the other.

What about rights? Well, as above, if one primarily believes in rights for aesthetic-like reasons, the Veil won't speak to that. But that would seem odd. Usually they're defended *for the sake of* the individuals involved. People claim that respecting rights is part and parcel of *respecting the individual* whose rights are in question. But the Veil argument shows that that very individual had decisive reasons to waive those rights (prior to first opening their eyes, and conditional on others doing likewise), to improve their ex ante prospects.

It's this specific style of veil-based argument that I think works, and that differentiates veil-based debunkings of deontology from clearly fallacious or inappropriate invocations of the veil.

It's not clear what to make of the distinction between reasons that stem from the interests of the individuals involved vs reasons that stem from aesthetic value.

On one sense of the distinction, interest-based reasons are just reasons that derive from our general reasons to promote well-being. But I don't think deontologists believe that rights are derived from interest-based reasons in this sense. E.g. presumably all deontologists would agree that two actions might result in equivalent harm to a person's well-being, but only one of the actions might be done in a way that violates that person's rights. It seems unlikely that most or even many deontologists believe that rights just derive from our general reasons to promote well-being.

On another sense of the distinction, interest-based reasons are reasons that influence how we ought to weigh the interests of others in our decision-making. Maybe rights stem from interest-based reasons in this sense, I'm not sure. But it is certainly clear that, say, desert would count as an interest-based (and not just aesthetic) reason in this sense. I.e. whether one deserves some treatment influences how we ought to weigh their interests (e.g., guilty parties have their interests discounted).

So it's not clear how to formulate the distinction between interest-based reasons vs aesthetic reasons such that rights (as they are typically understood by deontologists) are of the former kind whereas things like desert are of the latter kind.

author

I think it's very natural to say that we have reasons to respect rights *for the sake of* the rights-holder, whereas our (non-instrumental) reasons to give the vicious their just deserts are not "for their sake" (or anyone else's, for that matter), but just because we think it fitting that they be worse off.

The key question, in either case, is whether the affected parties have the power to *waive* the consideration in question. We generally think that people can waive their rights (that's what my pre-commitment argument relies upon), whereas the guilty cannot waive their just punishments (much as they might wish it!).

If you instead thought of retributive punishment as a right of the victim to see their perpetrator harmed, then you could construct a parallel argument to mine to the effect that everyone should rationally pre-commit to waiving such rights when not conducive to overall welfare. But I don't think of desert in this way.

Feb 24, 2023 · Liked by Richard Y Chappell

What's the rationale for P1 of the teleological argument? For the man in Bernard Williams' case, *that's my wife* is a reason to save her (rather than the other drowning person). How does that reason come from applying instrumental rationality to the correct moral goals?

author

The rationale: partly just the thought that instrumental *irrationality* sure doesn't seem an attractive alternative. And partly it just seems an attractive idea that rational action in general is goal-directed, and moral action is (at least in the ideal case) rational.

(I don't think that this by itself begs any questions: we can formulate goals that mesh with deontology. But it does at least put some pressure on the view to clarify *why* its rules and constraints are worth caring about, as I've stressed in several recent posts.)

In Williams' case: presumably protecting your wife from harm should be among your goals (and, for the partialist, a higher-priority goal than helping a stranger).

Feb 23, 2023 · edited Feb 23, 2023 · Liked by Richard Y Chappell

Richard, have you considered writing a book arguing for consequentialism? Also, I'd recommend reading the suitcase paper in full--it provides one of the best criticisms of deontology I've ever read--much better than was provided in my article.

author

Yes, a book on "Bleeding Heart Consequentialism" is my next big project (after wrapping up the print edition of utilitarianism.net).

I skimmed Kacper's paper; it does indeed look excellent!

Great post! I've also been thinking about arguments for utilitarianism along the lines of your pre-commitment argument.

author

Neat! Let me know if you end up writing a paper on this (or want to coauthor one that expands on the version in this post).

Well I'm convinced!

One worry I have about the status quo bias argument is that it doesn't explain our more specific intuitions--e.g. why we think you should disrupt the status quo and flip the switch but not push the person.

I also really like the other arguments on utilitarianism.net--especially the point that non-consequentialism has to hold it's sometimes bad to put perfect people in charge of things.

I also think your preference paradox is very compelling, as well as this argument. https://benthams.substack.com/p/wrong-to-do-and-prevent-a-new-problem

Feb 23, 2023 · Liked by Richard Y Chappell

"From behind a veil of ignorance, everyone in the situation would rationally endorse killing one to save five, since that markedly increases their chances of survival." Why doesn't this just beg the question since it assumes that all that matters morally is whether I survive (or whether the most survive)? Deontologists don't think, and have never thought, that morality is merely about the ends. They think, roughly, that how we get there matters too. So they'll just deny this. (In particular, they'll deny the "since....".) They'll also say, presumably, that it's rational to deny to it, since it's rational to care about all the morally important stuff (which includes means as well as ends). And so on.

This has the feel of an argument that bolsters the utilitarian's confidence in their own view without having hope of convincing someone who didn't already agree in the first place.

author

Plenty of deontologists seem to feel the pull of ex ante Pareto -- the idea that we should prefer options that are in *everyone's* (ex ante) best interests [see references in the paper linked in footnote 1]. Nobody (sane) thinks you should just never push people, even if pushing them would help rather than hurt them. So I'm suggesting that rational pre-commitment can similarly change what even deontologists should regard as permissible (and respectful of each individual person) in the circumstances.

Yes, but there are cases and there are cases. I don't see why a deontologist would accept that it's rational to pre-commit in this particular case.

Should they also pre-commit to, e.g., taking 1% of children for organ growth, to supply the remaining 99% of people with organs as/when they need them? (Adjust the numbers such that the expectation works out.) I don't see how it isn't begging the question against them to insist that they just ignore that that system would result in enormous rights violations (something they think matters morally—deeply so).
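
One illustrative way the numbers could be made to "work out" ex ante (hypothetical figures, not drawn from the post or this comment): suppose the scheme gives each person a 1% chance of being harvested, but eliminates a 3% lifetime chance of dying for lack of a transplant. Then, for each person behind the veil,

\Pr(\text{I die} \mid \text{no scheme}) = 0.03 > \Pr(\text{I die} \mid \text{scheme}) = 0.01

so the scheme improves everyone's prospects ex ante -- which is exactly why the deontologist will insist that the rights violations matter independently of those prospects.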

author

But it's not a rights violation if the affected person rationally consents to it. And the whole point of the argument is that we've all got decisive reasons to consent in advance to such a system -- to waive any right of ours that would otherwise prevent things from being set up optimally.

I don't think rights (and rights violations) work like that. "Yesterday, you rationally consented to sleep with me today, so even though you've made it plain that you've changed your mind, I'm not violating your rights by forcing you to sleep with me."

author
Feb 24, 2023 · edited Feb 24, 2023

That's a case where the advance consent obviously isn't rational. (More specifically, it isn't rational to *irrevocably* consent to sleep with someone in future.) So, not a counterexample to my principle. My case isn't like that: there's a clear internal rationale why it's in each person's interest to make the irrevocable commitment.

Note that there are other cases where rights seem to work as I suggest. Consider property rights. If we each buy lottery tickets and promise to split any winnings (irrevocably consenting to waive our property rights to half the winnings) then I can't subsequently change my mind (upon learning that I won and you didn't) -- or if I try, it doesn't violate my rights for you to simply take the half that you're owed. Whether it's theft or not depends on what was agreed. And indeed, one natural justification for state redistribution is "social insurance" -- paying for the safety net we would all agree to pay for if given the opportunity in advance of learning whether we're rich or poor.

It can be rational to *irrevocably* consent to being chopped up and distributed amongst five others, but it can't be rational to *irrevocably* consent to sleep with someone? That can't be right.

And yes, but property rights aren't bodily rights. And so the fact that your argument goes through with, e.g., lottery tickets doesn't mean it will go through with the cases your argument needs it to go through for. (At least, it doesn't mean it will go through for the person you're trying to convince—and, indeed, it won't go through.)

Feb 23, 2023 · Liked by Richard Y Chappell

For argument 1, is there some reason to believe that pre-commitment from behind a veil of ignorance would always accord with consequentialist reasoning over deontological? Or is it just a property of these many-vs-few cases, where deontology is worried about violating the rights of the few? I mean, I agree with the argument in this specific case, but could a deontologist cook up versions of argument 1 that pull the other way?

author

I think it's clearest with regard to these cases of optimal constraints violations. There are other debates, e.g. aggregating headaches vs torture, where I don't think appealing to a veil of ignorance would advance the debate at all. (But I think those are precisely cases in which people are intuitively drawn to non-utilitarian verdicts about *which outcome is better*, rather than denying the broader consequentialist idea that we should bring about the better outcome.)

Harsanyi showed that if we follow various basic axioms, it can be shown we'd be utilitarians from behind the veil. https://forum.effectivealtruism.org/posts/v89xwH3ouymNmc8hi/harsanyi-s-simple-proof-of-utilitarianism
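
Roughly stated (from memory of Harsanyi's result, not a quotation from the linked post): if both individual and impartial preferences satisfy the vNM axioms plus a Pareto condition, the impartial ranking must maximize a weighted sum of individual utilities; behind the veil, with an equal chance of being any of the n parties, that comes to maximizing

W = \sum_{i=1}^{n} \tfrac{1}{n}\, u_i ,

i.e. average (equivalently, for fixed n, total) utility.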

Sorry for the late reply; while I think the theorem is very interesting (and sent me down a long rabbit hole, from which I'm only just recovering...so thanks!), I'm still interested in the possibility of people's intuitive judgements diverging from utilitarian judgements in VoI cases; I think the requirement that the _group_ be VNM-rational in particular is an axiom that people might reject, and I feel like Richard's suggestion of dust specks vs. torture is a case where people would intuitively find the utilitarian prescription pretty unappealing--I get that this case doesn't make a strong contrast to deontology, but it seems to me that certain forms of what we might call "naive VoI reasoning" will fail to endorse utilitarian conclusions.

So even though I think Richard's responses to The Reasonable Person are pretty good, I still think TRP's hypotheticals highlight the ways in which utilitarian conclusions can feel unintuitive, even in VoI cases.

Gabriel's comment has much the same force: I take it that the main point is that by making the trolley situation deterministic, Richard is stacking the deck for utilitarianism--the intuitive judgement that we should sacrifice one to save 5 is a lot more compelling if we are _certain_ we will save the five; but the idea that deontology is defection feels less compelling (to me at any rate!) the more uncertainty we add.

I still think it's easy to argue that even in these cases we're getting hung up on other things, and that this sort of reasoning endorses consequentialism, if not necessarily utilitarianism, but it does make me a little more open to the possibility that someone could construct a parallel scenario that does manage to contrast consequentialism to non-consequentialism in a clear way, where my intuitive judgements favour non-consequentialism.

As for the master argument, I'd imagine that the deontologist would dispute both 2 and 3. As for 2, she might say that consequentialism must deny various fundamental principles like

--people have rights

--people are separate in a magical and ineffable sense which somehow means that utilitarianism is wrong :)

--you shouldn't kill one person to save a few others.

--intent matters to the significance of actions.

I think that figuring out whether that's true will involve disputing the various principles to which they appeal. The important response is that these are mostly justified based on our intuitions about cases, which are not as reliable as principles, as both you and I have argued at various points.

I don't find 3 that convincing to be honest. I think the deontologist would just say that while utilitarianism can explain away the linguistic intuition that organ harvesting is wrong or that ideal agents wouldn't do it in most circumstances as it is reckless, it can't explain the intuition that organ harvesting really is wrong--that it really shouldn't be done--that one has decisive reason not to do it. Most people have the intuition, I think, that even with perfectly ideal information, you shouldn't kill one person to save 5.

To the first point, if people had a decisive reason to pre-commit their conditional consent to be killed without deontological constraints, then we would expect many utilitarian contracts (say, organ harvesting contracts, especially given that health declines with age, albeit unpredictably). However, we have no such contracts. Rather, every known contract has deontological constraints, which makes it seem that the utilitarian reason isn't decisive.

I make the claim that utilitarianism’s reasonable rejectability among free agents, evidenced by the absence of pure utilitarian contracts, disqualifies it as an account of morality, given morality’s “acceptance” condition here:

https://neonomos.substack.com/p/what-isnt-morality

author

I worry that this conflates theoretical and practical issues. There are obvious reasons why we wouldn't *trust* individuals or institutions to successfully implement utilitarian goals via naive methods:

https://rychappell.substack.com/p/naive-vs-prudent-utilitarianism

But that doesn't really speak to the in-principle point that I'm making here.

But you make a claim about what people "would" agree to behind a veil of ignorance. This claim can be validated or falsified based on what people have agreed to in insurance contracts. The fact that we haven't seen any pure utilitarian contracts (although plenty of rule-utilitarian ones) makes the claim unlikely.

Also, trust and implementation are an important part of any contract, not a special issue for pure utilitarian ones. It wouldn't be too hard to verify whether 5 members of the contract needed an organ and 1 member of the contract had healthy organs. Even with an error rate of, say, 5%, this shouldn't outweigh the five lives saved.
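
Spelling out the arithmetic gestured at here (one illustrative reading of the 5% figure, not from the comment itself): if there is a 5% chance the harvesting is carried out in error, so that the five are not in fact saved, the expected result of going ahead is roughly

0.95 \times 5 - 1 = 3.75

net lives saved, which is still clearly positive.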

Yet if your argument is that people are so risk averse that they wouldn’t subject themselves to utilitarian consequences given the error costs, even if they would in theory, then this would still make utilitarianism unacceptable and therefore not an account of morality.

author

Oh, sorry, those were meant to be read as normative claims about what people would *have reason to* agree to from behind the veil -- not a descriptive prediction of what existing people would actually choose. Actual people may suffer from cognitive biases, or be enculturated into making various irrational choices, after all.

If motivations behind the VoI are prudential, then you can examine your claims empirically. You’d expect profit-maximizing individuals to engage in organ harvesting contracts since to do otherwise would be paying an unnecessary cost. There is no such known agreement, which if anything means that engaging in this type of contract is irrational.

If motivations behind the VoI are pure utilitarian then you’re begging the questions and are just assuming that people would agree to maximize total welfare, even at the expense of their own.

Morality is the set of rights and duties that people would reasonably accept. EAs are free to make pure utilitarian contracts, as I argue in the linked post. But the fact that none of them do is further evidence that this moral agreement isn't one that should be imposed on the world. Why should we accept an agreement that not even its most fervent proponents accept?

author
Feb 23, 2023 · edited Feb 23, 2023

No, it's not empirically guaranteed that people will make prudent contracts. They might not be legally enforceable, for example -- I doubt that any court on Earth would enforce an "organ harvesting contract". (Not to mention that people are often imprudent.) So I reject your assertion that these normative claims can be determined empirically. Looking at actual behaviour does not tell us what it would be rational to choose from behind the veil of ignorance.

1. How would you define "prudent"? If it's just the pursuit of self-interest, then you can only refer to empirical evidence to find how self-interest is best pursued or how cognitive biases impede that pursuit. If it's something else, then that something else needs to be further justified.

2. EAs are free to make contracts as far as unconscionability restrictions would allow. It doesn't have to go as far as organs, but money or property might work as well.

3. Isn't the fact that organ harvesting contracts are illegal evidence for their immorality? If they are so rational to pursue, you'd expect more uproar to make them legal. Like I said, you're more likely acting irrationally if you "do" make this agreement.

4. What sort of evidence would validate or falsify your hypothesis that utilitarianism would be chosen behind the veil of ignorance? If none, is your claim unfalsifiable?
