30 Comments

It seems so obvious that cluelessness wouldn't be a decisive objection. We can see this through the following argument.

(1) The correct moral theory will be true in all possible worlds.

(2) There are an infinite number of possible worlds in which agents are clueless about whether the correct morality proscribes most actions.

Therefore, the fact that a theory generates moral cluelessness in a world doesn't mean it is false.

If this is true, we should accept:

(3) The fact that a theory generates moral cluelessness in the actual world doesn't mean it is false.

Even ignoring unintended consequences, as Lenman proposes, doesn't avoid cluelessness. We can imagine possible worlds in which there are very obvious consequences but they're hard to stack up (e.g. each time we move, 200 drones bomb 8300^128 earthworms but save 59^128 cattle).


Isn't this begging the question? Lenman just outright denies your (2).


I explain why Lenman's theory results in cluelessness: "Even ignoring unintended consequences, as Lenman proposes, doesn't avoid cluelessness. We can imagine possible worlds in which there are very obvious consequences but they're hard to stack up (e.g. each time we move, 200 drones bomb 8300^128 earthworms but save 59^128 cattle)."


If each time we move that happens, then I guess we'd just ignore it and get on with our lives?

In any case, Lenman opts for something closer to a virtue-ethical theory at the end of the paper.


(Oh, I forgot to update the drone numbers when I added ^128 to both 59 and 8300.) These are very productive drones.

Oct 16, 2022 · Liked by Richard Y Chappell

This was very insightful; it’s been a while since I’ve dipped my toes in consequentialist theory. I’m particularly struck by your comments on how what fundamentally matters is epistemically prior to whether we can track it.

Good brain food, thanks for sharing!


This post seems to be making two contradictory arguments. At one point, I thought you were arguing that what matters morally are only the clearly foreseeable consequences of our actions (since only those can enter our EV calculations, and our EV calculations tell us what matters morally). But then you dispute Lenman when he says that all that matters morally are the foreseeable consequences of our actions.

You write:

"if we’ve no idea what the long-term consequences will be, then these “invisible” considerations are (given our evidence) simply silent—speaking neither for nor against any particular option."

If this is actually how EV calculations work, by simply ignoring the unforeseeable consequences of our actions, then by fiat they avoid the problem of cluelessness—which is essentially that there are unforeseeable consequences of our actions, and those unforeseeable consequences have far more value (and thus morally matter more) than the clearly foreseeable consequences. But then the question is, why should a utilitarian think that EV calculations tell us what matters morally, if what utilitarians care about is the value of outcomes—since EV calculations only purport to tell us about the tiniest sliver of those?
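To make the worry explicit (the notation here is mine, not the post's): write the expected-value difference between two options as a foreseeable part plus an unforeseeable long-term part.

```latex
\mathbb{E}[V(A)] - \mathbb{E}[V(B)]
  = \underbrace{\delta}_{\text{foreseeable}}
  + \underbrace{\mathbb{E}[L_A] - \mathbb{E}[L_B]}_{\text{unforeseeable long-term}}
```

The cluelessness worry is that the realized difference L_A - L_B typically dwarfs delta, while only delta ever enters our calculations; whether the second term can legitimately be treated as zero is exactly what is in dispute below.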

You write:

"I don’t think I need to commit to any particular principle of indifference in order to say that I haven’t yet been presented with any compelling reason to revise my expected value estimate of +1 life saved."

But you seem to acknowledge such evidence earlier in this post when you refer to "reasons to do with the extreme fragility of who ends up being conceived, such that even tiny changes may presumably ripple out and completely transform the future population." That is a compelling reason to acknowledge that the action of saving a life will not result in just +1 life saved and no other difference in value. Of course we have no idea what that difference in long-term value might be, but that's precisely the cluelessness problem.

You write, "To undermine an expected value verdict, you need to show that some alternative verdict is epistemically superior."

Why is that? I take it that the objection to acting on the basis of EV is that we cannot credibly calculate the value of our actions; *any* verdict we propose about the value of our actions is pure fantasy, including any possible proposed alternative. The opponent of EV utilitarianism would not attempt to propose some epistemically superior alternative because the point is that all attempts to come up with a calculation for the value of our actions are futile.

Your point that acting in accordance with EV calculations is "the best we can non-accidentally do!" might be your counterargument to this. I recommend reading Fred Feldman's "Actual Utility, the Objection from Impracticality, and the Move to Expected Utility." Feldman could be seen as responding to your point by saying, "But we can't even do that much because we can't make EV calculations about the value of our actions in the first place!" It's possible Feldman is rejecting your apparent assumption that EV-calculations are only supposed to consider the clearly foreseeable outcomes of actions, but I think he raises problems even for EV-calculations meant to track clearly foreseeable consequences. I believe he raises serious problems for your claim that "I don’t think it makes sense to question our trust in expected value."

The cluelessness dialectic between you and non-utilitarians (or objective utilitarians, in Feldman's case) seems to be something like this. The non-utilitarians or objective utilitarians think that utilitarianism is about producing the best outcomes. You respond, no, it's about doing the actions recommended by EV calculations. The opponents say, but we can't make credible EV calculations. And you respond, yes we can, so long as we only include clearly foreseeable consequences in our EV calculations. I suppose the opponent would then say this: "if all that matters morally are the consequences of our actions, why do we think only the *clearly foreseeable* consequences of our actions matter morally? Surely pain, suffering, pleasure, life and death matter morally even when not foreseen!" And here is where the contradictory aspect of your post seems to arise: you appear to agree with this last objection, in the section titled "Ethics and What Matters." But if you agree with this, you seem to be conceding what I take to be the main point of your opponents.

It seems to me your opponents are making this point: "Why is utilitarianism concerned about the best we can non-accidentally do? Isn't it just supposed to care about the best simpliciter?" If utilitarianism is concerned with the mental states of moral agents, and in particular whether outcomes are brought about intentionally or accidentally, utilitarianism may be losing some of its intuitive appeal that was grounded in the very simple idea that all that matters morally is the amount of good and bad in the universe. And indeed, you ultimately seem to concur with this in the "Ethics and What Matters" section, in which you seem to argue against the view that what matters morally is what we can foresee. But then I don't understand how EV calculations are meant to work. If we know that saving a child is not really just +1 life, because of all the unforeseeable consequences, and we know that unforeseeable consequences matter morally—yet our EV calculations are only able to tell us that saving a child is +1 life—why think that following EV calculations is acting in adherence with morality?

One option for utilitarians is to accept 1 and 2 (at the beginning of your post) and reject 3. Your conclusion to your post is friendly to this option, even though you say you think 2 is almost certainly false. I think the tension I see in your post arises because you do not fully embrace 2, and yet in the latter sections of your post, you make points that imply we should embrace 2.

If we accept 1 and 2 but not 3, we could have utilitarianism while accepting the implications of cluelessness that we do not know the best actions for us to take—but that would be okay because a moral theory is not supposed to tell us the best actions we can take. As you write, "what fundamentally matters is epistemically prior to the question of whether we can reliably track it." So why hedge? Why not just embrace that we cannot reliably track what matters morally?

Author · Oct 16, 2022 · edited Oct 16, 2022

Hi Rhys, thanks for your comment (and the Feldman recommendation)!

You write: "by fiat they avoid the problem of cluelessness—which is essentially that there are unforeseeable consequences of our actions, and those unforeseeable consequences have far more value (and thus morally matter more) than the clearly foreseeable consequences..."

I'm trying to explain why I see no problem here. Repeating the set-up doesn't make it seem any more problematic to me. I agree that the unforeseeable consequences matter more. So what? (You need auxiliary assumptions to get anything more out of this, and it's those further assumptions that I'm going to dispute -- as in my response to Lenman's four objections to EV.)

You ask, "why should a utilitarian think that EV calculations tell us what matters morally?"

We shouldn't! But we should also reject your implicit further assumption that EV can only properly guide us by outputting "what matters". That's to conflate *what matters* with *the rational pursuit of what matters*, as I tried to bring out in my section about "miss[ing] the point of being guided by expected value." We already know from Jackson/Mineshaft cases that rational action needn't be a means to acting objectively rightly. It can be most rational to pick the option we know full well won't maximize "what matters" (i.e., actual value).
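A minimal sketch of the standard mineshaft case (the numbers are the usual toy ones from the literature, not anything in the post): ten miners are trapped in shaft A or shaft B, with our evidence split 50/50, and we can block one shaft or neither.

```python
# Jackson/Regan-style mineshaft case. Blocking the right shaft saves all ten
# miners; blocking the wrong one drowns them all; blocking neither lets the
# floodwater rise enough to kill exactly one, wherever they are.
options = {
    "block A":       {"miners in A": 10, "miners in B": 0},
    "block B":       {"miners in A": 0,  "miners in B": 10},
    "block neither": {"miners in A": 9,  "miners in B": 9},
}
credences = {"miners in A": 0.5, "miners in B": 0.5}

for act, lives_saved in options.items():
    ev = sum(credences[state] * saved for state, saved in lives_saved.items())
    print(f"{act}: EV = {ev} lives saved")
# block A: 5.0, block B: 5.0, block neither: 9.0
```

"Block neither" maximizes expected value even though we know for certain it is not the objectively best act: whichever shaft the miners are in, blocking that shaft would have saved all ten.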

As I say, "It’s difficult to express precisely what the point is. But roughly speaking, it’s a way to promote value *as best we can* given the information available to us (balancing stakes and probabilities)." To succeed in this, it doesn't have to have a high (or even non-zero!) probability of getting things objectively right.

> "compelling reason to acknowledge that the action of saving a life will not result in just +1 life saved and no other difference in value"

Yep, but that's still no reason to revise my EV verdict. Assigning EV = +1 does NOT imply that the act "will" result in "just +1 life saved and no other difference in value". In many cases, the EV may represent a value (e.g. fractional lives saved) that we know cannot possibly result. Representing the actual result is not the point. It represents the value of a probability space.
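A minimal sketch of that last point, with invented numbers: suppose saving the life also triggers unknown downstream ripples worth -1000 or +1000 with equal probability.

```python
# A toy probability space (all numbers invented for illustration).
# Saving the life is worth +1 directly, plus huge unknown ripples.
outcomes_if_save = {1 - 1000: 0.5, 1 + 1000: 0.5}  # i.e. -999 or +1001

ev = sum(value * prob for value, prob in outcomes_if_save.items())
print(ev)  # 1.0
# The EV is +1 even though "+1 and nothing else" is not a possible outcome:
# the number summarizes the whole probability space rather than predicting
# the actual result.
```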

> "Why not just embrace that we cannot reliably track what matters morally?"

I do! I just don't think it follows that we've no idea what to do, since it can be rationally clear what to do even when it's impossible to know how to secure what matters morally (as again demonstrated by mineshaft cases).

For more on the distinction between what matters and its rational pursuit, see my old post, 'What's at Stake in the Objective/Subjective Wrongness Debate?': https://www.philosophyetc.net/2021/04/whats-at-stake-in-objectivesubjective.html


Thank you for your response and for the link to your earlier post. This also led me to this post: https://www.philosophyetc.net/2013/12/manipulating-moralitys-demands.html , which was helpful as well.

These make me think a lot of the cluelessness debate might be people talking past each other because there is confusion over what consequentialism is. Non-consequentialists think consequentialists believe consequences are all that matter morally—and also that morality is essentially all that matters. But, if I'm understanding right, you're saying this isn't true for EV consequentialists. EV consequentialists also think the wisdom, sensibleness, and rationality of epistemically limited agents (who identify as consequentialists and are aiming for good consequences) is something we care about as well. In contrast, non-consequentialists might have thought consequentialists (or at least utilitarians) valued wisdom, sensibleness and rationality merely instrumentally—to the extent these can produce better consequences. Does this seem right? Do you think the rationality and wisdom of agents matters beyond the instrumental relevance rationality and wisdom have to consequences?

Non-consequentialists making the cluelessness objection might be imagining consequentialists should always have this attitude: "I know what I'm doing is morally inadequate because a fully informed observer would tell me to do something else entirely, but there are no fully informed observers, and I have to do something, so I guess I'll just do what seems like the best action if we pretend there are no unforeseen consequences (even though I know there are obviously unforeseen consequences which are drastically affecting the value of the outcomes I am helping bring about and which decisively determine what the best outcomes are)." And perhaps the non-consequentialists think consequentialism does not properly guide actions if it always leaves its adherents thinking, "I know this will not achieve what I think matters morally but what else can I do?" And then, since these non-consequentialists accept 3, they think this is fatal for consequentialism.

If I'm reading you right, you disagree with the hypothetical non-consequentialist because when it comes to judging actions, you think it is not consequences that matter, but rather the wisdom and rationality of agents who identify as consequentialists and who want to bring about the best consequences. Is that right, or am I missing something?

Author · Oct 17, 2022 · edited Oct 17, 2022

Well, it's tricky, because there are different senses of "matters" in play here. Or, as I would prefer to say, there are normative properties (relating to rationality etc.) that have important theoretical roles to play, and may properly guide us as agents, even though they do not *matter* (in my preferred sense of being non-instrumentally preferable, or properly featuring in our *goals*).

But yes, we often have good theoretical reasons to judge choices according to whether they constituted a wise or rational pursuit of what matters, rather than according to whether they actually attained what matters. (We should of course prefer actually-good actions over merely rational or well-intentioned ones, but we often have good theoretical reasons to judge actions and agents on bases other than their preferability.)

For more on this, see my post 'Consequentialism Beyond Action' - https://rychappell.substack.com/p/consequentialism-beyond-action - especially the section 'Virtue and Value Agree: What Matters is Value, not Virtue'.


Dude, why doesn't someone just get it over with and argue for love consequentialism -- do whatever will bring about the most love in the world. Because then every little act of love adds to that, and there's no chance of that being a bad thing long term. Damn Richard, you've been working on this shit for years with a family and haven't included love in your analysis of morality? Da fuck?


I agree that Lenman doesn't do a satisfying job of explaining why the same problem doesn't spread over to non-consequentialism. What one wants is a metaphysical way of demarcating those consequences that don't matter from those that do. If it's of interest, I've tried to do that here—although cluelessness only comes in at the end. Obviously you won't buy the moral distinctions the paper relies upon, but it aims to be the makings of a principled, non-consequentialist response that isn't available to the consequentialist. https://web.mit.edu/tjbb/www/SLL.pdf


Thanks for pointing to Lenman, his concerns seem similar to mine.

What seems missing from the discussion here is the alternatives being considered. I would roughly divide them into strict consequentialism, permissive consequentialism, and contextual consequentialism. Strict C is just do the math: you need an estimate for everything, and if it isn’t in your equation it doesn’t matter. I am not sure what permissive C would be, but something a bit more reasonable than strict C, though still arguably consequentialism (assuming that is what the post advocates). What I am calling contextual consequentialism is the idea that when you are in a high-information context, you do the math (hospital budgets), and when you are not, you use heuristics based on historical experience (sort of an explore/exploit strategy). Deciding what sort of info environment you are in is a judgement call that depends on your info and your history.

Is contextual consequentialism really consequentialist? Doesn’t matter what the label is. It needn’t dismiss consequences, but it sees them as being limited by the context, with alternatives available in the form of heuristics.

Saving the drowning child is a high info environment. Generalizing that to saving persons elsewhere depends on there being an analogous information context.

I’m not sure what implications this would have for low probability events in the far future. We don’t have heuristics for that, and the info environment is low. Maybe I side with Lenman here. We can wish we had sufficient info to make such decisions, or pretend we do, but using heuristics seems equally justified. One way or the other, we are making educated, well reasoned (we hope) guesses.

Advantages of heuristics include better compatibility with a legal system and its concomitant cultural understanding of what people can expect from each other; and a higher degree of low-effort consensus.

Generating and executing a strategy based on far-future predictions and estimates is informationally and socially costly. The risk is that some low-probability event, avoidable only before it becomes easily foreseeable, will exploit a bug in our heuristics. This advises a fail-soft strategy, because if you squint right, strict consequentialism is a high-info-expenditure version of contextual consequentialism, also vulnerable to catastrophic error. So the ultimate question becomes: how do we arrange for humanity to survive a serious catastrophe without actually preventing it?

Author

It sounds like you have in mind two-level consequentialism? See: https://www.utilitarianism.net/types-of-utilitarianism#multi-level-utilitarianism-versus-single-level-utilitarianism

Consequentialism is, first and foremost, a criterion of right action; it's always an open question what it recommends in practice as the most useful decision procedure. Expected value provides the criterion of rational action, and given computational limitations (plus personal biases, etc.), we certainly should be heavily guided by heuristics in ordinary circumstances. I recommend R.M. Hare's *Moral Thinking* for more on this topic.

Oct 17, 2022 · Liked by Richard Y Chappell

So in multi-level utilitarianism, utility is used as a standard for evaluating and criticizing heuristics, but not as a decision procedure?


That means there is never really a place where you just “do the math,” or rather, if there is, the boundary between “use the heuristics” and “do the math” is not the result of doing some math. It would be more a judgement call, either “let's try this and see if it works” or “someone has developed some math for this, let's use it.”

Author

Right, you can't always calculate when it's appropriate to calculate, on pain of regress. A rational agent just needs to have good judgment about such things. (That said, reflecting in a "cool moment" might help one to get a better sense of which "in the moment" dispositions to inculcate. E.g. you can probably work out that "don't murder" is a better heuristic, in expectation, than "calculate whether killing my enemies would actually be for the best", whereas "Do a cost-benefit analysis of COVID policy" is a better approach than "blindly follow gut instincts and/or hysteria about how best to respond to a novel pandemic".)
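A back-of-the-envelope version of that comparison (all numbers are invented for illustration, and the in-the-moment calculator is charitably assumed to be only mildly biased):

```python
# Compare two dispositions by expected value: always follow "don't murder"
# vs. calculate case-by-case whether killing one's enemies would be best.
p_killing_best   = 0.001   # how often killing really would maximize value
p_calc_says_kill = 0.05    # motivated reasoning says "kill" far more often
v_correct_kill   = 10      # value gained in the rare case killing is best
v_wrongful_kill  = -1000   # value lost by each wrongful killing

ev_heuristic = 0.0  # forgoes the rare gain, but avoids every disaster

ev_calculate = (p_killing_best * v_correct_kill
                + (p_calc_says_kill - p_killing_best) * v_wrongful_kill)
print(ev_heuristic, ev_calculate)  # 0.0 vs. -48.99
```

Even granting the calculator the rare genuine gains, the expected cost of motivated miscalculation swamps them in this case; for something like pandemic policy, the analogous numbers plausibly come out the other way.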


Thanks.


> It’s surely conceivable that some agents (in some possible worlds) may be irreparably lost on practical matters. Any agents in the benighted epistemic circumstances (of not having the slightest reason to think that any given action of theirs will be positive or negative on net) are surely amongst the strongest possible candidates for being in this deplorable position. So if we conclude (or stipulate) that we are in those benighted epistemic circumstances, we should similarly conclude that we are the possible agents who are irreparably practically lost.

>To suggest that we instead revise our account of what morally matters, merely to protect our presumed (but unearned) status as not totally at sea, strikes me as a transparently illegitimate use of “reflective equilibrium” methodology—akin to wishfully inferring that causal determinism must be false on the basis of incompatibilism plus a belief in free will.

No idea what Lenman would say here, but I think this argument can 100% be made to work. It's a version of the 'moral realism is epistemically self-refuting' argument that specifically applies to a consequentialist theory of the good. Part of our bad epistemic circumstances, if they existed, would be that we would have no epistemic access to 'the good' at all: if we were so epistemically lost as to have no idea about any particular good action, then I don't see how we could at all be justified when reasoning about the good in general. We can then argue by cases as follows: *if* the structure of 'the good', at a metaphysical level, were consequentialist, *then* we couldn't know it; if it weren't consequentialist, we also couldn't know that it was consequentialist (because knowledge is veridical); ergo anyone who claims to know that the good has a consequentialist structure, at a metaphysical level, is wrong. I'm not sure if this argument is sound, because I'm not sure the first premise is true (as you hint, we might know at least some things about the long-term impacts of our actions); but it's definitely valid.
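For what it's worth, the validity claim can be checked mechanically. A minimal Lean 4 sketch (the proposition names are mine; the soundness of the premises is of course the open question):

```lean
-- C : the good has a consequentialist structure (at a metaphysical level)
-- K : someone knows that the good has a consequentialist structure
theorem no_knowledge_of_C (C K : Prop)
    (h1 : C → ¬K)   -- premise 1: if C held, we'd have no epistemic access to it
    (h2 : ¬C → ¬K)  -- premise 2: if C failed, K would fail too (knowledge is veridical)
    : ¬K :=
  (Classical.em C).elim h1 h2  -- argue by cases on C ∨ ¬C
```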

Author · Oct 16, 2022 · edited Oct 16, 2022

I don't follow. Why would skepticism about our ability to track the future appearance of natural properties like pleasure and pain do anything to undermine our grounds for recognizing that pleasure is good and pain is bad? I think you're conflating two very different questions about "the good". (The philosophical question of which natural properties have the normative property of *being good*, and the empirical question of whether we can reliably track the future appearance of the identified natural properties.)

Even a brain in a vat could still do a priori philosophy, after all.


Ok, maybe we just differ quite fundamentally on moral epistemology, but I do not see how you can completely sharply separate those two questions: of course they are different questions, but the answers you give to one will affect the answers you give to the other. Any application of (anything even approaching) reflective equilibrium requires the use of judgments about cases; if our judgments about cases are uniformly unjustified, then the theoretical principles we derive from (any methodology even approaching) reflective equilibrium will also be unjustified.

More particularly, on pleasure and pain: presumably we learn 'pleasure good' and 'pain bad' by generalising from specific instances where we judge 'this case, which is a case of pleasure, is good' and 'this case, which is a case of pain, is bad'. If the particular judgments are uniformly unjustified, then I do not see where the justification for the general judgment comes from. Maybe you have a moral epistemology that is externalist to an *incredible* (I would say absurd) extent, so that these general statements are justified purely by the world at large and are completely epistemically unconnected to one's particular judgments; but that would conflict with your invocation of the brain in the vat, which presumes that justification of moral philosophy is internal and so the brain in the vat can be justified no matter its external circumstances. If you allow that at least some general moral principles require even a little bit of internal justification, then they cannot be insulated from the question of justification for particular ethical judgments. And if we are completely 'at sea' about our particular ethical judgments, such that *none* of them are justified, then this undermines the epistemic standing of general moral judgments too.

Author

Are you assuming that "particular" cases must be *actual* cases? It's only the actual world about which we're ignorant. We may (not really, but for present purposes) be logically omniscient about modal space, and consider fully stipulated possible worlds with as much detail as anyone could desire. We just don't know which of those possible worlds is ours, but that doesn't affect our conditional moral judgments (about what verdicts would be objectively true of which fully-specified possible worlds).

More realistically, I think in most "particular cases" we implicitly build in an "all else is equal" clause, stipulating that there are no other morally relevant features of the situation, besides those that have been explicitly built into the case. We can consider "this case, of pleasure, is non-instrumentally good", and "this case, of pain, is non-instrumentally bad", and none of that requires us to locate ourselves in modal space.

For more on "anti-parochial" principles (about how our moral judgments should not depend on which world is actual) see: https://www.philosophyetc.net/2021/02/the-parochialism-of-metaethical.html


I don't think I'm assuming that all particular cases under consideration must be actual. But presumably, again, we don't *start* with hypothetical cases - you learn about right and wrong as a kid based on reactions (of yourself, your peers, and adults) towards actual cases. It's actually having this experience with a wide range of possible cases that *allows* us to use ceteris paribus clauses, because it gives us confidence that we can identify the relevant features of a situation and we know what exactly we should hold fixed. Again, I don't see how you get to a justified belief in 'pleasure good' without experiencing at least one case of actual pleasure, whether in yourself or in others. (Indeed, I think you probably need both.) When you say 'we can *consider*' various general moral claims without locating ourselves in modal space, I think it's only experience with particular actual cases - built up from childhood - that allows us to even begin to grasp which *considerations* are actually relevant.

There's a great Adam Smith quote here: 'Were it possible that a human creature could grow up to manhood in some solitary place without any communication with his own species, he could no more think of his own character, of the propriety or demerit of his own sentiments and conduct, of the beauty or deformity of his own mind, than of the beauty or deformity of his own face. All these are objects which he cannot easily see, which naturally he does not look at, and upon which he is provided with no mirror to enable him to turn his eyes. Bring him into society, and he is immediately provided with the mirror which he wanted before. It is placed in the countenance and behaviour of those he lives with.'

Author

I don't see how any of that history is epistemically relevant (as opposed to merely causally relevant to getting my brain into the configuration that it's now in). I could be a brain in a vat, or created ex nihilo 5 minutes ago, and it wouldn't make any difference to the epistemic justification of my moral beliefs. (I'm a pretty hardcore internalist about these things!)

If I was created in my current state five minutes ago, then I had no childhood. But I'm still capable of considering hypothetical scenarios. So childhood experience plainly isn't essential to considering hypothetical scenarios. (It's just historically *useful*, in practice, for developing human brains into the right sort of configuration to be able to do this work.)

But for those with more externalist leanings, one's historical moral education can easily be re-interpreted as educating you about pro tanto reasons. Again, nothing about empirical cluelessness casts the slightest bit of doubt on the claim that pain is *non-instrumentally* bad. And that's all you need to do ethics.


I'm not sure I follow this, but I don't think there's much that could be said in this forum to clear up my confusion! I just don't see how you can have an internalist moral epistemology while keeping a sharp epistemic wall between judgments about particular actual cases and judgments about general moral principles, unless you intuit the principles directly. (I don't see how a brain in a vat could have a justified belief in 'pain bad', for instance, even if one is an internalist.) But you've tried to explain yourself now, and the problem is clearly with my understanding! Sorry for that.

[Comment deleted]
Author

"Long-term" in this context means distant future generations. We can often be reasonably confident about the likely consequences of our acts within our lifetimes.


Really?

Seems to me like every time we take a shorter route by car, we're taking positive EV in the face of ignorance about much larger effects, e.g. encountering or avoiding catastrophic injury or death, or even substantial damage to our vehicles.

Author

Of course we act under uncertainty all the time, but it's a familiar sort of uncertainty (where we have at least a *rough* sense of the relevant probabilities), not total cluelessness of the sort Lenman is discussing.
