38 Comments

I think giving literally zero weight to simplicity is crazy! Huemer has a good paper on this titled "When Is Parsimony a Virtue?" The basic idea is that because simpler theories have fewer manipulable parameters, if they're wrong, it's harder for them to explain the data. If you can make up infinite rules, of course you'll be able to explain our intuitions, but if you have to stick to just a few, it's unlikely you could. Thus, simpler theories get a bigger boost from explaining data. I also think that you get simplicity being a virtue if you assume that reality isn't most ways it could be, which seems true. All else equal, you shouldn't posit extra particles that do nothing.

Other myths:

Morality is weird: why? I've never heard an explanation of this that isn't obviously question begging. And weird things exist sometimes--e.g. time, space, fields, numbers, sets, modality, and consciousness.

Utilitarians are in the grip of an attractive sounding theory: I think an idea that a lot of non-utilitarians have is that utilitarians find a nice-sounding theory and then dogmatically cling to it in spite of counterexamples. Now, perhaps some do. But when one really examines many of the counterexamples to utilitarianism--like the repugnant conclusion--it becomes really hard to deny them.

SIA says that views according to which there are more people are a priori more likely: No! It doesn't do that. SIA updates on the fact that I exist; it doesn't say those views are more likely a priori.

Everything ever said by continental philosophers!

Moral knowledge is uniquely strange: it's no stranger than modal knowledge, mathematical knowledge, or knowledge about various metaphysical facts.

There are no good arguments for hedonism about well-being: the lopsided lives argument is extremely convincing, and there are other arguments in the vicinity that I've written about.

Caspar Hare didn't prove utilitarianism :P


I think there's often a correlation between simplicity and inherent plausibility. But it's the latter that matters. If an additional principle is both (i) intrinsically plausible, and (ii) yields more plausible verdicts, without conflicting with any (all-things-considered) more-plausible principles, the mere fact that it is an *additional* principle is no reason at all to reject it.

In other words, while we often have reason to reject more complex theories, it is because they are implausibly gerrymandered (i.e. intrinsically implausible), not just because they are complex.


If a new principle yields more plausible verdicts, then I think there's still a cost; it's just outweighed. What is the virtue of not being gerrymandered? If it means not positing new principles that are different from the others, that doesn't explain what's wrong with positing epiphenomenal particles, and if it means not positing new principles then it's just simplicity repackaged.


It probably doesn't make a great deal of difference whether one gives "literally zero" or just "very little" weight to simplicity relative to intrinsic plausibility. The main point I wanted to convey was that it's a mistake for people to assume that utilitarians must be *prioritizing* simplicity over substantive plausibility, as I certainly don't feel the slightest inclination to do *that*.

I agree that one shouldn't posit extra things for absolutely no reason. I'd be more inclined to say that this is because the extra posit lacks plausibility. If that counts as "simplicity repackaged" then perhaps I'm on board with some simplicity considerations after all. But I'm disposed to give lexical priority to considerations of plausibility when the two come into conflict. I don't particularly feel any impulse to prefer a shorter list of objective goods, for example; I'd simply want to characterize all and only the things that strike me, upon reflection, as seeming objectively good (without any particular concern for what the number of such goods turns out to be).


Obviously you don't intend that to be more than a very brief sketch of your view about this, but I just want to emphasize that I think it needs much fleshing out. As I alluded to in my other comment, I think these sorts of questions are very difficult and quickly spiral off into the deepest areas of philosophy.


This all looks interesting, but hard to understand. Two of my favorite documents of all time were Scott Alexander's Non-Libertarian and Consequentialism FAQs. Have things been written with similar clarity regarding some of these more esoteric subjects?


Just to check: did you follow some of the links and find the content *there* hard to understand, or just the super-brief summaries offered in *this* post?

I think it's probably inevitable that any super-brief summary of "esoteric subjects" will be hard to understand: it takes further background to be able to follow such debates. But I think/hope that most of the linked content is clearly written (though some may presuppose more philosophical background than a general reader is likely to have -- if you want to develop that background, the open access textbook at utilitarianism.net could be a good place to start).


I only clicked the link to "Dualism all the way down: why there is no paradox of phenomenal judgment", because it sounded very counterintuitive. I didn't see a link to view it, so I assumed it was paywalled, and the abstract wasn't edifying. But I just looked again, and actually there is a link to view it.

My comments on it:

- The "paradox" in section 1 seems... odd. I wouldn't say "Itchiness feels like this" because it's so hard to describe, and the proposition "my z-twin’s belief is not only false, it’s not justified" seems incorrect, in that the belief is only "false" if you treat the word "feels" as if it means "has qualia", which isn't the case under my understanding of epiphenomenalism or p-zombies. My understanding of epiphenomenalism is not that there is any paradox in a human/p-zombie trying to describe itchiness in particular (apart from it being indescribable). The paradox is rather (as Yudkowsky described it) that humans have *elucidated* beliefs in qualia as a concept in general, because *elucidated* beliefs seem to require a separate/independent cause from the real epiphenomenal qualia. Also, point 8 should not be "therefore epiphenomenalism is false" but "therefore epiphenomenalism is unjustified". I expect not to understand what follows without grokking what this part is trying to say. I find the immediately following paragraphs fairly confusing, but not entirely so; e.g. the diagram appears to describe epiphenomenalism as I understand it, if the thick arrow signifies that information flow is unidirectional.

- I sense that the author equivocates between "epiphenomenalism" and "dualism". If she considers those concepts as the same (rather than epi. ⊂ dualism), we're not on the same page (then again, in recent years I've not been sure what dualism means or whether there is an agreed definition.)

- By §3 my head hurts a bit, and I decide to stop reading, but I think I get the basic idea. I conclude that she only analyzed a strawman. She said in the beginning that "Epiphenomenalist dualists hold that certain physical states give rise to non-physical conscious experiences, but that these non-physical experiences are themselves causally inefficacious". This matches my own understanding of epiphenomenalism, and with this in mind I observe that she hasn't mentioned the *real* paradox of epiphenomenalism, given her own conception of beliefs existing on some sort of dualist plane of existence. The paradox is like so:

1. Information flows from physical world to non-physical phenomenal conscious experience, or in other words, qualia

2. Information flows from qualia to beliefs within the "dual plane of existence" (to coin a phrase ― I'm confused why *I* needed to coin a phrase; there should've been some standard pre-existing phrase that the author knew about and used)

3. Beliefs existing in the "dual plane", including beliefs that were directly caused by qualia in the dual plane, are either (i) causally efficacious and therefore not epiphenomenal, or (ii) mysteriously duplicated on the physical plane (and causally efficacious, because e.g. discussions like this one exist in the physical plane).

TBC she discusses *only* 1 and 2, which are nonparadoxical; the paradox is in 3. So sure, Chalmers may conceptualize epiphenomenalism differently than her, but this doesn't obviate her responsibility to address the problem in her own conceptualization.

Let me know if §3-8 eventually discusses the true paradox.


"Myth: Aren’t there decent philosophical grounds for denying that there’s anything (non-instrumentally) good about creating future lives?

Reality: Nope. The arguments are all terrible. See here, for example. (But I should stress that substantively terrible arguments can still be philosophically interesting. Just think of Anselm’s ontological argument.)"

This seems like far too strong a claim to make with only one example defending it, and I think there are non-terrible arguments in or based on Frick's work (in a reply to this comment).

I'm also not saying you should show that every individual argument "for denying that there’s anything (non-instrumentally) good about creating future lives" is terrible, but I think this would at least deserve broadly characterizing the types of common views/arguments and pointing out why they're terrible or pointing to writing elsewhere that together does so. I make an attempt to do that for arguments for "anything (non-instrumentally) good about creating future lives" in a reply to this comment.


I think some of Frick's arguments aren't terrible or can be saved. I think Frick was getting at a general pattern of intuitively wrong conclusions that follow if you don't hold that all of our reasons should be conditional or negative with respect to there being standard bearers (people, promises, justice, other duties) or otherwise instrumental for such reasons. This seems pretty plausible across non-welfare-regarding reasons, and he gives many examples. Then, the same kinds of reasons that plausibly explain how we think we should treat promises, duties, and many other reasons can also better explain intuitions about wellbeing that many people also hold, like that it's not better to have more children with good lives even if it's worse for you and your existing family, or that it's better to adopt or foster children or animals than bring new ones into the world, both all else equal. You don't have to get to extreme cases pushing against aggregation like the Repugnant Conclusion.

Plus, if it were the case that all of our non-welfare-regarding reasons should be standard-affecting in some way (and we do have non-welfare-regarding reasons, or our intuitions about non-welfare-regarding reasons still get at how to treat reasons in the right way, even if ultimately wrong), then this provides support for the claim that all of our reasons should be standard-affecting in the same way, including welfare-regarding ones. This is like assuming symmetry: we require argument for treating similar cases differently and can just deny (find unpersuasive) the arguments that try to identify morally relevant differences. Why should we treat wellbeing differently from promises, justice and other duties?

None of this seems like terrible arguments to me, whether or not we ultimately accept Frick's other arguments or specific account.

In further support of this and for standard-affecting views more generally, inherent reasons to create standard bearers just to satisfy those standards can lead to intuitively perverse replacement-like implications, like:

1. shirking duties in order to pick up new ones in general (https://link.springer.com/article/10.1007/s11098-018-1171-y)

2. involuntarily killing and replacing everyone with happier people (https://www.tandfonline.com/doi/full/10.1080/0020174X.2019.1658631) and the logic of the larder / (mostly) happy animal farming being good, and

3. involuntarily creating new preferences/interests in someone just to satisfy and increase their wellbeing, at the cost of their current interests. For example, make them really care about or be intensely happy counting blades of grass, far more than they cared about their loved ones or their careers. Or, on hybrid views, dramatically and involuntarily increase the weight they give to some objective goods, forcing them to abandon their current priorities in life. This seems way too alienating/paternalistic/disrespectful. (Parfit's "global preferences" don't block this, because you can also replace global preferences with stronger and more satisfied ones).

Preference-affecting views can avoid these problems. They can avoid the intuitively bad cases of the standard Repugnant Conclusion. They can avoid your intrapersonal Repugnant Conclusion. Preference-affecting views focused on global preferences can also avoid fixed-preference generalizations of the intrapersonal RC. However, this doesn't *really* carry over to fixed-population generalizations of the interpersonal RC, like making the torture of each of a huge number of people already being tortured only barely worse vs torturing one extra person. But all the interpersonal solutions will be alienating to some people if applied to intrapersonal cases, so we have reason to treat them differently.


> "Why should we treat wellbeing differently from promises, justice and other duties?"

My linked post responding to Frick ('Against Conditional Beneficence') addresses this, esp. in sec. 2.1. The key issue is that many deontic reasons are purely *negative* in nature: it's not that keeping promises is good, but just that breaking them is bad. The analogous claim about people (that our existence can only be neutral or bad, all else equal) seems substantively very implausible.

> "logic of the larder / (mostly) happy animal farming being good"

There could be empirical/psychological reasons to oppose happy farming as corrupting, but fwiw I do think we should agree, on reflection, that it could be good:

https://rychappell.substack.com/p/accepting-merely-comparative-harms

> "dramatically and involuntarily increase the weight they give to some objective goods, forcing them to abandon their current priorities in life. This seems way too alienating/paternalistic/disrespectful."

I think many parents would agree that this can be good. Adults may have rights against certain kinds of "paternalistic" intervention, but that's no reason to deny the possibility that such intervention *could*, in principle, be good for them. (I'd simply oppose it in practice for Millian reasons: I wouldn't trust the intervener to know better than their target.)

> "Preference-affecting views can avoid... your intrapersonal Repugnant Conclusion."

At the cost that they can't explain why it's worth saving the life of someone who is temporarily depressed (to the point of lacking any future-directed desires, or at least any of sufficient strength to outweigh their current suicidal desire). That seems a more clear-cut mistake.

I also wonder how much preferentism helps when we consider an agent who desires future pleasure, but doesn't have any settled dispositions on how to balance quantity vs quality here. Maybe they just have an implicit "in whatever way would be most reasonable" proviso to their general desire for more pleasure. I guess the preferentist would just have to take this desire to be ill-defined? That seems a cost, at least (though not a decisive one).


>At the cost that they can't explain why it's worth saving the life of someone who is temporarily depressed (to the point of lacking any future-directed desires, or at least any of sufficient strength to outweigh their current suicidal desire). That seems a more clear-cut mistake.

I don't think it's a clear-cut mistake. Letting someone like that die could just mean respecting their wishes, if there really is no option preferable to them. They may be misinformed or biased because of their depression, e.g. they can't imagine themselves happy again, but they're mistaken about how the future could go.

And an objection to your response is that the reason you give to want to save them is also the same kind of reason their parents could have to let them die to focus on having another child with better prospects, or for someone to kill and replace them or anyone else, or to involuntarily manipulate someone into having different preferences. This doesn't seem plausible to me.

I suppose another possibility is that we could mistrust people's stated or apparent global preferences even if they aren't mistaken about how the future will go or how they'll feel. Then, we can idealize their global preferences instead. People typically already have certain dispositions, e.g. to feel joy in response to certain things or come to prefer certain things or come to be grateful for life or look forward to more of it, which we might consider hidden or implicit preferences. Idealization can bring those forward and give them greater weight, but idealization doesn't have to allow replacement with totally different preferences. They are not disposed to preferring absolutely anything. Few people have dispositions to love counting blades of grass. Someone who's *temporarily* depressed has non-depressed dispositions and an idealized global preference that should also reflect their non-depressed states.

Most people also have dispositions towards appreciating what are commonly considered objective goods and dispreferring what are commonly considered objective bads. And these dispositions come with specific ranges of relative weights across these apparent goods and bads. We don't get to set those weights to whatever we like for them.

Of course, this is even more ill-defined/vague.


> "the reason you give to want to save them is also the same kind of reason their parents could have to let them die to focus on having another child with better prospects"

No, I think the reason to save them is that *they* would be better-off as a result of their happier future. That's certainly not a reason to replace them. (There may, independently, be some impersonal reasons to replace them, but I think those are weaker and can typically be ignored.)


As I pointed out in the same sentence, it's not *just* whole person replacement, but also preference replacement within a person. It may be that we can treat these separately as impersonal vs personal, but we still have to respond to the personal problem. By what standard are you judging that they're better off that doesn't permit (in theory) radical involuntary preference replacement like the examples I gave? Or do you think it's in theory just fine to radically change a person's preferences against their wishes and cause them to abandon their important life projects and attachments for something else as long as they are better off overall, however you define that? (To be clear, I don't think they would be better off in some sense, but I suspect capturing that well requires us to treat some interests in an interest-affecting way.)

Maybe radical preference replacement requires (partial) identity change, so the reasons to do it are actually kind of impersonal? And this may not apply to temporarily depressed people, say, if they already have hidden preferences to be happy, etc. (EDIT: Looks like you said so in another reply.)

But then I also wonder if they have hidden preferences to live a life of wireheading, on a constant psychedelic trip or in an experience machine, and why those aren't more important.

And we can probably radically replace people's preferences without changing too much else about their identity. But then maybe identity should be defined primarily in terms of these dispositions, or "multiplicatively" in terms of them, so that radical changes to them do ~totally change identity.


You're conflating a lot of issues here.

Whether it's "just fine" to paternalistically make someone better-off against their wishes depends upon whether they have *rights* against such interference. (I'm only sympathetic to instrumentally-grounded rights. But I'd sooner accept fundamental rights than deny that life can be good. So, given that superior solution, I don't see any reason to resort to denying that life can be good, in response to the line of argument that you're trying to pursue here. Still, I'd ideally like to avoid both, as below.)

A separate issue is whether it's *true* that an individual could be made genuinely better-off through preference change. I suggested that (i) it seems like this *is* possible through moderate changes, e.g. of the sort that parents try to inculcate in their children; but (ii) sufficiently radical changes would risk changing/undermining their identity, and hence *not* qualify as a benefit to the original individual. I'm not sure what remaining "personal problem" you think is not addressed by this.

[Edit: sorry, I missed the context of which comment you were replying to here. It's not that I think temporarily depressed people have "hidden preferences" for happiness, but just that creating/restoring preferences for good things -- esp. if continuous with their past personality -- can make them better-off without threatening their identity.]


Ah, I meant the question rhetorically. Even if you can make a decent or even decisive counterargument against it, that doesn't make the argument for negative or conditional treatment of all reasons terrible. The argument form is fine and uses typically strong premises (treating cases similarly unless there's a specific reason to do otherwise); it just might be unsound in this case if you're right. (But I don't think you are right, and many would take your premises to be implausible or highly non-obvious. I'm not specifically sold on Frick's account, though.)

But also, besides possibly reasons regarding welfare, are there any other reasons or duties (to people, say) that shouldn’t be valued negatively or conditionally?

I think you're underestimating how bad the preference manipulation is allowed to be. The point is that you can turn someone into any other kind of person of your choice, with any other kinds of preferences, as long as overall preference satisfaction increases. It's not just parents gently guiding their children. The Millian response might work in practice now, because of our currently limited understanding of cognitive neuroscience, but in theory, and possibly in the future on humans or artificial consciousness (reprogramming), the intervener can have far more control in deciding what the target will become. It can go basically as far as killing someone and replacing them with someone entirely different of your choice, as long as preference satisfaction increases overall. That's basically the same problem, anyway.

Hybrid views place some constraints on what you can replace their preferences with, but they don't stop you from choosing with a lot of flexibility.

On whether their global preferences (about future pleasure) are ill-defined, this could be the case and maybe that does count against, but the alternatives face similar problems. How do you come up with objective cardinal values for them to aggregate on their behalf, and why is that not ill-defined (specifically vague)? Isn't this something you're also undecided about?


> "besides possibly reasons regarding welfare, are there any other reasons or duties (to people, say) that shouldn’t be valued negatively or conditionally?"

This is just the question of whether anything else is good. I'm pretty sympathetic to welfarism, but plausible non-welfarist goods could include things like natural beauty, impressive cultural attainments / perfectionist goods, etc.

> "The point is that you can turn someone into any other kind of person of your choice, with any other kinds of preferences, as long as overall preference satisfaction increases."

It wouldn't be a benefit *to that person* if you changed them so radically as to undermine their personal identity. If you're talking about killing & replacement (which seems very different from *paternalistic* interventions on an individual), one may reject that on grounds of either partiality or deontic rights violation. It's a very familiar idea in ethics that some impartially better worlds aren't ones we should want to see realized: there are other things we should care about in addition to impartial value. The suggestion that there's *nothing* (even pro tanto) better about the happier replacement is a silly overreaction -- like denying that it's in any way better to kill one to save five.


Turning it around, what are the non-terrible arguments "that there’s anything (non-instrumentally) good about creating future lives"? From what I remember, most do one of the following:

1. take as a premise that something (e.g. happiness, flourishing lives) is worth creating in itself; someone could equally just deny this

2. take some disputable position on certain cases (e.g. utopia is better than a barren rock) that person-affecting views reject without a non-terrible argument for that position or any argument at all; someone could equally just assert the opposite position

3. improperly generalize from arguments that don't rule out wide person-affecting views (e.g. using the non-identity problem)

4. improperly generalize from arguments against specific person-affecting views or from counterarguments to specific person-affecting arguments

5. assume transitivity, the independence of irrelevant alternatives and full (or enough) comparability, or use arguments for them

6. assume symmetry

1-4 are all either invalid or assume too much, and so are terrible. Or, if any of these are non-terrible (I'm not sure what standard you're using to judge), then there are similar non-terrible arguments for "denying that there’s anything (non-instrumentally) good about creating future lives". I'm not sure either way about 5 being terrible, but I don't find them persuasive overall. I think 6 is not terrible, because I think you should have a reason to deny treating relevantly similar cases similarly, but I find some arguments for asymmetry compelling or at least promising, especially actualist(-like) arguments.

Plus, unless I misunderstood, you seemed sympathetic to *inherent* partiality towards the actual (in Rethinking the Asymmetry?). I know actualism as a person-affecting view doesn't follow from partiality towards the actual or arguments for it, but the arguments are almost the same. It's hard to square the arguments for actualism being *terrible* with thinking those arguments give you any reason at all to inherently prioritize the actual. Adding to this, once you accept partiality towards the actual, you've already rejected 5 and 6 in general, so, as far as I know, you are left only with terrible arguments to reject actualism, or non-terrible arguments to do so alongside similar ones *for* actualism.

Furthermore, views giving inherent priority to the actual can be arbitrarily close to actualism, anyway, e.g. you could give arbitrarily or even infinitely more weight to the actual.


I don't think you can determine philosophical quality on purely formal grounds: much depends on the substantive content of the claims in question. So, for example, it could be that arguments for the view that "pain is intrinsically good" are all terrible, even if a normative skeptic could "turn it around" and similarly doubt any argument for the claim: "pain is intrinsically bad". The latter view may just be objectively self-justifying in a way that the opposite view is not. (Compare the Moorean response to skepticism.)

Generally speaking, I don't think it's always possible to convince someone sympathetic to a crazy view that their view is crazy. Still, some views -- like "pain is intrinsically good" (or, I would add, "utopia is no better than a barren rock") -- are nonetheless objectively crazy.

> "It's hard to square the arguments for actualism being *terrible* with thinking those arguments give you any reason at all to inherently prioritize the actual."

I disagree. In general, it is very easy to square "we plausibly have *extra* reasons of sort X" with the view that the supporting arguments would be a *terrible* reason to think "we have *only* reasons of sort X".

But again, a lot here depends upon substantive judgment calls in reflective equilibrium. I don't expect to be able to persuade everyone to share my judgments. But I think/hope that *most* probably would, if they read all I've written on the topic.


If *a lot* here depends on substantive judgement calls in reflective equilibrium, isn't calling the arguments all terrible too strong? I would hope whatever standard you use for deciding whether an argument is terrible wouldn't depend much on substantive judgement calls. I think all choices of formal grounds require substantive judgement calls, too, but typically much less controversial ones, so I agree this is at least somewhat unavoidable. But I think the lines you're drawing are probably pretty controversial. That pain is inherently bad, or at least not inherently good, seems far more straightforward and far less controversial.

I think one of the problems is that the argument you ultimately fall back on is to basically just assert lives can have positive value, which I'd guess is ~totally unpersuasive to almost anyone sympathetic to person-affecting views who has thought much at all about them. It's obvious that they’d be denying this or something similar. I think if I judged by standards like you do, I'd consider this a terrible argument, because to me it's obviously silly to say that we have reason to create people for their own sake, and that misses the point of ethics.


You're free to consider my arguments terrible! It's a judgment call. (If you think what I'm saying is "obviously silly", all things considered, then that sounds a lot like "terrible" to me.)

As I've tried to stress, this sort of substantive judgment differs importantly from purely academic assessments. Many substantively terrible arguments can still contain valuable philosophical insights that make them worth publishing in academic journals, discussing seriously in philosophy seminars, etc. Here I'm just talking about what I think people should actually *believe* at the end of the day. But I freely admit that it's all contestable -- like the existence of the external world, and literally everything else in philosophy.


How is linking to your own previous blog posts and your wife’s papers *conclusive* evidence for anything? The arrogance is staggering.


Where are you getting "conclusive" from?

The evidence is not the links, but the arguments contained therein. If the arguments successfully establish what they claim, it is not arrogant to share this info. (Generally speaking, I think it is helpful to share information. I wish more academics would do likewise, hence my invitation at the end of the post.) And I welcome reasoned counterarguments on any of these points.

If you are simply assuming, without argument, that our academic work could not possibly establish what it sets out to establish, then the staggering arrogance is all yours.

A final point: recall that this is my personal blog. No-one is forcing you to read it. If you don't find it valuable, do something else with your time. My general policy is to ban people who leave nasty, low-value comments after a single warning. This is yours.


“Myth-busting” implies conclusive evidence. I would even claim it implies wide agreement in the field, which doesn’t exist in philosophy.

So you are merely claiming positions you happen to hold are somehow “myth-busting” refutations. In other words, you are merely promoting your views to argue that there is progress in philosophy.

I would expect such arrogance from a freshman in philosophy, not a supposed academic.


Did you even read the post's introduction? It's very clear on this point:

> "[It's] interesting to consider what mistakes philosophers commonly make, perhaps based on an outdated sense of the philosophical literature. Vanishingly few papers have been read by most philosophers, and most papers are read by vanishingly few philosophers. So it’s hard for new insights to permeate the discipline’s “conventional wisdom”. In this post, I’ll flag some of the persisting “myths” of moral philosophy that tend to bother me the most. Feel free to dispute these or add your own suggestions in the comments!"

If you insist on reading "myth-busting" to mean something different from the explication that I provide, that's on you. I was very clear on what I meant.

And *of course* I think my papers constitute academic progress. I wouldn't bother writing them otherwise! I'm sure many other philosophers *also* have papers that similarly correct common misunderstandings, and I invite them to share theirs too. Your attitude here is utterly idiotic.

[User was indefinitely suspended for this comment.]

> "Calling something a mistake (“a myth”) assumes it’s settled and beyond doubt."

No it doesn't. You're foolishly assuming infallibilism about knowledge: I can know that P is mistaken, and correctly assert that P is mistaken, even if my justified true claim remains "open to doubt". This was the #1 "example of solved philosophy" mentioned in my decades-old post, and is universally recognized by philosophers today. Sounds like you could learn something from following those links!

[But banning you now, since you persist in being obnoxious while having nothing of value to add.]


Utilitarianism might not *need to* devalue individuals, but I think it should. I don't see what's wrong with fungibility, so long as one doesn't sweep under the rug the real-world indirect consequences for those with relationships to the "swapped" individual. Persons seem to be complex aggregations of experiences, not vessels that contain experiences. And unless one accepts some odd belief in pre-existent souls, accepting every mundane neutral choice that partially determines which persons come into existence already looks like an acceptance of fungibility.


Interesting! I think one can combine Parfitian reductionism about personal identity with the thought that individuals (rather than just the experiences they contain) are what *fundamentally* matter. Otherwise it's very hard to make sense of any kind of loving/cherishing attitudes, and the latter do not seem strictly irrational/unwarranted. G.A. Cohen's "Rescuing Conservatism: a defense of existing value" nicely brings out the importance of non-fungible valuation, I think: https://philpapers.org/rec/COHCR


That's a really interesting citation on such a topic! I'll see if I can get a hold of it.

The biggest problem for me is similar to the "paralysis argument" against some deontological constraints: every prima facie morally neutral thing we do swaps some future persons for others. This is fine for someone who defends a present-future asymmetry of moral status in other contexts (such as the 200-year landmine in a schoolyard), but for those who don't, the thought that there might be something (predictably) bad about altering the precise moment two strangers complete sexual intercourse seems far more counterintuitive than fungibility. (As does the other escape from the problem, pre-existing souls.)


It may be that future people can, in effect, be treated as fungible, since their identities aren't really set yet. Also, the pro tanto "regrettability" of bringing X into existence rather than Y is precisely balanced by that of bringing Y into existence rather than X. So there's no reason to prefer paralysis.


You "give literally zero weight to simplicity per se"? I'd like to hear more about that. I wish more philosophers would talk about the (ideally formal) epistemology of moral theory-building, and the connections with phil of science, phil of concepts, and statistics.

[Whoops, looks like Bentham's Bulldog wrote something similar below. Oh well.]


See my reply to the other comment!
