9 Comments

My response here would be the opposite. I say that there is nothing that objectively matters - there is only subjective importance. But doing good for others matters subjectively to those many others, in a way that a personal hobby usually doesn’t!

I don’t know that any of us *has* a reason to care about the desires and preferences of others, except to the extent that we *do* happen to care about them. But to the extent that we do care about widening the set of perspectives we work for (whether towards the universal and objective values that you endorse, or towards the wide intersubjective values that I do), doing good for others matters more on that score.


So donating a billion dollars to effective charities and counting grass would matter equally if everyone happened to have the same subjective assessment and level of care towards those two very different activities? Reductio!


If everyone happened to care about counting grass more than anything else, then effective charities would be effective at getting grass counted! (They would probably also do a lot to preserve the life and health of their grass counters, and to develop new and more effective methods of counting grass.)


That's beside the point. If your view entails the verdict that literally all activities (including grass-counting) could possibly hold equal worth, then that suffices for a reductio. Only stubborn, dogmatic, epistemically vicious attachment to a theory could result in biting that bullet.


Talking of confronting reality head-on: I wonder if there is any room left for subjective concerns at all. Maybe we just always have to concern ourselves with what's objectively important, and any departure from that is a moral failure on our part. I fear the presumption to the contrary is wishful thinking.


Awww is that the end of your QB series already? 🥲 excellent series tho!

And I wholeheartedly agree that the project of beneficence and reasons of morality/beneficence often outweigh personal projects and reasons of prudence or desire-satisfaction or preference.

But how do you think the project of beneficence weighs against other projects, such as intellectual projects and aesthetic projects (setting aside morally good outcomes from them)? E.g. Thaddeus Metz (2013) argues that one lives meaningfully insofar as one orients one's rational self towards doing things with objective value (not necessarily moral value), in particular moral, intellectual and aesthetic value (what Metz sums up as "The Good, the True, the Beautiful"); or, more specifically, as Metz's fundamentality theory intriguingly adds: towards things pertaining to fundamental conditions.

Additionally, as it happens, this question of comparative weights has personal significance for me: as a twentysomething, I find long-term career decisions baffling. I wonder whether I ought to go ahead and be a philosopher (an intellectual project) or pursue a lucrative career so as to be able to Earn to Give and maximise my positive expected impact on the world by donating to effective charities.


I agree that non-moral values can be very meaningful to engage with. More generally, something that's (at least psychologically) challenging for consequentialism is that our sense of "meaningfulness" in life is very non-consequentialist in nature.

An example I like to use in teaching: suppose Superman could just turn a crank all day to generate limitless renewable energy. It'd probably do a lot more good than fighting crime! But it wouldn't *feel* as meaningful to him: he doesn't get to directly see the benefits, bask in the "warm glow" of others' appreciation, etc. If the value we produce ends up too "distant" from us, it's hard to reap the psychological rewards. That can undermine our motivation to do good in indirect ways, even when it's objectively the most worthwhile thing to do.

One possible upshot of this is that it's worth looking for ways to try to reconcile objective impact with psychological rewards. (Maybe we could set up projector screens in Superman's power generator room, so he can see more of the impact he's having?) Maybe we can only expect earning to give to appeal to people who would find those lucrative careers independently rewarding anyway? Or maybe some people find it easier than others to be motivated by indirect value, and we should celebrate that rare trait when we find it? Lots of open questions here, I think.


The point/question I was getting at is different, I think. *All else being equal* (including psychological rewards and the *sense* of meaningfulness or lack thereof), how do moral values weigh against nonmoral (e.g. intellectual or aesthetic) values? And which is a weightier determinant of the meaningfulness of lives? Crudely: is it better to be Einstein/Spinoza/Proust or Gandhi/Mother Teresa/MLK, pretending their subjective states are equivalent (and setting aside the former's moral achievements)? Are the two kinds of value totally incommensurate, such that Reason cannot tell us how to decide between them? But surely not *totally*: surely it's better to choose the life of Einstein rather than the life of a person whose only achievement is helping an old lady across the road that one time. And surely it's better to choose the life of MacAskill/Ord over the life of a person whose only achievement is winning a spelling bee.


It's very tricky; in addition to uncertainty about how to weigh well-being vs. excellence, I expect that lives of intellectual excellence will often have (hard-to-predict, but non-trivial in expectation) instrumental value, too.

Considered in isolation, it would seem callous to say that a purely intellectual breakthrough was ("all-things-considered") more important than saving a great many lives. On the other hand, looking at the "big picture", lives are easier to come by than breakthroughs. If I could choose between having 1 more Einstein or 1000 more "normal" people in existence, it seems like the former would be the bigger deal. How best to reconcile these conflicting thoughts seems underexplored. (A quick argument for life-saving: if those 1000 people, and their descendants, have above-replacement fertility then they'll eventually create *so* many more people that their descendants might be expected to include *several* Einsteins - or whatever other non-moral values you might think worth promoting.)
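
To see the shape of that closing argument in numbers, here's a minimal back-of-the-envelope sketch in Python. Every parameter (growth rate, time horizon, base rate of Einstein-level talent) is an illustrative assumption of mine, not a figure from the comment:

```python
# Sketch of the "descendants eventually include several Einsteins" argument.
# All numbers below are illustrative assumptions, not claims from the comment.

GROWTH_PER_GENERATION = 1.1   # assumed above-replacement fertility (10% growth per generation)
GENERATIONS = 100             # assumed horizon (~2,500 years at 25 years per generation)
EINSTEIN_RATE = 1e-7          # assumed base rate of Einstein-level talent

cohort = 1000.0               # the 1000 "normal" people brought into existence
expected_einsteins = 0.0
for _ in range(GENERATIONS):
    cohort *= GROWTH_PER_GENERATION                # next generation of descendants
    expected_einsteins += cohort * EINSTEIN_RATE   # each generation is a fresh cohort of people

print(f"Descendants in generation {GENERATIONS}: {cohort:,.0f}")
print(f"Expected Einstein-level individuals across all generations: {expected_einsteins:.1f}")
```

On these made-up numbers, the cumulative line of descendants exceeds 150 million people, so even a one-in-ten-million base rate yields roughly fifteen expected Einsteins. The compounding growth rate, and the word "eventually", do all the work: over a short horizon the expectation stays well below one.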
