<blockquote>beneficentrism[sic] closely correlates with utilitarianism in practice, </blockquote>
Cite? You have provided some cherry-picked anecdotal examples, not data analysis.
A moral theory needs more than that in order to be true or false. A moral theory includes a standard, which evaluates things (actions, circumstances, intentions, whatever) as conforming to the standard or violating it. “X violates standard Y” can be true or false; “standard Y” can’t even be true or false without implicitly adding premises of the form “everyone ought to adopt standard Y.” This is Hume's point: no “ought” from “is” alone. “I accept standard Y” is much easier to derive than “everyone must accept standard Y by logical necessity or empirical inference.”
So it seems better to speak of why standard Y is superior to standard Z than to speak of standard Y, or a moral theory, being true. But then, by what standard should we judge that standard Y is superior to standard Z? Do we need a meta-standard to judge standards? And then a standard to judge meta-standards? Or should we expect Y and Z each to contain ideas about how to judge standards? If they agree on which of them is better, that seems like a win, but the typical case will involve each picking itself as superior.
If we grant that our understanding of morality is less than perfect, a moral theory should include principles regarding how that understanding might be improved. At the individual level, this is difficult. A person's moral intuitions derive from generalizations of their experience, filtered through our evolved psychology. Intuition may be the elephant, and theory the rider. At the social level, the various actions and evaluations of individuals combine into an intersubjective whole, where everyone influences everyone else's attitudes and beliefs to a greater or lesser degree. This social process seems able to adjust and improve. Ideally, it criticizes itself and contains space for alternate hypotheses to receive attention and be rejected or incorporated. But it isn’t foolproof; it produced Stalin, Mao, and Hitler.
I’m not sure what we should conclude, except that the post considers these issues only obliquely and makes implicit but unexamined assumptions. That might be necessary. Perhaps finding and examining those assumptions will help the discussion move forward; or maybe they can be taken for granted and left unstated, if we all really accept them.