And TBF to the critics, I do understand why they react so negatively when people justify weird longtermist projects, or spending money on AI safety research, as EA.
When it was just bed nets, I think most people weren't too bothered, but they see this other shit and think:
Hey, those EA folks are lecturing us on doing what has the most impact, and it turns out they're just hypocritically using that to justify giving to whatever cause feels important to them.
And yes, there is a lot of truth to that. Of course, I'm inclined to see that as at least people agreeing on the right principle and then making the normal human mistakes about how best to interpret it.
But it's easy to see why this feels like an affront to the sort of person who tends to see the world less in the STEM/literal way and more in the commentary-on-values/groups/etc way (probably this is better understood as high vs low decouplers).
They aren't seeing the question of whether we should give in the way that makes the most impact as just an independent question of fact. They tacitly assume that the only reason you'd say that is to criticize those who are just giving based on what feels important to them.
And so they inherently see EAs as engaged in a kind of moral lecture (we're better than you) and as such respond with the normal anger people feel when a moral scold is revealed to be hypocritically engaged in the same kind of behavior.
--
Ofc I'd prefer philosophy not do this. But then again, I take a very high decoupler approach: I see the only value of philosophy as trying to figure out which claims are true, and I tend to see the parts of the subject that don't embrace decoupling (the less analytic stuff) as simply mistakes to be eradicated.
So I'm hardly one to say how to fix this problem, since I kinda embody the attitude that upsets the low decouplers in the first place, and I do see their approach as wrongheaded.
Yeah, that sort of low-decoupling is just inherently antithetical to philosophy (and academic ideals more generally), IMO.
I think Anscombe would disagree with that. Same with her followers: neo-Aristotelians and (to a lesser extent) Rawlsians. These people also appeal to Wittgensteinian phil of language. I've never read Wittgenstein, but perhaps he would also disagree.
Maybe Quine too. Wasn't his point that everything is connected?
They'd acknowledge some decoupling is good but not total decoupling.
This is why their theories can only be modeled through machine learning, not English.