24 Comments
Jul 15 · Liked by Richard Y Chappell

While I agree, I think it is a response to a genuine and persistent bias in our thinking, and even in how we model action. And while you are correct that it is technically false, I worry that it's necessary given the complexity of the true reasons to avoid extreme action.

We tend to think about and model people by idealizing away psychological constraints and asking what possible choices they could make. Of course, any model has to idealize a potentially deterministic physical reality into a set of choices, and talking about what's physically possible for normal humans is a useful way to draw this distinction. [1]

And while that's useful, it tends to create a persistent bias toward ignoring important psychological constraints, be they the ones that prevent a doctor from undetectably killing one patient to save ten, or the ones that prevent someone from harnessing the backlash effect in a political assassination by staging a false flag operation (if assassination isn't useful because of backlash, then in theory a false flag assassination should be).

The problem is that it's not really psychologically possible (or at least likely) for someone to engage in those sorts of acts without being hyped up in a way that makes the motives for the act relatively transparent and/or makes discovery likely. Suicide bombers have all sorts of religious encouragement, and even eco-terrorists require a support structure that assures them they are acting correctly.

But these tend not to be the kinds of explanations people find compelling -- in part because it is really kinda hard not to imagine yourself as a free actor when contemplating future choices.

---

1: It's a bit hard to define this in a principled fashion (it's not quite the set of all possible commands the brain could give the body, because that doesn't include things like a superhuman ability to ignore crippling pain), but it's a natural idealization to make because we have a relatively clear shared concept of what is involved in normal cases, and it doesn't suffer from the kind of diagonalization worries that occur when you consider psychological constraints.

After all, one of the goals of this kind of modeling is to persuade people or guide action, and the danger with trying to identify psychological constraints is that they may not be compatible with knowledge that the constraint is being used to model one's potential behavior (or, more accurately, constraints that only apply when someone doesn't know they are being used to model them aren't that useful).

Jul 15 · Liked by Richard Y Chappell

One minor point: I don't think that assassination being net negative in expectation means that undetected false flag attacks would be net positive. That would be true if the only stakes were zero-sum political stakes. But it's natural to think that one of the downsides of assassination is shared between worlds in which a political ally and a political enemy are assassinated, namely all the costs of a potential increase in political chaos and violence (not to mention the cost of the loss of a life).


Certainly, but once you say the stakes are sufficiently high, the difference between the two outcomes eventually overwhelms that harm.

It may be true in most cases in the actual world, but if it's possible for the stakes to be high enough, it eventually becomes net positive.

Jul 16 · edited Jul 16

Definitely possible. But if you think that an assassination on one side and an assassination on the other side are both high variance in their political consequences, with expectation close to zero, then the broader social consequences might always dominate, leading to the act being net negative in expectation. (Though it does matter whether “close to zero” is actually systematically biased a little bit toward one side, in which case you're right that raising the stakes enough could make the political expectation larger than these social consequences. Though I really don't have a clear sense of which side of zero we should expect this to be on. Probably it depends on how likely the assassination attempt is to succeed.)
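A toy numerical sketch of the structure this comment describes, using entirely made-up numbers just to show the shape of the argument: if each side's political payoff is high variance but roughly zero in expectation, while every world pays the same shared social cost (chaos, precedent, loss of life), then the overall expectation stays negative no matter how large the variance gets.

```python
import random

# Hypothetical numbers for illustration only.
def political_payoff():
    # High variance, mean ~0: a huge win or a huge loss, equally likely.
    return random.choice([+100.0, -100.0])

SHARED_SOCIAL_COST = 5.0  # paid in every world, whichever side is targeted

def net_outcome():
    return political_payoff() - SHARED_SOCIAL_COST

samples = [net_outcome() for _ in range(100_000)]
print(sum(samples) / len(samples))  # ~ -5.0: net negative in expectation
```

On these assumptions, raising the stakes only scales the variance; the sign of the expectation flips only if the political payoff's mean is systematically biased away from zero by more than the shared cost, which is the parenthetical point above.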


I assumed that raising the stakes meant asserting that certain outcomes differ by huge amounts in their expected values, e.g., that Trump getting elected is going to lead to the death of democracy. I don't think people feel much pressure to claim that the variance isn't that large.

But yes, I agree that is true in a model where you don't have an extremely high difference in expectation.

Jul 16 · Liked by Richard Y Chappell

> The obviously more reasonable grounds for being opposed to political violence is that (you reasonably expect that) political violence does not help the democratic cause.

Why can't we arrive here via:

Would we want unlawful killing without due process rights to become a universal law? I might be next, the people I think of as the "good guys" might be next, etc.

No matter the stakes.

author

Yeah, I think that's part of the story. Not so much a purely hypothetical concern about universalizing the behavior (there are possible scenarios where rebellion is genuinely justified, after all, even if you couldn't "universalize" rebellion in general), but more about how violent precedents can be generally destabilizing in harmful and unsettling ways, against the backdrop of a generally stable but potentially teetering polity.


While I obviously agree with the general statement that stakes can be high in theory, in practice, as you stated, there seems to be very good reason to take extreme caution about ends-justifying-means type behavior regarding things that don't pass the necessary certainty thresholds.

This would apply to people who 1) don't take the outside view on things, 2) do take the outside view and still come to very extreme conclusions (say, Yudkowsky on AI risk), or 3) are normal people who just don't understand how rules can be helpful (e.g., someone who shoots the president). This seems like it could be the case quite a lot, especially among those whose intuitions lean heavily utilitarian and for whom following other rules doesn't hold weight.

Stakes never being high or certain enough to justify drastic action may just be a noble lie that will lead to better consequences in general. Because of that, I think we should taboo this ends-justify-the-means talk. While this rule may not work for actual extreme cases (say, if AI risk actually does require drastic action), I just think that, even if people have this rule, they will reject it in the necessary cases.

author

Problems arise when one combines: (i) factually high stakes, (ii) non-absolutist ethics, and (iii) naive instrumentalist decision procedures.

(iii) is very clearly the true source of the problem. You could try to lie about (i) or (ii), but the lies are so transparently absurd you will never convince everyone. Why not try the truth about (iii) instead? Seems far more promising to me.

Jul 15 · edited Jul 15 · Liked by Richard Y Chappell

I think religious tolerance is a counterexample here that shows the benefits.

The religious wars of the past show that priors (or at least what look like them after a dogmatic upbringing) are distributed in such a way that if people act rationally in response to their beliefs and aren't subject to pressure to underestimate the stakes, bad things might happen. It seems likely that people vary enough in their priors that, absent such pressure, you would end up with warring factions, as we see so often in extremist religious conflict.

And I don't think one should necessarily model what is going on as an attempt to rationally convince people. For instance, the idea of religious tolerance -- implicitly, the view that it's not so important if people have the wrong theological views -- wasn't spread so much by rational argumentation as (much like the religious beliefs themselves) by social pressure.

I mean, on traditional views about damnation and theology, it really is essentially infinitely bad to let people worship the wrong god (including on their own deontic divine command moral theory), and virtually unlimited horror could be justified in the name of conversion to the true faith. But we basically pressured people into accepting religious tolerance -- basically the attitude that it was no big deal that people didn't believe in the true faith -- even though objectively no good argument for it was used to persuade them.

author

I agree that social pressure often does more work than rational argumentation. Still, I take it that the good arguments for religious tolerance invoke (i) social contract game theory -- you don't want heretics in power to condemn everyone to Hell by forcing them into sharing their *false* views, and (ii) epistemic humility: you can't just assume that your religious view is the *actually correct* one (given epistemic symmetry with other believers of different creeds), so having to win out in the marketplace of ideas is maybe more truth-conducive than any other neutrally-describable method (especially the "might makes right" approach rejected in (i), above).

If people don't understand the good reasons for liberal tolerance, I'm all for social pressure requiring tolerance anyway. They don't have to understand the reasons. I'm just skeptical that the pressure should take the specific form of relying on dubious background claims, i.e. asserting false reasons.


Maybe a better way of putting the point is that outside of some narrow academic contexts (and maybe not even there) it's not actually possible to separate the social meaning of asserting that something isn't a big deal from the social pressure to treat it as such.

I don't think you could have said "It's horrible and unthinkable that those non-Christians are raising children who won't know the true faith and will be damned for eternity," insisted that this was incredibly important, and also said, "but we catch more flies with honey, so pretend you aren't horrified and act like friendly neighbors so that we can convert more of them."

People just can't handle that level of deception/internal conflict.

Jul 15 · Liked by Richard Y Chappell

I was just having a conversation at dinner last night about the comparison between vegetarians tolerating meat eaters and religious toleration. I was arguing for something like the view that “you catch more flies with honey” explains this toleration.


Certainly, but I just think it's hard to actually persuade someone to treat another person pleasantly if they truly think of them as a horrible moral monster.

Yes, there are certain workarounds. The Christians have the "we're all sinners" thing, but it's limited. I mean, imagine trying to get someone to do that with someone they thought was a child molester.


I think it matters that with religion and vegetarianism, there are large populations that share the awful views, so individual pressure seems more likely to result in people cutting off social relations with you than in changing their views. With a child molester, if we assume that most others share our views rather than those of the child molester, the pressure could actually change behavior for the better. But if you imagine you've just been dropped into a society full of child molesters, who are open with each other about their behavior, I think it's more intuitive that attempts at direct pressure are unlikely to be effective.

I guess the interesting question is how the phase transition happens, as with the rise of abolitionism about slavery (which probably depended in part on the rise of successful societies where no socially prominent people had wealth dependent on enslaving others, so that they could mutually support each other in starting to put pressure on the slaveholding elites in other places).


As far as I know, I don't feel certain about anything except the possibility that I have assumptions that I'm not aware of. It seems that things can be understood functionally, but not ontologically.


There's a chance that our universe exists in a false vacuum state. If it ever tunnels out of this local minimum, the whole universe would be destroyed, probably one of the worst outcomes possible. How reasonable is it to spend lots of money (and force lots of other people to spend their money too) to research this and mitigate any risks (like those from running large colliders)?

author

Off-topic. Feel free to repost under the comments to 'X-risk Agnosticism' if you want to discuss that:

https://www.goodthoughts.blog/p/x-risk-agnosticism
