Is that correct? I thought the Roko's Basilisk post was just seen as really stupid. Agreed that "Lena" is a great, chilling story though.
He knows that can't possibly work, right? Implicitly it assumes perfect invulnerability to any method of coercion, exploitation, subversion, or suffering that can be invented by an intelligence sufficiently superhuman to have escaped its natal light cone.
There may exist forms of life in this universe for which such an assumption is safe. Humanity circa 2024 seems most unlikely to be among them.
Acausal blackmail only works if one agent U predicts the likely future (or otherwise not-yet-causally-connected) existence of another agent V, who would act such that, if U’s actions don’t accord with V’s preferences, V’s actions will eventually harm U’s interests. The whole mechanism therefore depends on U actually predicting the possible existence of V and V’s blackmail.
If V exerts a causal influence on U in order to carry out the blackmail, that’s just ordinary coercion. And if U doesn’t anticipate the existence (and preferences) of V, then U won’t cooperate with any such attempt at acausal blackmail.
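To make that structure concrete, here’s a minimal toy sketch in Python (the payoff numbers and the prediction probability are illustrative assumptions of mine, not anything canonical). The point it shows: V’s punishment only reaches U’s decision through U’s own model of V, so at p = 0 the “threat” exerts no influence at all.

```python
def u_expected_utility(action: str, p_v_exists: float) -> float:
    """Expected utility for U: the predicted punishment from a future V
    enters only via U's own credence that V will exist (p_v_exists)."""
    base = {"comply": -1.0, "refuse": 0.0}[action]       # complying costs U something
    punishment = -10.0 if action == "refuse" else 0.0    # V would punish refusal
    return base + p_v_exists * punishment

def u_choose(p_v_exists: float) -> str:
    return max(("comply", "refuse"),
               key=lambda a: u_expected_utility(a, p_v_exists))

# If U never predicts V's existence, the threat has no channel to act through:
print(u_choose(0.0))  # -> refuse
# Only U's own prediction of V carries the threat into the present:
print(u_choose(0.5))  # -> comply
```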
(… is “blackmail” really the right word? It isn’t like there’s a threat to reveal a secret, which I typically think of as central to the notion of blackmail.)
Now, if it’s irrational to do so, then it’s irrational to do so, even though it is possible. But I’m not so sure it is irrational. When considering situations with things as powerful and oppositional as that, unless one has a full, solid theory of acausal trade ready and has shown that it is beneficial, it is probably best to blanket-refuse all acausal threats, so that they don’t influence what actually happens here.
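Extending the earlier toy sketch (same caveat: the numbers are mine, purely illustrative): if V can predict U’s policy, then a blanket refuse-all-threats policy removes V’s incentive to threaten in the first place, since following through on a punishment is a pure cost to V.

```python
EXTORT_GAIN = 5.0   # what V gains if U complies
PUNISH_COST = 0.5   # carrying out the punishment costs V something

def v_expected_gain(u_policy) -> float:
    """V's gain from issuing a threat, given V's prediction of U's policy."""
    if u_policy(threatened=True) == "comply":
        return EXTORT_GAIN      # threat works; no punishment needed
    return -PUNISH_COST         # threat ignored; following through is pure cost

def responsive_u(threatened: bool) -> str:
    return "comply" if threatened else "refuse"

def committed_refuser_u(threatened: bool) -> str:
    return "refuse"             # the blanket precommitment: ignore all threats

print(v_expected_gain(responsive_u) > 0)         # True  -> V threatens
print(v_expected_gain(committed_refuser_u) > 0)  # False -> threatening doesn't pay
```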
Unfortunately (or perhaps fortunately, given how we would misuse such an ability), strong precommitments are not available to humans; our ability to self-modify is vague and bounded. But we probably should build such precommitments into our organizations and other intelligent tools.