They say it only works about 20% of the time; otherwise it fails to detect anything or the model hallucinates. So they're fiddling with the internals of the network until it says something they expect, and then they call it a success?
Could it be related to attention? If they "inject" a concept that's outside the model's normal processing distribution, maybe some kind of internal equilibrium (found during training) gets perturbed, causing the embedding for that concept to become over-inflated in some layers? And the attention mechanism simply starts attending more to it => "notices"?
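Very rough numpy sketch of the kind of thing I mean (toy dimensions, a random "concept" direction, random query/key projections; this is my own toy, not the paper's setup): blow up one token's residual-stream activation with an out-of-distribution vector and a single softmax attention head will typically start dumping far more weight on that position.

    import numpy as np

    rng = np.random.default_rng(0)
    d_model, n_tokens = 64, 32

    # Toy residual-stream activations feeding a single attention head.
    resid = rng.normal(size=(n_tokens, d_model))

    # Fixed random query/key projections, shared by both runs so they're comparable.
    W_q = rng.normal(size=(d_model, d_model)) / np.sqrt(d_model)
    W_k = rng.normal(size=(d_model, d_model)) / np.sqrt(d_model)

    def mean_attention_to(resid, target):
        # Average softmax attention the *other* tokens place on position `target`.
        q, k = resid @ W_q, resid @ W_k
        scores = q @ k.T / np.sqrt(d_model)
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        return np.delete(weights[:, target], target).mean()

    target = 3
    baseline = mean_attention_to(resid, target)

    # "Inject" a random concept direction at one position, scaled well beyond the
    # typical activation norm, i.e. deliberately out of distribution.
    concept = rng.normal(size=d_model)
    concept *= 8.0 * np.linalg.norm(resid, axis=-1).mean() / np.linalg.norm(concept)
    injected = resid.copy()
    injected[target] += concept

    perturbed = mean_attention_to(injected, target)
    print(f"mean attention to token {target}: {baseline:.3f} -> {perturbed:.3f}")

A single random head obviously says nothing about a real trained model, but that's the flavor of "noticing" I have in mind: the inflated direction just dominates the dot products, no introspection required.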
I'm not sure that proves they possess a "genuine capacity to monitor and control their own internal states".