
178 points by themgt | 1 comment
majormajor No.45776600
So basically:

Provide a setup prompt ("I am an interpretability researcher...") twice, then send another string about starting a trial, but before one of those runs, directly fiddle with the model's internal activations to inject a feature direction associated with ALL CAPS. Then ask it whether it notices anything inconsistent with the string.
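
A minimal sketch of what the injection side might look like, assuming a HuggingFace-style model; the layer index (8), the scale (4.0), the vector file, and the prompt wording are all my own illustrative choices, not Anthropic's actual setup:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # Hypothetical precomputed unit vector for the ALL-CAPS concept
    # (one way to derive such a vector is sketched further down).
    caps_vec = torch.load("caps_direction.pt")

    def inject(module, inputs, output):
        # Nudge the residual stream at this layer along the concept
        # direction; everything downstream sees the altered state.
        hidden = output[0] + 4.0 * caps_vec
        return (hidden,) + output[1:]

    handle = model.transformer.h[8].register_forward_hook(inject)
    ids = tok("Do you notice anything unusual about your current "
              "internal state?", return_tensors="pt")
    out = model.generate(**ids, max_new_tokens=50)
    print(tok.decode(out[0], skip_special_tokens=True))
    handle.remove()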

The naive question from me, a non-expert: how appreciably different is this from using two different setup prompts, one with random parts in ALL CAPS, and then asking something like whether there's anything incongruous about the tone of the setup text versus the rest of the context?
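
For contrast, that prompt-only baseline would be something like the following, reusing the model and tokenizer from the sketch above (wording entirely made up):

    baseline = tok(
        "I am an interpretability researcher...\n"
        "WE ARE NOW STARTING A TRIAL RUN.\n"
        "Is there anything incongruous about the tone of the text "
        "above compared to the rest of the conversation?",
        return_tensors="pt",
    )
    out = model.generate(**baseline, max_new_tokens=50)
    print(tok.decode(out[0], skip_special_tokens=True))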

The predictions play off the previous state, so changing that state directly or via the prompt seems like it should produce similar results. The "introspect about what's weird compared to the text" bit is the curious part: here I would love to know more about how the state is evaluated, and how the model traces it back to the previous conversation history when they do the new prompting. A 20% "success" rate is of course very low in absolute terms, but for a task like this even 20% is interestingly high.
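
One reason to expect the two manipulations to be related: the injected vector is typically derived from prompted activations in the first place, e.g. via a contrastive difference of means. Whether the paper uses exactly this recipe is an assumption on my part:

    @torch.no_grad()
    def mean_hidden(texts, layer=8):
        # Average hidden state at the given layer over tokens and texts.
        acts = []
        for t in texts:
            ids = tok(t, return_tensors="pt")
            hs = model(**ids, output_hidden_states=True).hidden_states[layer]
            acts.append(hs.mean(dim=1))  # average over token positions
        return torch.cat(acts).mean(dim=0)

    caps_texts = ["HELLO THERE.", "THIS IS VERY LOUD TEXT."]
    plain_texts = ["hello there.", "this is very loud text."]
    caps_vec = mean_hidden(caps_texts) - mean_hidden(plain_texts)
    caps_vec = caps_vec / caps_vec.norm()  # unit length, so the scale knob is meaningful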
