←back to thread

143 points abirag | 3 comments | | HN request time: 0.609s | source
Show context
filearts ◴[] No.45308754[source]
It is fascinating how similar the prompt construction was to a phishing campaign in terms of characteristics.

  - Authority assertion
  - False urgency
  - Technical legitimacy
  - Security theater
Prompt injection here is like a phishing campaign against an entity with no consciousness or ability to stop and question through self-reflection.
replies(2): >>45309747 #>>45310870 #
1. XenophileJKO ◴[] No.45309747[source]
I'm fairly convinced that with the right training.. the ability of the LLM to be "skeptical" and resilient to these kinds of attacks will be pretty robust.

The current problem is that making the models resistant to "persona" injection is in opposition to much of how the models are also used conversationally. I think this is why you'll end up with hardened "agent" models and then more open conversational models.

I suppose it is also possible that the models can have an additional non-prompt context applied that sets expectations, but that requires new architecture for those inputs.

replies(1): >>45309999 #
2. BarryMilo ◴[] No.45309999[source]
Isn't the whole problem that it's nigh-impossible to isolate context from input?
replies(1): >>45311690 #
3. Terr_ ◴[] No.45311690[source]
Yeah, ultimately the LLM is guess_what_could_come_next(document) in a loop with some I/O either doing something with the latest guess or else appending more content to the document from elsewhere.

Any distinctions inside the document involve the land of statistical patterns and weights, rather than hard auditable logic.