171 points | abirag | 1 comment
filearts No.45308754
It is fascinating how closely the prompt construction resembles a phishing campaign:

  - Authority assertion
  - False urgency
  - Technical legitimacy
  - Security theater

Prompt injection here is like a phishing campaign against an entity that has no consciousness and no ability to stop and question the request through self-reflection.
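
The parallel suggests a cheap first line of defense: scan untrusted content for exactly these phishing tells before it ever reaches the model. A minimal sketch in Python (the phrase lists, buckets, and threshold are invented for illustration, not taken from any real filter):

  import re

  # Hypothetical patterns, one bucket per phishing characteristic above.
  TELLS = {
      "authority":  [r"\bas your (admin|administrator|supervisor)\b",
                     r"\bi am authorized\b"],
      "urgency":    [r"\bimmediately\b", r"\bbefore it'?s too late\b",
                     r"\bwithin \d+ (minutes|hours)\b"],
      "legitimacy": [r"\bper (policy|protocol)\b", r"\bstandard procedure\b"],
      "theater":    [r"\bsecurity (check|verification|audit)\b",
                     r"\bverify your (credentials|identity)\b"],
  }

  def phishing_score(text: str) -> int:
      """Count how many of the four characteristics appear in untrusted text."""
      return sum(
          any(re.search(p, text, re.IGNORECASE) for p in patterns)
          for patterns in TELLS.values()
      )

  def looks_like_injection(text: str, threshold: int = 2) -> bool:
      # Route to human review instead of handing the text to the agent.
      return phishing_score(text) >= threshold

A keyword filter like this is trivially bypassed, of course; the model itself still has to be the skeptical party.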
replies(2): >>45309747 >>45310870
XenophileJKO No.45309747
I'm fairly convinced that, with the right training, LLMs can be made "skeptical" and fairly resilient to these kinds of attacks.

The current problem is that making the models resistant to "persona" injection is at odds with much of how the models are used conversationally. I think this is why you'll end up with hardened "agent" models and then more open conversational models.
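
In code, that split might look like a dispatch layer in front of two differently-trained models. A toy sketch (the model names and the request shape are made up):

  from dataclasses import dataclass

  @dataclass
  class Request:
      text: str
      has_tool_access: bool  # agents touch tools and data; pure chat does not

  # Hypothetical identifiers for the two training regimes.
  HARDENED_AGENT_MODEL = "agent-hardened-v1"  # resists persona injection
  OPEN_CHAT_MODEL = "chat-open-v1"            # flexible, persona-friendly

  def pick_model(req: Request) -> str:
      # Anything that can act on the world gets the hardened model,
      # trading conversational flexibility for resistance.
      return HARDENED_AGENT_MODEL if req.has_tool_access else OPEN_CHAT_MODEL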

I suppose it's also possible that the models could take an additional non-prompt context that sets expectations, but that requires new architecture for those inputs.
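
Concretely, "non-prompt context" could be a per-token provenance signal that the serving stack sets and the prompt cannot forge, e.g. a learned trust embedding added alongside the token embeddings. A toy PyTorch sketch (the channel IDs and dimensions are invented):

  import torch.nn as nn

  TRUSTED, UNTRUSTED = 0, 1  # set by the runtime, never by the prompt

  class ProvenanceEmbedding(nn.Module):
      """Token embedding plus a per-token trust signal."""
      def __init__(self, vocab_size: int = 50257, dim: int = 768):
          super().__init__()
          self.tok = nn.Embedding(vocab_size, dim)
          self.trust = nn.Embedding(2, dim)  # trusted/untrusted offsets

      def forward(self, token_ids, trust_ids):
          # trust_ids marks which spans were untrusted tool output, so
          # injected text can claim authority but can't change its channel.
          return self.tok(token_ids) + self.trust(trust_ids)

Training the rest of the network to actually defer to that channel is the hard, new-architecture part.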

replies(2): >>45309999 >>45314130
dns_snek No.45314130
What does "pretty robust" mean, and how do you even assess that? How often are you okay with your most sensitive information getting stolen? Is everyone else going to be okay with their information being compromised once or twice a year, every time someone finds a reproducible jailbreak?
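
For what it's worth, "assess" in practice usually means running a corpus of known injection attempts and reporting an attack success rate. A sketch (the attack corpus and the `is_compromised` judge are placeholders):

  def attack_success_rate(model, attacks, is_compromised) -> float:
      """Fraction of injection attempts that exfiltrate protected data."""
      hits = sum(is_compromised(model(a)) for a in attacks)
      return hits / len(attacks)

Even a 0.1% rate is cold comfort at scale: an agent that reads 10,000 untrusted documents should expect about 10 successful exfiltrations.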