780 points rexpository | 6 comments
1. xrd No.44504585
I have been reading HN for years. The exploits used to be so clever, such incredible feats of engineering. LLM exploits are the equivalent of "write a prompt that can trick a toddler."
replies(3): >>44505581 >>44506142 >>44507852
2. nixpulvis No.44505581
And the discussion used to be informative, offering perspective rather than being this reactionary.

I'm legitimately disappointed in the discourse on this thread. And I'm not at all bullish on LLMs.

replies(2): >>44506266 >>44507384
3. No.44506142
4. krainboltgreene No.44506266
That's because we're watching the equivalent of handing many toddlers a blowtorch. If you don't freak out in that scenario, what could possibly move you?
5. No.44507384
6. neuroticnews25 No.44507852
Basic SQLi, XSS, or buffer overflow attacks are equally trivial and stem from the same underlying problem of confusing instructions with data. Sophistication and creativity arise from bypassing mitigations and chaining together multiple vulnerabilities. I think we'll see the same with prompt injections as the arms race progresses, as sketched below.
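
A minimal Python/sqlite3 sketch of that instructions-vs-data confusion (the users table and the inputs here are invented for illustration):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

    user_input = "' OR '1'='1"  # attacker-supplied "data"

    # Vulnerable: data is spliced into the instruction stream, so the
    # parser treats the attacker's input as part of the SQL program.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = '" + user_input + "'"
    ).fetchall()
    print(rows)  # dumps every row: [('alice', 'hunter2')]

    # Mitigated: a parameterized query keeps instructions (the SQL text)
    # and data (the bound values) in separate channels.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = ?", (user_input,)
    ).fetchall()
    print(rows)  # []: the input is matched as a literal name, not parsed

Prompt injection is arguably the same bug without an equivalent of the ? placeholder yet: instructions and data arrive in a single token stream, which is why current mitigations resemble the pre-parameterization era of SQLi.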