/top/
/new/
/best/
/ask/
/show/
/job/
^
slacker news
login
about
←back to thread
Comet AI browser can get prompt injected from any site, drain your bank account
(twitter.com)
645 points
helloplanets
| 1 comments |
24 Aug 25 15:14 UTC
|
HN request time: 0s
|
source
Show context
alexbecker
◴[
24 Aug 25 16:32 UTC
]
No.
45005567
[source]
▶
>>45004846 (OP)
#
I doubt Comet was using any protections beyond some tuned instructions, but one thing I learned at USENIX Security a couple weeks ago is that nobody has any idea how to deal with prompt injection in a multi-turn/agentic setting.
replies(1):
>>45005703
#
hoppp
◴[
24 Aug 25 16:47 UTC
]
No.
45005703
[source]
▶
>>45005567
#
Maybe treat prompts like it was SQL strings, they need to be sanitized and preferably never exposed to external dynamic user input
replies(7):
>>45005949
#
>>45006195
#
>>45006203
#
>>45006809
#
>>45007940
#
>>45008268
#
>>45011823
#
1.
alexbecker
◴[
24 Aug 25 17:49 UTC
]
No.
45006195
[source]
▶
>>45005703
#
The problem is there is no real way to separate "data" and "instructions" in LLMs like there is for SQL
ID:
GO
↑