←back to thread

198 points todsacerdoti | 1 comments | | HN request time: 0.246s | source
Show context
behnamoh ◴[] No.45942195[source]
Any ideas on how to block LLMs from reading/analyzing a PDF? I don't want to submit a paper to journals only for them to use ChatGPT to review it...

(it has happened before)

Edit: I'm starting to get downvoted. Perhaps by the lazy-ass journal reviewrs?

replies(5): >>45942397 #>>45942415 #>>45942504 #>>45942588 #>>45942700 #
1. zb3 ◴[] No.45942504[source]
There's a way - inject garbage prompts, like in the content meant to be the example - humans might understand that this is in an "example" context, but LLMs are likely to fail as prompt injection is an unsolved problem.