
371 points | ulrischa | 1 comment
1. Ozzie_osman No.43238413
Great article, but it doesn't address the potentially _most_ dangerous form of mistake: an adversarial LLM deliberately injecting vulnerabilities. I expect this to become an attack vector soon, as people figure out ways to accomplish it.