
146 points jakozaur | 1 comment
ineedasername No.45673195
Yes, of course if you can inject something into context there’s a lot that can be done. And anything running locally will require different security considerations than anything running remotely. Neither of these things makes for a paradox.

Also from the article:

> For example, a small model could easily flag the presence of eval() in the generated code, even if the primary model was tricked into generating it.

People are losing their critical thinking. AI is great, yes, but there’s no need to throw it like a grenade at every problem. Nothing in that snippet, or in the surrounding bits of the article, needs an entire model-on-model architecture to resolve: some keyword filters and input-sanitizing processes of the kind we learned way back in the golden years of SQL injection attacks would cover it. But these are the lines of BS coming for your CTOs, spinning them tales about needing their own prompt-engineered fine-tunes with laser-sighted tokens that will run as edge models and shoot down everything from context-injected eval() responses to phishing scams and more, all requiring a monthly/annual LoRA subscription to stay current on the attacks. At least, that’s the way this article smells to me.
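To be concrete, the article’s own example needs nothing fancier than this (a Python sketch; flag_generated_code and the three-pattern blocklist are just my illustration, not anything from the article):

    import re

    # Patterns that should never appear in model-generated code.
    BLOCKLIST = [
        r"\beval\s*\(",      # direct eval() calls
        r"\bexec\s*\(",      # exec() is just as dangerous
        r"__import__\s*\(",  # dynamic imports
    ]

    def flag_generated_code(code: str) -> list[str]:
        """Return every blocklist pattern found in the generated code."""
        return [p for p in BLOCKLIST if re.search(p, code)]

    # The article's case: the primary model was tricked into emitting eval().
    print(flag_generated_code("result = eval(user_input)"))
    # -> ['\\beval\\s*\\(']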

replies(1): >>45674042 #
1. gruez No.45674042
>Some keyword filters and input-sanitizing processes of the kind we learned way back in the golden years of SQL injection attacks

But that's the thing: keyword filters aren't enough, because you can smuggle hidden instructions in any number of ways that don't involve blacklisted words like "eval" or "ignore previous". Moreover, "back in the golden years of SQL injection attacks", keyword filters were a misguided fix for SQLi exploits precisely because they could so often be bypassed with escape characters, alternate encodings, and other shenanigans.
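E.g. (hypothetical Python, aimed at exactly the kind of regex blocklist sketched above), the dangerous builtin gets called twice here and the literal token "eval" never appears in the source:

    import builtins

    # The string "eval" is never written out, so a keyword filter
    # sees nothing, yet both lines call the dangerous builtin.
    f = getattr(builtins, "ev" + "al")  # name split across two strings
    print(f("6 * 7"))                   # -> 42

    g = vars(builtins)["lave"[::-1]]    # name stored reversed, flipped back
    print(g("6 * 7"))                   # -> 42

And that's before you get to payloads encoded in base64, in comments, or in data the generated code later fetches. A static blocklist is a speed bump, not a defense.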