Also from the article: For example, a small model could easily flag the presence of eval() in the generated code, even if the primary model was tricked into generating it.
People are losing their critical thinking. AI is great, yes, but there's no need to throw it like a grenade at every problem: nothing in that snippet, or the surrounding bits of the article, needs an entire model-on-model architecture to resolve. Keyword filters and the kind of input sanitizing we all learned back in the golden years of SQL injection attacks would cover it.

But these are the lines of BS coming for your CTOs, spinning tales about the need for their own prompt-engineered fine-tunes w/ laser-sighted tokens that will run as edge models and shoot down everything from context-injected eval() responses to phishing scams and more, all of which requires a monthly/annual LoRA subscription to stay current on the attacks. At least, if this article smells the way I think it does.
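For what it's worth, here's a minimal sketch of the kind of no-model check that covers the article's eval() example. It assumes the generated code is Python; the denylist and function name are mine, not the article's, and a real deployment would want a broader policy:

    import ast

    # Hypothetical denylist of call names to flag in generated code.
    BANNED_CALLS = {"eval", "exec", "compile", "__import__"}

    def flag_banned_calls(source: str) -> list[str]:
        """Return names of banned calls found in generated Python source."""
        try:
            tree = ast.parse(source)
        except SyntaxError:
            # Unparseable output is suspicious on its own; fall back
            # to a crude substring scan.
            return [name for name in BANNED_CALLS if name + "(" in source]
        findings = []
        for node in ast.walk(tree):
            # Match direct calls like eval(...); aliased or attribute
            # calls would need more work.
            if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
                if node.func.id in BANNED_CALLS:
                    findings.append(node.func.id)
        return findings

    print(flag_banned_calls("x = eval(input())"))  # ['eval']

A dozen lines of stdlib, no second model, no fine-tune, no subscription.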