←back to thread

146 points jakozaur | 1 comments | | HN request time: 0.213s | source
Show context
splittydev ◴[] No.45670263[source]
All of these are incredibly obvious. If you have even the slightest idea of what you're doing and review the code before deploying it to prod, this will never succeed.

If you have absolutely no idea what you're doing, well, then it doesn't really matter in the end, does it? You're never gonna recognize any security vulnerabilities (as has happened many times with LLM-assisted "no-code" platforms and without any actual malicious intent), and you're going to deploy unsafe code either way.

replies(2): >>45670324 #>>45671085 #
tcdent ◴[] No.45670324[source]
Sure, you can simplify these observations into just codegen. But the real observation is not that these models are more susceptible to fail when generating code, but that they are more susceptible to jailbreak-type attacks that most people have come to expect to be handled by post training.

Having access to open models is great, and even if their capabilities are somewhat lower than the closed-source SoTA models, and we should be aware of the differences in behavior.

replies(2): >>45673892 #>>45673954 #
1. ◴[] No.45673954[source]