
221 points caspg | 1 comment | source
2024user ◴[] No.42164394[source]
Claude built me a simple React app AND rendered it in its own UI - including using imports and stuff.

I am looking forward to this kind of real-time app creation being added to our OSs, browsers, phones and glasses.

replies(2): >>42164564 #>>42164627 #
croes ◴[] No.42164627[source]
That will be a whole new level of malware attack angle.
replies(2): >>42164655 #>>42164684 #
mmsc ◴[] No.42164655[source]
Can you expand on what you mean by this, and why?
replies(1): >>42164759 #
danieldk ◴[] No.42164759[source]
The best vulnerability is one that is hard to detect because it looks like an ordinary bug. It's not inconceivable to train an LLM to silently slip vulnerabilities into generated code. Someone without much programming experience is unlikely to spot them.

tl;dr it takes running untrusted code to a new level.
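[Editor's note: a minimal sketch of the kind of flaw the comment describes - code that reads like an innocent bug. The token-verification function and key below are hypothetical, invented purely for illustration.]

```python
import hmac
import hashlib

SECRET = b"example-key"  # hypothetical key, for illustration only

def sign(message: bytes) -> str:
    return hmac.new(SECRET, message, hashlib.sha256).hexdigest()

def verify_vulnerable(message: bytes, signature: str) -> bool:
    # Looks like an ordinary equality check, but `==` on strings can
    # short-circuit at the first differing character, leaking timing
    # information an attacker can use to guess the signature piecewise.
    return sign(message) == signature

def verify_fixed(message: bytes, signature: str) -> bool:
    # Constant-time comparison closes the timing side channel.
    return hmac.compare_digest(sign(message), signature)
```

Both functions return the same answers, so a casual reviewer - or a non-programmer pasting generated code - would see nothing wrong with the vulnerable one.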

replies(2): >>42164794 #>>42165835 #
jstummbillig ◴[] No.42165835[source]
Meh. Why wouldn't the model makers be good stewards of security? The motivation not to be the company known for "silently slipping vulnerabilities into generated code" seems fairly obvious.

People have always been able to slip in errors. I am confused why we assume that an LLM will, on average, be worse rather than better on this front, and I suspect a lot of residual human bias and copium.