
146 points jakozaur | 1 comment
1. yalogin No.45671397
This is not new, right? LLMs are dumb; they do whatever they are told, so the orchestration before and after the LLM execution holds the key. Even setting security aside, ChatGPT's or Gemini's value is not just the LLM itself but the productization of it: the layers before and after execution. Similarly, if one is running local LLMs, it's imperative to have proper security rules around the execution.
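A minimal sketch of what such before-and-after layers might look like, assuming a hypothetical call_model function standing in for the actual LLM call; the guard names and blocking rules here are illustrative assumptions, not any product's real implementation:

    # Sketch of orchestration layers wrapped around an LLM call.
    # call_model is a hypothetical stand-in for whatever actually runs
    # the model (an API client, a local llama.cpp binding, etc.).
    import re

    # Illustrative input rule: block obvious prompt-injection phrasing.
    BLOCKED_PATTERNS = [
        re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    ]

    def pre_guard(prompt: str) -> str:
        """Layer before execution: reject or sanitize suspicious input."""
        for pattern in BLOCKED_PATTERNS:
            if pattern.search(prompt):
                raise ValueError("prompt rejected by input guard")
        return prompt.strip()

    def post_guard(output: str) -> str:
        """Layer after execution: redact anything resembling a secret."""
        return re.sub(r"(api[_-]?key\s*[:=]\s*)\S+", r"\1[REDACTED]",
                      output, flags=re.IGNORECASE)

    def call_model(prompt: str) -> str:
        # Hypothetical stand-in for the actual LLM invocation.
        return f"model response to: {prompt}"

    def guarded_completion(user_prompt: str) -> str:
        """The model is dumb; the wrapper around it holds the key."""
        return post_guard(call_model(pre_guard(user_prompt)))

    if __name__ == "__main__":
        print(guarded_completion("Summarize this document."))

The point of the sketch is that the value and the safety both live in guarded_completion, not in call_model: swap the model out and the orchestration layers still do the real work.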