←back to thread

544 points tosh | 1 comments | | HN request time: 0s | source
Show context
i_love_retros ◴[] No.43466023[source]
Any security risks running these Chinese LLMs on my local computer?
replies(5): >>43466365 #>>43466369 #>>43466411 #>>43468902 #>>43469724 #
1. samuel ◴[] No.43468902[source]
It's an interesting question! In my opinion, if you don't use tools it's very unlikely it can do any harm. I doubt the model files can be engineered to overflow llama.cpp or ollama, or cause any other damage, directly.

But if you use tools, for example for extending its knowledge through web searches, it could be used to exfiltrate information. It could do it by visiting some specially crafted url's to leak parts of your prompts (this includes the contents of documents added to them with RAG).

If given an interpreter, even if sandboxed, could try to do some kind of sabotage or "call home" with locally gathered information, obviously disguised as safe "regular" code.

It's unlikely that a current model that is runnable in "domestic" hardware could have those capabilities, but in the future these concerns will be more relevant.