
544 points by tosh | 1 comment
i_love_retros No.43466023
Any security risks running these Chinese LLMs on my local computer?
TrueDuality No.43466365
Always a possibility with custom runtimes, but the weights themselves are just tensor data and don't pose a malicious-code risk on their own. The asterisk is allowing the model to run arbitrary commands on your computer, but that is ALWAYS a massive risk with these tools, and it has nothing to do with who trained the model.
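One caveat worth making concrete: whether "weights alone" are safe depends on the container format. Plain tensor formats (safetensors, GGUF) carry no code, but pickle-based checkpoints (classic `torch.save` output) can execute arbitrary Python when loaded. Below is a minimal, stdlib-only sketch that sniffs a checkpoint's leading bytes to tell these apart; the function name and return labels are my own, not from any library.

```python
import struct

def sniff_checkpoint(path):
    """Guess a checkpoint's container format from its leading bytes.

    - GGUF files begin with the ASCII magic b"GGUF".
    - torch >= 1.6 saves a zip archive (magic b"PK\\x03\\x04") that
      wraps a pickle, so it inherits pickle's code-execution risk.
    - safetensors files begin with an unsigned 64-bit little-endian
      header length followed by a JSON header (first byte b"{").
    - bare pickle streams begin with the PROTO opcode 0x80; loading
      them can execute arbitrary code, which is the risk to avoid.
    """
    with open(path, "rb") as f:
        head = f.read(9)
    if head[:4] == b"GGUF":
        return "gguf"            # plain tensor data, no code
    if head[:4] == b"PK\x03\x04":
        return "zip-pickle"      # torch zip checkpoint: treat as pickle
    if len(head) >= 9:
        (header_len,) = struct.unpack("<Q", head[:8])
        if head[8:9] == b"{" and header_len > 0:
            return "safetensors" # plain tensor data, no code
    if head[:1] == b"\x80":
        return "pickle"          # unpickling can run arbitrary code
    return "unknown"
```

This is only a heuristic (a hostile file can fake any magic bytes), but it is a cheap first check before deciding how, or whether, to load a downloaded checkpoint.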

I could have missed a paper, but it seems very unlikely that even closed-door research has reached the stage of maliciously tuning models to surreptitiously backdoor someone's machine in a way that wouldn't be easy to catch.

Your threat model may vary.