
145 points | jakozaur | 1 comment
xcf_seetan ◴[] No.45670626[source]
>attackers can exploit local LLMs

I thought local LLMs meant they run on local computers, without being exposed to the internet.

If an attacker can exploit a local LLM, that means they have already compromised your system, and there are better things they could do than trick the LLM into handing over what they can already get directly.

replies(4): >>45670663 #>>45671212 #>>45671663 #>>45672038 #
simonw ◴[] No.45670663[source]
Local LLMs may not be exposed to the internet, but if you want them to do something useful you're likely going to hook them up to an internet-accessing harness such as OpenCode or Claude Code or Codex CLI.
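What the harness adds, concretely: the model itself never touches the network; it emits tool calls, and the harness performs the actual I/O and feeds the results back into the model's context. A minimal sketch of that loop, assuming an OpenAI-compatible local server such as Ollama listening on localhost:11434; the model name and the fetch_url tool are illustrative, not any particular harness's API:

    # Minimal sketch: a purely local model gains internet access only through
    # the harness. Assumes an OpenAI-compatible local server (e.g. Ollama) on
    # localhost:11434; the model name and fetch_url tool are hypothetical.
    import json
    import urllib.request

    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

    TOOLS = [{
        "type": "function",
        "function": {
            "name": "fetch_url",
            "description": "Fetch a web page and return its text",
            "parameters": {
                "type": "object",
                "properties": {"url": {"type": "string"}},
                "required": ["url"],
            },
        },
    }]

    def fetch_url(url: str) -> str:
        # The harness, not the model, performs the network request; whatever
        # the page contains is fed straight back into the model's context.
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.read().decode("utf-8", errors="replace")[:4000]

    messages = [{"role": "user", "content": "Summarize https://example.com"}]
    first = client.chat.completions.create(
        model="qwen2.5-coder", messages=messages, tools=TOOLS)
    msg = first.choices[0].message

    if msg.tool_calls:
        call = msg.tool_calls[0]
        page = fetch_url(json.loads(call.function.arguments)["url"])
        messages += [msg, {"role": "tool", "tool_call_id": call.id, "content": page}]
        final = client.chat.completions.create(
            model="qwen2.5-coder", messages=messages, tools=TOOLS)
        print(final.choices[0].message.content)

This is also why "local" doesn't imply "isolated": whatever a fetched page contains ends up in the same context that drives the model's next tool call.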
replies(4): >>45670688 #>>45670770 #>>45670832 #>>45670880 #
Der_Einzige ◴[] No.45670770[source]
No, I'm not going to do those things. I find extreme utility in the things I can do with an LLM in an air-gapped environment.

I will fight and die on the hill that "LLMs don't need the internet to be useful"

replies(2): >>45670828 #>>45670993 #
simonw ◴[] No.45670828[source]
Yeah, that's fair. A good LLM (gpt-oss-20b, even some of the smaller Qwens) can be entirely useful offline. I've got good results from Mistral Small 3.2 offline on a flight helping write Python and JavaScript, for example.

Having Claude Code able to try out JSON APIs and pip install extra packages is a huge upgrade from that though!