Most active commenters
  • simonw(4)

←back to thread

145 points jakozaur | 12 comments | | HN request time: 0.011s | source | bottom
Show context
xcf_seetan ◴[] No.45670626[source]
>attackers can exploit local LLMs

I thought that local LLMs means they run on local computers, without being exposed to the internet.

If an attacker can exploit a local LLM, means it already compromised you system and there are better things they can do than trick the LLM to get what they can get directly.

replies(4): >>45670663 #>>45671212 #>>45671663 #>>45672038 #
1. simonw ◴[] No.45670663[source]
Local LLMs may not be exposed to the internet, but if you want them to do something useful you're likely going to hook them up to an internet-accessing harness such as OpenCode or Claude Code or Codex CLI.
replies(4): >>45670688 #>>45670770 #>>45670832 #>>45670880 #
2. ianbutler ◴[] No.45670688[source]
yes and I think better local sandboxing can help out in this case, it’s something ive been thinking about a lot and more and more seems to be the right way to run these things
3. Der_Einzige ◴[] No.45670770[source]
No, I'm not going to do those things. I find extreme utility in applications that I can do with an LLM in an air-gapped environment.

I will fight and die on the hill that "LLMs don't need the internet to be useful"

replies(2): >>45670828 #>>45670993 #
4. simonw ◴[] No.45670828[source]
Yeah, that's fair. A good LLM (gpt-oss-20b, even some of the smaller Qwens) can be entirely useful offline. I've got good results from Mistral Small 3.2 offline on a flight helping write Python and JavaScript, for example.

Having Claude Code able to try out JSON APIs and pip install extra packages is a huge upgrade from that though!

5. xcf_seetan ◴[] No.45670832[source]
Fair enough. Forgive my probably ignorance, but if Claude Code can be attacked like this, doesn’t that means that also foundation LLMs are vulnerable to this, and is not a local LLM thing?
replies(1): >>45671312 #
6. europa ◴[] No.45670880[source]
An LLM can be an “internet in a box” — without the internet!
7. furyofantares ◴[] No.45670993[source]
Is anyone fighting you on that hill?

Someone who finds it useful to have a local llm ingest internet content is not contrary to you finding uses that don't.

replies(1): >>45671484 #
8. simonw ◴[] No.45671312[source]
It's not an LLM thing at all. Prompt injection has always been an attack against software that uses LLMs. LLMs on their own can't be attacked meaningfully (well, you can jailbreak them and trick them into telling you the recipe for meth but that's another issue entirely). A system that wraps an LLM with the ability for it to request tool calls like "run this in bash" is where this stuff gets dangerous.
9. kgwgk ◴[] No.45671484{3}[source]
> Local LLMs may not be exposed to the internet, but if you want them to do something useful you're likely going to hook them up to an internet-accessing harness such as OpenCode or Claude Code or Codex CLI.

is not "someone finding useful to have a local llm ingest internet content" - it was someone suggesting that nothing useful can be done without internet access.

replies(2): >>45671504 #>>45671668 #
10. simonw ◴[] No.45671504{4}[source]
Yeah, I retracted my statement that they can't do anything useful without the internet here: https://news.ycombinator.com/item?id=45670828
11. furyofantares ◴[] No.45671668{4}[source]
I guess I don't read that how you do. It says you're likely to do that, which I take to mean that's a majority use case, not that it's the only use case.
replies(1): >>45671720 #
12. kgwgk ◴[] No.45671720{5}[source]
It also said "but" and "if you want them to do something useful" which made the "likely" sound much less innocent.