
416 points by floverfelt | 3 comments
sebnukem2:
> hallucinations aren’t a bug of LLMs, they are a feature. Indeed they are the feature. All an LLM does is produce hallucinations, it’s just that we find some of them useful.

Nice.

tptacek:
In that framing, you can look at an agent as simply a filter on those hallucinations.
Lionga:
Isn't an "agent" just hallucinations layered on top of other random hallucinations to create new hallucinations?
tptacek:
No, that's exactly what an agent isn't. What makes an agent an agent is all the not-LLM code. When an agent generates Golang code, it runs the Go compiler, which, in the agent's architecture, is an extension of the agent. The Go compiler does not hallucinate.
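
To make that concrete, here is a minimal sketch in Go of the generate-compile-retry loop being described. The `llmGenerate` function is a hypothetical placeholder for a real model call; the only real machinery is shelling out to `go build`, which plays the deterministic, non-hallucinating filter role:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// llmGenerate is a hypothetical placeholder for an LLM API call.
// feedback carries compiler errors from the previous attempt so the
// model can correct itself.
func llmGenerate(prompt, feedback string) string {
	// ...call your model provider here...
	return "package main\n\nfunc main() {}\n"
}

func main() {
	dir, err := os.MkdirTemp("", "agent")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)
	src := filepath.Join(dir, "main.go")

	feedback := ""
	for attempt := 1; attempt <= 5; attempt++ {
		code := llmGenerate("write a small CLI tool", feedback)
		if err := os.WriteFile(src, []byte(code), 0o644); err != nil {
			panic(err)
		}

		// The non-LLM part: the Go compiler either accepts the code
		// or rejects it with errors. It never hallucinates.
		out, err := exec.Command("go", "build", "-o", os.DevNull, src).CombinedOutput()
		if err == nil {
			fmt.Printf("compiled cleanly on attempt %d\n", attempt)
			return
		}
		feedback = string(out) // compiler errors become the next prompt's context
	}
	fmt.Println("gave up: no compiling candidate within 5 attempts")
}
```

The same shape works with any deterministic oracle in the filter position: a test runner, a linter, or a type checker, each of which the agent treats as an extension of itself.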
Lionga:
The most common "agent" is just an LLM run in a while loop ("multi-step agent") [1] — roughly the shape sketched below.

[1] https://huggingface.co/docs/smolagents/conceptual_guides/int...
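
For contrast, a bare-bones version of that multi-step loop, again sketched in Go. `llmStep` and the tool table are invented placeholders, not the smolagents API (which is Python); the point is only the shape of the loop:

```go
package main

import "fmt"

// action is what the model decides to do next: call a tool, or stop.
type action struct {
	tool string // a tool name, or "final" to stop
	arg  string
}

// llmStep is a hypothetical placeholder: given the transcript so far,
// the model picks the next action.
func llmStep(transcript []string) action {
	// ...call your model provider here...
	return action{tool: "final", arg: "done"}
}

func main() {
	// Deterministic tools the loop can invoke; their outputs, not the
	// model's raw text, are what ground each step.
	tools := map[string]func(string) string{
		"search": func(q string) string { return "results for " + q },
		"shell":  func(c string) string { return "output of " + c },
	}

	var transcript []string
	for i := 0; i < 10; i++ { // the while loop, with a step cap
		a := llmStep(transcript)
		if a.tool == "final" {
			fmt.Println("answer:", a.arg)
			return
		}
		fn, ok := tools[a.tool]
		if !ok {
			transcript = append(transcript, "error: unknown tool "+a.tool)
			continue
		}
		observation := fn(a.arg)
		transcript = append(transcript, a.tool+": "+observation)
	}
	fmt.Println("step budget exhausted")
}
```

Even in this reduced form, the loop, the tool table, the unknown-tool check, and the step cap are all not-LLM code, which is where the two positions in this thread actually diverge.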

tptacek:
That's not how Claude Code works (or Gemini, Cursor, or Codex).