A non-anthropomorphized view of LLMs

(addxorrol.blogspot.com)
475 points by zdw | 1 comment | source
1. msvana No.44497129
This reminds me of the idea that LLMs are simulators. Given the current state (the prompt + the previously generated text), they generate the next state (the next token) using rules derived from training data.
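
As a rough illustration of that state-transition view, here is a minimal sketch of an autoregressive sampling loop using the Hugging Face transformers library. The model name "gpt2", the prompt, and the loop length are purely illustrative assumptions, not anything from the article:

    # Sketch of the "simulator" view: the model maps the current state
    # (prompt + tokens generated so far) to a distribution over next tokens,
    # and we repeatedly sample from it to advance the state.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")   # illustrative model choice
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    state = tokenizer("The robot said:", return_tensors="pt").input_ids

    with torch.no_grad():
        for _ in range(20):
            logits = model(state).logits[:, -1, :]          # scores for the next token only
            probs = torch.softmax(logits, dim=-1)           # "rules" learned from training data
            next_token = torch.multinomial(probs, 1)        # sample one transition
            state = torch.cat([state, next_token], dim=-1)  # new state = old state + token

    print(tokenizer.decode(state[0]))

Nothing in this loop is an agent; the "agent" only appears in the text the loop happens to produce.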

As simulators, LLMs can simulate many things, including agents that exhibit human-like properties. But LLMs themselves are not agents.

More on this idea here: https://www.alignmentforum.org/posts/vJFdjigzmcXMhNTsx/agi-s...

This perspective makes a lot of sense to me. Still, I wouldn't avoid anthropomorphization altogether. First, in some cases it can be a useful mental tool for understanding certain aspects of LLMs. Second, there is a lot of uncertainty about how LLMs work, so I would stay epistemically humble. That humility cuts both ways: it's equally wrong to claim, for example, that LLMs are definitely conscious.

On the other hand, if someone argues against anthropomorphizing LLMs, I would avoid phrasing it as "it's just matrix multiplication." The article demonstrates pretty well why that framing is a bad idea.