A non-anthropomorphized view of LLMs

(addxorrol.blogspot.com)
475 points | by zdw (addxorrol.blogspot.com)
Al-Khwarizmi No.44487564
I have the technical knowledge to know how LLMs work, but I still find it pointless not to anthropomorphize, at least to an extent.

The language of "a generator that stochastically produces the next word" is just not very useful when you're talking about, e.g., an LLM that is answering complex world-modeling questions or generating a creative story. It's at the wrong level of abstraction, just as if you were discussing a UI events API by talking about zeros and ones, or voltages in transistors. Technically correct, but totally useless for reaching any conclusion about the high-level system.
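To make the point concrete, here is roughly what "stochastically produces the next word" means mechanically. This is a minimal sketch with a toy three-word vocabulary and made-up logits (all names and numbers are illustrative; a real model computes logits with a neural network over tens of thousands of tokens):

```python
import math
import random

def softmax(logits):
    # Subtract the max for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(vocab, logits, rng):
    # Draw one token according to the softmax distribution over the logits.
    probs = softmax(logits)
    return rng.choices(vocab, weights=probs, k=1)[0]

# Toy vocabulary and invented logits for some fixed context.
vocab = ["mat", "roof", "keyboard"]
logits = [2.0, 1.0, 0.1]
rng = random.Random(0)
print(sample_next_token(vocab, logits, rng))
```

That loop is the whole story at the low level, which is exactly why it tells you nothing interesting about the high-level behavior, the same way transistor voltages tell you nothing about a UI events API.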

We need a higher abstraction level to talk about higher-level phenomena in LLMs as well, and the problem is that we have no idea what happens internally at those higher abstraction levels. So, considering that LLMs somehow imitate humans (at least in terms of output), anthropomorphization is the best abstraction we have, and people naturally resort to it when discussing what LLMs can do.

jll29 No.44494027
The details of how I talk about LLMs matter.

If I use human-related terminology as a shortcut, as some kind of macro for talking at a higher level or more efficiently about something I want to do, that might be okay.

What is not okay is talking in a way that implies intent, for example.

Compare:

  "The AI doesn't want to do that."
versus

  "The model doesn't do that with this prompt, or with any of the others we tried."
The latter way of talking is still high-level enough but avoids equating/confusing the name of a field with a sentient being.

Whenever I hear people say "an AI," I suggest they replace "AI" with "statistics" to make it obvious how problematic the anthropomorphisms may have become:

  *"The statistics doesn't want to do that."
dmitsuki No.44494181
The only reason that sounds weird to you is that you have the experience of being human. Human behavior is not magic; it's still just statistics. You go to the bathroom when you have to pee not because of some magical concept of consciousness, but because a receptor in your brain fires and starts the chain of events that makes you go. AIs are not magic, but nobody has sufficiently proven that we are somehow special either.