
A non-anthropomorphized view of LLMs

(addxorrol.blogspot.com)
475 points | zdw | 1 comment | source
Al-Khwarizmi ◴[] No.44487564[source]
I have the technical knowledge to know how LLMs work, but I still find it pointless to not anthropomorphize, at least to an extent.

The language of "generator that stochastically produces the next word" is just not very useful when you're talking about, e.g., an LLM that is answering complex world-modeling questions or generating a creative story. It's at the wrong level of abstraction, just as if you were discussing a UI events API in terms of zeros and ones, or voltages in transistors. Technically accurate, but useless for reaching any conclusion about the high-level system.
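
(For concreteness, here is a minimal toy sketch of what "stochastically produces the next word" refers to at that low level. The vocabulary, logits, and function names are made up for illustration; a real LLM would compute the logits from its learned weights over a huge vocabulary.)

    import numpy as np

    vocab = ["the", "cat", "sat", "on", "mat", "."]

    def softmax(logits):
        # Numerically stable softmax: turn logits into a probability distribution.
        z = np.exp(logits - np.max(logits))
        return z / z.sum()

    def sample_next_token(logits, temperature=1.0, seed=0):
        # Scale logits by temperature, convert to probabilities, draw one token index.
        rng = np.random.default_rng(seed)
        probs = softmax(np.asarray(logits, dtype=float) / temperature)
        return int(rng.choice(len(probs), p=probs))

    # Made-up logits standing in for a model's output after some context;
    # everything an LLM "says" is, at this level, a sequence of such draws.
    logits = [0.1, 0.2, 0.3, 0.1, 2.5, 0.4]
    print(vocab[sample_next_token(logits)])  # likely "mat", but it's a sample

This is the level the "stochastic generator" framing operates at, and it tells you about as much about a model's world-modeling behavior as transistor voltages tell you about a UI events API.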

We need a higher abstraction level to talk about higher-level phenomena in LLMs as well, and the problem is that we have no idea what happens internally at those higher levels. So, given that LLMs somehow imitate humans (at least in their output), anthropomorphization is the best abstraction we have, and people naturally resort to it when discussing what LLMs can do.

replies(18): >>44487608 #>>44488300 #>>44488365 #>>44488371 #>>44488604 #>>44489139 #>>44489395 #>>44489588 #>>44490039 #>>44491378 #>>44491959 #>>44492492 #>>44493555 #>>44493572 #>>44494027 #>>44494120 #>>44497425 #>>44500290 #
pmg101 ◴[] No.44490039[source]
I remember Dawkins talking about the "intentional stance" when discussing genes in The Selfish Gene.

It's flat wrong to describe genes as having any agency. However, it's a useful and easily understood shorthand to describe them that way, rather than spelling out the full formulation every time: "organisms that tend to possess these genes tend towards these behaviours."

Sometimes, once we understand the low level of abstraction, we should stop talking and thinking at that level so our brains can reach the higher one.

replies(1): >>44490876 #
jibal ◴[] No.44490876[source]
The intentional stance was Daniel Dennett's creation and a major part of his life's work. There are actually (exactly) three stances in his model: the physical stance, the design stance, and the intentional stance.

https://en.wikipedia.org/wiki/Intentional_stance

I think the design stance is appropriate for understanding and predicting LLM behavior, and the intentional stance is not.

replies(1): >>44495255 #
pmg101 ◴[] No.44495255[source]
Thanks for the correction. I guess both thinkers took a somewhat similar position, and I somehow remembered Dawkins's argument but Dennett's term. The term is memorable.

Do you want to describe WHY you think the design stance is appropriate here but the intentional stance is not?