
A non-anthropomorphized view of LLMs (addxorrol.blogspot.com)
475 points by zdw | 7 comments
Al-Khwarizmi (No.44487564):
I have the technical knowledge to know how LLMs work, but I still find it pointless not to anthropomorphize them, at least to an extent.

The language of "generator that stochastically produces the next word" is just not very useful when you're talking about, e.g., an LLM that is answering complex world-modeling questions or generating a creative story. It's at the wrong level of abstraction, just as if you were discussing a UI events API in terms of zeros and ones, or of voltages in transistors. Technically accurate, but totally useless for reaching any conclusion about the high-level system.
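
To make that concrete: "stochastically produces the next word" boils down to a sampling loop like the minimal Python sketch below. Here `model` is a hypothetical stand-in for the actual network (any function from a token sequence to next-token scores), not any real API.

    import math
    import random

    def sample_next_token(logits, temperature=1.0):
        """Turn raw scores into a probability distribution and draw one token id."""
        scaled = [l / temperature for l in logits]
        m = max(scaled)  # subtract the max for numerical stability
        weights = [math.exp(s - m) for s in scaled]
        total = sum(weights)
        probs = [w / total for w in weights]
        return random.choices(range(len(probs)), weights=probs, k=1)[0]

    def generate(model, tokens, n_steps):
        """Repeatedly append one sampled token; this loop is the whole 'generator'."""
        for _ in range(n_steps):
            tokens = tokens + [sample_next_token(model(tokens))]
        return tokens

    # e.g. with a toy 50-token vocabulary "model" that scores every token equally:
    # generate(lambda ts: [0.0] * 50, [1, 2, 3], n_steps=10)

Nothing in this loop helps you reason about whether the output is a sound world model or a coherent story, which is exactly why it's the wrong abstraction level for those questions.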

We need a higher level of abstraction to talk about higher-level phenomena in LLMs as well, and the problem is that we have no idea what happens internally at those levels. So, given that LLMs somehow imitate humans (at least in terms of output), anthropomorphization is the best abstraction we have, and people naturally resort to it when discussing what LLMs can do.

grey-area (No.44487608):
On the contrary, anthropomorphism is IMO the main problem with narratives around LLMs: people genuinely talk about them "thinking" and "reasoning" when they are doing nothing of the sort (a framing actively encouraged by the companies selling them), and it completely distorts discussions of their use and perceptions of their utility.
amelius (No.44488024):
I don't agree. Most LLMs have been trained on human data, so it is best to talk about these models in a human way.
4ndrewl (No.44488060):
Even the verb 'trained' is contentious wrt anthropomorphism.
amelius (No.44488289):
Somewhat true, but rodents can also be trained ...
4ndrewl (No.44488396):
Rodents aren't functions though?
FeepingCreature (No.44488917):
Every computable system, even a stateful one, can be reformulated as a function.

If IO can be functional, I don't see why mice can't.
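
Concretely, the standard move: rewrite a stateful process as a pure step function from (state, input) to (new state, output), and thread the state through explicitly. A toy Python sketch, where the counter and all names in it are hypothetical:

    from typing import List, Tuple

    State = int  # a deliberately tiny state space, just for illustration

    def step(state: State, event: str) -> Tuple[State, str]:
        """A stateful counter reformulated as a pure function: no mutation,
        the 'state' is just another argument and another return value."""
        if event == "inc":
            state = state + 1
        return state, f"count={state}"

    def run(events: List[str]) -> List[str]:
        """Fold the step function over an input stream, threading state by hand."""
        state, outputs = 0, []
        for e in events:
            state, out = step(state, e)
            outputs.append(out)
        return outputs

    # run(["inc", "inc", "noop"]) == ["count=1", "count=2", "count=2"]

A mouse is a vastly bigger state space and a far messier step function, but the claim is only that nothing in principle blocks the same reformulation.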

psychoslave (No.44489313):
Well, that's a strong claim of equivalence between computable models and reality.

The consensus view is rather that no map fully matches the territory; put otherwise, the territory includes ontological components that exceed even the most sophisticated map that could ever be built.

FeepingCreature (No.44489349):
I believe the consensus view is that physics is computable.
4ndrewl (No.44489882):
Thanks. I think the original point about the word 'trained' being contentious still stands, as evidenced by this thread :)
tempfile (No.44489966):
So you think a rodent is a function?
FeepingCreature (No.44491062):
I think that I am a function.