
A non-anthropomorphized view of LLMs

(addxorrol.blogspot.com)
475 points | zdw | 1 comment
Al-Khwarizmi ◴[] No.44487564[source]
I have the technical knowledge to know how LLMs work, but I still find it pointless to not anthropomorphize, at least to an extent.

The language of "generator that stochastically produces the next word" is just not very useful when you're talking about, e.g., an LLM that is answering complex world-modeling questions or generating a creative story. It's at the wrong level of abstraction, just as if you were discussing a UI events API in terms of zeros and ones, or voltages in transistors. Technically accurate, but useless for reaching any conclusion about the high-level system.
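For concreteness, here is roughly what "stochastically produces the next word" means at the low level being criticized: a model assigns scores to candidate next tokens, a softmax turns those scores into a probability distribution, and one token is sampled. (This is a minimal toy sketch; the candidate tokens and scores below are invented for illustration, not taken from any real model.)

```python
import math
import random

def softmax(scores):
    # Convert raw scores into probabilities that sum to 1.
    # Subtracting the max is a standard numerical-stability trick.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(candidates, scores):
    # Draw one token at random, weighted by its probability.
    probs = softmax(scores)
    return random.choices(candidates, weights=probs, k=1)[0]

# Hypothetical scores for tokens following "The cat sat on the"
candidates = ["mat", "roof", "keyboard"]
scores = [3.0, 1.5, 0.5]
print(sample_next_token(candidates, scores))
```

The point of the comment above is that while every LLM output is produced by a loop like this, the loop itself says nothing about why the sampled continuations form a coherent story or a correct answer, just as transistor voltages say nothing about a UI event handler.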

We need a higher abstraction level to talk about higher level phenomena in LLMs as well, and the problem is that we have no idea what happens internally at those higher abstraction levels. So, considering that LLMs somehow imitate humans (at least in terms of output), anthropomorphization is the best abstraction we have, hence people naturally resort to it when discussing what LLMs can do.

replies(18): >>44487608 #>>44488300 #>>44488365 #>>44488371 #>>44488604 #>>44489139 #>>44489395 #>>44489588 #>>44490039 #>>44491378 #>>44491959 #>>44492492 #>>44493555 #>>44493572 #>>44494027 #>>44494120 #>>44497425 #>>44500290 #
raincole ◴[] No.44488300[source]
I've said this before: we have been anthropomorphizing computers since the dawn of the information age.

- Read and write - Behaviors that separate humans from animals. Now used for input and output.

- Server and client - Human social roles. Now used to describe network architecture.

- Editor - Human occupation. Now a kind of software.

- Computer - Human occupation!

And I'm sure people referred to their cars and ships as 'her' before the invention of computers.

replies(2): >>44488385 #>>44488543 #
1. whilenot-dev ◴[] No.44488543[source]
I'm not convinced... we use these terms to assign roles, yes, but these roles describe a utility or assign a responsibility. That isn't anthropomorphizing anything; rather, it describes the usage of an inanimate object as a tool for us humans, which seems in line with history.

What's the utility or the responsibility of AI, what's its usage as a tool? If you ask me, it should be closer to serving insights than to "reasoning thoughts".