A non-anthropomorphized view of LLMs

(addxorrol.blogspot.com)
475 points by zdw | 6 comments
Al-Khwarizmi No.44487564
I have the technical knowledge to know how LLMs work, but I still find it pointless to not anthropomorphize, at least to an extent.

The language of "generator that stochastically produces the next word" is just not very useful when you're talking about, e.g., an LLM that is answering complex world-modeling questions or generating a creative story. It's at the wrong level of abstraction, just as if you were discussing a UI events API in terms of zeros and ones, or voltages in transistors. Technically accurate, but totally useless for reaching any conclusion about the high-level system.
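
To make the low-level view concrete: that "generator" really is just a probability distribution over the next token that you sample from. A minimal sketch in Python (toy vocabulary and made-up logits, not any real model's internals):

    import math, random

    def sample_next_token(logits, temperature=1.0):
        # Softmax with temperature, then draw one token from the distribution.
        scaled = [l / temperature for l in logits]
        m = max(scaled)  # subtract the max for numerical stability
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        probs = [e / total for e in exps]
        return random.choices(range(len(logits)), weights=probs, k=1)[0]

    vocab = ["the", "cat", "sat", "on", "mat"]   # toy vocabulary
    logits = [1.2, 0.3, 2.1, -0.5, 0.8]          # hypothetical model output
    print(vocab[sample_next_token(logits, temperature=0.7)])

Nothing at this level explains why the sampled continuations add up to a coherent story or a working world model, which is exactly the problem.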

We need a higher abstraction level to talk about higher-level phenomena in LLMs as well, and the problem is that we have no idea what happens internally at those higher abstraction levels. So, given that LLMs somehow imitate humans (at least in their output), anthropomorphization is the best abstraction we have, hence people naturally resort to it when discussing what LLMs can do.

raincole No.44488300
I've said this before: we have been anthropomorphizing computers since the dawn of the information age.

- Read and write - Behaviors that separate humans from animals. Now used for input and output.

- Server and client - Human social roles. Now used to describe network architecture.

- Editor - Human occupation. Now a kind of software.

- Computer - Human occupation!

And I'm sure people referred to their cars and ships as 'her' long before the invention of computers.

1. latexr No.44488385
You are conflating anthropomorphism with personification. They are not the same thing. No one believes their guitar or car or boat is alive and sentient when they give it a name or talk to or about it.

https://www.masterclass.com/articles/anthropomorphism-vs-per...

2. raincole No.44488576
But the author used "anthropomorphism" the same way as I did. I guess we both mean "personification" then.

> we talk about "behaviors", "ethical constraints", and "harmful actions in pursuit of their goals". All of these are anthropocentric concepts that - in my mind - do not apply to functions or other mathematical objects.

Talking about a program's "behaviors", "actions" or "goals" doesn't mean one believes the program is sentient. Only "ethical constraints" is suspiciously anthropomorphizing.

3. latexr No.44488617
> One talking about a program's "behaviors", "actions" or "goals" doesn't mean they believe the program is sentient.

Except that is exactly what we’re seeing with LLMs: people believing precisely that.

4. raincole No.44488924{3}
Perhaps a few mentally unhinged people do.

An anecdote: last year I hung out with a bunch of old classmates I hadn't seen for quite a while. None of them works in tech.

Surprisingly to me, all of them have ChatGPT installed on their phones.

And unsurprisingly to me, none of them treated it like an actual intelligence. That makes me wonder where those who think ChatGPT is sentient come from.

(It's a bit worrisome that several of them thought it worked "like Google search and Google translation combined", even back when ChatGPT couldn't do web search at all!)

5. latexr No.44489025{4}
> Perhaps a few mentally unhinged people do.

I think it’s more than a few, and the number is still rising; therein lies the issue.

Which is why it is paramount to talk about this now, when we may still turn the tide. LLMs can be useful, but it’s important to have the right mental model, understanding, expectations, and attitude towards them.

6. jibal No.44491083{4}
> Perhaps a few mentally unhinged people do.

This is a No True Scotsman fallacy. And it's radically factually wrong.

The rest of your comment is along the lines of the famous (but apocryphal) Pauline Kael line “I can’t believe Nixon won. I don’t know anyone who voted for him.”