
A non-anthropomorphized view of LLMs

(addxorrol.blogspot.com)
475 points by zdw | 1 comment
Al-Khwarizmi No.44487564
I have the technical knowledge to know how LLMs work, but I still find it pointless to not anthropomorphize, at least to an extent.

The language of "generator that stochastically produces the next word" is just not very useful when you're talking about, e.g., an LLM that is answering complex world-modeling questions or generating a creative story. It's at the wrong level of abstraction, just as if you were discussing a UI events API in terms of zeros and ones, or voltages in transistors. Technically accurate, but useless for reaching any conclusion about the high-level system.
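
For concreteness, the low-level view being dismissed here is roughly the loop below. This is only a minimal sketch: model() is a hypothetical stand-in for whatever maps a token sequence to next-token logits, not any particular real API.

    import math, random

    def sample_next(logits, temperature=1.0):
        # Softmax over the logits, then draw one token index at random.
        scaled = [x / temperature for x in logits]
        m = max(scaled)
        weights = [math.exp(x - m) for x in scaled]
        return random.choices(range(len(weights)), weights=weights, k=1)[0]

    def generate(model, tokens, n_steps):
        # The entire "low-level" description: score, sample, append, repeat.
        for _ in range(n_steps):
            tokens.append(sample_next(model(tokens)))
        return tokens

Everything interesting (the world modeling, the story) lives inside model(), which is exactly why the sampling loop by itself tells you so little about the high-level behavior.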

We need a higher abstraction level to talk about higher level phenomena in LLMs as well, and the problem is that we have no idea what happens internally at those higher abstraction levels. So, considering that LLMs somehow imitate humans (at least in terms of output), anthropomorphization is the best abstraction we have, hence people naturally resort to it when discussing what LLMs can do.

replies(18): >>44487608 #>>44488300 #>>44488365 #>>44488371 #>>44488604 #>>44489139 #>>44489395 #>>44489588 #>>44490039 #>>44491378 #>>44491959 #>>44492492 #>>44493555 #>>44493572 #>>44494027 #>>44494120 #>>44497425 #>>44500290 #
tempfile No.44488604
The "point" of not anthropomorphizing is to refrain from judgement until a more solid abstraction appears. The problem with explaining LLMs in terms of human behaviour is that, while we don't clearly understand what the LLM is doing, we understand human cognition even less! There is literally no predictive power in the abstraction "The LLM is thinking like I am thinking". It gives you no mechanism to evaluate what tasks the LLM "should" be able to do.

Seriously, try it. Why don't LLMs get frustrated with you if you ask them the same question repeatedly? A human would. Why are LLMs so happy to give contradictory answers, as long as you are very careful not to highlight the contradictory facts? Why do earlier models behave worse on reasoning tasks than later ones? These are features nobody, anywhere understands. So why make the (imo phenomenally large) leap to "well, it's clearly just a brain"?

It is like someone inventing the aeroplane, and someone else looking at it and saying "oh, it's flying, I guess it's a bird". It's not a bird!

replies(2): >>44488702 #>>44495703 #
CuriousSkeptic No.44488702
> Why don't LLMs get frustrated with you if you ask them the same question repeatedly?

To be fair, I have had a strong sense of Gemini in particular becoming a lot more frustrated with me than GPT or Claude.

Yesterday I had it assuring me that it was doing a great job, that it was just me not understanding the challenge, and that it would break it down step by step just to make it obvious to me (only to repeat the same errors, but still).

For now I’ve just interpreted it as me reacting to the lower amount of sycophancy.

replies(3): >>44489811 #>>44490982 #>>44491762 #
danielbln No.44489811
In addition, when the boss man asks for the same thing repeatedly, the underling might get frustrated as hell, but they won't be telling that to the boss.