
A non-anthropomorphized view of LLMs

(addxorrol.blogspot.com)
475 points by zdw | 1 comment
Al-Khwarizmi No.44487564
I have the technical background to know how LLMs work, but I still find it pointless to not anthropomorphize, at least to an extent.

The language of "generator that stochastically produces the next word" is just not very useful when you're talking about, e.g., an LLM that is answering complex world-modeling questions or generating a creative story. It's at the wrong level of abstraction, just as if you were discussing a UI events API and you were talking about zeros and ones, or voltages in transistors. Technically fine, but totally useless for reaching any conclusion about the high-level system.
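
To make that low-level description concrete, here is a minimal sketch of what "stochastically produces the next word" means mechanically. The tiny bigram table is made up purely for illustration; a real LLM replaces the table lookup with a learned neural network conditioned on the whole context, but the sampling loop has the same shape:

  import random

  # Toy "model": each word maps to a probability distribution over the
  # next word. A real LLM computes this distribution with a neural
  # network over thousands of context tokens, but the sampling loop
  # below is the same idea.
  NEXT_WORD_PROBS = {
      "the":   {"cat": 0.5, "dog": 0.3, "model": 0.2},
      "cat":   {"sat": 0.7, "ran": 0.3},
      "dog":   {"ran": 0.6, "sat": 0.4},
      "model": {"sat": 0.5, "ran": 0.5},
      "sat":   {"down.": 1.0},
      "ran":   {"away.": 1.0},
  }

  def sample_next(word: str) -> str:
      """Draw the next word from the model's conditional distribution."""
      dist = NEXT_WORD_PROBS.get(word, {"the": 1.0})
      words = list(dist)
      return random.choices(words, weights=[dist[w] for w in words])[0]

  def generate(start: str, n_words: int = 6) -> str:
      out = [start]
      for _ in range(n_words):
          out.append(sample_next(out[-1]))
      return " ".join(out)

  print(generate("the"))  # e.g. "the dog ran away. the cat sat"

Every output of the toy program is "technically just sampling," and the same is literally true of an LLM; the point is that this description tells you nothing about why one sampled continuation is a coherent story and another is not.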

We need a higher abstraction level to talk about higher-level phenomena in LLMs as well, and the problem is that we have no idea what happens internally at those higher abstraction levels. So, considering that LLMs somehow imitate humans (at least in terms of output), anthropomorphization is the best abstraction we have; hence people naturally resort to it when discussing what LLMs can do.

grey-area No.44487608
On the contrary, anthropomorphism is IMO the main problem with narratives around LLMs: people genuinely talk about them thinking and reasoning when they are doing nothing of the sort (actively encouraged by the companies selling them), and it completely distorts discussions of their use and perceptions of their utility.
godelski No.44494983
Serendipitous name...

In part I agree with the parent.

  >> it pointless to *not* anthropomorphize, at least to an extent.
I agree that it is pointless to not anthropomorphize, because we are humans and we will do it automatically, willingly or unwillingly.

On the other hand, it generates bias. This bias can lead to errors.

So the real answer (imo) is that it is fine to anthropomorphize, but recognize that while doing so can provide utility and help us understand, it is WRONG. Recognizing that it is not right and cannot be right gives us a constant reminder to reevaluate. Use it, but double check, and keep checking to make sure you understand the limitations of the analogy: when and where it applies, where it doesn't, and most importantly, where you don't know whether it does or not. The last is the most important, because it helps us form hypotheses that are likely to be testable (likely, not always; also, much easier said than done).

So I pick a "grey area". Anthropomorphization is a tool that can be helpful, but like any tool, it isn't universal; there is no one-size-fits-all tool. One of the most important things for any scientist is to become an expert at the tools they use, and it's one of the most critical skills of *any expert*. So while I agree with you that we should be careful with anthropomorphization, I disagree that it is useless and can never provide information. But I do agree that quite frequently the wrong tool is used for the right job. Sometimes, hacking it just isn't good enough.