
A non-anthropomorphized view of LLMs

(addxorrol.blogspot.com)
475 points | zdw | 1 comments
Al-Khwarizmi ◴[] No.44487564[source]
I have the technical knowledge to know how LLMs work, but I still find it pointless to not anthropomorphize, at least to an extent.

The language of "generator that stochastically produces the next word" is just not very useful when you're talking about, e.g., an LLM that is answering complex world modeling questions or generating a creative story. It's at the wrong level of abstraction, just as if you were discussing a UI events API and you were talking about zeros and ones, or voltages in transistors. Technically fine but totally useless for reaching any conclusion about the high-level system.
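
(For concreteness, "stochastically produces the next word" just means sampling the next token from a probability distribution conditioned on the context so far. A toy Python sketch of that sampling step, with a made-up probability table rather than any real model:)

    import random

    def next_token(context):
        # Hypothetical probabilities a model might assign after this context;
        # a real LLM computes such a distribution over its whole vocabulary.
        candidates = {"mat": 0.6, "roof": 0.3, "moon": 0.1}
        tokens, weights = zip(*candidates.items())
        return random.choices(tokens, weights=weights, k=1)[0]

    print("the cat sat on the", next_token("the cat sat on the"))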

We need a higher abstraction level to talk about higher level phenomena in LLMs as well, and the problem is that we have no idea what happens internally at those higher abstraction levels. So, considering that LLMs somehow imitate humans (at least in terms of output), anthropomorphization is the best abstraction we have, hence people naturally resort to it when discussing what LLMs can do.

replies(18): >>44487608 #>>44488300 #>>44488365 #>>44488371 #>>44488604 #>>44489139 #>>44489395 #>>44489588 #>>44490039 #>>44491378 #>>44491959 #>>44492492 #>>44493555 #>>44493572 #>>44494027 #>>44494120 #>>44497425 #>>44500290 #
grey-area ◴[] No.44487608[source]
On the contrary, anthropomorphism IMO is the main problem with narratives around LLMs - people genuinely talk about them thinking and reasoning when they are doing nothing of the sort (actively encouraged by the companies selling them), and it is completely distorting discussions of their use and perceptions of their utility.
replies(13): >>44487706 #>>44487747 #>>44488024 #>>44488109 #>>44489358 #>>44490100 #>>44491745 #>>44493260 #>>44494551 #>>44494981 #>>44494983 #>>44495236 #>>44496260 #
fenomas ◴[] No.44488109[source]
When I see these debates it's always the other way around - one person speaks colloquially about an LLM's behavior, and then somebody else jumps on them for supposedly believing the model is conscious, just because the speaker said "the model thinks.." or "the model knows.." or whatever.

To be honest the impression I've gotten is that some people are just very interested in talking about not anthropomorphizing AI, and less interested in talking about AI behaviors, so they see conversations about the latter as a chance to talk about the former.

replies(4): >>44488326 #>>44489402 #>>44489673 #>>44492369 #
latexr ◴[] No.44488326[source]
Respectfully, that is a reflection of the places you hang out in (like HN) and not the reality of the population.

Outside the technical world it gets much worse. There are people who killed themselves because of LLMs, people who are in love with them, people who genuinely believe they have “awakened” their own private ChatGPT instance into AGI and are eschewing the real humans in their lives.

replies(2): >>44488412 #>>44489321 #
Xss3 ◴[] No.44489321[source]
The other day a good friend of mine with mental health issues remarked that "his" chatgpt understands him better than most of his friends and gives him better advice than his therapist.

It's going to take a lot to get him out of that mindset and frankly I'm dreading trying to compare and contrast imperfect human behaviour and friendships with a sycophantic AI.

replies(2): >>44493792 #>>44495382 #
lelanthran ◴[] No.44495382[source]
> The other day a good friend of mine with mental health issues remarked that "his" chatgpt understands him better than most of his friends and gives him better advice than his therapist.

The therapist thing might be correct, though. You can send a well-adjusted person to three renowned therapists and get three different reasons for why they need to continue sessions.

No therapist ever says "Congratulations, you're perfectly normal. Now go away and come back when you have a real problem." Statistically it is vanishingly unlikely that every person who ever visited a therapist is in need of a second (or more) visit.

The main problem with therapy is a lack of objectivity[1]. When people talk about what their sessions resulted in, it's always "My problem is that I'm too perfect". I've known actual bullies whose therapist apparently told them that they are too submissive and need to be more assertive.

The secondary problem is that all diagnosis is based on self-reported metrics of the subject. All improvement is equally based on self-reported metrics. This is no different from prayer.

You don't have a medical practice there; you've got an Imam and a sophisticated but still medically-insured way to plead with thunderstorms[2]. I fail to see how an LLM (or even the Rogerian M-x doctor in Emacs) will do worse on average.
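
(The Emacs doctor is an ELIZA descendant: it mostly mirrors your statements back as questions. A toy sketch of that reflection trick, illustrative only and not the actual Emacs code:)

    SWAPS = {"i": "you", "my": "your", "me": "you", "am": "are"}

    def reflect(statement):
        # Swap first-person words for second-person ones and turn the
        # statement back into a question, ELIZA-style.
        words = [SWAPS.get(w.lower(), w) for w in statement.rstrip(".!?").split()]
        return "Why do you say that " + " ".join(words) + "?"

    print(reflect("I am anxious about my work."))
    # -> Why do you say that you are anxious about your work?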

After all, if you're at a therapist and you're doing most of the talking, how would an LLM perform worse than the therapist?

----------------

[1] If I'm at a therapist, and they're asking me to do most of the talking, I would damn well feel that I am not getting my money's worth. I'd be there primarily to learn (and practice a little) whatever tools they can teach me to handle my $PROBLEM. I don't want someone to vent at; I want to learn coping mechanisms and mitigation strategies.

[2] This is not an obscure reference.

replies(1): >>44497017 #
medvezhenok ◴[] No.44497017[source]
Yup, this problem is why I think all therapists should ideally know behavioral genetics and evolutionary psychology (there is at least a plausibly objective measure there: the dissonance between the ancestral environment in which the brain developed and the modern-day environment, and at least some psychological problems can be explained by it).

I am a fan of the "Beat Your Genes" podcast, and while some of the prescriptions can be a bit heavy-handed, most feel intuitively right. It approaches human problems as intelligent mammal problems, as opposed to something in a category of its own.