
A non-anthropomorphized view of LLMs

(addxorrol.blogspot.com)
475 points by zdw | 1 comment
jillesvangurp ◴[] No.44488298[source]
People anthropomorphize just about anything around them. They talk about inanimate objects like ships and cars as if they were persons. And of course animals are well in scope for this too, even ones that show little to no sign of being able to reciprocate the relationship (e.g. an ant). People even talk to their plants.

It's what we do. We can't help ourselves. There's nothing crazy about it, and most people are perfectly well aware that their car doesn't love them back.

LLMs are not conscious because, unlike human brains, they don't learn or adapt (yet). They get trained once and then become read-only entities, so they don't really adapt to you over time. Even so, LLMs can fake a personality pretty well, and with some clever context engineering and alignment they've pretty much made the Turing test irrelevant, at least over the course of a short conversation. They can answer just about any question from memory in a way that is eerily plausible, and with the help of some tools the reasoning models are actually pretty damn good.
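
To make the "read-only" point concrete, here's a minimal sketch (a toy, not any real model API; names like frozen_model are hypothetical): the parameters never change after training, and any apparent memory or adaptation comes from re-sending an ever-growing context.

```python
# Toy sketch (all names hypothetical): a trained LLM is a pure function
# of its input. Calling it updates no internal state; the only thing
# that "adapts" between turns is the context we re-send each time.

def frozen_model(context: str) -> str:
    """Stand-in for a trained, read-only LLM."""
    return f"[reply conditioned on {len(context)} chars of context]"

history = "System: You are a helpful assistant.\n"  # the "context engineering" part
for user_turn in ["Hi, I'm Ada.", "What's my name?"]:
    history += f"User: {user_turn}\n"
    reply = frozen_model(history)  # weights untouched; only the input grows
    history += f"Assistant: {reply}\n"
    print(reply)
```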

Anthropomorphism was kind of a foregone conclusion the moment we created computers, or even started thinking about creating them. With LLMs it's pretty much impossible not to anthropomorphize, because they've been intentionally designed to imitate human communication. That doesn't mean we've created AGI yet; for that we need more capability. But at the same time, the learning processes we use to create LLMs are clearly inspired by how we learn ourselves. Our understanding of how that works is far from perfect, but it's yielding results. From here to some intelligent thing that can adapt and learn transferable skills is no longer unimaginable.

The short-term impact is that LLMs are highly useful tools with an interface intentionally similar to how we'd engage with other people. We can talk and it listens, or write and it understands, and then it synthesizes some kind of response, asks questions, or uses tools. The end result is quite a bit beyond what we used to be able to expect from computers, and people need very little training to use them.
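
The "uses tools" part is usually just a loop around the model. Here's a minimal sketch (again a toy with hypothetical names, not any vendor's tool-calling API): the harness inspects each model output, executes any tool request, and feeds the result back into the context until a plain answer comes out.

```python
# Toy sketch of a tool-use loop (hypothetical protocol, not a real API).
import datetime

def toy_model(context: str) -> str:
    """Stand-in for an LLM that may request a tool instead of answering."""
    if "TOOL_RESULT" not in context:
        return "CALL_TOOL:clock"  # the model decides it needs a tool
    return "It is " + context.rsplit("TOOL_RESULT: ", 1)[-1].strip()

TOOLS = {"clock": lambda: datetime.datetime.now().isoformat(timespec="seconds")}

context = "User: What time is it?\n"
while True:
    out = toy_model(context)
    if out.startswith("CALL_TOOL:"):
        name = out.split(":", 1)[1]
        context += f"TOOL_RESULT: {TOOLS[name]()}\n"  # feed the result back
    else:
        print(out)  # final answer reaches the user
        break
```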

replies(2): >>44488410 #>>44488461 #
1. latexr ◴[] No.44488410[source]
> People anthropomorphize just about anything around them.

They do not; you are mixing up terms.

> People talk about inanimate objects like they are persons. Ships, cars, etc.

Which is called “personification”, and is a different concept from anthropomorphism.

Effectively no one really thinks their car is alive. Plenty of people think the LLM they use is conscious.

https://www.masterclass.com/articles/anthropomorphism-vs-per...