
A non-anthropomorphized view of LLMs

(addxorrol.blogspot.com)
475 points by zdw | 1 comment | source
peeters ◴[] No.44490149[source]
> The moment that people ascribe properties such as "consciousness" or "ethics" or "values" or "morals" to these learnt mappings is where I tend to get lost. We are speaking about a big recurrence equation that produces a new word, and that stops producing words if we don't crank the shaft.

If that's the argument, then to my mind the more pertinent question is whether you should be anthropomorphizing humans, Larry Ellison or not.
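
For readers who want the mechanical picture the quoted passage is gesturing at, here is a minimal sketch of the "recurrence that emits one word at a time and stops when you stop cranking" view. Everything in it is a hypothetical toy: the tiny vocabulary, the uniform next_token_distribution stand-in, and the generate loop are illustrations only, not anything from the linked post; a real LLM replaces the stand-in with a learnt neural mapping over a huge vocabulary.

    import random

    # Toy illustration of the "big recurrence equation" view of an LLM:
    # context -> distribution over next tokens -> sample -> append -> repeat.
    # Hypothetical toy vocabulary; a real model has tens of thousands of tokens.
    VOCAB = ["the", "cat", "sat", "on", "mat", "<eos>"]

    def next_token_distribution(context):
        """Stand-in for the learnt mapping from context to next-token probabilities."""
        # A real LLM computes this with a neural network; here we fake it uniformly.
        return [1.0 / len(VOCAB)] * len(VOCAB)

    def generate(prompt, max_tokens=10):
        tokens = list(prompt)
        for _ in range(max_tokens):              # each iteration is one "crank of the shaft"
            probs = next_token_distribution(tokens)
            token = random.choices(VOCAB, weights=probs, k=1)[0]
            if token == "<eos>":                 # the recurrence can emit a stop token...
                break
            tokens.append(token)
        return tokens                            # ...otherwise it stops only when we stop cranking

    print(generate(["the"]))

The point of the sketch is only that generation is an external loop around a stateless mapping; whether that licenses or forbids talk of "values" is exactly what the thread is arguing about.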

replies(1): >>44495323 #
1. th0ma5 ◴[] No.44495323[source]
I think you ought to, as he is human, but I respect your desire to question it!