A non-anthropomorphized view of LLMs

(addxorrol.blogspot.com)
475 points by zdw | 1 comment
1. deadbabe (No.44489924)
How strongly a person anthropomorphizes LLMs is inversely related to how well they understand them.

Once you dispel the magic, it naturally becomes hard to use words related to consciousness or thinking. You will probably come to think of an LLM as something more like a search engine: you give it an input and get back a probable output. Maybe LLMs should be rebranded as “word engines”?

Regardless, anthropomorphization is not helpful. By using human terms to describe LLMs, you harm the layperson’s ability to truly understand what an LLM is, while also cheapening what it means to be human by suggesting we’ve solved consciousness. Just stop it. LLMs do not think: given enough time and patience, you could compute their output by hand, using their weights and embeddings to do all the math manually. It would be a hellish task, but not a technically impossible one. There is no other secret hidden away; that’s it.
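
To make that concrete, here is a minimal toy sketch in Python of what that hand computation looks like. Every number in it is made up; it is not a real model, only an illustration that the next-token output is ordinary arithmetic (embedding lookup, dot products, softmax) you could in principle work out on paper.

    # A toy "word engine": fictional vocabulary, embeddings, and weights.
    # The point is only that the output is the result of plain arithmetic.
    import math

    vocab = ["the", "cat", "sat", "mat"]

    # Hypothetical 2-dimensional embedding for each token.
    embeddings = {
        "the": [0.1, 0.3],
        "cat": [0.7, 0.2],
        "sat": [0.4, 0.9],
        "mat": [0.6, 0.5],
    }

    # Hypothetical output projection: one weight row per vocabulary word.
    output_weights = [
        [0.2, 0.1],  # score row for "the"
        [0.5, 0.4],  # score row for "cat"
        [0.3, 0.8],  # score row for "sat"
        [0.9, 0.6],  # score row for "mat"
    ]

    def next_token_distribution(context):
        # "Hidden state" here is just the average of the context embeddings,
        # a crude stand-in for the attention layers a real transformer applies.
        hidden = [sum(embeddings[w][i] for w in context) / len(context)
                  for i in range(2)]

        # Logits: a plain dot product per vocabulary word.
        logits = [sum(w * h for w, h in zip(row, hidden))
                  for row in output_weights]

        # Softmax turns the logits into a probability distribution.
        exps = [math.exp(x) for x in logits]
        total = sum(exps)
        return {word: e / total for word, e in zip(vocab, exps)}

    # Input in, probable output out: a probability for each next word.
    print(next_token_distribution(["the", "cat"]))

A real LLM differs only in scale (billions of weights and many layers instead of a handful of numbers), not in kind: it is the same multiply-add-softmax arithmetic, repeated.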