
A non-anthropomorphized view of LLMs

(addxorrol.blogspot.com)
475 points by zdw | 1 comment
Timwi (No.44488337)
The author seems to want to label any discourse about LLMs as “anthropomorphizing”. The word “goal” stood out to me: the author wants us to assume we're anthropomorphizing as soon as we so much as use the word “goal”. But a simple breadth-first search that enumerates chess positions and legal moves, stops when it finds a checkmate for white, and outputs the full decision tree has a “goal”. There is no anthropomorphizing here; “goal” is just a technical term. A hypothetical AGI with a goal like paperclip maximization is just a logical extension of that breadth-first search. Imagining such an AGI and describing it as having a goal isn't anthropomorphizing.
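
As a concrete illustration of “goal” as a purely technical term, here's a minimal Python sketch of such a search. It's simplified: it returns the first line that reaches the goal rather than the full decision tree, and the toy counting game at the bottom is a stand-in for a real chess move generator and checkmate test, which would be far longer:

    from collections import deque

    def bfs_to_goal(start, legal_moves, is_goal):
        # Breadth-first search that stops at the first state satisfying
        # is_goal and returns the move sequence reaching it (or None).
        # The "goal" here is a plain predicate -- nothing mental about it.
        frontier = deque([(start, [])])
        seen = {start}
        while frontier:
            state, path = frontier.popleft()
            if is_goal(state):
                return path
            for move, next_state in legal_moves(state):
                if next_state not in seen:
                    seen.add(next_state)
                    frontier.append((next_state, path + [move]))
        return None

    # Toy stand-in for chess: reach exactly 10 using +1/+2 moves.
    moves = lambda n: [("+1", n + 1), ("+2", n + 2)]
    print(bfs_to_goal(0, moves, lambda n: n == 10))  # five '+2' moves

Nobody would say the search “wants” checkmate; its goal is just the predicate it halts on.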
tdullien (No.44490093)
Author here. I'm entirely OK with using “goal” in the context of an RL algorithm. If you read my article carefully, you'll find that I object to its use in the context of LLMs.
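
For contrast, here's a minimal sketch (my own illustration, not from the article) of the RL sense of “goal” I'm fine with: the “goal” is nothing more than the reward function the update rule maximizes. The toy chain environment and all parameter values are made up for the example:

    import random

    # Toy chain: states 0..4, reward 1.0 only for reaching state 4.
    N_STATES, ACTIONS = 5, (+1, -1)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

    def step(state, action):
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        return next_state, reward, next_state == N_STATES - 1

    for _ in range(200):
        s, done = 0, False
        while not done:
            if random.random() < 0.1:
                a = random.choice(ACTIONS)                 # explore
            else:
                a = max(ACTIONS, key=lambda a: q[(s, a)])  # exploit
            s2, r, done = step(s, a)
            # Standard Q-learning update: nudge the value estimate toward
            # the reward plus the discounted best future value.
            q[(s, a)] += 0.5 * (r + 0.9 * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
            s = s2

    # The learned "goal-directed" policy: preferred action per state.
    print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)})

Here the “goal” is fully specified by the reward function; that's the technical usage.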
Timwi (No.44499429)
If you read the AI-safety literature carefully (which uses the word “goal”), you'll find it isn't talking about LLMs either.
tdullien (No.44500347)
I think the Anthropic “omg blackmail” article clearly talks about both LLMs and their “goals”.