A non-anthropomorphized view of LLMs

(addxorrol.blogspot.com)
475 points zdw | 2 comments
szvsw ◴[] No.44484909[source]
So the author’s core view is ultimately a Searle-like view: a computational, functional, syntactic, rules-based system cannot reproduce a mind. Plenty of people will agree, plenty of people will disagree, and the answer is probably unknowable and just comes down to whatever axioms you subscribe to in re: consciousness.

The author largely takes the view that it is more productive for us to ignore any anthropomorphic representations and focus on the more concrete, material, technical systems - I’m with them there… but only to a point. The flip side of all this is of course the idea that there is still something emergent, unplanned, and mind-like. So even if it is a stochastic system following rules, clearly the rules are complex enough (to the tune of billions of operations, with signals propagating through some sort of resonant structure, if you take a more filter-impulse-response-like view of the sequential matmuls) to result in emergent properties. Even if we (people interested in LLMs with at least some level of knowledge of ML mathematics and systems) “know better” than to believe these systems possess morals, ethics, feelings, personalities, etc., the vast majority of people have no access to a meaningful understanding of the mathematical, functional representation of an LLM and will not take that view. For all intents and purposes the systems will at least seem to have those anthropomorphic properties, so it seems like it is in fact useful to ask questions from that lens as well.

In other words, just as it’s useful to analyze and study these things as the purely technical systems they ultimately are, it is also, probably, useful to analyze them from the qualitative, ephemeral, experiential perspective that most people engage with them from, no?
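(To make the “purely technical system” framing concrete: here is a minimal, purely illustrative sketch of the “sequential matmuls” view mentioned above. Toy dimensions, random weights standing in for trained parameters - it corresponds to no real model, it just shows where the only stochastic step lives.)

    # Illustrative only: a stack of layers, each just a matrix multiply plus a
    # nonlinearity, ending in a softmax over a made-up vocabulary.
    import numpy as np

    rng = np.random.default_rng(0)
    d_model, vocab, n_layers = 64, 1000, 4

    # Random weights stand in for trained parameters.
    layers = [rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
              for _ in range(n_layers)]
    unembed = rng.standard_normal((d_model, vocab)) / np.sqrt(d_model)

    def next_token_distribution(x):
        """Propagate one hidden state through the stack: matmul, nonlinearity, repeat."""
        for w in layers:
            x = np.tanh(x @ w)            # one "rule application" per layer
        logits = x @ unembed
        probs = np.exp(logits - logits.max())
        return probs / probs.sum()        # softmax over the toy vocabulary

    # Sampling from the resulting distribution is the only stochastic step.
    state = rng.standard_normal(d_model)
    token = rng.choice(vocab, p=next_token_distribution(state))
    print(token)

Everything above the final sampling step is deterministic arithmetic; whatever mind-like quality people perceive has to be an emergent property of the trained weights, not of the machinery itself.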

replies(5): >>44485119 #>>44485130 #>>44485421 #>>44487589 #>>44488863 #
gtsop ◴[] No.44485119[source]
No.

Why would you ever want to amplify a false understanding that has the potential to affect serious decisions across various topics?

LLMs reflect (and badly, I might add) aspects of the human thought process. If you take a leap and say they are anything more than that, you might as well start considering the reflection in your mirror a living being in its own right.

Literally (and I literally mean it) there is no difference. The fact that a human image comes out of a mirror has no relation whatsoever to the mirror's physical attributes and functional properties. It has to do only with the fact that a person is standing in front of it. Stop feeding the LLM data artifacts of human thought and it will immediately stop reflecting back anything resembling a human.

replies(2): >>44485259 #>>44485325 #
1. degamad ◴[] No.44485325[source]
> Why would you ever want to amplify a false understanding that has the potential to affect serious decisions across various topics?

We know that Newton's laws are wrong, and that you have to take special and general relativity into account. Why would we ever teach anyone Newton's laws any more?

replies(1): >>44485627 #
2. ifdefdebug ◴[] No.44485627[source]
Newton's laws are a good enough approximation for many tasks, so it's not a "false understanding" as long as their limits are taken into account.
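A rough back-of-the-envelope illustration of that point (numbers are only for scale; nothing here comes from the article):

    # Newtonian vs. special-relativistic kinetic energy for a 1000 kg object.
    import math

    c = 299_792_458.0   # speed of light, m/s
    m = 1000.0          # mass, kg

    def newtonian_ke(v):
        return 0.5 * m * v ** 2

    def relativistic_ke(v):
        b2 = (v / c) ** 2
        root = math.sqrt(1.0 - b2)
        # (gamma - 1) rearranged to avoid floating-point cancellation at small v
        gamma_minus_1 = b2 / (root * (1.0 + root))
        return gamma_minus_1 * m * c ** 2

    for v, label in [(30.0, "highway speed"),
                     (3.0e4, "Earth's orbital speed"),
                     (0.1 * c, "10% of light speed")]:
        print(f"{label:>22}: Newton {newtonian_ke(v):.6e} J, "
              f"relativistic {relativistic_ke(v):.6e} J")

At everyday and even orbital speeds the two values agree to every displayed digit; at a tenth of the speed of light they visibly diverge. The approximation is fine exactly as long as you stay within its limits.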