
A non-anthropomorphized view of LLMs

(addxorrol.blogspot.com)
475 points by zdw | 1 comment
mewpmewp2 No.44485205
My question: how do we know that this is not similar to how human brains work? What seems intuitively logical to me is that our brains evolved through an evolutionary process of random mutations, yielding a structure shaped by its own reward-based algorithms: a structure that at every point is trying to predict the next actions that maximise survival and procreation, with a lot of subgoals in between. It ultimately became very complex machinery, yet in theory it should be easy to simulate, given enough compute and if physical constraints allowed for it.
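As a toy illustration of that "evolution as a blind, reward-driven search" idea (this sketch is not from the original post; the signal, the model, and all the numbers are made up): a population of tiny linear models is copied with random mutations, and the ones that best predict the next value of a signal "survive". Nobody designs a predictor, but one emerges anyway.

    import random

    def next_value(x):
        """The 'world': the next state is a fixed linear function of the current one."""
        return 0.7 * x + 1.0

    def fitness(weights, trials=50):
        """Negative average prediction error: higher is better (a stand-in for survival)."""
        w, b = weights
        error = 0.0
        for _ in range(trials):
            x = random.uniform(-10, 10)
            error += abs((w * x + b) - next_value(x))
        return -error / trials

    # Random initial population of (weight, bias) pairs.
    population = [(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(30)]

    for generation in range(200):
        # Selection: keep the better-predicting half of the population.
        population.sort(key=fitness, reverse=True)
        survivors = population[:15]
        # Reproduction with random mutation: no designer, just noisy copying.
        children = [(w + random.gauss(0, 0.1), b + random.gauss(0, 0.1))
                    for (w, b) in survivors]
        population = survivors + children

    # Should drift toward the true dynamics, roughly (0.7, 1.0).
    print("best predictor:", max(population, key=fitness))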

Morals, values, consciousness, and so on could simply be subgoals that arose through evolution because they support the primary goals of survival and procreation.

And if it is baffling to think that such a system could arise, how do you think life and humans came into existence in the first place? How could that be possible? It already happened once, from a far unlikelier and stranger starting point. And wouldn't you think that, in theory, the whole world and its timeline could be represented as a deterministic function? And if not, why should "randomness" or anything else be what brings life into existence?

replies(4): >>44485240 >>44485258 >>44485273 >>44488508
cmiles74 No.44485258
Maybe the important thing is that we don't imbue the machine with feelings or morals or motivation: it has none.
replies(1): >>44485276
mewpmewp2 No.44485276
If we developed feelings, morals, and motivation because they were good subgoals for the primary goals of survival and procreation, why couldn't other systems do the same? You don't have to call them the same word or the same thing, but a feeling is a signal that motivates a behaviour in us, developed partly through generational evolution and partly through our experiences in life. At some point a random mutation made someone develop a fear signal on seeing a predator, which increased their chances of survival, and so the mutation became widespread. Similarly, a "feeling" in a machine could be a signal it has developed that travels through a certain pathway to yield a certain outcome.
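A minimal sketch of that selection story (again illustrative, not from the thread; the survival probabilities are made up): a mutation that adds a "fear signal" slightly raises a carrier's odds of surviving each generation, and that difference alone is enough for the trait to spread through the population.

    import random

    POP_SIZE = 1000
    SURVIVAL_PLAIN = 0.70  # chance of surviving a generation without the signal
    SURVIVAL_FEAR = 0.80   # carriers flee predators, so they survive a bit more often

    # Start with a small minority carrying the fear-signal mutation. (A single
    # mutant would often be lost to random drift before selection could act.)
    population = [True] * 50 + [False] * (POP_SIZE - 50)

    for generation in range(100):
        # Each individual survives the generation with a trait-dependent probability.
        survivors = [fearful for fearful in population
                     if random.random() < (SURVIVAL_FEAR if fearful else SURVIVAL_PLAIN)]
        if not survivors:
            break
        # Survivors reproduce back up to POP_SIZE; offspring inherit the trait.
        population = [random.choice(survivors) for _ in range(POP_SIZE)]
        if generation % 10 == 0:
            share = sum(population) / POP_SIZE
            print(f"gen {generation:3d}: fear-signal carriers = {share:.1%}")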
replies(1): >>44488433
Timwi No.44488433
The real challenge is not to see it as a binary (the machine either has feelings or it has none). It's possible for the machine to have emergent processes or properties that resemble human feelings in their function and complexity but are otherwise nothing like them: structured very differently and working on completely different principles. It's possible for a machine or algorithm to be so complex that whether it has feelings becomes a semantic debate about what you mean by “feelings” and where you draw the line.

A lot of the people who say “machines will never have feelings” are confident in that statement because they draw the line incredibly narrowly: if it ain't human, it ain't feeling. That seems to me to be putting the cart before the horse: it ain't feeling because you defined it so.