
A non-anthropomorphized view of LLMs

(addxorrol.blogspot.com)
475 points by zdw | 1 comment
mewpmewp2 ◴[] No.44485205[source]
My question: how do we know that this is not similar to how human brains work? What seems intuitively logical to me is that our brains evolved through random mutations, yielding a structure designed by its own evolutionary, reward-based algorithms; a structure that at any point is trying to predict the next actions that maximise survival/procreation, of course with a lot of subgoals in between. The result is very complex machinery, yet machinery that in theory should be straightforward to simulate, given enough compute and permissive physical constraints.

Because morals, values, consciousness, etc. could just be subgoals that arose through evolution because they support the main goals of survival and procreation.

And if it is baffling to think that such a system could arise, how do you think life and humans came into existence in the first place? How could that be possible? It has already happened, from a far unlikelier and stranger starting point. And wouldn't you think that the whole world and its timeline could, in theory, be represented as a deterministic function? And if not, why should "randomness" or anything else be what brings life into existence?
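
To make that framing concrete, here is a toy sketch (purely illustrative; every function, weight, and number in it is made up, and no claim is made that real brains or LLMs work this way): an inner predictor picks whichever next action it expects to maximise reward, while an outer mutation-and-selection loop "designs" the predictor itself.

    import random

    def predict_reward(weights, state, action):
        # Stand-in for the learned machinery: a tiny score with a
        # state-action interaction so behaviour can depend on the state.
        return weights[0] * state * action + weights[1] * action

    def pick_action(weights, state, actions=(0, 1, 2)):
        # "At any point trying to predict the next actions that maximise reward."
        return max(actions, key=lambda a: predict_reward(weights, state, a))

    def fitness(weights, trials=100):
        # The environment secretly rewards action 2 in high states, 0 otherwise.
        score = 0
        for _ in range(trials):
            state = random.random()
            score += pick_action(weights, state) == (2 if state > 0.5 else 0)
        return score

    # Outer loop: random mutation plus selection shapes the predictor,
    # standing in for evolution designing reward-based machinery.
    population = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(20)]
    for generation in range(50):
        population.sort(key=fitness, reverse=True)
        survivors = population[:10]
        population = survivors + [[w + random.gauss(0, 0.1) for w in s] for s in survivors]

    print("best fitness out of 100:", fitness(population[0]))

The point of the sketch is only the structure: nothing in the inner loop "knows" the goal; the outer selection loop installs it.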

replies(4): >>44485240 #>>44485258 #>>44485273 #>>44488508 #
bbarn ◴[] No.44485273[source]
I think it's just an unfair comparison in general. The power of the LLM is the zero cost of failure, and the lack of consequences when it does fail. Just try again with a different prompt, maybe retrain, etc.

Humans make a bad choice, and it can end that human's life. An LLM makes its worst choice, and it just gets told "no, do it again, let me make it easier".

replies(1): >>44485318 #
mewpmewp2 ◴[] No.44485318[source]
But an LLM could perform poorly in tests, fail to be selected, and that essentially means "death" for it. Which raises the question: at what scope should we consider an LLM to have an identity comparable to a single human's? Are you the same you as you were a few minutes back, or 10 years back? Is an LLM the same LLM after it has been trained for a further 10 hours? What if the weights are copy-pasted endlessly? What if we as humans were cloned instantly? What if you were teleported from location A to B instantly, put together from other atoms found elsewhere?
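
The copy-paste point can be made concrete with a toy sketch (the "model" here is hypothetical, just a list of floats standing in for weights): two copied checkpoints start out bit-identical, then diverge the moment each is trained further on its own.

    import copy
    import random

    weights = [random.uniform(-1, 1) for _ in range(4)]
    clone = copy.deepcopy(weights)
    print(weights == clone)  # True: at copy time the two are indistinguishable

    # "Train" each copy a little further, independently.
    weights = [w + random.gauss(0, 0.01) for w in weights]
    clone = [w + random.gauss(0, 0.01) for w in clone]
    print(weights == clone)  # False: same origin, now two different "individuals"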

Ultimately this matters from the standpoint of evolution and survival of the fittest, but it makes the question of "identity" very complex. Death still matters, though, because it signals which traits are more likely to keep going into new generations, for both humans and LLMs.

Death, for an LLM, would essentially be when people stop using it in favour of some other LLM that performs better.
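
As a toy sketch of that selection pressure (all model names and quality numbers below are made up): users drift toward whichever model performs better, and a model that falls below a sliver of the traffic gets retired, i.e. "dies".

    models = {"model_a": 0.62, "model_b": 0.71, "model_c": 0.55}  # hypothetical quality
    usage = {name: 1000 for name in models}

    for month in range(24):
        best = max(models, key=models.get)
        for name in list(usage):
            if name != best:
                # Each month, a share of users proportional to the
                # quality gap migrates to the best available model.
                moved = int(usage[name] * (models[best] - models[name]))
                usage[name] -= moved
                usage[best] += moved
        total = sum(usage.values())
        for name in [n for n, u in usage.items() if u < total * 0.01]:
            del usage[name], models[name]  # nobody uses it any more: "death"

    print(usage)  # model_c gets retired well before the loop ends

No individual run "kills" the weakest model; the aggregate drift of users does, which is the sense of selection the comment describes.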