A non-anthropomorphized view of LLMs

(addxorrol.blogspot.com)
475 points by zdw | 15 comments
1. mewpmewp2 ◴[] No.44485205[source]
My question: how do we know that this is not similar to how human brains work? What seems intuitively logical to me is that our brains evolved through an evolutionary process of random mutations, shaped by reward-based selection into a structure that at any point is trying to predict the next actions that maximise survival/procreation, of course with a lot of sub-goals in between. The result is very complex machinery, yet in theory it should be possible to simulate it, if there were enough compute and physical constraints allowed for it.

Because morals, values, consciousness etc. could just be sub-goals that arose through evolution because they support the main goals of survival and procreation.

And if it is baffling to think that such a system could arise, how do you think it was possible for life and humans to come into existence in the first place? How could that be possible? It already happened, from a far unlikelier and stranger starting point. And wouldn't you think the whole world and its timeline could, in theory, be represented as a deterministic function? And if not, why should "randomness" or anything else be what brings life into existence?
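A toy sketch of that "random mutation plus selection for a reward" picture (purely illustrative hill climbing with a made-up fitness function; not a claim about how brains or LLMs are actually built):

    import random

    def fitness(genome):
        # Stand-in "survival/procreation" reward: closer to an arbitrary target is better.
        target = [0.5] * len(genome)
        return -sum((g - t) ** 2 for g, t in zip(genome, target))

    def mutate(genome, rate=0.1):
        # Random, undirected changes -- the only "design" mechanism here.
        return [g + random.gauss(0, rate) for g in genome]

    genome = [random.random() for _ in range(8)]
    for generation in range(1000):
        child = mutate(genome)
        if fitness(child) > fitness(genome):  # selection: keep whatever scores better
            genome = child
    # No foresight anywhere, yet the result ends up looking "designed" for the reward.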

replies(4): >>44485240 #>>44485258 #>>44485273 #>>44488508 #
2. ants_everywhere ◴[] No.44485240[source]
> My question: how do we know that this is not similar to how human brains work.

It is similar to how human brains operate. LLMs are the (current) culmination of at least 80 years of research on building computational models of the human brain.

replies(3): >>44487857 #>>44488009 #>>44496588 #
3. cmiles74 ◴[] No.44485258[source]
Maybe the important thing is that we don't imbue the machine with feelings or morals or motivation: it has none.
replies(1): >>44485276 #
4. bbarn ◴[] No.44485273[source]
I think it's just an unfair comparison in general. The power of the LLM is the zero cost of failure, and the lack of consequences when it does fail. Just try again with a different prompt, maybe retrain, etc.

When a human makes a bad choice, it can end that human's life. An LLM that makes the worst possible choice just gets told "no, do it again, let me make it easier".

replies(1): >>44485318 #
5. mewpmewp2 ◴[] No.44485276[source]
If we developed feelings, morals and motivation because they were good sub-goals for the primary goals of survival and procreation, why couldn't other systems do the same? You don't have to call them by the same word or treat them as the same thing, but a feeling is a signal that motivates a behaviour in us, developed partly through generational evolution and partly through experiences in life. A random mutation made someone develop a fear signal on seeing a predator, which increased their chances of survival, and so the mutation became widespread. Similarly, a feeling in a machine could be a signal it developed that travels through a certain pathway to yield a certain outcome.
replies(1): >>44488433 #
6. mewpmewp2 ◴[] No.44485318[source]
But an LLM could perform poorly enough in tests that it is never deployed, which essentially means "death" for it. This raises the question of at what scope we should consider an LLM to have something like the identity of a single human. Are you the same you as you were a few minutes ago, or 10 years ago? Is an LLM the same LLM after it has been trained for a further 10 hours? What if its weights are copy-pasted endlessly? What if we as humans were to be cloned instantly? What if you were teleported from location A to B instantly, put back together from other atoms found elsewhere?

Ultimately this matters from the standpoint of evolution and survival of the fittest, but it makes the question of "identity" very complex. Death still matters, though, because it determines which traits are more likely to carry on into new generations, for both humans and LLMs.

Death for an LLM, essentially, would be when people stop using it in favour of some other LLM that performs better.

7. seadan83 ◴[] No.44487857[source]
> It is similar to how human brains operate.

Is it? Do we know how human brains operate? We know the basic architecture of them, so we have a map, but we don't know the details.

"The cellular biology of brains is relatively well-understood, but neuroscientists have not yet generated a theory explaining how brains work. Explanations of how neurons collectively operate to produce what brains can do are tentative and incomplete." [1]

"Despite a century of anatomical, physiological, and molecular biological efforts scientists do not know how neurons by their collective interactions produce percepts, thoughts, memories, and behavior. Scientists do not know and have no theories explaining how brains and central nervous systems work." [1]

[1] https://pmc.ncbi.nlm.nih.gov/articles/PMC10585277/

replies(2): >>44488447 #>>44490070 #
8. suddenlybananas ◴[] No.44488009[source]
It really is not. ANNs bear only a passing resemblance to how neurons work.
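For a concrete contrast, here is roughly what each model computes; the spiking side is the standard leaky integrate-and-fire simplification, and real neurons are messier still (neither sketch involves neurotransmitters):

    import math

    def artificial_neuron(inputs, weights, bias):
        # An ANN "neuron": one weighted sum and a static nonlinearity; no time, no spikes.
        z = sum(w * x for w, x in zip(weights, inputs)) + bias
        return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

    def leaky_integrate_and_fire(current, steps=100, dt=1.0, tau=10.0, threshold=1.0):
        # A biological neuron integrates input over time and fires discrete spikes.
        v, spikes = 0.0, 0
        for _ in range(steps):
            v += dt * (-v / tau + current)  # membrane potential leaks and integrates
            if v >= threshold:              # spike, then reset
                spikes += 1
                v = 0.0
        return spikes

    artificial_neuron([0.2, 0.8], [0.5, -0.3], 0.1)  # a single number out
    leaky_integrate_and_fire(0.2)                    # a spike count over time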
9. Timwi ◴[] No.44488433{3}[source]
The real challenge is not to see it as a binary (the machine either has feelings or it has none). It's possible for the machine to have emergent processes or properties that resemble human feelings in their function and their complexity, but are otherwise nothing like them (structured very differently and work on completely different principles). It's possible to have a machine or algorithm so complex that the question of whether it has feelings is just a semantic debate on what you mean by “feelings” and where you draw the line.

A lot of the people who say "machines will never have feelings" are confident in that statement because they draw the line incredibly narrowly: if it ain't human, it ain't feeling. This seems to me like putting the cart before the horse. It ain't feeling because you defined it so.

10. Timwi ◴[] No.44488447{3}[source]
> > It is similar to how human brains operate.

> Is it?

This is just a semantic debate on what counts as “similar”. It's possible to disagree on this point despite agreeing on everything relating to how LLMs and human brains work.

11. latexr ◴[] No.44488508[source]
> how do we know that this is not similar to how human brains work.

Do you forget every conversation as soon as it ends? When speaking to another person, do they need to repeat literally everything they said and that you said, in order, for you to retain context?

If not, your brain does not work like an LLM. If yes, please stop what you’re doing right now and call a doctor with this knowledge. I hope Memento (2000) was part of your training data, you’re going to need it.
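Concretely, the model itself keeps no state between calls; chat interfaces just resend the whole transcript every turn. A minimal sketch, with a hypothetical call_llm() standing in for whatever API is actually used:

    def call_llm(messages):
        # Hypothetical stateless model call: the model sees only what is in `messages`.
        return "(model output)"

    history = []

    def chat(user_input):
        history.append({"role": "user", "content": user_input})
        # The whole transcript is resent on every turn; leave it out
        # and the model has no memory of anything said before.
        reply = call_llm(history)
        history.append({"role": "assistant", "content": reply})
        return reply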

replies(1): >>44491005 #
12. ants_everywhere ◴[] No.44490070{3}[source]
The part I was referring to is captured in

"The cellular biology of brains is relatively well-understood"

Fundamentally, brains are not doing something different in kind from ANNs. They're basically layers of neural networks stacked together in certain ways.

What we don't know are things like (1) how exactly are the layers stacked together, (2) how are the sensors (like photo receptors, auditory receptors, etc) hooked up?, (3) how do the different parts of the brain interact?, (4) for that matter what do the different parts of the brain actually do?, (5) how do chemical signals like neurotransmitters convey information or behavior?

In the analogy between brains and artificial neural networks, these sorts of questions might be of huge importance to people building AI systems, but they'd be of only minor importance to users of AI systems. OpenAI and Google can change details about how their various transformer layers and ANN layers are connected. The result may be improved products, but they won't be doing anything different in kind from what AIs are doing now, at least in the sense the author of this article is concerned with.
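In code terms, the claim is roughly that the per-layer operation is the same everywhere and questions (1)-(4) are about the wiring between layers. A minimal stacked forward pass (toy sizes, not a model of any actual brain region):

    import numpy as np

    def layer(x, weights, bias):
        # Every "layer" does the same thing: weighted sums followed by a nonlinearity.
        return np.maximum(0.0, weights @ x + bias)  # ReLU

    rng = np.random.default_rng(0)
    x = rng.normal(size=16)                 # stand-in sensory input
    shapes = [(32, 16), (32, 32), (8, 32)]  # "how the layers are stacked" is just this list
    for out_dim, in_dim in shapes:
        W = rng.normal(size=(out_dim, in_dim))
        b = np.zeros(out_dim)
        x = layer(x, W, b)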

replies(1): >>44494549 #
13. mewpmewp2 ◴[] No.44491005[source]
Knowledge of every conversation must be some form of state in our minds, just as for LLMs it could be something retrieved from a database, no? I don't think information storage or retrieval is necessarily the most important achievement here in the first place. It's the emergent abilities that you wouldn't have expected to occur.
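Something like this is what "retrieved from a database" gestures at; a naive keyword lookup stands in for whatever retrieval a real system would actually use:

    memory = []  # stands in for a database of past conversations

    def remember(text):
        memory.append(text)

    def recall(query, k=3):
        # Naive relevance score: count words shared with the query.
        words = set(query.lower().split())
        scored = sorted(memory, key=lambda m: -len(set(m.lower().split()) & words))
        return scored[:k]

    def build_prompt(query):
        # Retrieved snippets are prepended to the prompt, so "knowledge of every
        # conversation" lives outside the model itself.
        return "\n".join(recall(query)) + "\n" + query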
14. suddenlybananas ◴[] No.44494549{4}[source]
ANNs don't have action potentials, let alone neurotransmitters.
15. pepa65 ◴[] No.44496588[source]
Sorry, that's just complete bullshit. How LLMs work in no way models how processes in the human brain work.