
124 points alphadelphi | 1 comment
csdvrx ◴[] No.43594425[source]
> Returning to the topic of the limitations of LLMs, LeCun explains, "An LLM produces one token after another. It goes through a fixed amount of computation to produce a token, and that's clearly System 1—it's reactive, right? There's no reasoning," a reference to Daniel Kahneman's influential framework that distinguishes between the human brain's fast, intuitive method of thinking (System 1) and the method of slower, more deliberative reasoning (System 2).

Many people believe that "wants" come first and rationalizations follow. That theory is also supported by medical imaging.

Maybe LLMs are a good emulation of System 2 (their performance suggests they are), and what's missing is System 1, the "reptilian" brain, driven by emotions like love, fear, aggression, etc.

For all we know, System 1 could use the same embeddings, simply running in parallel and producing tokens that guide System 2.
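That parallel-guidance idea resembles logit biasing: a cheap "fast" pathway nudging the token distribution of a slower one. Here is a minimal toy sketch of that mechanism, assuming a tiny random embedding table and hypothetical function names (nothing here reflects any real LLM's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB, DIM = 50, 16
embeddings = rng.normal(size=(VOCAB, DIM))  # shared by both "systems"

def system2_logits(context_ids):
    # "Deliberative" pathway: score every vocab token against the
    # mean embedding of the whole context (hypothetical stand-in).
    ctx = embeddings[context_ids].mean(axis=0)
    return embeddings @ ctx

def system1_bias(context_ids, strength=2.0):
    # "Fast" pathway: same embedding table, but a cheaper signal --
    # here it only looks at the most recent token.
    ctx = embeddings[context_ids[-1]]
    return strength * (embeddings @ ctx)

def next_token_probs(context_ids):
    # System 1 runs in parallel and biases System 2's logits
    # before the softmax, steering but not replacing it.
    logits = system2_logits(context_ids) + system1_bias(context_ids)
    logits -= logits.max()  # numerical stability
    p = np.exp(logits)
    return p / p.sum()

probs = next_token_probs([3, 7, 11])
```

The point of the sketch is only that guidance can enter as an additive term over a shared representation, rather than as a separate vocabulary or model.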

Personally, I trust my "emotions" and "gut feelings": I believe they are things my System 2 has not yet rationalized, coming straight from my System 1.

I know it's very unpopular among nerds, but it has worked well enough for me!

replies(4): >>43594452 #>>43594494 #>>43594520 #>>43594544 #
1. gessha ◴[] No.43594520[source]
When I took cognitive science courses some years ago, one of the studies we looked at involved patients with damage to emotion-related parts of the brain. The result was a reduced, or even complete, inability to make decisions.

https://pmc.ncbi.nlm.nih.gov/articles/PMC3032808/