
124 points alphadelphi | 1 comment
csdvrx ◴[] No.43594425[source]
> Returning to the topic of the limitations of LLMs, LeCun explains, "An LLM produces one token after another. It goes through a fixed amount of computation to produce a token, and that's clearly System 1—it's reactive, right? There's no reasoning," a reference to Daniel Kahneman's influential framework that distinguishes between the human brain's fast, intuitive method of thinking (System 1) and the method of slower, more deliberative reasoning (System 2).

Many people believe that "wants" come first and rationalizations follow afterward. That theory is also supported by medical-imaging studies.

Maybe LLMs are a good emulation of system-2 (their performance suggests they are), and what's missing is system-1: the "reptilian" brain, driven by emotions like love, fear, aggression, etc.

For all we know, system-1 could use the same embeddings and simply run in parallel, producing tokens that guide system-2.
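That idea can be made concrete as a toy guided-decoding loop. This is purely a sketch of the commenter's speculation, not any real system: `system2_logits`, `system1_bias`, and the shared embedding table are all hypothetical stand-ins, with a cheap "fast" scorer additively biasing the "slow" model's next-token choice.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB, DIM = 8, 4
# Hypothetical embedding table shared by both systems, per the comment.
embeddings = rng.normal(size=(VOCAB, DIM))

def system2_logits(context):
    # Stand-in for the slow, deliberative model: score each candidate
    # next token against the mean of the context embeddings.
    h = embeddings[context].mean(axis=0)
    return embeddings @ h

def system1_bias(context):
    # Stand-in for the fast, reactive signal: a one-shot similarity to
    # the most recent token, computed from the same embedding table.
    return embeddings @ embeddings[context[-1]]

def next_token(context, alpha=0.5):
    # System-1's output runs "in parallel" and additively steers
    # system-2's token selection; alpha sets how much it is trusted.
    return int(np.argmax(system2_logits(context) + alpha * system1_bias(context)))

print(next_token([1, 3]))
```

With `alpha=0` the fast signal is ignored and decoding is pure system-2; raising `alpha` lets the reactive stream pull the choice toward emotionally "salient" tokens, which is one way to read the parallel-guidance idea.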

Personally, I trust my "emotions" and "gut feelings": I believe they are things "not yet rationalized" by my system-2, coming straight from my system-1.

I know it's very unpopular among nerds, but it has worked well enough for me!

replies(4): >>43594452 #>>43594494 #>>43594520 #>>43594544 #
1. ilaksh ◴[] No.43594494[source]
I think what that shows is that for fast reactions to be useful, they have to incorporate holistic information effectively. That doesn't mean slower, conscious, rational work can't lead to more precision, but it does suggest that immediate reactions shouldn't necessarily be ignored. There is an analogy between that and reasoning versus non-reasoning modes in LLMs.