
124 points by alphadelphi | 1 comment | source
csdvrx ◴[] No.43594425[source]
> Returning to the topic of the limitations of LLMs, LeCun explains, "An LLM produces one token after another. It goes through a fixed amount of computation to produce a token, and that's clearly System 1—it's reactive, right? There's no reasoning," a reference to Daniel Kahneman's influential framework that distinguishes between the human brain's fast, intuitive method of thinking (System 1) and the method of slower, more deliberative reasoning (System 2).

Many people believe that "wants" come first and are then followed by rationalizations, a theory also supported by medical imaging studies.

Maybe LLMs are a good emulation of System 2 (their performance suggests they are), and what's missing is System 1, the "reptilian" brain, based on emotions like love, fear, aggression, etc.

For all we know, System 1 could use the same embeddings, just running in parallel and producing tokens that guide System 2.

Personally, I trust my "emotions" and "gut feelings": I believe they are things "not yet rationalized" by my System 2, coming straight from my System 1.

I know it's very unpopular among nerds, but it has worked well enough for me!

replies(4): >>43594452 #>>43594494 #>>43594520 #>>43594544 #
kadushka ◴[] No.43594544[source]
There are LLMs which do not generate one token at a time: https://arxiv.org/abs/2502.09992

They do not reason significantly better than autoregressive LLMs, which makes me question "one token at a time" as the bottleneck.
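For contrast with left-to-right decoding, here is a toy sketch of the diffusion-style generation the linked paper is about: every position starts masked and the whole sequence is refined in parallel over a fixed number of denoising steps. The logic and names are illustrative only, not the paper's actual method:

```python
import random

# Toy illustration of masked-diffusion decoding: all positions start
# masked and are refined in parallel over a fixed number of denoising
# steps, instead of strictly left to right. Illustrative only.

VOCAB = ["a", "b", "c"]
MASK = "<mask>"

def denoise_step(seq, rng):
    # Stand-in for one parallel forward pass: commits a random subset
    # of the still-masked positions to concrete tokens, all at once.
    out = []
    for tok in seq:
        if tok == MASK and rng.random() < 0.5:
            out.append(rng.choice(VOCAB))  # this position gets decoded
        else:
            out.append(tok)                # already decoded, or stays masked
    return out

def generate(length=6, steps=64, seed=0):
    rng = random.Random(seed)
    seq = [MASK] * length
    for _ in range(steps):                 # fixed number of parallel steps
        seq = denoise_step(seq, rng)
        if MASK not in seq:
            break
    return seq

print(generate())
```

Note that even here the number of denoising passes is fixed up front, so "more parallel" is not the same as "more deliberation" per output.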

Also, LeCun has been pushing his JEPA idea for years now, with not much to show for it. With his resources, one could hope we would see its benefits over the current state-of-the-art models.
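For readers unfamiliar with JEPA, its core move is to predict in embedding space rather than input space. A minimal sketch of that objective, with hypothetical stand-in `embed` and `predictor` functions:

```python
# Minimal sketch of a JEPA-style objective: a predictor maps the
# context's embedding to the target's embedding, and the loss is
# computed in latent space rather than over raw tokens or pixels.
# embed() and predictor() are hypothetical stand-ins.

def embed(x):
    # Stand-in encoder: identity mapping to floats.
    return [float(v) for v in x]

def predictor(ctx_emb):
    # Stand-in predictor: identity.
    return ctx_emb

def jepa_loss(context, target):
    pred = predictor(embed(context))
    tgt = embed(target)
    # Squared error in embedding space, not in input space.
    return sum((p - t) ** 2 for p, t in zip(pred, tgt))

print(jepa_loss([1, 2], [2, 2]))  # → 1.0
```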

replies(1): >>43595048 #
1. financypants ◴[] No.43595048[source]
From the article: LeCun has been working in some way on V-JEPA for two decades. At least it's bold, and everyone says it won't work until one day it might.