
124 points alphadelphi | 7 comments
antirez ◴[] No.43594641[source]
As LLMs do things thought to be impossible before, LeCun adjusts his statements about LLMs, but at the same time his credibility goes lower and lower. He started saying that LLMs were just predicting words using a probabilistic model, like a better Markov Chain, basically. It was already pretty clear that this was not the case as even GPT3 could do summarization well enough, and there is no probabilistic link between the words of a text and the gist of the content, still he was saying that at the time of GPT3.5 I believe. Then he adjusted this vision when talking with Hinton publicly, saying "I don't deny there is more than just probabilistic thing...". He started saying: not longer just simply probabilistic but they can only regurgitate things they saw in the training set, often explicitly telling people that novel questions could NEVER solved by LLMs, with examples of prompts failing at the time he was saying that and so forth. Now reasoning models can solve problems they never saw, and o3 did huge progresses on ARC, so he adjusted again: for AGI we will need more. And so forth.

So at this point it does not matter what you believe about LLMs: in general, trusting LeCun's word is not a good idea. Add to this that LeCun directs an AI lab that, at the same time, has the following huge issues:

1. The weakest LLMs among the big labs with similar resources (and even among labs with smaller resources: DeepSeek).

2. They say they are focusing on open source models, but their license is among the least open of the available open-weight models.

3. LLMs, and the new AI wave in general, put CNNs, a field where LeCun did a lot of work (but which he didn't start himself), in perspective: they are now just one chapter in a book composed mostly of other techniques.

Btw, other researchers who were on LeCun's side changed sides recently, saying that now "it's different" because of CoT, which is the symbolic reasoning they were blabbing about before. But CoT is still autoregressive next-token prediction without any architectural change, so, no, they were wrong too.
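To make the "no architectural change" point concrete, here is a toy sketch (my illustration, not from the thread, with a made-up bigram table as the "model"): a chain-of-thought trace is produced by the exact same next-token sampling loop as any other completion; only the prompt and the number of generated tokens differ.

```python
import random

# Hypothetical toy "model": a bigram table mapping a token to its
# candidate next tokens. A real LLM replaces this lookup with a neural
# network, but the generation loop below is structurally the same.
BIGRAMS = {
    "<q>": ["think:", "answer:"],
    "think:": ["step", "answer:"],
    "step": ["step", "answer:"],   # "reasoning" tokens before the answer
    "answer:": ["42", "<eos>"],
    "42": ["<eos>"],
}

def generate(prompt, max_tokens=10, seed=0):
    """Plain autoregressive loop: sample one next token at a time."""
    rng = random.Random(seed)
    out = [prompt]
    while out[-1] != "<eos>" and len(out) < max_tokens:
        out.append(rng.choice(BIGRAMS.get(out[-1], ["<eos>"])))
    return out

# A "CoT" trace is just more tokens from the same loop.
print(generate("<q>"))
```

Whether the intermediate "step" tokens count as reasoning is exactly what the thread is arguing about; the loop itself is unchanged either way.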

replies(15): >>43594669 #>>43594733 #>>43594747 #>>43594812 #>>43594852 #>>43595292 #>>43595501 #>>43595519 #>>43595562 #>>43595668 #>>43596291 #>>43596309 #>>43597354 #>>43597435 #>>43614487 #
1. sorcerer-mar ◴[] No.43594733[source]
> there is no probabilistic link between the words of a text and the gist of the content

How could that possibly be true?

There’s obviously a link between “[original content] is summarized as [summarized content]”

replies(2): >>43594890 #>>43594959 #
2. DrBenCarson ◴[] No.43594890[source]
It’s not true

The idea that meaning is not impacted by language yet is somehow exclusively captured by language is just absolutely absurd

Like saying X+Y=Z but changing X or Y won’t affect Z

replies(2): >>43595435 #>>43596906 #
3. aerhardt ◴[] No.43594959[source]
Yeah, I'm lost there. If we took n bodies of text x_1 ... x_n, and k different summaries of each, y_11 ... y_kn, there are many statistical and computational treatments with which you would be able to find extremely strong correlations between the y's and the x's...
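One of the simplest such treatments can be sketched in a few lines (a toy illustration of the comment's point, not something from the thread; the example texts are made up): bag-of-words cosine similarity already shows a much stronger statistical link between a text and its summary than between the text and an unrelated passage.

```python
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def bow(s: str) -> Counter:
    """Bag-of-words vector: lowercase token counts."""
    return Counter(s.lower().split())

text = "the cat sat on the mat while the dog watched the cat"
summary = "a cat sat on a mat near a dog"
unrelated = "quarterly revenue grew despite currency headwinds"

print(cosine(bow(text), bow(summary)))    # clearly positive
print(cosine(bow(text), bow(unrelated)))  # → 0.0 (no shared words)
```

TF-IDF weighting, embeddings, or compression-based distances would make the correlation sharper, but even this crude measure contradicts "no probabilistic link".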
4. neom ◴[] No.43595435[source]
Language is a symbolic system. From an absolute or spiritual standpoint, meaning transcends pure linguistic probabilities. Language itself emerges as a limited medium for the expression of consciousness and abstract thought. Indeed, to say meaning arises purely from language (as probability alone), or to deny that language influences meaning at all, are both overly simplistic extremes.
replies(1): >>43602489 #
5. bitethecutebait ◴[] No.43596906[source]
... meaning is not always impacted by the specificity or sensitivity of language, while sometimes it is indeed exclusively captured by it, although the exclusivity is more of a time-dependent thing: one could imagine a silent, theatrical piece that captures the very same meaning, but the 'phantasiac' is probably constructing the scene(s) out of words ... but then again ... there either was, is, or will be at least one savant to whom this does not apply ... and maybe 'some' deaf and blind person, too ...
6. aerhardt ◴[] No.43602489{3}[source]
"When he to whom one speaks does not understand, and he who speaks himself does not understand, that is metaphysics." - Voltaire

Like I said in another comment, I can think of a dozen statistical and computational methods where, given a text and its synthesis, I can find a strong probabilistic link between the two.

replies(1): >>43602929 #
7. neom ◴[] No.43602929{4}[source]
"Not everything that counts can be counted, and not everything that can be counted counts." - Someone.

Statistical correlation between text and synthesis undoubtedly exists, but capturing correlation does not imply you've encapsulated meaning itself. My point is precisely that: meaning isn't confined entirely within what we can statistically measure, though it may still be illuminated by it.