
124 points by alphadelphi | 1 comment
antirez No.43594641
As LLMs do things thought to be impossible before, LeCun adjusts his statements about LLMs, but at the same time his credibility goes lower and lower. He started by saying that LLMs were just predicting words using a probabilistic model, basically like a better Markov chain. It was already pretty clear that this was not the case, since even GPT-3 could do summarization well enough, and there is no probabilistic link between the words of a text and the gist of the content; still, he was saying that around the time of GPT-3.5, I believe. Then he adjusted this view when talking with Hinton publicly, saying "I don't deny there is more than just probabilistic thing...". Next it became: no longer just probabilistic, but they can only regurgitate things they saw in the training set, often with him explicitly telling people that novel questions could NEVER be solved by LLMs, complete with examples of prompts that failed at the time he was saying it, and so forth. Now reasoning models can solve problems they never saw, and o3 made huge progress on ARC, so he adjusted again: for AGI we will need more. And so forth.

So at this point it does not matter what you believe about LLMs: in general, trusting LeCun's word is not a good idea. Add to this that LeCun is directing an AI lab that at the same time has the following huge issues:

1. The weakest LLM among the big labs with similar resources (and even compared to labs with smaller resources: DeepSeek).

2. They say they are focusing on open source models, but their license is among the least open of the available open-weight models.

3. LLMs, and the new AI wave in general, put CNNs, a field where LeCun did a lot of work (but which he didn't start himself), much more in perspective: now it's just one chapter in a book composed mostly of other techniques.

Btw, other researchers who were on LeCun's side have changed sides recently, saying that now it "is different" because of CoT, which is supposedly the symbolic reasoning they were babbling about before. But CoT is still autoregressive next-token prediction without any architectural change, so, no, they were wrong too.
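
To make the "no architectural change" point concrete, here is a rough sketch of a plain greedy decode loop (HuggingFace-style; "gpt2" is purely a stand-in here, and a model that small won't actually reason, the mechanism is the only point). The CoT prompt differs from the plain one only in the tokens being conditioned on; the loop itself never changes:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")           # stand-in model, not any lab's system
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    def generate(prompt: str, max_new_tokens: int = 40) -> str:
        ids = tok(prompt, return_tensors="pt").input_ids
        with torch.no_grad():
            for _ in range(max_new_tokens):
                logits = model(ids).logits[:, -1, :]                  # same forward pass every step
                next_id = torch.argmax(logits, dim=-1, keepdim=True)  # greedy next-token pick
                ids = torch.cat([ids, next_id], dim=-1)               # append and go again
        return tok.decode(ids[0])

    print(generate("Q: What is 17 * 24? A:"))
    print(generate("Q: What is 17 * 24? Let's think step by step. A:"))  # "CoT" = just more tokens

Whatever intermediate "reasoning" the second prompt elicits is produced token by token by exactly the same loop as the first.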

gcr No.43594669
Why is changing one’s mind when confronted with new evidence a negative signifier of reputation for you?
antirez No.43594696
Because there was plenty of evidence that the statements were either incorrect, or not based on enough information, at the time they were made. Being wrong because of personal biases, and then not clearly stating you were wrong when new evidence appeared, is not a trait of a good scientist. For instance: the strong summarization abilities alone, without any further information, were already enough to seriously doubt the stochastic parrot mental model.
jxjnskkzxxhx No.43594765
I don't see the contradiction between "stochastic parrot" and "strong summarisation abilities".

Where I'm skeptical of LLM skepticism is that people use the term "stochastic parrot" disparagingly, as if they're not impressed. LLMs are stochastic parrots in the sense that they probabilistically guess sequences of things, but isn't it interesting how far that takes you already? I'd never have guessed. Fundamentally I question the intellectual honesty of anyone who pretends they're not surprised by this.

fragmede No.43595232
There are some that would describe LLMs as next word predictors, akin to having a bag of magnetic words, where you put your hand in, rummage around, and just pick a next word and put it on the fridge and eventually form sentences. It's "just" predicting the next word, so as an analogy for how they work, that seems reasonable. The thing is, when that bag consists of a dozen bags-in-bags, like Russian nesting dolls, and the "bag" has a hundred million words in it, the analogy stops being a useful description. It's like describing humans as multicellular organisms. It's an accurate description of what a human is, but somewhere between a simple hydra with 100,000 cells and a human with some 37 trillion cells, intelligence arises. Describing humans as merely multicellular organisms and using hydra as your point of reference isn't going to get you very far.
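
For what it's worth, the crudest version of that bag-of-words picture is a bigram Markov chain, something like the toy below (my own illustration; the corpus is made up). Real LLMs replace the lookup table with a learned network conditioned on the whole context, which is exactly where the analogy stops being useful:

    import random
    from collections import Counter, defaultdict

    def train_bigrams(text):
        # count how often each word follows each other word
        words = text.split()
        table = defaultdict(Counter)
        for prev, nxt in zip(words, words[1:]):
            table[prev][nxt] += 1
        return table

    def generate(table, start, length=10):
        out = [start]
        for _ in range(length):
            followers = table.get(out[-1])
            if not followers:
                break
            choices, counts = zip(*followers.items())
            # "rummage around in the bag", weighted by how often each word followed
            out.append(random.choices(choices, weights=counts)[0])
        return " ".join(out)

    corpus = "the cat sat on the mat and the dog sat on the rug"
    print(generate(train_bigrams(corpus), "the"))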