lm28469 No.45897524
But wait, they're just about to get AGI, why would he leave???
replies(1): >>45897571 #
killerstorm No.45897571
LeCun always said that LLMs do not lead to AGI.
replies(2): >>45897613 #>>45897683 #
NitpickLawyer No.45897683
He also said other things about LLMs that turned out to be either wrong or easily bypassed with some glue. While I understand where he comes from, and that his stance is purely research-y and theory-driven, at the end of the day his positions were wrong.

Previously, he very publicly and strongly said:

a) LLMs can't do math. They can trick us in poetry, but that's subjective; they can't do objective math.

b) they can't plan

c) by the very nature of the autoregressive architecture, errors compound: the longer the generation, the higher the error rate, so at long contexts the answers become utter garbage.
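To make "c" concrete: if each token independently goes wrong with probability eps, the chance an n-token generation stays fully on track is (1-eps)^n, which collapses fast. Toy numbers of mine, not his:

    # toy illustration of the compounding-error argument: assume an
    # independent per-token error rate eps (real models aren't this simple)
    eps = 0.01
    for n in (10, 100, 1000, 10000):
        print(n, (1 - eps) ** n)   # ~0.90, ~0.37, ~4e-05, ~2e-44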

All of these were proven wrong 1-2 years later: "a" at the core (gold at the IMO), "b" with software glue, and "c" with better training regimes.

I'm not interested in the will-it-won't-it debates about AGI; I'm happy with what we have now, and I think these things are good enough, right now, for several use cases. But it's important to note when people making strong claims get them wrong. Again, I think I get where he's coming from, but public stances aren't the place to get into the deep research minutiae.

That being said, I hope he gets to find whatever it is that he's looking for, and I wish him success in his endeavours. Between him, Fei Fei Li and Ilya, something cool has to come out of the small shops. Heck, I'm even rooting for the "let's commoditise LoRA training" angle that Mira's startup seems to be going for.

replies(3): >>45897933 #>>45898169 #>>45905642 #
tonii141 No.45898169
a) Still true: vanilla LLMs can't do math; they pattern-match unless you bolt on tools (rough sketch after this list).

b) Still true: next-token prediction isn’t planning.

c) Still true: error accumulation is mitigated, not eliminated. Long-context quality still relies on retrieval, checks, and verifiers.
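By "bolt on tools" I mean roughly this kind of glue: the model emits a call, outside code executes it. The function names here are made up for illustration, not any real API:

    # rough sketch of bolting a calculator onto an LLM; llm_generate is a
    # placeholder for whatever model API you use, not a real library call
    import re

    def solve(question, llm_generate):
        draft = llm_generate(f"Answer this, writing CALC(expr) for any arithmetic: {question}")
        # glue code: find the tool calls, evaluate them, splice results back in
        for expr in re.findall(r"CALC\((.*?)\)", draft):
            value = eval(expr, {"__builtins__": {}})  # toy evaluator: fine for 2+3*4, unsafe in general
            draft = draft.replace(f"CALC({expr})", str(value))
        return draft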

Yann’s claims were about LLMs as LLMs. With tooling, you can work around limits, but the core point stands.

replies(2): >>45898248 #>>45898683 #
NitpickLawyer No.45898248
a) No, Gemini 2.5 was shown to "win" gold without tools: https://arxiv.org/html/2507.15855v1

b) reductionism isn't worth our time. Planning works in the real world, today (try any agentic tool like cc/codex/whatever; the basic loop is sketched below). And if you're set on the purist view, there's mounting evidence from Anthropic that there is planning in the core of an LLM.

c) so ... not true? Long context works today.
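On "b", this is roughly the loop those agentic tools run: plan, execute a step, replan. A toy sketch with placeholder functions, not cc/codex internals:

    # toy plan-then-execute loop, the "software glue" kind of planning;
    # llm and run_step are placeholders, not any real tool's API
    def agent(goal, llm, run_step, max_steps=10):
        plan = llm(f"Break this goal into steps, one per line: {goal}").splitlines()
        history = []
        while plan and len(history) < max_steps:
            step = plan.pop(0)
            result = run_step(step)              # e.g. run a command, edit a file
            history.append((step, result))
            # let the model revise the remaining plan given what just happened
            plan = llm(f"Goal: {goal}\nDone: {history}\nRemaining steps, one per line:").splitlines()
        return history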

This is simply moving goalposts and nothing more. X can't do Y -> well, here they are doing Y -> well, not like that.

replies(1): >>45898433 #
tonii141 No.45898433
a) That "no-tools" win depends on prompt orchestration which can still be categorized as tooling.

b) Next-token training doesn't magically grant inner long-horizon planners.

c) Long context ≠ robust at any length. Degradation with scale remains.

Not moving goalposts, just keeping terms precise.

replies(1): >>45899019 #
ACCount37 No.45899019
My man, you're literally moving all the goalposts as we speak.

It's not just "long context" - you demand "infinite context" and "any length" now. Even humans don't have that. "No tools" is no longer enough - what, do you demand "no prompts" now too? Having LLMs decompose tasks and prompt each other the way humans do is suddenly a no-no?

replies(1): >>45899469 #
tonii141 No.45899469
I'm not demanding anything, I'm pointing out that performance tends to degrade as context scales, which follows from the autoregressive nature of current LLM architectures.

In that sense, Yann was right.

replies(1): >>45901699 #
snapcaster No.45901699
Not sure if you're just someone who doesn't ever want to lose an argument, or if you're actually coping this hard.