Previously, he very publicly and strongly said:
a) LLMs can't do math. They can fool us with poetry, but that's subjective; they can't do objective math.
b) They can't plan.
c) By the very nature of the autoregressive architecture, errors compound. The longer the generation, the higher the error rate, so at long contexts the answers become utter garbage (a rough sketch of that argument follows the list).
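To make c) concrete: the argument, as I understand it, assumes each token has some independent chance of being wrong, so the probability of a fully clean answer shrinks exponentially with length. The numbers below are mine, purely illustrative:

    # Rough sketch of the compounding-error argument (illustrative numbers, not his):
    # if each generated token is wrong with independent probability e,
    # the chance an n-token answer contains no error is (1 - e) ** n,
    # which decays exponentially with length.
    for e in (0.01, 0.001):
        for n in (100, 1_000, 10_000):
            print(f"per-token error {e}, length {n}: P(clean output) = {(1 - e) ** n:.4f}")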
All of these were proven wrong 1-2 years later: "a" at the core (gold at the IMO), "b" with software glue, and "c" with better training regimes.
I'm not interested in the will-it-won't-it debates about AGI; I'm happy with what we have now, and I think these things are already good enough for several use cases. But it's important to note when people making strong claims get them wrong. Again, I think I get where he's coming from, but public stances aren't the place to get into deep research minutiae.
That being said, I hope he finds whatever it is he's looking for, and I wish him success in his endeavours. Between him, Fei-Fei Li, and Ilya, something cool has to come out of the small shops. Heck, I'm even rooting for the "let's commoditise LoRA training" angle that Mira's startup seems to be going for.
b) Still true: next-token prediction isn’t planning.
c) Still true: error accumulation is mitigated, not eliminated. Long-context quality still relies on retrieval, checks, and verifiers.
Yann’s claims were about LLMs as LLMs. With tooling, you can work around limits, but the core point stands.
Please learn the basics before you discuss what LLMs can and can't do.
Maybe programming is mostly pattern matching, but modern math is built on theory and proofs, right?
RL training amounts to pattern matching.
How does an LLM decode Base64? By running the decode algorithm? No - by predictive pattern matching.
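For contrast, here's roughly what actually executing the decode algorithm would mean (a minimal Python sketch I'm adding for illustration; standard alphabet, no error handling):

    # Base64 decode as an explicit bit-manipulation procedure,
    # as opposed to predicting likely output text.
    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"

    def b64_decode(s: str) -> bytes:
        s = s.rstrip("=")
        # each symbol encodes 6 bits; concatenate, then re-slice into 8-bit bytes
        bits = "".join(format(ALPHABET.index(c), "06b") for c in s)
        return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits) - 7, 8))

    print(b64_decode("aGVsbG8="))  # b'hello'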
An LLM isn't predicting what a person thinks - it's predicting what a person does.