
760 points | MindBreaker2605 | 1 comment
_giorgio_ ◴[] No.45898012[source]
During his years at Meta, LeCun failed to deliver anything of real value to stockholders, and may have demotivated people working on LLMs: he repeatedly said, "If you are interested in human-level AI, don’t work on LLMs."

His stance is understandable, but hardly the best way to rally a team that needs to push current tech to the limit.

The real issue: Meta is *far behind* Google, Anthropic, and OpenAI.

A radical shift is absolutely necessary, regardless of how much we sympathize with LeCun’s vision.

----

According to Grok, these were LeCun's real contributions at Meta (2013–2025):

----

- PyTorch – he championed a dynamic, open-source framework; now powers 70%+ of AI research

- LLaMA 1–3 – his open-source push; he even picked the name

- SAM / SAM 2 – born from his "segment anything like a baby" vision

- JEPA (I-JEPA, V-JEPA) – his personal bet on non-autoregressive world models

----

Everything else (Movie Gen, LLaMA 4, the Meta AI Assistant) either came after he left or was outside his scope.

replies(3): >>45898070 #>>45898365 #>>45899392 #
1. rhubarbtree ◴[] No.45899392[source]
I think there’s something to be said for keeping up in the LLM space even if you don’t think it’s the path to AGI.

Skills may transfer to other research areas, lessons may be learnt, and closing the feedback loop with real usage provides more data and more opportunities for learning. It also creates a culture where bullshit isn’t possible, because the thing has to actually work. Academic research often ends up serving no one but the researchers, because there is little or no incentive to produce real knowledge.