
760 points MindBreaker2605 | 4 comments
sebmellen No.45897467
Making LeCun report to Wang was the most boneheaded move imaginable. But… I suppose Zuckerberg knows what he wants, which is AI slopware and not truly groundbreaking foundation models.
xuancanh No.45897885
In industry research, someone in a chief position like LeCun should know how to balance long-term research with short-term projects. However, for whatever reason, he consistently shows hostility toward LLMs and engineering projects, even though Llama and PyTorch are two of the most influential projects from Meta AI. His attitude doesn't really match what is expected of a chief position at a product company like Facebook. When Llama 4 got criticized, he distanced himself from the project, stating that he only leads FAIR and that the project falls under a different organization. That kind of attitude doesn't seem suitable for the face of AI at the company, so it's no surprise that Zuck tried to demote him.
throwaw12 No.45897942
I would pose the question differently: under his leadership, did Meta achieve good outcomes?

If the answer is yes, then it's better to keep him: he has already proved himself, and you can win in the long term. With Meta's pockets, you can always create a separate department specifically for short-term projects.

If the answer is no, then there's nothing to discuss here.

HarHarVeryFunny No.45899393
LeCun was always part of FAIR, doing research; he was not part of the LLM/product group, which reported to someone else.
1. rockinghigh No.45901322
Wasn't the original LLaMA developed by FAIR Paris?
2. HarHarVeryFunny No.45902034
I hadn't heard that, but he was heavily involved in a cancelled project called Galactica, an LLM for scientific knowledge.
3. fooker No.45903099
Yeah, that stuff generated embarrassingly wrong scientific 'facts' and citations.

That kind of hallucination is somewhat acceptable in something marketed as a chatbot, but far less so in an assistant meant to help with scientific knowledge and research.

4. baobabKoodaa No.45905233
I thought it was weird at the time how much hate Galactica got for its hallucinations compared to the hallucinations of competing models. I get your point, and it partially explains things, but it's not a fully satisfying explanation.