    765 points MindBreaker2605 | 27 comments
    sebmellen No.45897467
    Making LeCun report to Wang was the most boneheaded move imaginable. But… I suppose Zuckerberg knows what he wants, which is AI slopware and not truly groundbreaking foundation models.
    replies(20): >>45897481 #>>45897498 #>>45897518 #>>45897885 #>>45897970 #>>45897978 #>>45898040 #>>45898053 #>>45898092 #>>45898108 #>>45898186 #>>45898539 #>>45898651 #>>45898727 #>>45899160 #>>45899375 #>>45900884 #>>45900885 #>>45901421 #>>45903451 #
    xuancanh No.45897885
    In industry research, someone in a chief position like LeCun should know how to balance long-term research with short-term projects. However, for whatever reason, he consistently shows hostility toward LLMs and engineering projects, even though Llama and PyTorch are two of the most influential projects from Meta AI. His attitude doesn't really match what is expected of a chief position at a product company like Facebook. When Llama 4 got criticized, he distanced himself from the project, stating that he only leads FAIR and that the project falls under a different organization. That kind of attitude doesn't seem suitable for the face of AI at the company. It's no surprise that Zuck tried to demote him.
    replies(13): >>45897942 #>>45898142 #>>45898331 #>>45898661 #>>45898893 #>>45899157 #>>45899354 #>>45900094 #>>45900130 #>>45900230 #>>45901443 #>>45901631 #>>45902275 #
    1. throwaw12 No.45897942
    I would pose the question differently: under his leadership, did Meta achieve a good outcome?

    If the answer is yes, then it's better to keep him, because he has already proved himself and you can win in the long term. With Meta's pockets, you can always create a new department specifically for short-term projects.

    If the answer is no, then there's nothing to discuss here.

    replies(5): >>45897962 #>>45898150 #>>45898191 #>>45899393 #>>45900070 #
    2. rw2 No.45897962
    I believe the fact that Chinese models are beating the crap out of Llama means it's a huge no.
    replies(1): >>45898163 #
    3. xuancanh No.45898150
    Meta did exactly that: it kept him but reduced his scope. Did the broader research community benefit from his research? Absolutely. But did Meta achieve a good outcome? Probably not.

    If you follow LeCun on social media, you can see that the way FAIR’s results are assessed is very narrow-minded and still follows the academic mindset. He mentioned that his research is evaluated by: "Research evaluation is a difficult task because the product impact may occur years (sometimes decades) after the work. For that reason, evaluation must often rely on the collective opinion of the research community through proxies such as publications, citations, invited talks, awards, etc."

    But as an industry researcher, he should know how his research fits with the company vision and be able to assess that easily. If the company's vision is to be the leader in AI, then as of now, he seems to have failed that objective, even though he has been at Meta for more than 10 years.

    replies(2): >>45898292 #>>45898602 #
    4. amelius No.45898163
    Why? The Chinese are very capable. Most DL papers have at least one Chinese name on them. That doesn't mean the papers are Chinese, but it's telling.
    replies(2): >>45898196 #>>45898335 #
    5. UrineSqueegee No.45898196{3}
    Is an American model Chinese because Chinese people were on the team?
    replies(2): >>45898372 #>>45898389 #
    6. nsonha No.45898292
    Also, he always sounds like "I know this will not work." Dude, are you a researcher? You're supposed to experiment and follow the results. That's what separates you from oracles and freaking philosophers or whatever.
    replies(4): >>45898333 #>>45898783 #>>45899067 #>>45899161 #
    7. yawnxyz No.45898333{3}
    He probably predicted the asymptote everyone is approaching right now.
    replies(1): >>45899048 #
    8. rob_c No.45898335{3}
    Most papers are also written in the same language. What's your point?
    9. rat9988 No.45898372{4}
    What are these Chinese labs made of?
    replies(2): >>45898419 #>>45898574 #
    10. danielbln No.45898389{4}
    There is no need for that tone here.
    11. 4ggr0 No.45898419{5}
    500 remote Indian workers (/s)
    12. No.45898574{5}
    13. No.45898602
    14. uoaei No.45898783{3}
    He's speaking to the entire feedforward Transformer-based paradigm. He sees little point in continuing to try to squeeze more blood out of that stone, and would rather move on to more appropriate ways of modeling ontologies per se, rather than the crude-for-what-we-use-them-for embedding-based methods that are popular today.

    I really resonate with his view due to my background in physics and information theory. I for one welcome his new experimentation in other realms while so many still hack away at their LLMs in pursuit of SOTA benchmarks.

    replies(1): >>45898917 #
    15. fhd2 No.45898917{4}
    If the LLM hype doesn't cool down fast, we're probably looking at another AI winter. It appears to me that he's just trying to ensure he'll have funding for chasing the global maximum going forward.
    replies(1): >>45899138 #
    16. brazukadev No.45899048{4}
    So did I, after trying Llama/Meta AI.
    17. lukan No.45899067{3}
    Philosophers are usually more aware of what they don't know than you seem to give them credit for. (And oracles are famously vague, too.)
    18. re-thc No.45899138{5}
    > If the LLM hype doesn't cool down fast, we're probably looking at another AI winter.

    Is the real bubble ignorance? Maybe you'll cool down, but will the rest of the world? There will just be more DeepSeeks and more advances until the US loses its standing.

    replies(1): >>45904742 #
    19. teleforce No.45899161{3}
    Do you know that all formally trained researchers have a Doctor of Philosophy (PhD) to their name? [1]

    [1] Doctor of Philosophy: https://en.wikipedia.org/wiki/Doctor_of_Philosophy

    replies(1): >>45900115 #
    20. HarHarVeryFunny No.45899393
    LeCun was always part of FAIR, doing research; he was not part of the LLM/product group, which reported to someone else.
    replies(1): >>45901322 #
    21. anotherd1p No.45900070
    Then we should ask: will Meta come close enough to fulfilling the promises it made, or will it just keep achieving good-enough outcomes?
    22. anotherd1p No.45900115{4}
    If academia is in question, then so are its titles. When I see "PhD", I read either "we decided that he was at least good enough for the cause" or "he fulfilled the criteria."
    23. rockinghigh No.45901322
    Wasn't the original LLaMA developed by FAIR Paris?
    replies(1): >>45902034 #
    24. HarHarVeryFunny No.45902034{3}
    I hadn't heard that, but he was heavily involved in a cancelled project called Galactica, an LLM for scientific knowledge.
    replies(1): >>45903099 #
    25. fooker No.45903099{4}
    Yeah, that thing generated embarrassingly wrong scientific "facts" and citations.

    That kind of hallucination is somewhat acceptable for something marketed as a chatbot, less so for an assistant helping you with scientific knowledge and research.

    replies(1): >>45905233 #
    26. uoaei No.45904742{6}
    How is it a foregone conclusion that squeezing the stone will continue to produce blood?
    27. baobabKoodaa No.45905233{5}
    I thought it was weird at the time how much hate Galactica got for its hallucinations compared to the hallucinations of competing models. I get your point, and it partially explains things. But it's not a fully satisfying explanation.