
760 points MindBreaker2605 | 2 comments
sebmellen (No.45897467)
Making LeCun report to Wang was the most boneheaded move imaginable. But… I suppose Zuckerberg knows what he wants, which is AI slopware and not truly groundbreaking foundation models.
xuancanh (No.45897885)
In industry research, someone in a chief position like LeCun should know how to balance long-term research with short-term projects. However, for whatever reason, he consistently shows hostility toward LLMs and engineering projects, even though Llama and PyTorch are two of the most influential projects from Meta AI. His attitude doesn't really match what is expected from a chief position at a product company like Facebook. When Llama 4 was criticized, he distanced himself from the project, stating that he only leads FAIR and that the project falls under a different organization. That kind of attitude doesn't seem suitable for the face of AI at the company. It's no surprise that Zuck tried to demote him.
blutoot (No.45898661)
These are the types who want academic freedom in a cut-throat industry setting, yet never fit into academia either, because their profiles and growth ambitions far exceed what an academic research lab can afford (barring some marquee names). It's an unfortunate paradox.
sigbottle (No.45898951)
Maybe it's time for Bell Labs 2?

I guess everyone is racing toward AGI in the next few years or whatever, so it's kind of impossible to cultivate that environment.

gtech1 (No.45899663)
This sounds crazy. We can't even define what human intelligence is or how it works, but we're trying to replicate it with AGI?
meindnoch (No.45900436)
Hi there! :) Just wanted to gently flag that one of the terms (beginning with the letter "r") in your comment isn't really aligned with the kind of inclusive language we try to encourage across the community. Totally understand it was likely unintentional - happens to all of us! Going forward, it'd be great to keep things phrased in a way that ensures everyone feels welcome and respected. Thanks so much for taking the time to share your thoughts here!
gtech1 (No.45900822)
My apologies, I have edited my comment.