
760 points MindBreaker2605

sebmellen:
Making LeCun report to Wang was the most boneheaded move imaginable. But… I suppose Zuckerberg knows what he wants, which is AI slopware and not truly groundbreaking foundation models.

xuancanh:
In industry research, someone in a chief position like LeCun should know how to balance long-term research with short-term projects. However, for whatever reason, he consistently shows hostility toward LLMs and engineering projects, even though Llama and PyTorch are two of the most influential projects from Meta AI. His attitude doesn't really match what is expected of a chief position at a product company like Facebook. When Llama 4 got criticized, he distanced himself from the project, stating that he only leads FAIR and that the project falls under a different organization. That kind of attitude doesn't seem suitable for the face of AI at the company. It's no surprise that Zuck tried to demote him.

blutoot:
These are the types who want academic freedom in a cut-throat industry setup, yet never fit into academia because their profiles and growth ambitions far exceed what an academic research lab can afford (barring some marquee names). It's an unfortunate paradox.

sigbottle:
Maybe it's time for Bell Labs 2?

I guess everyone is racing towards AGI in a few years or whatever, so it's kind of impossible to cultivate that environment.

gtech1:
This sounds crazy. We don't even know, and can't define, what human intelligence is or how it works, but we're trying to replicate it with AGI?

Obscurity4340:
If an LLM can pass a bar exam, isn't that at least a decent proof of concept or working model?

skeeter2020:
Or does this just prove lawyers are artificially intelligent?

Yes, a glib response, but think about it: we define an intelligence test for humans, which is by definition an artificial construct. If we then get a computer to do well on that test, we haven't proved it's on par with human intelligence, only that both meet some of the markers the test makers use as rough proxies for human intelligence. Maybe this helps us signal or judge whether AI is a useful tool for specific problems, but it doesn't mean AGI.