765 points MindBreaker2605 | 43 comments
sebmellen ◴[] No.45897467[source]
Making LeCun report to Wang was the most boneheaded move imaginable. But… I suppose Zuckerberg knows what he wants, which is AI slopware and not truly groundbreaking foundation models.
replies(20): >>45897481 #>>45897498 #>>45897518 #>>45897885 #>>45897970 #>>45897978 #>>45898040 #>>45898053 #>>45898092 #>>45898108 #>>45898186 #>>45898539 #>>45898651 #>>45898727 #>>45899160 #>>45899375 #>>45900884 #>>45900885 #>>45901421 #>>45903451 #
xuancanh ◴[] No.45897885[source]
In industry research, someone in a chief position like LeCun should know how to balance long-term research with short-term projects. However, for whatever reason, he consistently shows hostility toward LLMs and engineering projects, even though Llama and PyTorch are two of the most influential projects from Meta AI. His attitude doesn’t really match what is expected from a Chief position at a product company like Facebook. When Llama 4 got criticized, he distanced himself from the project, stating that he only leads FAIR and that the project falls under a different organization. That kind of attitude doesn’t seem suitable for the face of AI at the company. It's not a surprise that Zuck tried to demote him.
replies(13): >>45897942 #>>45898142 #>>45898331 #>>45898661 #>>45898893 #>>45899157 #>>45899354 #>>45900094 #>>45900130 #>>45900230 #>>45901443 #>>45901631 #>>45902275 #
blutoot ◴[] No.45898661[source]
These are the types that want academic freedom in a cut-throat industry setup and conversely never fit into academia because their profiles and growth ambitions far exceed what an academic research lab can afford (barring some marquee names). It's an unfortunate paradox.
replies(3): >>45898951 #>>45899099 #>>45902308 #
1. sigbottle ◴[] No.45898951[source]
Maybe it's time for Bell Labs 2?

I guess everyone is racing towards AGI in a few years or whatever so it's kind of impossible to cultivate that environment.

replies(13): >>45899122 #>>45899204 #>>45899373 #>>45899504 #>>45899663 #>>45899866 #>>45900147 #>>45900934 #>>45900995 #>>45901066 #>>45902188 #>>45902731 #>>45905111 #
2. belter ◴[] No.45899122[source]
> I guess everyone is racing towards AGI in a few years

A pipe dream sustaining the biggest stock market bubble in history. Smart investors are jumping to the next bubble already...Quantum...

replies(1): >>45899178 #
3. re-thc ◴[] No.45899178[source]
> A pipe dream sustaining the biggest stock market bubble in history

This is why we're losing innovation.

Look at electric cars, batteries, solar panels, rare earths, and many more. Bubble or struggle for survival? Right, because if the US has no AI the world will have no AI? That's the real bubble - being stuck in an ancient world view.

Meta's stock has already tanked for "over" investing in AI. Bubble, where?

replies(1): >>45899194 #
4. belter ◴[] No.45899194{3}[source]
2 Trillion dollars in Capex to get code generators with hallucinations, that run at a loss, and you ask where the bubble is?
replies(1): >>45899264 #
5. ryukoposting ◴[] No.45899204[source]
The Bell Labs we look back on was only the result of government intervention in the telecom monopoly. The 1956 consent decree forced Bell to license thousands of its patents, royalty free, to anyone who wanted to use them. Any patent not listed in the consent decree was to be licensed at "reasonable and nondiscriminatory rates."

The US government basically forced AT&T to use revenue from its monopoly to do fundamental research for the public good. Could the government do the same thing to our modern megacorps? Absolutely! Will it? I doubt it.

https://www.nytimes.com/1956/01/25/archives/att-settles-anti...

replies(1): >>45899620 #
6. re-thc ◴[] No.45899264{4}[source]
> 2 Trillion dollars in Capex to get code generators with hallucinations

You assume that's the only use of it.

And are people not using these code generators?

Is this an issue with a lost generation that forgot what Capex is? We've moved from Capex to Opex and now the notion is lost, is it? You can hire an army of software developers but can't build hardware.

Is it better when everyone buys DeepSeek or a non-US version? Well then you don't need to spend Capex but you won't have revenue either.

replies(1): >>45899333 #
7. littlestymaar ◴[] No.45899333{5}[source]
DeepSeek somehow didn't need $2T to happen.
replies(3): >>45899379 #>>45900047 #>>45900088 #
8. HarHarVeryFunny ◴[] No.45899373[source]
It seems DeepMind is the closest thing to a well-funded blue-sky AI research group, even after the merger with Google Brain and a shift toward more of a product focus.
9. re-thc ◴[] No.45899379{6}[source]
Because you know how much they spent.

And that $2T you're referring to includes infrastructure like energy, data centers, servers, and many other things. DeepSeek rents from others. Someone is paying.

10. aatd86 ◴[] No.45899620[source]
There used to be Google X. Not sure at what scale it was. But if any state/central bank were clever they would subsidize this. That's a better trickle-down strategy. Until we get to AGI and all new discoveries are autonomously led by AI, that is :p
replies(1): >>45904500 #
11. sllabres ◴[] No.45899628[source]
If you are (obviously) interested in the matter you might find one of the Bell Labs articles discussed on HN:

"Why Bell Labs Worked" [1]

"The Influence of Bell Labs" [2]

"Bringing back the golden days of Bell Labs" [3]

"Remembering Bell Labs as legendary idea factory prepares to leave N.J. home" [4] or

"Innovation and the Bell Labs Miracle" [5]

interesting too.

[1] https://news.ycombinator.com/item?id=43957010
[2] https://news.ycombinator.com/item?id=42275944
[3] https://news.ycombinator.com/item?id=32352584
[4] https://news.ycombinator.com/item?id=39077867
[5] https://news.ycombinator.com/item?id=3635489

replies(1): >>45901536 #
12. gtech1 ◴[] No.45899663[source]
This sounds crazy. We don't even know/can't define what human intelligence is or how it works, but we're trying to replicate it with AGI?
replies(5): >>45899845 #>>45899912 #>>45899913 #>>45899981 #>>45900436 #
13. Obscurity4340 ◴[] No.45899845[source]
If an LLM can pass a bar exam, isn't that at least a decent proof of concept or working model?
replies(3): >>45900030 #>>45900196 #>>45900397 #
14. diego_sandoval ◴[] No.45899866[source]
The fact that people invest in the architecture that keeps getting increasingly better results is a feature, not a bug.

If LLMs actually hit a plateau, then investment will flow towards other architectures.

replies(1): >>45900215 #
15. cantor_S_drug ◴[] No.45899912[source]
Intelligence and human health can't be defined neatly. They are what we call suitcase words. If there exists a physiological tradeoff in medical research between, say, living till 500 years and being able to lift 1000 kg in youth, those are different dimensions / directions across which we can make progress. The same happens for intelligence. I think we are on the right track.
16. afthonos ◴[] No.45899913[source]
Man, why did no one tell the people who invented bronze that they weren’t allowed to do it until they had a correct definition for metals and understood how they worked? I guess the person saying something can’t be done should stay out of the way of the people doing it.
replies(2): >>45899989 #>>45900146 #
17. anotherd1p ◴[] No.45899954[source]
I always take a bird's eye kind of view on things like that, because however close I get, it always loops around to make no sense.

> is massively monopolistic and have unbounded discretionary research budget

That is the case for most megacorps, if you look at all the financial instruments.

Modern monopolies are not equal to single-corporation domination. Modern monopolies are portfolios that do business using the same methods and strategies.

The problem is that private interests strive mostly for control, not money or progress. If they have to spend a lot of money to stay in control of (their (share of the)) segments, they will do that, which is why stuff like the current graph of investments of, by, and for AI companies and industries works.

A modern equivalent and "breadth" of a Bell Labs (et al.) kind of R&D speed could not be controlled and would 100% result in actual Artificial Intelligence vs all those white labelababbebel (sry) AI toys we get now.

Post-WWI and WWII "business psychology" has built a culture that cannot thrive in a free world (free as in undisturbed and left to all devices available) for a variety of reasons, but mostly because of elements with a medieval/dark-age kind of aggressive tendency to come to power and maintain it that way.

In other words: not having a Bell Labs kind of setup anymore ensures that the variety of approaches taken at large scales, aka industry-wide or systemic, remains narrow enough.

18. anotherd1p ◴[] No.45899981[source]
Stretching the infinite game is exactly that, yes. "This is the way."
19. gtech1 ◴[] No.45899989{3}[source]
I'm not sure what 'inventing bronze' is supposed to be. 'Inventing' AGI is pretty much equivalent to creating new life, from scratch. And we don't have any idea how to do that either, or how life came to be.
20. anotherd1p ◴[] No.45900030{3}[source]
I love this application of AI the most but as many have stated elsewhere: mathematical precision in law won't work, or rather, won't be tolerated.
21. anotherd1p ◴[] No.45900047{6}[source]
All that led up to DeepSeek needed more. Don't forget where it all comes from.
22. matt3D ◴[] No.45900088{6}[source]
I think the argument can be made that DeepSeek is a state-sponsored needle looking to pop another state's bubble.

If DeepSeek is free it undermines the value of LLMs, so the value of these US companies is mainly speculation/FOMO over AGI.

replies(1): >>45900589 #
23. skeeter2020 ◴[] No.45900146{3}[source]
>> I guess the person saying something can’t be done should stay out of the way of the people doing it.

I'll happily step out of the way once someone simply tells me what it is you're trying to accomplish. Until you can actually define it, you can't do "it".

replies(2): >>45900270 #>>45900419 #
24. blueboo ◴[] No.45900147[source]
We call it “legacy DeepMind”
25. skeeter2020 ◴[] No.45900196{3}[source]
Or does this just prove lawyers are artificially intelligent?

Yes, a glib response, but think about it: we define an intelligence test for humans, which by definition is an artificial construct. If we then get a computer to do well on the test, we haven't proved it's on par with human intelligence, just that both meet some of the markers that the test makers are using as rough proxies for human intelligence. Maybe this helps signal or judge whether AI is a useful tool for specific problems, but it doesn't mean AGI.

26. esafak ◴[] No.45900215[source]
At which point companies that had the foresight to investigate those architectures earlier on will have the lead.
27. gtech1 ◴[] No.45900270{4}[source]
no bro, others have done 'it' without even knowing what they were doing!
28. staticman2 ◴[] No.45900397{3}[source]
I don't think the bar exam is scientifically designed to measure intelligence, so that was an odd example. Citing the bar exam is like saying it passes a "Game of Thrones trivia" exam, so it must be intelligent.

As for IQ tests and the like, to the extent they are "scientific," they are designed based on empirical observations of humans. They are not designed to measure the intelligence of a statistical system containing a compressed version of the internet.

29. afthonos ◴[] No.45900419{4}[source]
The big tech companies are trying to make machines that replace all human labor. They call it artificial intelligence. Feel free to argue about definitions.
replies(1): >>45900868 #
30. meindnoch ◴[] No.45900436[source]
Hi there! :) Just wanted to gently flag that one of the terms (beginning with the letter "r") in your comment isn't really aligned with the kind of inclusive language we try to encourage across the community. Totally understand it was likely unintentional - happens to all of us! Going forward, it'd be great to keep things phrased in a way that ensures everyone feels welcome and respected. Thanks so much for taking the time to share your thoughts here!
replies(1): >>45900822 #
31. re-thc ◴[] No.45900589{7}[source]
> the argument can be made that DeepSeek is a state-sponsored needle looking to pop another state's bubble

Who says they don't make money? Same with open source software that offers a hosted version.

> If DeepSeek is free it undermines the value of LLMs, so the value of these US companies is mainly speculation/FOMO over AGI

Freemium, open source, and other models all exist. Do they undermine the value of, e.g., Salesforce?

32. gtech1 ◴[] No.45900822{3}[source]
My apologies, I have edited my comment.
33. gtech1 ◴[] No.45900868{5}[source]
No no, let's define labor (labour?) first.
replies(1): >>45905921 #
34. ambicapter ◴[] No.45900934[source]
Why would Bell Labs be a good fit? It was famous for embedding engineers with the scientists to direct research in a more results-oriented fashion.
35. musebox35 ◴[] No.45900995[source]
Google DeepMind is the closest lab to that idea because Google is the only entity that is big enough to get close to the scale of AT&T. I was skeptical that the DeepMind and Google Brain merger would be successful, but it seems to have worked surprisingly well. They are killing it with LLMs and image-editing models. They are also backing the fastest-growing cloud business in the world and collecting Nobel prizes along the way.
36. ximeng ◴[] No.45901066[source]
https://www.startuphub.ai/ai-news/ai-research/2025/sam-altma...

Like the new spin-out Episteme from OpenAI?

37. mysfi ◴[] No.45901536{3}[source]
I became interested in the matter reading this thread and vaguely remember reading a couple of the articles. Saved them all in NotebookLM to get an audio overview and to read later. Thanks!
38. meekaaku ◴[] No.45902188[source]
I am of the opinion that splitting AT&T, and hence Bell Labs, was a net negative for America and the rest of the world.

We have yet to create a lab as foundational as Bell Labs.

39. red2awn ◴[] No.45902731[source]
I'd argue SSI and Thinking Machines Lab seem to be the environment you are thinking about: industry labs that focus on research without immediate product requirements.
40. williamDafoe ◴[] No.45904500{3}[source]
Google X is a complete failure. Maybe they had Fei-Fei on staff for a short while, but most of her work was done elsewhere.
replies(1): >>45905604 #
41. stocksinsmocks ◴[] No.45905111[source]
I thought that was Google. Regulators pretend not to notice their monopoly, they probably get large government contracts for social engineering and surveillance laundered through advertising, and the "don't be evil" part is that they make some open source contributions.
42. aatd86 ◴[] No.45905604{4}[source]
Didn't the current LLMs stem from this...? Or it might be Google Brain instead. For Google X, there is Waymo? I know a lot of stuff didn't pan out. This is expected. These were 'moonshots'.

But the principle is there. I think that when a company sits on a load of cash, that's what it should do. Either that, or become a kind of alternative investments allocator. These are risky bets, but companies should be incentivized to take those risks, from a fiscal policy standpoint for instance. It probably is the case already via lower taxation of capital gains and so on, but there should probably exist a more streamlined framework to make sure incentives are aligned.

And/or assigned government projects? Besides implementing their Cloud infrastructure that is...

43. CamperBob2 ◴[] No.45905921{6}[source]
Whatever you're doing for money that you wouldn't do if you didn't need money.