
760 points by MindBreaker2605 | 1 comment
sebmellen No.45897467
Making LeCun report to Wang was the most boneheaded move imaginable. But… I suppose Zuckerberg knows what he wants, which is AI slopware and not truly groundbreaking foundation models.
gnaman No.45897498
He is also not very interested in LLMs, and that seems to be Zuck's top priority.
tinco No.45897523
Yeah, I think LeCun is underestimating the impact that LLMs and diffusion models are going to have, even considering the huge impact they're already having. That's no problem, since I'm sure whatever LeCun is working on will be amazing as well, but an enterprise like Facebook can't have its top researcher working on risky bets when there are still surefire paths to success available.
raverbashing No.45897552
Yeah, honestly I'm with the LLM people here.

If you think LLMs are not the future, then you need to come up with something better.

If you have a theoretical idea, that's great, but take it to at least GPT-2 level before writing off LLMs.

Theoretical people love coming up with "better ideas" that fall flat or have hidden gotchas when they reach practical implementation.

As Linus says, "talk is cheap, show me the code".

whizzter No.45897786
LLMs are probably always going to be the fundamental interface; the problem they solved was handling the flexibility of human language, allowing us to build decent mimicries.

And while we've been able to approximate the world behind the words, the output is full of hallucinations, because the AIs lack axiomatic systems beyond mostly hand-built machinery.

You can probably expand the capabilities by attaching things at the front-end, but I suspect Yann sees the limits of this and wants to go back and build up from the back-end of world reasoning, and then, _among other things_, attach LLMs at the front-end (but perhaps on equal terms with vision models, allowing seamless integration of LLM interfacing _combined_ with vision for proper autonomous systems).

rob_c No.45898465
> because the AI's lack axiomatic systems beyond much manually constructed machinery.

Oh god, that is massively under-selling their learning ability. These models can extract and explain why jokes are funny without ever being given basic vocabulary, yet there are hand-coded models out there with linguistic rules baked in from day one that still struggle with basic grammar.

The _point_ of LLMs, arguably, is their ability to learn any pattern thrown at them, given enough compute. The exception is learning how logical processes work, and pure LLMs only see "time" in the sense that a paragraph begins and ends.

At the least they have taught computers "how to language", which, in regards to how we interact with machines, is a _huge_ step forward.

Unfortunately, the financial incentives are split between agentic model usage (taking the idea of a computerised butler further), maximising model memory and raw learning capacity (answering all problems at any time), and long-range consistency (longer ranges give more stable results for a few reasons, but we're some way from seeing an LLM with 128k experts and 10e18 active tokens).
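For readers unfamiliar with the "experts" jargon: it refers to mixture-of-experts routing, where a gate picks a few expert sub-networks per input so most parameters stay idle. A toy NumPy sketch (hypothetical shapes and weights, not any real model's code):

```python
import numpy as np

def moe_forward(x, gate_w, expert_ws, top_k=2):
    """Minimal mixture-of-experts layer: route the input to its
    top-k experts by gate score and mix their outputs."""
    scores = x @ gate_w                    # gate logits, one per expert
    top = np.argsort(scores)[-top_k:]      # indices of the top-k experts
    weights = np.exp(scores[top])
    weights /= weights.sum()               # softmax over the selected experts
    # Only the selected experts run; the rest stay idle (sparse activation),
    # which is how models scale expert count without scaling compute per token.
    return sum(w * (x @ expert_ws[i]) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
d, num_experts = 8, 4                      # illustrative sizes
x = rng.normal(size=d)
gate_w = rng.normal(size=(d, num_experts))
expert_ws = [rng.normal(size=(d, d)) for _ in range(num_experts)]
y = moe_forward(x, gate_w, expert_ws)
print(y.shape)                             # same dimensionality as the input
```

Real systems add load-balancing losses and run this per token inside a transformer block, but the routing idea is the same.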

I think in terms of building the perfect monkey butler we already have most or all of the parts. With regard to a model that can dynamically learn on the fly... LLMs are not the end of the story, and we need something that lets the models tie their LS more closely to the context. Frankly, the fact that DeepSeek gave us an LLM with LS was a huge leap, since previous attempts had been overly complex and had failed in training.