sebmellen (No.45897467)
Making LeCun report to Wang was the most boneheaded move imaginable. But… I suppose Zuckerberg knows what he wants, which is AI slopware and not truly groundbreaking foundation models.
gnaman (No.45897498)
He is also not very interested in LLMs, and that seems to be Zuck's top priority.
tinco (No.45897523)
Yeah, I think LeCun is underestimating the impact that LLMs and diffusion models are going to have, even considering the huge impact they're already having. That's no problem, as I'm sure whatever LeCun is working on will be amazing as well, but an enterprise like Facebook can't have its top researcher working on risky things when there are surefire paths to success still available.
jll29 (No.45897673)
I politely disagree - it is exactly an industry researcher's purpose to do the risky things that may not work, simply because the rest of the corporation cannot take such risks but must walk on more well-trodden paths.

Corporate R&D teams are there to absorb risk, innovate, disrupt, and create new fields, not to make small incremental improvements. "If we know it works, it's not research." (Albert Einstein)

I also agree with LeCun that LLMs in their current form are a dead end. Note that this does not mean I think we have already exploited LLMs to the limit; we are still at the beginning. We also need to create an ecosystem in which they can operate well: for instance, to combine LLMs with Web agents better we need a scalable "C2B2C" (customer-delegated-to-business-to-business) micropayment infrastructure, because these systems have already begun talking to each other, and in the longer run nobody will offer their APIs for free.
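
A minimal sketch of what one leg of that could look like, assuming a hypothetical paid endpoint that answers HTTP 402 ("Payment Required") with a price quote; the URL, the header name, and the pay() stub are all invented for illustration:

    import requests

    API_URL = "https://api.example.com/v1/lookup"  # hypothetical paid endpoint

    def pay(pay_to: str, price_cents: int) -> str:
        """Stub for the settlement rail; would return a proof-of-payment token."""
        raise NotImplementedError("micropayment rail goes here")

    def call_paid_api(params: dict, budget_cents: int) -> dict:
        """Call the API; if it demands payment, settle within budget and retry."""
        resp = requests.get(API_URL, params=params, timeout=10)
        if resp.status_code == 402:  # server quotes a price instead of answering
            quote = resp.json()      # e.g. {"price_cents": 3, "pay_to": "acct:example"}
            if quote["price_cents"] > budget_cents:
                raise RuntimeError("quoted price exceeds the delegated budget")
            token = pay(quote["pay_to"], quote["price_cents"])
            resp = requests.get(API_URL, params=params, timeout=10,
                                headers={"X-Payment-Token": token})
        resp.raise_for_status()
        return resp.json()

The delegation chain is the point: the customer grants the agent a per-task budget, and the agent settles per-call charges machine-to-machine without a human in the loop.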

I work on spatial/geographic models, inter alia, which by coincidence is one of the directions mentioned in the LeCun article. I do not know what his reasoning is, but mine was/is: LMs are language models, and should (only) be used as such. We need other models as well, in particular a knowledge model (KM/KB) that cleanly separates knowledge from text generation; right now it looks to me as if only that will solve hallucination.
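
As a toy illustration of that separation (the triple store and the paraphrase hook are invented stand-ins, not a real system): the generator is only allowed to verbalize facts retrieved from the knowledge model, and stays silent otherwise.

    # Invented stand-in for a KB: the facts live here, not in the LM's weights.
    KB = {
        ("Paris", "capital_of"): "France",
        ("Rhine", "flows_through"): "Basel",
    }

    def answer(entity: str, relation: str, paraphrase=lambda s: s) -> str:
        """Verbalize a retrieved fact; never generate one the KB does not hold."""
        fact = KB.get((entity, relation))
        if fact is None:
            return "I don't know."  # no retrieved fact -> no text generation
        return paraphrase(f"{entity} {relation.replace('_', ' ')} {fact}")

    print(answer("Paris", "capital_of"))     # -> Paris capital of France
    print(answer("Paris", "population_of"))  # -> I don't know.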

barrkel (No.45897798)
Knowledge models, like ontologies, always seem suspect to me: they promise a schema for crisp binary facts, when the world is full of probabilistic and fuzzy information, loosely categorized by fallible humans on the basis of a slowly but perpetually shifting social consensus.

Everything from the sorites paradox to leaky abstractions points the same way: everything real defies precise definition when you look closely at it, and when you try to abstract over it, to chunk up, the details have an annoying way of making themselves visible again.
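
To make the sorites point concrete (the cutoff and the ramp below are arbitrary, chosen purely to illustrate): a crisp schema has to commit to a threshold somewhere, while a graded model at least admits the fuzziness.

    import math

    def is_heap_crisp(grains: int) -> bool:
        """What a crisp ontology forces on you: an arbitrary cutoff."""
        return grains >= 1000

    def heap_degree(grains: int) -> float:
        """A graded alternative: degree of membership in [0, 1] (logistic ramp)."""
        return 1 / (1 + math.exp(-(grains - 500) / 100))

    print(is_heap_crisp(999), is_heap_crisp(1000))  # False True: one grain flips the "fact"
    print(round(heap_degree(999), 2))               # 0.99: the graded view barely moves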

You can get purity in mathematical models, and in information systems, but those imperfectly model the world and continually need to be updated, refactored, and rewritten as they decay and diverge from reality.

These things are best used as tools by something similar to LLMs: models to be built, used, and discarded as needed, but never treated as a ground source of truth.

rob_c (No.45898380)
You're basically describing the knowledge problem versus model structure: how to even begin to design a system that self-updates/dynamically learns, rather than one that is trained once and deployed.

Cracking that would be a huge step. Pure multi-modal trained models will probably give us a hint, but I think we're some way from seeing a pure multi-modal open model that can be pulled apart and modified; even then, they're still train-and-deploy, not dynamically learning. I worry we're just going to see LSTM-style designs bolted onto deep LLMs because we don't know where else to go, and the result will be fragile and take eons to train.
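
A toy way to see the train-and-deploy versus dynamically-learning gap, using plain SGD on a linear model (purely illustrative; nothing here resembles how a real LLM would be updated):

    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.normal(size=3)  # the "deployed" toy model

    def predict(x: np.ndarray) -> float:
        return float(w @ x)

    # Train-and-deploy: stop here, and w is frozen; new data changes nothing.

    def observe(x: np.ndarray, y: float, lr: float = 0.01) -> None:
        """Dynamic learning: every live observation nudges the weights."""
        global w
        w = w + lr * (y - predict(x)) * x  # one SGD step per interaction

    x, y = np.ones(3), 2.0
    before = predict(x)
    observe(x, y)
    print(before, predict(x))  # the second prediction has moved toward y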

And the less said about the "but inference is doing some kind of minimization within the context window" crap, the better: it's vacuous, and not where great minds should be looking for a step forward.