385 points | vessenes | 2 comments

So, LeCun has been quite public in saying that he believes LLMs will never fix hallucinations because, essentially, the token-by-token choice at each step leads to runaway errors, and these can't be damped mathematically.
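
To make the argument concrete (with numbers I'm making up purely for illustration): if each generated token independently has some small chance of being wrong, the probability that an n-token response contains no error shrinks exponentially with n, so long answers almost inevitably drift.

    # Toy illustration of the compounding-error argument (hypothetical
    # per-token error rate, not a measured figure): if each token is wrong
    # with probability e, an n-token response is error-free with
    # probability (1 - e) ** n.
    per_token_error = 0.01
    for n in (10, 100, 1000, 4000):
        p_clean = (1 - per_token_error) ** n
        print(f"{n:5d} tokens -> P(no error) = {p_clean:.2e}")
    # ~0.90 at 10 tokens, ~0.37 at 100, ~4e-05 at 1000: errors run away
    # with length unless something corrects them along the way.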

Instead, he offers the idea that we should have something like an 'energy minimization' architecture; as I understand it, this would have a concept of the 'energy' of an entire response, and training would try to minimize that.
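
For what it's worth, here is my own toy sketch of what "score a whole response with one energy value" could look like (this is just the generic energy-based-model idea with made-up shapes, not LeCun's actual JEPA design): a network takes embeddings of a context and a candidate response and outputs a single scalar, and inference picks the candidate with the lowest energy instead of sampling token by token.

    import torch
    import torch.nn as nn

    # Minimal sketch of an energy-based scorer (my own toy construction,
    # not LeCun's architecture): map (context, response) embeddings to a
    # single scalar "energy"; lower energy = better response.
    class ToyEnergyModel(nn.Module):
        def __init__(self, dim=64):
            super().__init__()
            self.scorer = nn.Sequential(
                nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1)
            )

        def forward(self, context_emb, response_emb):
            # One energy for the entire response, not a per-token probability.
            return self.scorer(torch.cat([context_emb, response_emb], dim=-1))

    model = ToyEnergyModel()
    context = torch.randn(1, 64)          # stand-in context embedding
    candidates = torch.randn(5, 64)       # stand-in embeddings of 5 candidate responses
    energies = model(context.expand(5, -1), candidates).squeeze(-1)
    best = candidates[energies.argmin()]  # choose the lowest-energy candidate
    # How you train the energy function (e.g. contrastively, pushing energy
    # down on good pairs and up on bad ones) is exactly the part I don't
    # fully understand, which is what I'm asking about.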

Which is to say, I don't fully understand this. That said, I'm curious to hear what ML researchers think about LeCun's take, and whether there's any engineering being done around it. I can't find much after the release of I-JEPA from his group.

tyronehed | No.43365788
Any transformer-based LLM will never achieve AGI, because all it is doing is picking the next word. You need a much greater degree of planning to achieve AGI. Also, the characteristics of LLMs do not resemble any existing intelligence that we know of. Does a baby require 2 years of statistical analysis to become useful? No. Transformer architectures are parlor tricks: a glorified Google, not reasoning or planning. If you want that, then you have to base your architecture on the known examples of intelligence we are aware of in the universe, and that is not a transformer. In fact, whatever AGI emerges will absolutely not contain a transformer.
replies(3): >>43366660 #>>43366893 #>>43366959 #
1. unsupp0rted | No.43366959
> Does a baby require 2 years of statistical analysis to become useful?

Well yes, actually.

replies(1): >>43369075 #
2. nsonha | No.43369075
of the entire human race's knowledge, and drawn from all of written history, not just the last 2 years.