
385 points | vessenes | 1 comment

So, LeCun has been quite public in saying that he believes LLMs will never stop hallucinating because, essentially, the per-token sampling at each step compounds errors -- and these can't be damped mathematically.

Instead, he proposes an 'energy minimization' architecture; as I understand it, this would assign an 'energy' score to an entire response, and training would try to minimize that.
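To make the contrast concrete, here's a toy sketch (not LeCun's actual method -- the `energy` function below is a made-up placeholder) showing the difference between committing to tokens one at a time and scoring complete candidate responses with an energy function, keeping whichever whole response scores lowest:

```python
def energy(response):
    """Hypothetical energy: lower is better. Here we just penalize a
    placeholder 'bad' token; a real model would learn this score over
    whole responses rather than individual tokens."""
    return sum(1 for token in response if token == "bad")

def energy_minimizing_decode(candidates):
    """Judge each complete candidate at once and keep the lowest-energy
    one -- unlike per-token decoding, an early mistake in a candidate
    drags down the score of the whole sequence, so it gets rejected."""
    return min(candidates, key=energy)

candidates = [
    ["good", "bad", "good"],
    ["good", "good", "good"],
    ["bad", "bad", "good"],
]
best = energy_minimizing_decode(candidates)
print(best)          # ['good', 'good', 'good']
print(energy(best))  # 0
```

The point of the toy: per-token decoding locks in each choice immediately, while an energy view evaluates the full response, so errors can't silently accumulate without raising the score.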

Which is to say, I don't fully understand this. That said, I'm curious what ML researchers think of LeCun's take, and whether any engineering has been done around it. I can't find much since his group released I-JEPA.

1. akomtu | No.43367765
The next-gen LLMs are going to use something like mipmaps in graphics: a stack of progressively smaller versions of the image, with a 1x1 image at the top. The same concept applies to text. When you're writing something, you have a high-level idea in mind that serves as a guide. That idea is such a mipmap. Perhaps the next-gen LLMs will generate a few parallel sequences: the top level will be a slow-paced anchor, and the bottom level will be the actual text, each level conditioned on the slower levels above it.
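A minimal sketch of the "text mipmap" structure being described, assuming the simplest possible merge rule (joining neighbors with `+`; a real system would summarize semantically). It builds the stack of progressively coarser levels, down to a single-element level analogous to the 1x1 mip:

```python
def build_mipmap(tokens, merge=lambda a, b: a + "+" + b):
    """Build a pyramid of progressively coarser levels over a token list.
    levels[0] is the full text; levels[-1] is the single-element 'anchor'
    (the 1x1 mip). Each level halves the one below by merging neighbors."""
    levels = [list(tokens)]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        nxt = [merge(prev[i], prev[i + 1]) if i + 1 < len(prev) else prev[i]
               for i in range(0, len(prev), 2)]
        levels.append(nxt)
    return levels

pyramid = build_mipmap(["a", "b", "c", "d"])
print(pyramid)
# [['a', 'b', 'c', 'd'], ['a+b', 'c+d'], ['a+b+c+d']]
```

Generation would then run the other way: the top-level anchor changes slowly and conditions the levels below, with the bottom level emitting the actual text.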