
385 points | vessenes | 2 comments

So, LeCun has been quite public in saying that he believes LLMs will never fix hallucinations because, essentially, choosing one token at a time leads to runaway errors -- and these can't be damped mathematically.

Instead, he proposes an 'energy minimization' architecture; as I understand it, this would assign an 'energy' to an entire response, and training would try to minimize that energy.
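To make the contrast concrete (this is my own toy illustration, not LeCun's actual architecture -- the `energy` function here is a stand-in for a learned network):

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(prompt_vec, response_vec):
    # Toy energy: low when the whole response "fits" the prompt
    # (negative cosine similarity). A real EBM would learn this function.
    return -float(prompt_vec @ response_vec) / (
        np.linalg.norm(prompt_vec) * np.linalg.norm(response_vec)
    )

prompt = rng.normal(size=8)
# Candidate *complete* responses; one is deliberately close to the prompt.
candidates = [rng.normal(size=8) for _ in range(5)]
candidates.append(prompt + 0.1 * rng.normal(size=8))

# Inference = search for the whole response with minimum energy,
# rather than committing to one token at a time.
best = min(candidates, key=lambda r: energy(prompt, r))
```

The point of the sketch: the score is assigned to the entire response at once, so per-step sampling errors have no chance to compound.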

Which is to say, I don't fully understand this. That said, I'm curious to hear what ML researchers think of LeCun's take, and whether there's any engineering being done around it. I can't find much after the release of I-JEPA from his group.

1. itkovian_ No.43368609
The fundamental distinction is usually drawn against contrastive approaches (i.e., make the correct answer more likely and make everything we just compared against it less likely). EBMs are "only what is correct is likely; the default for everything else is unlikely."

This is obviously an extremely high level simplification, but that's the core of it.
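A minimal numerical sketch of that distinction (my own example; the margin loss is one common EBM training objective among several, chosen here for simplicity):

```python
import numpy as np

def contrastive_loss(scores, correct_idx):
    # Softmax cross-entropy: because of the normalizing sum, raising the
    # correct score necessarily pushes down the probability of every
    # competitor we compared against.
    shifted = scores - scores.max()
    log_probs = shifted - np.log(np.exp(shifted).sum())
    return -log_probs[correct_idx]

def ebm_margin_loss(energies, correct_idx, margin=1.0):
    # Hinge-style EBM objective: push the correct energy down and only
    # the single most-offending competitor up. No global normalization.
    wrong = np.delete(energies, correct_idx)
    return max(0.0, energies[correct_idx] - wrong.min() + margin)

scores = np.array([2.0, 0.5, -1.0])
print(contrastive_loss(scores, 0))   # small: the correct item already dominates
print(ebm_margin_loss(-scores, 0))   # zero once the margin is satisfied
```

The contrastive loss is never exactly zero (the softmax always spreads some probability elsewhere), while the margin loss stops pushing once the correct energy is low enough -- "the default for everything else is unlikely" without having to enumerate it.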

replies(1): >>43368630 #
2. itkovian_ No.43368630
And in this categorization, autoregressive LLMs are contrastive, due to the cross-entropy loss.
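A quick way to see why (this is just the standard gradient of softmax cross-entropy, not specific to any one model): the gradient with respect to the logits is `softmax(logits) - one_hot(target)`, which is positive at every wrong token. Each training step therefore actively pushes all competitors down, which is exactly the contrastive behavior described above.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

logits = np.array([1.0, 2.0, 0.5])
target = 1

# Gradient of cross-entropy w.r.t. the logits: softmax(logits) - one_hot(target).
grad = softmax(logits)
grad[target] -= 1.0
print(grad)  # negative at the target index, positive everywhere else
```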