
385 points by vessenes | 4 comments

So, LeCun has been quite public in saying that he believes LLMs will never fix hallucinations because, essentially, picking one token at a time leads to runaway errors that can't be damped mathematically.
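
If I've got his argument right, the intuition is that per-token errors compound: if each sampled token has some small independent chance e of going off the rails, the chance that an n-token answer stays fully on track is roughly (1 - e)^n, which decays exponentially with length. A toy illustration (the numbers and the independence assumption are mine, not his):

    # Toy illustration of the compounding-error intuition (assumes an independent
    # per-token error probability e, which is a big simplification):
    for e in (0.001, 0.01, 0.05):
        for n in (100, 1000):
            print(f"e={e}, n={n}: P(fully on track) ~ {(1 - e) ** n:.3f}")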

Instead, he proposes an 'energy minimization' architecture; as I understand it, the model would assign an 'energy' to an entire response, and training would try to minimize that energy.

Which is to say, I don't fully understand this. That said, I'm curious to hear what ML researchers think about LeCun's take, and whether there's any engineering being done around it. I can't find much since his group released I-JEPA.
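
For what it's worth, here's my rough mental model of scoring a whole response with an energy function, as opposed to committing to one token at a time. Everything below is a placeholder sketch (energy_of is a dummy stand-in for a learned model, and this is not a description of JEPA):

    # Hypothetical sketch: evaluate whole candidate responses with an energy
    # function and keep the lowest-energy one, instead of greedily committing
    # to tokens one at a time. energy_of() is a dummy stand-in, not a real model.
    def energy_of(prompt: str, response: str) -> float:
        # A real energy-based model would return a learned scalar measuring how
        # incompatible (prompt, response) are; lower means more compatible.
        return float(len(response))  # placeholder so the sketch runs

    def pick_response(prompt: str, candidates: list[str]) -> str:
        return min(candidates, key=lambda r: energy_of(prompt, r))

Training, as I understand the energy-based framing, would push the energy of good (prompt, response) pairs down and bad pairs up, rather than maximizing next-token likelihood; but again, that's my reading, not a description of LeCun's actual architecture.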

1. ALittleLight No.43365365
I've never understood this critique. Models have the capability to say: "oh, I made a mistake here, let me change this" and that solves the issue, right?

With a little engineering and fine-tuning, you could imagine a model producing a sequence of statements, reflecting on that sequence, and emitting updates like "statement 7, modify: xzy to xyz".
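
Something like the loop below, where generate() is just a placeholder for whatever model call you're using and the edit format is invented:

    # Hypothetical draft-and-revise loop. generate() is a stand-in for a real
    # model call, and the "statement N, modify: old -> new" edit format is made up.
    def generate(prompt: str) -> str:
        raise NotImplementedError("stand-in for an actual model call")

    def draft_and_revise(question: str, rounds: int = 2) -> str:
        answer = generate(f"Answer step by step, numbering each statement:\n{question}")
        for _ in range(rounds):
            critique = generate(
                "Review the numbered statements and list edits as "
                "'statement N, modify: old -> new', or reply 'ok'.\n" + answer
            )
            if critique.strip().lower() == "ok":
                return answer
            answer = generate(
                f"Apply these edits and return the corrected statements.\n"
                f"Edits:\n{critique}\nStatements:\n{answer}"
            )
        return answer

The point is just that nothing in the autoregressive setup seems to forbid this kind of revision pass.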

replies(3): >>43365570, >>43365674, >>43365688
2. fhd2 No.43365570
I get "oh, I made a mistake" quite frequently. Often enough, it's just another hallucination, just because I contested the result, or even just prompted "double check this". Statistically speaking, when someone in a conversation says this, the other party is likely to change their position, so that's what an LLM does, too, replicating a statistically plausible conversation. That often goes in circles, not getting anywhere near a better answer.

Not an ML researcher, so I can't explain it. But I get a pretty clear sense that it's an inherent problem and don't see how it could be trained away.

3. rscho No.43365674
"Oh, I emptied your bank account here, let me change this."

For AI to really replace most workers, as some people would like to see, there are plenty of situations where hallucinations are a complete no-go and need fixing.

4. croes No.43365688
Isn’t that the answer if you tell them they are wrong?