
385 points vessenes | 2 comments

So, LeCun has been quite public in saying that he believes LLMs will never fix hallucinations because, essentially, the per-token sampling at each step leads to runaway errors -- errors that can't be damped mathematically.
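
A toy way to see the argument (my own illustration, assuming an independent per-token error rate, which is a big simplification and not LeCun's exact math): if each token is wrong with probability eps, the chance that a whole generation stays error-free decays exponentially with its length.

```python
# Illustrative only: a toy model of the compounding-error intuition.
# Assume each generated token is independently "wrong" with probability eps;
# then the chance an n-token answer contains no error shrinks exponentially.

def p_no_error(eps: float, n_tokens: int) -> float:
    """Probability that every one of n_tokens tokens is correct."""
    return (1.0 - eps) ** n_tokens

for n in (10, 100, 1000):
    print(f"eps=0.01, n={n:4d}: P(all correct) = {p_no_error(0.01, n):.3f}")
# eps=0.01, n=  10: P(all correct) = 0.904
# eps=0.01, n= 100: P(all correct) = 0.366
# eps=0.01, n=1000: P(all correct) = 0.000
```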

In its place, he proposes an 'energy minimization' architecture; as I understand it, this would assign an 'energy' to an entire response, and training would try to minimize that energy.
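
To make that concrete, here is a minimal sketch of the "score the whole response" idea. The energy function is a hypothetical placeholder and candidate generation is left out entirely, so this is closer to reranking than to anything JEPA-like, but it shows the shift from per-token choices to whole-response scoring.

```python
# Sketch only: `energy` is a hypothetical learned compatibility score
# (lower = better match between prompt and full response); the candidates
# are assumed to come from somewhere else.
from typing import Callable, Sequence

def pick_lowest_energy(
    prompt: str,
    candidates: Sequence[str],
    energy: Callable[[str, str], float],
) -> str:
    """Return the candidate whose (prompt, response) pair has the lowest energy."""
    return min(candidates, key=lambda response: energy(prompt, response))

# Dummy energy for demonstration only: pretend shorter answers are "better".
best = pick_lowest_energy("2+2=", ["4", "22", "four hundred"],
                          energy=lambda p, r: len(r))
print(best)  # -> 4
```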

Which is to say, I don't fully understand this. That said, I'm curious to hear what ML researchers think of LeCun's take, and whether any engineering has been done around it. I can't find much from his group after the release of I-JEPA.

giantg2 No.43367543
I feel like some hallucinations aren't bad. Isn't that basically what a new idea is - a hallucination of what could be? The ability to come up with new things, even if they're sometimes wrong, can be useful, and it happens all the time with humans.
replies(2): >>43367585 #>>43372183 #
1. hn_user82179 No.43367585
That’s a really interesting thought. I think the key part (as a consumer of AI tools) would be identifying which parts of the output are guesses vs. deductions vs. completely accurate based on the training data. I would happily look up or think through the possibly hallucinated parts myself, but we don't currently get that kind of feedback. A human, by contrast, can list the things they know and then highlight the things they're making educated guesses about, which makes it easier to build upon their answer.
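
Something like the sketch below is the kind of feedback I have in mind, assuming the model exposes per-token probabilities (many APIs do, via logprobs); flagging low-probability tokens is a crude heuristic, not a real hallucination detector.

```python
# Rough sketch: mark tokens the model was relatively unsure about as possible
# guesses. Assumes we can get (token, probability) pairs back from the model.
from typing import List, Tuple

def flag_uncertain_tokens(
    tokens_with_probs: List[Tuple[str, float]],
    threshold: float = 0.3,
) -> List[Tuple[str, bool]]:
    """Mark each token True if the model assigned it a relatively low probability."""
    return [(tok, prob < threshold) for tok, prob in tokens_with_probs]

# Made-up numbers for illustration.
output = [("The", 0.95), ("capital", 0.90), ("is", 0.85), ("Quito", 0.20)]
for tok, uncertain in flag_uncertain_tokens(output):
    print(f"{tok}{' (?)' if uncertain else ''}")
```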
replies(1): >>43368851 #
2. giantg2 No.43368851
To be fair, most people don't give you that level of detail either. But I agree.