385 points | vessenes | 4 comments

So, LeCun has been quite public in saying that he believes LLMs will never fix hallucinations because, essentially, the token-choice method at each step leads to runaway errors -- these can't be damped mathematically.
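
A rough sketch of that compounding-error argument as I understand it, assuming a fixed, independent per-token error rate (a big simplification on my part):

    # If each generated token independently goes wrong with probability e,
    # the chance a whole n-token answer stays error-free is (1 - e)^n,
    # which decays exponentially -- nothing in the decoding loop damps it.
    def p_error_free(e: float, n: int) -> float:
        return (1.0 - e) ** n

    for n in (10, 100, 1000):
        print(n, p_error_free(0.01, n))
    # -> roughly 0.90, 0.37, 0.00004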

In its place, he offers the idea that we should have something like an 'energy minimization' architecture; as I understand it, this would have a concept of the 'energy' of an entire response, and training would try to minimize that.
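
Very loosely, I picture it as scoring whole candidate responses and keeping the lowest-energy one, instead of committing token by token. The energy function below is a made-up stand-in; in a real EBM/JEPA-style setup it would be a learned compatibility score between prompt and response, trained to be low for good pairs and high for bad ones:

    def energy_of(prompt: str, response: str) -> float:
        # Hypothetical learned energy: low = compatible/coherent, high = not.
        # Dummy placeholder so the sketch runs; a trained model goes here.
        return float(len(response))

    def answer(prompt: str, candidates: list[str]) -> str:
        # Select over entire responses rather than sampling one token at a time.
        return min(candidates, key=lambda r: energy_of(prompt, r))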

Which is to say, I don't fully understand this. That said, I'm curious to hear what ML researchers think about LeCun's take, and whether there's any engineering being done around it. I can't find much after the release of I-JEPA from his group.

1. giantg2 No.43367543
I feel like some hallucinations aren't bad. Isn't that basically what a new idea is - a hallucination of what could be? The ability to come up with new things, even if they're sometimes wrong, can be useful, and it happens all the time with humans.
replies(2): >>43367585, >>43372183
2. hn_user82179 No.43367585
That’s a really interesting thought. I think the key part (as a consumer of AI tools) would be identifying the things that are guesses vs deductions vs completely accurate based on the training data. I would happily look up or think about the output parts that are possibly hallucinated myself, but we don’t currently get that kind of feedback. Whereas a human could list out the things that they know, and then highlight the things they’re making educated guesses about, which makes it easier to build upon.
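
Even something as crude as surfacing per-token probabilities and flagging the low-confidence stretches would help, though low probability isn’t the same thing as hallucinated. A toy sketch, where the (token, logprob) input is hypothetical and stands in for whatever an API might expose:

    import math

    def flag_uncertain(token_logprobs, threshold=0.5):
        # token_logprobs: list of (token, logprob) pairs.
        # Mark tokens whose probability falls below the threshold so a human
        # knows which parts of the output to double-check.
        return [(tok, math.exp(lp) < threshold) for tok, lp in token_logprobs]
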
replies(1): >>43368851
3. giantg2 No.43368851
To be fair, most people don't give you that level of detail. But I agree.
4. chriskanan No.43372183
Hallucination is a terrible term for this. We want models that can create new ideas and make up stories. The problem is that they will give false answers to questions that have factual answers, and that they don't realize this.

In humans, this is known as confabulation, and it happens due to various forms of brain damage, especially damage to the orbitofrontal cortex (part of the prefrontal cortex). David Rumelhart, who was the main person behind backpropagation in the paper co-authored with Geoff Hinton, actually developed Pick's disease, which specifically damages the prefrontal cortex, and people with that disease exhibit a lot of the same problems we see with today's LLMs: