
385 points vessenes | 6 comments

So, LeCun has been quite public in saying that he believes LLMs will never fix hallucinations because, essentially, the per-token sampling at each step compounds errors -- and these can't be damped mathematically.
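(The back-of-the-envelope version of that argument, as I understand it: if each generated token independently goes wrong with probability e, the chance a length-n response stays on track decays exponentially. A toy sketch, with e and n as illustrative numbers only:)

```python
def p_on_track(e: float, n: int) -> float:
    """Probability a length-n autoregressive generation never derails,
    assuming each token independently errs with probability e
    (the simplified model behind the compounding-error argument)."""
    return (1 - e) ** n

# Even a 1% per-token error rate over a 500-token answer:
print(round(p_on_track(0.01, 500), 4))  # ~0.0066
```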

Instead, he proposes an 'energy minimization' architecture; as I understand it, this would assign an 'energy' to an entire response, and training would try to minimize that.

Which is to say, I don't fully understand this. That said, I'm curious to hear what ML researchers think about LeCun's take, and whether there's any engineering done around it. I can't find much after the release of I-JEPA from his group.

1. bobosha ◴[] No.43367047[source]
I argue that JEPA and its Energy-Based Model (EBM) framework fail to capture the deeply intertwined nature of learning and prediction in the human brain—the “yin and yang” of intelligence. Contemporary machine learning approaches remain heavily reliant on resource-intensive, front-loaded training phases. I advocate for a paradigm shift toward seamlessly integrating training and prediction, aligning with the principles of online learning.

Disclosure: I am the author of this paper.

Reference: (PDF) Hydra: Enhancing Machine Learning with a Multi-head Predictions Architecture. Available from: https://www.researchgate.net/publication/381009719_Hydra_Enh... [accessed Mar 14, 2025].

replies(3): >>43367244 #>>43367312 #>>43367329 #
2. vessenes ◴[] No.43367244[source]
Thank you. So, quick q - it would make sense to me that JEPA is an outcome of the YLC work; would you say that’s the case?
3. esafak ◴[] No.43367312[source]
So you believe humans spend more energy on prediction, relative to computers? Isn't that because personal computers are not powerful enough to train big models, and most people have no desire to? It is more economically efficient to socialize the cost of training, as is done. Are you thinking of a distributed training, where we split the work and cost? That could happen when robots become more widespread.
replies(1): >>43371856 #
4. vessenes ◴[] No.43367329[source]
Update: Interesting paper, thanks. Comment on selection for Hydra — you mention v1 uses an arithmetic mean across timescales for prediction. Taking this analogy of the longer windows encapsulating different timescales, I’d propose it would be interesting to train a layer to predict weighting of the timescale predictions. Essentially — is this a moment where I need to focus on what just happened, or is this a moment in which my long range predictions are more important?
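(Concretely, I'm imagining something like a softmax gate over the per-timescale predictions, conditioned on recent context -- names and shapes below are mine, not from the paper:)

```python
import numpy as np

def gated_prediction(preds, context, W_gate, b_gate):
    """Replace the arithmetic mean over timescales with learned weights.
    preds:   (T, d) array, one prediction per timescale window
    context: (c,) features summarizing the current moment
    W_gate:  (T, c), b_gate: (T,) -- the gating layer's parameters
    """
    logits = W_gate @ context + b_gate
    w = np.exp(logits - logits.max())
    w /= w.sum()                  # softmax: how much does each window matter now?
    return w @ preds              # (d,) convex combination of the predictions
```

With zero gate parameters the softmax is uniform and this reduces exactly to the arithmetic mean, which would make it easy to ablate against v1.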
replies(1): >>43371833 #
5. bobosha ◴[] No.43371833[source]
Ty for reading the paper! I completely agree! Assigning soft weights to the window based on context is a fascinating research area. This concept is similar to Ebbinghaus' forgetting curve, which emphasizes recency bias while requiring repeated exposure for long-term retention.
6. bobosha ◴[] No.43371856[source]
The human brain operates at just 25W of power—less than the monitor you're likely using right now—whereas AI models like ChatGPT consume nearly 1GWh every 24 hours!
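(Taking both figures at face value -- the 1 GWh/day number is my estimate -- the gap works out to roughly six orders of magnitude:)

```python
brain_w = 25                 # the brain's power draw cited above, in watts
chatgpt_avg_w = 1e9 / 24     # 1 GWh per 24 h expressed as average watts: ~41.7 MW
ratio = chatgpt_avg_w / brain_w
print(f"{chatgpt_avg_w / 1e6:.1f} MW vs {brain_w} W -> ~{ratio:,.0f}x")
```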

As I discuss in the paper, predictive coding suggests that the brain actively generates predictions and compares them to incoming sensory data (vision, hearing, etc.), prioritizing anomalies. Its efficiency stems from a hierarchical memory system that continuously updates only the "deltas"—the differences that matter. Embracing this approach could lead to a paradigm shift, enabling the development of significantly more energy-efficient AI in the future.
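(A bare-bones sketch of that predict-compare-update loop -- generic single-layer predictive coding, not the paper's Hydra architecture; all names here are mine:)

```python
import numpy as np

def predictive_coding_step(W, r, x, lr=0.05):
    """One update of a single-layer predictive-coding model.
    W: (d, k) generative weights, r: (k,) latent state, x: (d,) input.
    Only the prediction error (the 'delta') drives both updates."""
    pred = W @ r                        # top-down prediction of the input
    err = x - pred                      # anomaly: what the prediction missed
    r = r + lr * (W.T @ err)            # fast: revise the latent explanation
    W = W + lr * np.outer(err, r)       # slow: learn from the same delta
    return W, r, err
```

Iterating this drives the error norm down, i.e. the layer stops forwarding whatever it can already predict and passes on only the residual.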