
760 points MindBreaker2605 | 3 comments
1. Jackson__ No.45898659
From the outside, it always looked like they gave LeCun just barely enough compute for small scale experiments. They'd publish a promising new paper, show it works at a small scale, then not use it at all for any of their large AI runs.

I would have loved to see a VLM utilizing JEPA, for example, but it simply never happened.

replies(2): >>45899434 >>45899567
2. sakex No.45899434
I'd be surprised if they didn't scale it up.
3. tucnak No.45899567
The obvious explanation is that they did scale it up, but it turned out to be total shite, like most new architectures.