
760 points by MindBreaker2605 | 1 comment
Jackson__ No.45898659
From the outside, it always looked like they gave LeCun just barely enough compute for small-scale experiments. They'd publish a promising new paper, show that it works at small scale, and then never use it in any of their large training runs.

I would have loved to see a VLM using JEPA, for example, but it simply never happened.

1. tucnak No.45899567
The obvious explanation is that they did scale it up, but it turned out to be total shite, like most new architectures.