
760 points MindBreaker2605 | 1 comments
sebmellen ◴[] No.45897467[source]
Making LeCun report to Wang was the most boneheaded move imaginable. But… I suppose Zuckerberg knows what he wants, which is AI slopware and not truly groundbreaking foundation models.
replies(20): >>45897481 #>>45897498 #>>45897518 #>>45897885 #>>45897970 #>>45897978 #>>45898040 #>>45898053 #>>45898092 #>>45898108 #>>45898186 #>>45898539 #>>45898651 #>>45898727 #>>45899160 #>>45899375 #>>45900884 #>>45900885 #>>45901421 #>>45903451 #
garyclarke27 ◴[] No.45898053[source]
Zuck did this on purpose, humiliating LeCun so he would leave. Despite being proved wrong about LLMs' capabilities, such as reasoning, LeCun remained extremely negative. That is not exactly inspiring leadership for the Meta AI team; he had to go.
replies(1): >>45899016 #
aiven ◴[] No.45899016[source]
But LLMs still can't reason... in any reasonable sense. No matter how you look at it, an LLM is still a statistical model that guesses the next word; it doesn't think or reason per se.
replies(2): >>45901801 #>>45902954 #
astrange ◴[] No.45901801[source]
It does not guess the next word; the sampler chooses subword tokens. Your explanation can't even account for why it generates coherent words.
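To illustrate astrange's point, here is a minimal sketch of temperature-softmax sampling over subword tokens. The vocabulary, logits, and function names are hypothetical toy values, not any real model's internals; the point is that the sampler draws token fragments like "ing" from a probability distribution, and coherent words emerge only because the model assigns high probability to the right continuation.

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Convert raw logits into a probability distribution."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample(vocab, logits, rng):
    """Draw one subword token from the softmax distribution."""
    probs = softmax(logits)
    r = rng.random()
    cum = 0.0
    for tok, p in zip(vocab, probs):
        cum += p
        if r < cum:
            return tok
    return vocab[-1]  # guard against floating-point rounding

# Hypothetical subword vocabulary: "reasoning" is not one token;
# it is assembled from pieces like "reason" + "ing".
vocab = ["reason", "ing", " the", " model"]

rng = random.Random(0)
# A model whose logits strongly prefer "ing" after "reason" yields the
# coherent word "reasoning", even though no whole word was ever sampled.
logits_after_reason = [0.1, 5.0, 0.2, 0.1]
token = sample(vocab, logits_after_reason, rng)
print("reason" + token)  # → reasoning
```

If the logits were flat, the sampler would happily splice incoherent fragments together, which is why the distribution the model produces, not the sampling step itself, carries the linguistic structure.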