124 points | alphadelphi | 1 comment
gsf_emergency_2 No.43594330
Recent talk: https://www.youtube.com/watch?v=ETZfkkv6V7Y

LeCun, "Mathematical Obstacles on the Way to Human-Level AI"

Slide (Why autoregressive models suck)

https://xcancel.com/ravi_mohan/status/1906612309880930641

replies(3): >>43594385 #>>43594491 #>>43594527 #
hatefulmoron No.43594491
Maybe someone can explain it to me, but isn't that slide sort of just describing what makes solving problems hard in general? That there are many more decisions which put you on an inevitable path of failure?

"Probability e that any produced [choice] takes us outside the set of correct answers .. probability that answer of length n is correct: P(correct) = (1-e)^{n}"

replies(2): >>43594606 #>>43594957 #
1. somenameforme No.43594957
I think he's focusing on the distinction, in humans, between a fact and the words used to express it, and drawing a parallel to LLMs.

If I ask you something that you know the answer to, the words you use and that fact itself are distinct entities. You're just giving me a presentation layer for fact #74719.

But LLMs lack any comparable pool to draw from, and so their words and their answer are essentially the same thing.