LeCun, "Mathematical Obstacles on the Way to Human-Level AI"
Slide (Why autoregressive models suck)
LeCun, "Mathematical Obstacles on the Way to Human-Level AI"
Slide (Why autoregressive models suck)
> just one of the many tools of reason.
Read https://en.wikipedia.org/wiki/Preference_(economics)#Transit... then read https://pmc.ncbi.nlm.nih.gov/articles/PMC7058914/, and you will see there is a lot of data suggesting that it is indeed just one of the many tools!
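(For context, the property at stake behind those links is transitivity of preference; roughly, as a sketch:)

    % Transitivity axiom for preferences: if x is weakly preferred to y and
    % y to z, then x should be weakly preferred to z. The linked material is
    % about how often real choices depart from this.
    \[
      x \succeq y \;\wedge\; y \succeq z \;\Longrightarrow\; x \succeq z .
    \]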
I think it's similar to how many people dislike the non-deterministic output of LLMs: when you use statistical tools, non-deterministic output is a VERY nice feature for exploring conceptual spaces with abductive reasoning: https://en.wikipedia.org/wiki/Abductive_reasoning
It's an approach I was using at a previous company: mixing LLMs, statistics, and formal tools. I'm surprised there aren't more startups mixing LLMs with Z3 or even just Prolog.
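To make that concrete, here is a minimal sketch of the kind of mix I mean, using the z3-solver Python bindings; the llm_propose_constraints helper is hypothetical, standing in for a real LLM call:

    # Minimal sketch: have an LLM propose candidate facts/constraints for a
    # problem, then use Z3 to check that they are jointly satisfiable before
    # trusting the generated answer. `llm_propose_constraints` is a
    # hypothetical stand-in for an actual LLM call.
    from z3 import Int, Solver, sat

    def llm_propose_constraints():
        # Pretend the LLM turned a word problem into arithmetic constraints.
        x, y = Int("x"), Int("y")
        return [x + y == 10, x - y == 4, x > 0, y > 0]

    s = Solver()
    s.add(*llm_propose_constraints())

    if s.check() == sat:
        print("consistent, witness:", s.model())   # e.g. x = 7, y = 3
    else:
        print("contradictory constraints -> reject or resample the LLM output")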
If you look at the slide, the subtree of correct answers exists; what's missing is just a way to make those answers more prevalent rather than less.
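For reference, a sketch of the slide's argument as I read it, under its own (contestable) independence assumption:

    % If each generated token has probability e of leaving the tree of
    % correct answers, and errors are assumed independent, then after n tokens
    \[
      P(\text{stay in the correct subtree}) = (1 - e)^{n},
    \]
    % which goes to zero exponentially for any fixed e > 0.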
Personally, I think LeCun is just leaping to the wrong conclusion because he's sticking to the wrong tools for the job.
https://news.ycombinator.com/item?id=41892090
(It's very common, especially with educationally traumatized Americans, to identify Math with "calculation"/"approved tools" rather than with "the concepts")
"No amount of calculation will model conceptual thinking" <- sounds more reasonable?? (You said you were ok with nondeterministic outputs? :)
Sorry to come across as patronizing.
[if we disregard that he said "concepts are key" -- though we can be yet more charitable and assume that he doesn't accept (median) human-level intelligence as the final boss]
Para-doxxing ">" Under-standing
(I haven't thought this through, just vibe-calculating, as it were, having pondered the necessity of concrete particulars for a split-second)

(More on that "sophistiKated" aspect of "projeKtion": turns out not to be as idiosynKratic as I'd presumed, but I traded bandwidth for immediacy here, so I'll let GP explain why that's interesting, if he indeed finds it is :)
Wolfram (self-styled heir to Leibniz/Galois) seems to be serving himself a fronthanded compliment:
https://writings.stephenwolfram.com/2020/12/combinators-a-ce...
> What I called a “projection” then is what we’d call a function now; a “filter” is what we’d now call an argument)