GMoromisato (No.43594582):
I remember reading Douglas Hofstadter's Fluid Concepts and Creative Analogies [https://en.wikipedia.org/wiki/Fluid_Concepts_and_Creative_An...]

He wrote about Copycat, a program for understanding analogies ("abc is to 123 as cba is to ???"). The program worked at the symbolic level, in the sense that it hard-coded a network of relationships between words and characters. I wonder how close he was to "inventing" an LLM. The insight he needed was that instead of hard-coding patterns, he should have just trained on a vast set of patterns.
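
To make "worked at the symbolic level" concrete, here is a toy sketch in Python (rules invented by me, far cruder than the real Copycat) of what a hard-coded analogy solver looks like: the letter-digit relationships and the transformations it can recognize are all spelled out by hand.

    # Toy hard-coded analogy solver: "a is to b as c is to ?"
    # The letter->digit mapping and the rule list are written by hand,
    # which is what "symbolic level" means here.
    LETTER_TO_DIGIT = {"a": "1", "b": "2", "c": "3"}

    def reverse(s):
        return s[::-1]

    def substitute(s):
        return "".join(LETTER_TO_DIGIT.get(ch, ch) for ch in s)

    RULES = [reverse, substitute]

    def solve(a, b, c):
        # Find a hard-coded rule that turns a into b, then apply it to c.
        for rule in RULES:
            if rule(a) == b:
                return rule(c)
        return None

    print(solve("abc", "123", "cba"))  # -> "321" (letter-to-digit rule)
    print(solve("abc", "cba", "xyz"))  # -> "zyx" (reversal rule)

The LLM move is to delete LETTER_TO_DIGIT and RULES and let the transformations emerge from training on a huge number of examples instead.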

Hofstadter focused on Copycat because he saw pattern-matching as the core ability of intelligence. Unlocking that, in his view, would unlock AI. And, of course, pattern-matching is exactly what LLMs are good for.

I think he's right. Intelligence isn't about logic. In the early days of AI, people thought that a chess-playing computer would necessarily be intelligent, but that was clearly a dead-end. Logic is not the hard part. The hard part is pattern-matching.

In fact, pattern-matching is all there is: That's a bear, run away; I'm in a restaurant, I need to order; this is like a binary tree, I can solve it recursively.
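
Even the last of those is just shape recognition followed by mechanical work. A minimal sketch (hypothetical toy problem): once you have matched the pattern "this is a binary tree", the recursion writes itself.

    # Once a problem is recognized as tree-shaped, the recursive solution
    # follows mechanically. A node here is just (left_subtree, right_subtree).
    def tree_size(node):
        if node is None:   # base case: empty subtree
            return 0
        left, right = node
        return 1 + tree_size(left) + tree_size(right)

    print(tree_size(((None, None), (None, (None, None)))))  # -> 4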

I honestly can't come up with a situation that calls for intelligence that can't be solved by pattern-matching.

In my opinion, LeCun is moving the goalposts. He's saying LLMs make mistakes and therefore they aren't intelligent and aren't useful. Obviously that's wrong: humans make mistakes and are usually considered both intelligent and useful.

I wonder if there is a necessary relationship between intelligence and mistakes. If you can solve a problem algorithmically (e.g., long division), then there won't be mistakes, but you don't need intelligence (you just follow the algorithm). But if you need intelligence (because no algorithm exists), then there will always be mistakes.
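
Long division is a good illustration: a plain digit-by-digit version (sketched below purely for illustration) involves no judgment at all, so it cannot make the kind of mistake I mean.

    # Digit-by-digit long division: a fixed procedure with no judgment
    # calls, so given valid inputs it never makes a mistake.
    def long_division(dividend, divisor):
        quotient_digits = []
        remainder = 0
        for digit in str(dividend):  # bring down the next digit
            remainder = remainder * 10 + int(digit)
            quotient_digits.append(str(remainder // divisor))
            remainder = remainder % divisor
        return int("".join(quotient_digits)), remainder

    print(long_division(7319, 6))  # -> (1219, 5), and 6 * 1219 + 5 == 7319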

GeorgeTirebiter (No.43594643):
What is dark matter? How do we eradicate cancer? How do we achieve world peace? I don't quite see how pattern-matching alone can solve questions like these.
strogonoff (No.43595229):
Pattern-matching can produce useful answers within the confines of a well-defined system. However, for such a solver to produce objective ground truth about an arbitrary question, it would need an all-encompassing system, and that is not something we have: any such system would include us, its makers, and is therefore unavailable to us (cf. the incompleteness conundrum, map vs. territory, and so forth).

The unsolved problems you list likely sit at the extremes of the maps you currently think in terms of. Maps become less useful as you approach the poorly defined extremes within them (a famous one being us humans, which is why so many unsolved challenges, to varying degrees of obviousness, concern our own psyche and physiology: world peace, cancer, and so on), and I assume useful pattern-matching degrades in the same way. Data to pattern-match against is collected and classified according to a preexisting model; if the model is wrong (which it is), the data may lead to spurious matches with wrong or nonsensical answers. Furthermore, if the answer has to be expressed in terms of a new system, another fallible map hitherto unfamiliar to the human mind, then pattern-matching over preexisting products of that very mind is unlikely to produce it.
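
A contrived sketch of that failure mode (categories made up purely to illustrate): a matcher built on data labeled under some preexisting model can only answer within that model's vocabulary, so a case the model never anticipated still gets forced into the nearest existing category instead of prompting a new map.

    # Data classified under a preexisting, too-narrow model: every query
    # is forced into "category A" or "category B", even queries the model
    # never anticipated -- a spurious match rather than a new map.
    LABELED_DATA = {
        (1.0, 0.0): "category A",
        (0.0, 1.0): "category B",
    }

    def nearest_match(x):
        def dist(p):
            return sum((a - b) ** 2 for a, b in zip(p, x))
        return LABELED_DATA[min(LABELED_DATA, key=dist)]

    print(nearest_match((37.0, 42.0)))  # -> "category B", confidently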