
124 points by alphadelphi | 1 comment
GMoromisato (No.43594582)
I remember reading Douglas Hofstadter's Fluid Concepts and Creative Analogies [https://en.wikipedia.org/wiki/Fluid_Concepts_and_Creative_An...]

He wrote about Copycat, a program for understanding analogies ("abc is to 123 as cba is to ???"). The program worked at the symbolic level, in the sense that it hard-coded a network of relationships between words and characters. I wonder how close he was to "inventing" an LLM? The insight he needed was that instead of hard-coding patterns, he should have just trained on a vast set of patterns.
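
To make the symbolic approach concrete, here is a toy sketch (my own illustration, not Hofstadter's actual Copycat, which used a "slipnet" of concepts and codelets): it hard-codes a single kind of relationship, a character-for-character mapping learned from the first pair, and reapplies it to the third term.

    # Toy symbolic analogy solver: "abc is to 123 as cba is to ???"
    # Illustrative sketch only; not Copycat's architecture.

    def solve_analogy(a: str, b: str, c: str) -> str:
        # Hard-code one relationship: a per-character substitution
        # learned from the first pair, then reapplied to the third term.
        if len(a) != len(b):
            raise ValueError("only handles same-length substitution analogies")
        mapping = dict(zip(a, b))
        return "".join(mapping.get(ch, "?") for ch in c)

    print(solve_analogy("abc", "123", "cba"))  # -> "321"

Every relationship such a program can use has to be wired in by hand; the move to LLMs is exactly to learn those relationships from a vast corpus instead.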

Hofstadter focused on Copycat because he saw pattern-matching as the core ability of intelligence. Unlocking that, in his view, would unlock AI. And, of course, pattern-matching is exactly what LLMs are good for.

I think he's right. Intelligence isn't about logic. In the early days of AI, people thought that a chess-playing computer would necessarily be intelligent, but that was clearly a dead-end. Logic is not the hard part. The hard part is pattern-matching.

In fact, pattern-matching is all there is: That's a bear, run away; I'm in a restaurant, I need to order; this is like a binary tree, I can solve it recursively.
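
That last example is worth spelling out: once a problem is matched to the "binary tree" pattern, the recursive solution is nearly mechanical. A minimal sketch of my own, purely to illustrate:

    # Recognize the binary-tree pattern and the recursive template follows.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Node:
        value: int
        left: Optional["Node"] = None
        right: Optional["Node"] = None

    def depth(node: Optional[Node]) -> int:
        # The same base-case-plus-recurse template solves depth, size, sum, search...
        if node is None:
            return 0
        return 1 + max(depth(node.left), depth(node.right))

    print(depth(Node(1, Node(2, Node(4)), Node(3))))  # -> 3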

I honestly can't come up with a situation that calls for intelligence that can't be solved by pattern-matching.

In my opinion, LeCun is moving the goalposts. He's saying LLMs make mistakes and therefore aren't intelligent and aren't useful. Obviously that's wrong: humans make mistakes and are usually considered both intelligent and useful.

I wonder if there is a necessary relationship between intelligence and mistakes. If you can solve a problem algorithmically (e.g., long division), there won't be mistakes, but you don't need intelligence either: you just follow the algorithm. But if you need intelligence (because no algorithm exists), then there will always be mistakes.
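
The long-division case makes the contrast concrete: the procedure below is purely mechanical (a rough sketch of my own, assuming a positive divisor and non-negative dividend), so there is nothing for intelligence to get right or wrong; you just turn the crank.

    # Digit-by-digit long division: follow the steps, no judgment required.
    def long_division(dividend: int, divisor: int) -> tuple[str, int]:
        quotient_digits = []
        remainder = 0
        for digit in str(dividend):
            remainder = remainder * 10 + int(digit)   # "bring down" the next digit
            quotient_digits.append(str(remainder // divisor))
            remainder = remainder % divisor
        quotient = "".join(quotient_digits).lstrip("0") or "0"
        return quotient, remainder

    print(long_division(1234, 7))  # -> ('176', 2)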

guhidalg (No.43594738)
I wouldn't call pattern matching intelligence; I would call it something closer to "trainability" or "educability". You can train a person to do a task without them understanding why it has to be done that way, but when they're confronted with a never-before-seen situation they have to understand the physical laws of the universe to find a solution.

Ask ChatGPT to answer something that no one on the internet has done before and it will struggle to come up with a solution.

throw310822 (No.43595086)
Pattern matching leads to compression: once you've identified a pattern, you can compress the original information by replacing it with the identified pattern. Patterns are symbols standing in for the information that was there originally, so manipulating patterns is the same as manipulating symbols. Compressing information by finding hidden connections, then operating on abstract representations of the original information, reorganising that information according to other patterns... this sounds a lot like intelligence.
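
A toy illustration of that pattern-to-compression step (my own sketch, not a real codec): find the most frequent chunk, stand a symbol in for it, and the text shrinks while staying fully recoverable.

    # Replace a discovered repeating chunk with a short symbol plus a
    # one-entry dictionary. Purely illustrative.
    def compress(text: str, chunk_len: int = 4) -> tuple[dict, str]:
        chunks = [text[i:i + chunk_len] for i in range(len(text) - chunk_len + 1)]
        pattern = max(set(chunks), key=chunks.count)   # most frequent chunk = the pattern
        symbol = "\x01"                                # stand-in symbol for the pattern
        return {symbol: pattern}, text.replace(pattern, symbol)

    def decompress(table: dict, encoded: str) -> str:
        for symbol, pattern in table.items():
            encoded = encoded.replace(symbol, pattern)
        return encoded

    original = "the cat sat on the mat, the cat sat again"
    table, encoded = compress(original)
    assert decompress(table, encoded) == original
    print(len(encoded), "<", len(original))  # shorter, and nothing is lost
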
GMoromisato (No.43595215)
Exactly! And once you compress a pattern, it can become a piece of a larger pattern.