GMoromisato No.43594582
I remember reading Douglas Hofstadter's Fluid Concepts and Creative Analogies [https://en.wikipedia.org/wiki/Fluid_Concepts_and_Creative_An...]

He wrote about Copycat, a program for understanding analogies ("abc is to 123 as cba is to ???"). The program worked at the symbolic level, in the sense that it hard-coded a network of relationships between words and characters. I wonder how close he came to "inventing" an LLM. The insight he was missing: instead of hard-coding patterns, he should have just trained on a vast set of them.
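
To make the contrast concrete, here is a toy version of the hard-coded symbolic approach (purely illustrative; Copycat's actual slipnet/workspace architecture is far richer, and the rule names here are mine): the letter-digit relationships and the candidate rules are all spelled out by hand, and the program just searches them.

    # Hand-coded relationships: letters a..i map to digits 1..9.
    LETTER_TO_DIGIT = {chr(ord("a") + i): str(i + 1) for i in range(9)}

    # Hand-coded candidate rules -- the search space is fixed in advance.
    RULES = {
        "translate": lambda s: "".join(LETTER_TO_DIGIT[ch] for ch in s),
        "reverse then translate":
            lambda s: "".join(LETTER_TO_DIGIT[ch] for ch in reversed(s)),
    }

    def solve(a, b, c):
        """'a is to b as c is to ?': find a rule mapping a -> b, apply it to c."""
        for name, rule in RULES.items():
            if rule(a) == b:
                return rule(c), name
        return None, None

    print(solve("abc", "123", "cba"))  # -> ('321', 'translate')

The LLM move is to delete LETTER_TO_DIGIT and RULES entirely and let the mapping fall out of training on a vast number of such patterns.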

Hofstadter focused on Copycat because he saw pattern-matching as the core ability of intelligence. Unlocking that, in his view, would unlock AI. And, of course, pattern-matching is exactly what LLMs are good for.

I think he's right. Intelligence isn't about logic. In the early days of AI, people thought that a chess-playing computer would necessarily be intelligent, but that was clearly a dead end. Logic is not the hard part. The hard part is pattern-matching.

In fact, pattern-matching is all there is: that's a bear, run away; I'm in a restaurant, I need to order; this is like a binary tree, I can solve it recursively.
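
Even that last one is mechanical once the pattern is spotted. A toy illustration (the encoding is hypothetical: a tree is a (left, value, right) tuple or None), just to show that after the match is made, the recursion writes itself:

    def depth(node):
        """Depth of a binary tree encoded as (left, value, right) or None."""
        if node is None:
            return 0
        left, _value, right = node
        return 1 + max(depth(left), depth(right))

    print(depth(((None, 1, None), 2, ((None, 3, None), 4, None))))  # -> 3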

I honestly can't come up with a situation that calls for intelligence but can't be solved by pattern-matching.

In my opinion, LeCun is moving the goalposts. He's saying LLMs make mistakes and therefore aren't intelligent or useful. Obviously that's wrong: humans make mistakes and are usually considered both intelligent and useful.

I wonder if there is a necessary relationship between intelligence and mistakes. If you can solve a problem algorithmically (e.g., long division), then there won't be mistakes, but you also don't need intelligence: you just follow the algorithm. But if you need intelligence (because no algorithm exists), then there will always be mistakes.
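
To make the first half concrete, here is long division as pure procedure, a minimal sketch for non-negative integers (the function and encoding are mine, just for illustration): every step is forced, so there is nothing to be intelligent, or mistaken, about.

    def long_division(dividend, divisor):
        """Return (quotient, remainder), computed digit by digit as on paper."""
        assert divisor > 0  # sketch only handles positive divisors
        quotient, remainder = 0, 0
        for digit in str(dividend):
            remainder = remainder * 10 + int(digit)  # bring down the next digit
            q = remainder // divisor                 # how many times the divisor fits
            remainder -= q * divisor
            quotient = quotient * 10 + q
        return quotient, remainder

    print(long_division(1234, 7))  # -> (176, 2); check: 7 * 176 + 2 == 1234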

GeorgeTirebiter No.43594643
What is dark matter? How do we eradicate cancer? How do we achieve world peace? I don't quite see how pattern-matching alone can solve questions like these.
kadushka No.43595803
So, how do we solve questions like these? How about collecting a lot of data and looking for patterns in it? In the process, scientists typically produce hypotheses, test them by collecting more data and finding more patterns, and try to correlate those patterns with patterns in existing knowledge. Do you agree?

If yes, it seems to me that LLMs should be much better at this than humans. I believe frontier models like o3 might already be better than humans; we are just starting to use them for these tasks. Give it a couple more years before drawing any conclusions.
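
That loop is concrete enough to caricature in code. A toy sketch (everything here is made up for illustration: the "experiment" secretly follows y = 3x + 1, and the hypotheses are just candidate functions):

    import math

    def collect_data(xs):
        # Stand-in for running an experiment; the hidden truth is y = 3x + 1.
        return [(x, 3 * x + 1) for x in xs]

    HYPOTHESES = {
        "linear": lambda x: 3 * x + 1,
        "quadratic": lambda x: x * x,
        "exponential": lambda x: math.exp(x),
    }

    def error(h, data):
        return sum((h(x) - y) ** 2 for x, y in data)

    train = collect_data(range(0, 5))   # look for patterns in the data
    test = collect_data(range(5, 10))   # collect more data and retest

    survivors = [name for name, h in HYPOTHESES.items()
                 if error(h, train) < 1e-9 and error(h, test) < 1e-9]
    print(survivors)  # -> ['linear']

In this framing, an LLM's job would be generating candidate entries for HYPOTHESES rather than checking them, which is the step where the comment expects models to outperform humans.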