
56 points trott | 1 comments | | HN request time: 0.202s | source
makapuf ◴[] No.40714795[source]
Funny that it doesn't take that much data to train your average 20th-century human genius. I'd say that if we're dreaming about the future of AI, learning and reasoning seem to be the biggest issues, not data. That said, the article title is about LLMs, so I guess that's what will need changing.
replies(3): >>40715430 #>>40715643 #>>40716666 #
1. bastien2 ◴[] No.40716666[source]
That's because humans learn in stages of growing complexity and semantic depth, and LLMs don't.

The chatbots do what infant humans do: mimic what they "see" until the pattern consistently matches what they saw, without any capacity to understand what they're doing.

Once humans have that part down, whole new layers of semantic learning kick in and produce the critical analysis we perceive as "intelligence".

LLMs, as a consequence of their design, lack those deeper layers. They are not artificially intelligent at all. Rather, they're the latest iteration of what, centuries ago, gave us steam-powered mechanical songbirds.