I remember all the hype OpenAI generated before the release of GPT-2 or something, where they were so afraid, ooh so afraid, to release it, and now it's a non-issue. It's all just marketing gimmicks.
So I hardly see why this is inherently crazy. At most I think it might not be scalable.
On the other hand, if you mean give you the correct answer to your question 100% of the time, then I agree; though then what about things that are only in your mind (guess-the-number-I'm-thinking type problems)?
I say: it's not human-like intelligence, it's just predicting the next token probabilistically.
Some AI advocate says: humans are just predicting the next token probabilistically, fight me.
The problem here is that "predicting the next token probabilistically" is a framing that can describe any kind of cleverness, up to and including magical, impossible omniscience. That doesn't mean it's how every kind of cleverness is actually done, or could realistically be done. And it has to be the correct next token, where all the details of what's actually required are buried in that word "correct": sometimes it literally means the same as "likely", and other times it just produces a reasonable, excusable, intelligence-esque effort.
This essay has aged extremely well.