That's because humans learn in stages of growing complexity and semantic depth, and LLMs don't.
The chatbots do what infant humans do: mimic what they "see" until their output consistently matches the pattern they observed, without any capacity to understand what they are doing.
Once humans have that stage done, whole new layers of semantic learning kick in and produce the critical analysis we perceive as "intelligence".
LLMs, as a consequence of their design, lack those deeper layers. They are not artificially intelligent at all. Rather, they're the latest iteration of what centuries ago gave us steam-powered songbirds.