Anthropomorphization is doing a lot of heavy lifting in your comment.
> While training LLMs to replicate the human output, the intelligence and understanding EMERGES in the internal layers.
Is it intelligence and understanding that emerges, or is it clever statistics applied to the sum of human knowledge, capable of surfacing patterns in the data that humans have never considered?
If this were truly intelligence, we would see groundbreaking advancements across all industries even at this early stage. We've seen a few, which is expected when the approach is to brute-force these systems into finding genuinely valuable patterns in the data. The rest of the time they generate unusable garbage that passes for insight because most humans are not domain experts, and verifying correctness is often labor intensive.
> These LLMs are already better than 90% of humans at understanding any subject, in the sense of answering questions about that subject and carrying on meaningful and reasonable discussion.
Again, exceptional pattern matching does not imply understanding. Just because these tools can generate patterns that mimic human-made patterns doesn't mean they understand anything about what they're generating. In fact, they'll tell you as much if you ask them.
> Yes occasionally they stumble or make a mistake, but overall it is very impressive.
This can still be very impressive, no doubt, and can have a profound impact on many industries and on society. But it's important to be realistic about what the technology is and does, and not repeat what tech bros whose income depends on this narrative tell us it is and does.