Any comparison to the human brain misses the point that an LLM only simulates one small part of it, and notably not the frontal lobe, the part required for intelligence, reasoning, self-awareness, etc.
So, no, it's not a question of philosophy. For an AI to enter that realm, it would need to be more than just an LLM with some bells and whistles; perhaps an LLM plus something else, something fundamentally different that does not currently exist.
But that 1% is pretty important.
For example, they are dismal at math problems that aren't just slight variations of problems they've seen before.
Here's one from blackpenredpen where ChatGPT insisted its solution to a problem solvable by high school (or talented middle school) students was correct, even after attempts to convince it that it was wrong: https://youtu.be/V0jhP7giYVY?si=sDE2a4w7WpNwp6zU&t=837
Rewind to earlier in the video to see the real answer.
https://chatgpt.com/share/67f40cd2-d088-8008-acd5-fe9a9784f3...
A human would probably say "I don't know how to solve this problem." But the free version of ChatGPT is confidently wrong.