It's not age-old, nor is it controversial. LLMs aren't intelligent by any stretch of the imagination. Each token is sampled from a probability distribution over what is statistically likely to follow the preceding context. There is no capability for understanding in the design of an LLM. It's not a matter of opinion; this just isn't how an LLM works.
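To make that mechanism concrete, here's a minimal sketch of the decode loop in Python. The `toy_model` returning random logits is a stand-in for a real network, and all the names are illustrative, but the shape of the loop is the point: score every vocabulary token given the context so far, normalize, pick one, append, repeat.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["the", "cat", "sat", "on", "mat", "<eos>"]

def toy_model(context_ids):
    # Stand-in for a real transformer: returns one score (logit) per
    # vocabulary token, conditioned on the tokens generated so far.
    return rng.normal(size=len(VOCAB))

def generate(prompt_ids, max_new_tokens=10, temperature=1.0):
    ids = list(prompt_ids)
    for _ in range(max_new_tokens):
        logits = toy_model(ids)
        # Softmax turns the logits into a probability distribution over
        # the vocabulary; the "next word" is just a sample from it.
        probs = np.exp(logits / temperature)
        probs /= probs.sum()
        next_id = int(rng.choice(len(VOCAB), p=probs))
        ids.append(next_id)
        if VOCAB[next_id] == "<eos>":
            break
    return " ".join(VOCAB[i] for i in ids)

print(generate([0]))
```

Everything the model "says" comes out of that loop: score, normalize, sample, append. There is no separate reasoning step anywhere in it.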
Any comparison to the human brain misses the point that an LLM simulates, at best, one small part of it, and notably not the frontal lobe, which is what's required for intelligence, reasoning, self-awareness, and so on.
So, no, it's not a question of philosophy. For an AI to enter that realm, it would need to be more than just an LLM with some bells and whistles: an LLM plus something else, perhaps, something fundamentally different that does not yet exist.