We're chasing our own tail with concepts like "reasoning". Let's shift to a nearby concept: "search". Can LLMs search for novel ideas and discoveries? They can, under the right circumstances. You have to provide idea-testing environments, the missing ingredient. Search and learn: it's what humans do, and AI can do it as well.
The whole issue with "reasoning" is that it is an incompletely defined concept. Over what domain, what problem space, and with what kind of experimental access do we define "reasoning"? Search is a better concept because it comes packaged with all of these things, without the conceptual murkiness, and it has been studied scientifically to a far greater extent.
I don't think we doubt that LLMs can learn given training data; we already accuse them of being mere interpolators or parrots. And we can agree, to some extent, that LLMs can recombine concepts correctly. So they have the learning part down.
As for the searching part, we can probably agree it's a matter of access to the search space, not of AI capability. It's an environment problem, and even a social one. Search usually extends beyond the lifetime of any single agent, so it has to be a cultural process, one in which language plays a central role.
When you break reasoning/progress/intelligence down into "search and learn", it becomes much more tractable and useful. We can also make more grounded predictions about AI by considering the search requirements it implies, not just the learning requirements.
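To make the decomposition concrete, here's a toy sketch in Python. Everything in it is hypothetical (the environment, the scoring function, the update rule); the point is only the structure: search proposes candidate ideas, the environment tests them, and learning biases the next round of proposals toward what worked.

```python
import random

# Toy "search and learn" loop: propose -> test -> keep -> update.
# The "environment" here is a made-up scoring function standing in
# for a real idea-testing environment.

def environment_score(idea: float) -> float:
    # Hypothetical environment: higher score means a better idea.
    return -(idea - 3.7) ** 2

def search_and_learn(steps: int = 1000) -> float:
    mean, spread = 0.0, 5.0                    # the learned proposal distribution
    best_idea, best_score = mean, environment_score(mean)
    for _ in range(steps):
        idea = random.gauss(mean, spread)      # search: propose a candidate
        score = environment_score(idea)        # test: run it in the environment
        if score > best_score:                 # keep what works
            best_idea, best_score = idea, score
            mean = idea                        # learn: shift proposals toward it
            spread *= 0.95                     # and narrow the search
    return best_idea

if __name__ == "__main__":
    print(search_and_learn())                  # converges near 3.7
```

Without the environment (the scoring function), the loop has nothing to select on, which is the point about the missing ingredient above.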
How much search did AlphaZero need to beat us at Go? How much search did humans pack into our 200K-year history, across some 10,000 generations? What was the cost of that journey of search? Those kinds of questions. By my napkin estimate, learning solves about 1/10,000th of the problem; search is 10,000x to a million times harder.
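Spelled out, the napkin math uses only the rough figures above, nothing measured:

```python
# Back-of-envelope figures from the paragraph above; illustrative only.
years_of_human_search = 200_000
generations = 10_000
print(years_of_human_search / generations)   # ~20 years per generation

fraction_solved_by_learning = 1 / 10_000
print(1 / fraction_solved_by_learning)       # 10,000x: the low end of the gap
```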