I'm sorry, but the whole stochastic parrot thing is so thoroughly debunked at this point that we should stop repeating it as if it's some kind of rare wisdom.
AlphaGo showed that even pre-LLM models could generate brand-new approaches to winning a game that human experts had never seen before and that didn't exist in any training material.
With a little thought and experimentation, it's pretty easy to show that LLMs can reason about concepts that do not exist in their training corpus.
You could invent a tiny DSL with brand-new, never-seen-before tokens, give two worked examples, then ask it to evaluate a gnarlier expression. If it solves it, it inferred and executed rules you just made up for the first time.
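Here's a rough Python sketch of that probe, under stated assumptions: the operator names ("zmug", "quab") and their semantics are invented on the spot, and the actual model call is left as a placeholder since no particular API is implied. The reference evaluator lets you grade whatever the model answers.

```python
# Invented-DSL probe: made-up tokens with made-up semantics, two worked
# examples, then a harder nested expression for the model to evaluate.
import re

# Invented semantics: "a zmug b" = a + 2*b, "a quab b" = (a * b) - a.
OPS = {
    "zmug": lambda a, b: a + 2 * b,
    "quab": lambda a, b: (a * b) - a,
}

def evaluate(expr: str) -> int:
    """Reference evaluator: parentheses first, then strict left-to-right."""
    expr = expr.strip()
    # Reduce innermost parenthesised groups until none remain.
    while "(" in expr:
        expr = re.sub(r"\(([^()]+)\)", lambda m: str(evaluate(m.group(1))), expr)
    tokens = expr.split()
    acc = int(tokens[0])
    for op, rhs in zip(tokens[1::2], tokens[2::2]):
        acc = OPS[op](acc, int(rhs))
    return acc

# Two worked examples plus one gnarlier test expression.
examples = ["3 zmug 4", "5 quab 2"]
test = "(2 zmug 3) quab (1 zmug 4)"

prompt = (
    "I just invented a tiny language. Worked examples:\n"
    + "\n".join(f"{e} = {evaluate(e)}" for e in examples)
    + f"\nNow evaluate: {test}\nAnswer with a single integer."
)

print(prompt)
print("expected:", evaluate(test))  # grade the model's reply against this
# model_answer = query_model(prompt)  # hypothetical LLM call, API not specified
```

If the model comes back with 64 here, it inferred the rules from two examples and composed them over nesting it never saw.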
Or you could drop in docs for a new, never-seen-before API and ask it to decide when and why to call which tool, run the calls, and revise after errors. If it composes a working plan and improves from feedback, that’s reasoning about procedures that weren’t in the corpus.
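A minimal sketch of that second probe, again with loudly labeled assumptions: the "frobnicate service" and both endpoints are fictional, the mock implementations exist only so the model's plan can actually run and fail, and `ask_model()` is a placeholder for whatever LLM client you use, not a real library call.

```python
# Made-up-API probe: give the model docs for a fictional service, execute
# whatever calls it proposes against mocks, and feed errors back to it.
import json

API_DOCS = """
Fictional frobnicate service (invented for this test):
- lookup_widget(name: str) -> {"id": int}        # fails if the name is unknown
- frobnicate(widget_id: int, level: int) -> str  # level must be 1-5
Goal: frobnicate the widget called "sprocket-7" at the highest legal level.
Reply with one JSON object per turn: {"tool": ..., "args": {...}} or {"done": ...}.
"""

WIDGETS = {"sprocket-7": 42}

def lookup_widget(name):
    if name not in WIDGETS:
        raise ValueError(f"unknown widget: {name}")
    return {"id": WIDGETS[name]}

def frobnicate(widget_id, level):
    if not 1 <= level <= 5:
        raise ValueError("level must be between 1 and 5")
    return f"widget {widget_id} frobnicated at level {level}"

TOOLS = {"lookup_widget": lookup_widget, "frobnicate": frobnicate}

def ask_model(transcript: str) -> str:
    """Placeholder: send the transcript to an LLM, return its JSON reply."""
    raise NotImplementedError("wire this up to your model of choice")

def run_probe(max_turns=6):
    transcript = [API_DOCS]
    for _ in range(max_turns):
        reply = json.loads(ask_model("\n".join(transcript)))
        if "done" in reply:
            return reply["done"]
        try:
            result = TOOLS[reply["tool"]](**reply["args"])
            transcript.append(f"OK: {result}")
        except Exception as err:  # feed the error back so the model can revise
            transcript.append(f"ERROR: {err}")
    return None
```

If the model looks up the widget, picks level 5, and recovers when you tell it a call failed, it's planning over an API that was written minutes ago.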