> LLMs build an internal representation that lets them efficiently and mostly successfully manipulate source code.
No, see, this is the problem right here. Everything in this discussion hinges on LLM behavior. While they can render text that looks as if it was produced by reasoning about the input, they are also often incapable of doing so.
LLMs can be used by people who reason about the input and output. Only if someone can show that LLMs can, without human intervention, go from a natural-language description all the way through the loop of building and maintaining the code could that argument be made.
The "LLM-as-AI" hinges entirely on their propensity to degenerate into nonsensical output being worked out. As long as that remains, LLMs will stay firmly in the camp of being usable to transform some inputs into outputs under supervision and that is no evidence of ability to reason. So the whole conversation devolves into people pointing out that they still descent into nonsense if left to their own devices, and the "LLM-as-AI" people saying "but when they don't..." as if it can be taken for granted that it is at all possible to get there.
Until that happens, using LLMs to generate code will remain a gimmick for using natural language to search for common patterns in popular programming languages.