> How can that be extrapolated with LLMs? How does a system independently know that it's arrived at a correct answer within a timeout or not?

That's the catch-22 with LLMs: you're supposed to be both the asker and the verifier, which in practice doesn't work that well. LLMs will just find snippets of code that match somehow and act on them (it's the "I'm Feeling Lucky" button with extra steps).
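For contrast, here's roughly what "asker and verifier" looks like when you try to mechanize it: a generate-and-verify loop where the only oracle is a test suite you wrote yourself. This is a minimal sketch, not anyone's actual product; `ask_llm` is a hypothetical stand-in for whatever completion API you use, and everything else is stdlib.

```python
import time

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a completion API; wire up your own."""
    raise NotImplementedError

def passes_tests(candidate: str, tests) -> bool:
    """The only oracle available: checks the asker wrote themselves."""
    namespace: dict = {}
    try:
        exec(candidate, namespace)              # run the proposed code
        return all(test(namespace) for test in tests)
    except Exception:
        return False                            # crashing counts as wrong

def generate_until_verified(prompt: str, tests, timeout_s: float = 60.0):
    """Keep sampling candidates until one passes or time runs out."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        candidate = ask_llm(prompt)
        if passes_tests(candidate, tests):
            return candidate                    # "correct" only up to your tests
    return None                                 # timeout: still no proof either way

# e.g. tests = [lambda ns: ns["add"](2, 2) == 4]
```

Note the loop can only ever report "passed the tests I thought of", which is exactly the catch-22: the verifier is as fallible as the asker.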
In traditional programming, coding is more a notation than anything. You're supposed to have a solution before coding, but because of how the human brain works, it's more like a blackboard, i.e. a helper for thinking. You write what you think is correct, verify your assumptions, then store it and forget about all of it once they hold. Once in a while, you revisit the design and make it more elegant (at least you hope you're allowed to).
LLM programming, when it first started, was pitched as direct English-to-finished-code translation. Now hopes have scaled down and it's more about turning precise specs into diff proposals. Which frankly does not improve productivity: either you could have used a generator that's faster and more precise (and cheaper), or you'll need to read the same amount of docs to verify everything as you would have needed to code the thing in the first place (and reading is already about 80% of the time spent coding).
So: no determinism with LLMs. The input has no formal structure, the output is sampled at random, and the domain is very large. It's like trying to find a specific grain of sand on a beach while not being fully sure it's there. I suspect most people are doing the equivalent of grabbing a handful of sand and declaring that's what they wanted all along.
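To make the "sampled at random" part concrete, here's a toy sketch of temperature sampling, the usual decoding step. The vocabulary and probabilities below are invented for illustration; real models do this over tens of thousands of tokens, one token at a time, which is where the beach-sized domain comes from.

```python
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float = 0.8) -> str:
    """Softmax-with-temperature sampling: higher T flattens the distribution."""
    weights = {tok: math.exp(score / temperature) for tok, score in logits.items()}
    total = sum(weights.values())
    r = random.random() * total
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # float-rounding fallback: return the last token

# Toy next-token distribution; the numbers are made up.
logits = {"return": 2.0, "yield": 1.2, "raise": 0.3}
print([sample_next_token(logits) for _ in range(5)])  # differs run to run
```

Pinning temperature to 0 makes this greedy and repeatable in theory, but batching and floating-point nondeterminism in real serving stacks are often reported to break that in practice.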