This is the way LLM-enhanced coding should (and I believe will) go.
Treating the LLM like a compiler is a much more scalable, extensible and composable mental model than treating it like a junior dev.
smartfunc doesn't really treat the LLM as a compiler: it doesn't generate Python code to fill out the function body. Instead, it converts the function into one that calls the LLM on every invocation, passing its docstring in as the prompt.
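In other words, the pattern is roughly this (a simplified sketch of the behavior described, not smartfunc's actual code; `call_llm` is a hypothetical stand-in for whatever model client you use):

```python
import functools

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real model client (an HTTP call to some LLM API).
    raise NotImplementedError("wire up your LLM client here")

def llm_func(fn):
    # Replace the stub function with one that queries the LLM on every call,
    # using the docstring as the prompt template.
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        prompt = fn.__doc__.format(*args, **kwargs)
        return call_llm(prompt)  # every single invocation hits the model
    return wrapper

@llm_func
def summarize(text):
    """Summarize the following text in one sentence: {0}"""
```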
A version that DID work like a compiler would be super interesting - it could replace the function body with generated Python code on your first call and then reuse that code on later calls, maybe even caching it on disk rather than just in memory.
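A minimal sketch of that compile-once variant, reusing the hypothetical `call_llm` stub from above (keying the cache on a hash of the spec means editing the docstring triggers recompilation):

```python
import functools, hashlib, pathlib

CACHE_DIR = pathlib.Path(".llm_cache")

def llm_compiled(fn):
    # Compile the docstring spec to Python once, then reuse the generated code.
    spec = fn.__doc__
    key = hashlib.sha256(spec.encode()).hexdigest()
    cache_file = CACHE_DIR / f"{fn.__name__}_{key}.py"
    loaded = {}  # memoize the implementation in-process as well

    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        if "impl" not in loaded:
            if not cache_file.exists():
                CACHE_DIR.mkdir(exist_ok=True)
                # One LLM call generates a concrete implementation of the spec,
                # written to disk so it survives process restarts.
                source = call_llm(
                    f"Write a Python function named {fn.__name__} that does: {spec}"
                )
                cache_file.write_text(source)
            namespace = {}
            exec(cache_file.read_text(), namespace)  # load the generated code
            loaded["impl"] = namespace[fn.__name__]
        return loaded["impl"](*args, **kwargs)
    return wrapper
```

After the first call, the LLM is out of the loop entirely; you're just running cached Python.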
Ah, cool, I didn't read closely enough.
Yeah, I do think that having LLMs act as compilers for super high-level specs (the new "code") is a much better approach than chatting with a bot to try to get the right code written. LLM-derived code should not be a "peer" to human-written code IMO; it should exist at some subordinate level.
The fact that they're non-deterministic makes this a bit different from a traditional compiler, but, as you say, caching a "known good" artifact could work.
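Since two runs over the same spec can produce different code, "known good" probably means validating a candidate before pinning it. Something like this hypothetical sketch (again using the `call_llm` stub from above; the function name and test format are assumptions for illustration):

```python
def compile_known_good(spec, fn_name, tests, max_attempts=3):
    # Because generation is non-deterministic, sample until a candidate
    # passes its tests, then pin that source as the "known good" artifact.
    for _ in range(max_attempts):
        source = call_llm(f"Write a Python function named {fn_name} that does: {spec}")
        namespace = {}
        try:
            exec(source, namespace)
            candidate = namespace[fn_name]
            if all(candidate(*args) == expected for args, expected in tests):
                return source  # cache this; future calls never touch the LLM
        except Exception:
            continue  # bad generation, try another sample
    raise RuntimeError(f"no candidate for {fn_name!r} passed its tests")

# e.g. compile_known_good("reverse a string", "rev", [(("abc",), "cba")])
```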