lukev No.43644995

This is the way LLM-enhanced coding should (and I believe will) go.

Treating the LLM like a compiler is a much more scalable, extensible and composable mental model than treating it like a junior dev.

simonw No.43645013

smartfunc doesn't really treat the LLM as a compiler: it doesn't generate Python code to fill out the function body. Instead, it converts the function into one that calls the LLM on every invocation, passing its docstring in as the prompt.
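
Roughly, a decorator like that looks like this (a minimal sketch of the pattern, not smartfunc's actual API; llm_call is a hypothetical stand-in for a real model client):

    import functools

    def llm_call(prompt: str) -> str:
        # Hypothetical placeholder: wire this up to whatever model client you use.
        raise NotImplementedError

    def llmfunc(fn):
        # Replace fn's body with a model call; its docstring becomes the prompt.
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            prompt = fn.__doc__.format(*args, **kwargs)
            return llm_call(prompt)  # one model call per invocation
        return wrapper

    @llmfunc
    def summarize(text):
        """Summarize the following text in one sentence: {0}"""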

A version that DID work like a compiler would be super interesting: it could replace the function body with generated Python code on your first call, reuse that code on subsequent calls, and maybe even cache it on disk rather than in memory.
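
Something like this, maybe (a sketch reusing the hypothetical llm_call stub above; the usual caveats about exec-ing generated code apply):

    import functools, hashlib, pathlib

    CACHE_DIR = pathlib.Path(".llm_compiled")

    def llm_compile(fn):
        # Key the cache on the docstring, so editing the "spec" recompiles.
        key = hashlib.sha256(fn.__doc__.encode()).hexdigest()[:16]
        path = CACHE_DIR / f"{fn.__name__}_{key}.py"

        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if not path.exists():  # first call: ask the model for an implementation
                CACHE_DIR.mkdir(exist_ok=True)
                source = llm_call(
                    f"Write a Python function named {fn.__name__} that does "
                    f"the following:\n{fn.__doc__}\nReturn only the code."
                )
                path.write_text(source)
            namespace = {}
            exec(path.read_text(), namespace)  # run the cached generated code
            return namespace[fn.__name__](*args, **kwargs)
        return wrapper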

lukev No.43647875

Ah, cool, I didn't read closely enough.

Yeah, I do think that LLMs acting as compilers for super high-level specs (the new "code") is a much better approach than chatting with a bot to try to get the right code written. LLM-derived code should not be a "peer" to human-written code IMO; it should exist at some subordinate level.

The fact that they're non-deterministic makes this a bit different from a traditional compiler, but as you say, caching a "known good" artifact could work.
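
One way to square the non-determinism with caching: regenerate until a candidate passes some tests, then pin that artifact as the "known good" one (again just a sketch, reusing the hypothetical llm_call):

    def compile_and_verify(fn, tests, attempts=3):
        # Retry a bounded number of times, since generation is non-deterministic.
        for _ in range(attempts):
            source = llm_call(f"Write a Python function named {fn.__name__} "
                              f"that does the following:\n{fn.__doc__}")
            namespace = {}
            try:
                exec(source, namespace)
                candidate = namespace[fn.__name__]
            except Exception:
                continue  # malformed output; try again
            if all(test(candidate) for test in tests):
                return candidate  # pin this artifact; never call the model again
        raise RuntimeError("no generated candidate passed the tests")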