70 points alexmolas | 10 comments
1. lukev No.43644995
This is the way LLM-enhanced coding should (and I believe will) go.

Treating the LLM like a compiler is a much more scalable, extensible and composable mental model than treating it like a junior dev.

2. simonw No.43645013
smartfunc doesn't really treat the LLM as a compiler: it's not generating Python code to fill out the function body. Instead, it converts the function into one that calls the LLM every time you call it, passing in its docstring as the prompt.
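
In decorator form, that call-time behavior looks roughly like this (a minimal sketch, not smartfunc's actual API; call_llm is a hypothetical stand-in for a real LLM client):

    import functools

    def call_llm(prompt: str) -> str:
        # Hypothetical stand-in for a real LLM client call.
        raise NotImplementedError

    def llm_func(fn):
        """Replace a stub function with one that prompts an LLM at call time."""
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            # The docstring is the prompt template; arguments fill its slots.
            prompt = fn.__doc__.format(*args, **kwargs)
            return call_llm(prompt)
        return wrapper

    @llm_func
    def summarize(text):
        """Summarize the following text in one sentence: {0}"""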

A version that DID work like a compiler would be super interesting: it could replace the function body with generated Python code on your first call and then reuse that in the future, maybe even caching that state on disk rather than in memory.
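
A sketch of that compiler-like variant under the same call_llm assumption: generate the source once, cache it on disk keyed by a hash of the spec, and exec the cached code on later runs:

    import hashlib
    import pathlib

    CACHE_DIR = pathlib.Path(".llm_cache")

    def llm_compile(fn):
        """Generate the function's code once via the LLM, then reuse the cached source."""
        compiled = None
        def wrapper(*args, **kwargs):
            nonlocal compiled
            if compiled is None:
                key = hashlib.sha256((fn.__name__ + (fn.__doc__ or "")).encode()).hexdigest()
                path = CACHE_DIR / f"{key}.py"
                if not path.exists():
                    CACHE_DIR.mkdir(exist_ok=True)
                    source = call_llm(
                        f"Write a Python function named {fn.__name__}. Spec: {fn.__doc__}"
                    )
                    path.write_text(source)
                namespace = {}
                exec(path.read_text(), namespace)  # executes generated code: review before trusting
                compiled = namespace[fn.__name__]
            return compiled(*args, **kwargs)
        return wrapper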

3. hedgehog No.43645175
I use something similar to this decorator (more or less a thin wrapper around instructor) and have looked a little bit at the codegen + cache route. It gets more interesting with the addition of tool calls, but I've found JSON outputs create quality degradation and reliability issues. My next experiment on that thread is to either use guidance (https://github.com/guidance-ai/guidance) or reimplement some of their heuristics to try to get tool calling without 100% reliance on JSON.
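
For illustration only (not guidance's actual heuristics): one way to reduce the reliance on JSON is to have the model emit tool calls in a rigid line-oriented format and parse it leniently:

    import re

    # Expected model output, one call per line, e.g.:
    #   TOOL search | query=weather in Paris; limit=3
    TOOL_RE = re.compile(r"^TOOL (\w+) \| (.*)$", re.MULTILINE)

    def parse_tool_calls(output: str) -> list[tuple[str, dict]]:
        """Extract (tool_name, kwargs) pairs from a model reply."""
        calls = []
        for name, arg_str in TOOL_RE.findall(output):
            kwargs = dict(p.split("=", 1) for p in arg_str.split("; ") if "=" in p)
            calls.append((name, kwargs))
        return calls
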
4. toxik No.43645658
Isn’t that basically just Copilot but way more cumbersome to use?
5. nate_nowack No.43645761
no https://bsky.app/profile/alternatebuild.dev/post/3lg5a5fq4dc...
6. photonthug No.43646624
Treating it as a compiler is obviously the way, right? Setting aside overhead if you're using local models: either the code gen is not deterministic, in which case you risk random breakage, or it is deterministic and you've decided to delete it anyway, punting on ever changing or optimizing it except in natural language. Why would anyone prefer either case? Code folding works fine if you just don't want to look at the code.

I can see this eventually going in the direction of "bidirectional synchronization" between the NL representation and the code representation (similar to how jupytext lets you work with notebooks in the browser or as markdown in an editor). But a single representation that's completely NL, deliberately throwing away the code representation, sounds like the opposite of productivity.

7. huevosabio No.43647762
Yes, that would indeed be very interesting.

I would like to try something like this in Rust:

- you use a macro to stub out the body of functions (so you just write the signature)
- the build step fills in the code and caches it
- on failures, the build step is allowed to change the LLM-generated function bodies until they satisfy the test / compile steps
- you can then convert the satisfying LLM-generated function bodies into hard code (or leave them within the domain of "changeable by the LLM")

It sandboxes what the LLM can actually alter and makes the generation happen in an environment where you can check right away whether it was done correctly. Being Rust, you get a lot more verification. And, crucially, it keeps you in the driver's seat. A rough sketch of that loop follows.
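
Sketching the core of that generate-and-verify loop in Python rather than Rust (call_llm and run_checks are hypothetical helpers for generation and for the compile/test step):

    def fill_stub(signature: str, spec: str, max_attempts: int = 5) -> str:
        """Ask the LLM for a function body and retry until the checks pass."""
        feedback = ""
        for _ in range(max_attempts):
            body = call_llm(f"Implement: {signature}\nSpec: {spec}\n{feedback}")
            ok, errors = run_checks(body)  # hypothetical: compile + run the tests
            if ok:
                return body  # a "known good" body you can pin or hand-edit later
            feedback = f"Previous attempt failed with:\n{errors}\nFix and retry."
        raise RuntimeError(f"no passing implementation after {max_attempts} attempts")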

8. lukev No.43647875
Ah, cool, I didn't read closely enough.

Yeah, I do think that LLMs acting as compilers for super high-level specs (the new "code") is a much better approach than chatting with a bot to try to get the right code written. LLM-derived code should not be "peer" to human-written code IMO; it should exist at some subordinate level.

The fact that they're non-deterministic makes it a bit different from a traditional compiler, but as you say, caching a "known good" artifact could work.

9. hombre_fatal No.43650257
https://github.com/eeue56/neuro-lingo

You can even pin the last result:

    pinned function main() {
      // Print "Hello World" to the console
    }
10. vrighter No.43650449
A compiler has one requirement that LLMs cannot meet: it has to be robust.