
70 points alexmolas | 1 comment
lukev ◴[] No.43644995[source]
This is the way LLM-enhanced coding should (and I believe will) go.

Treating the LLM like a compiler is a much more scalable, extensible and composable mental model than treating it like a junior dev.

replies(2): >>43645013 #>>43650449 #
simonw ◴[] No.43645013[source]
smartfunc doesn't really treat the LLM as a compiler - it's not generating Python code to fill out the function. Instead it converts that function into one that calls the LLM every time you call it, passing in its docstring as a prompt.

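Roughly that shape, as a sketch (this is not smartfunc's actual code, and llm_call is a hypothetical stand-in for a real client):

    import functools

    def llm_call(prompt: str) -> str:
        # Hypothetical stand-in for a real LLM client (OpenAI SDK, llm, etc.)
        raise NotImplementedError

    def llmfunc(fn):
        # Every call re-prompts the model; nothing is generated or cached.
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            prompt = fn.__doc__.format(*args, **kwargs)
            return llm_call(prompt)
        return wrapper

    @llmfunc
    def summarize(text):
        """Summarize this text in one sentence: {0}"""
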
A version that DID work like a compiler would be super interesting - it could replace the function body with generated Python code on your first call and then reuse that code on every future call, maybe even caching it on disk rather than in memory.

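Something like this, as a rough sketch (again using the hypothetical llm_call stub, and glossing over the obvious "you're exec'ing model output" safety caveat):

    import functools, hashlib, pathlib

    CACHE = pathlib.Path(".llm_compiled")

    def llm_compile(fn):
        # First call: ask the model to write the function, persist the source.
        # Later calls (even in a new process): reuse the cached code.
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            key = hashlib.sha256(fn.__doc__.encode()).hexdigest()[:12]
            path = CACHE / f"{fn.__name__}_{key}.py"
            if not path.exists():
                CACHE.mkdir(exist_ok=True)
                path.write_text(llm_call(
                    f"Write a Python function named {fn.__name__} "
                    f"that does the following: {fn.__doc__}"
                ))
            ns = {}
            exec(path.read_text(), ns)
            return ns[fn.__name__](*args, **kwargs)
        return wrapper
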
replies(6): >>43645175 #>>43645658 #>>43646624 #>>43647762 #>>43647875 #>>43650257 #
1. hedgehog ◴[] No.43645175[source]
I use something similar to this decorator (it's more or less a thin wrapper around instructor) and have looked a bit at the codegen + cache route. It gets more interesting with the addition of tool calls, but I've found JSON outputs cause quality degradation and reliability issues. My next experiment along those lines is to either use guidance (https://github.com/guidance-ai/guidance) or reimplement some of its heuristics to try to get tool calling without 100% reliance on JSON.
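
For reference, the constrained-decoding approach in guidance looks roughly like this (assuming its current models/select/gen API and a local llama.cpp backend; details may differ) - the tool name is forced into a fixed vocabulary at decode time, so there's no JSON to parse:

    from guidance import models, select, gen

    lm = models.LlamaCpp("path/to/model.gguf")  # or another supported backend

    # Constrain the tool name to a fixed vocabulary at decode time
    # instead of asking for (and then parsing) a JSON object.
    lm += "User: what's 12 * 34?\nTool: "
    lm += select(["calculator", "search", "none"], name="tool")
    lm += "\nArgs: "
    lm += gen(name="args", stop="\n")

    print(lm["tool"], lm["args"])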