
222 points by futurisold | 3 comments
1. krackers No.44401923
One question, OP: how does cost work for this? Do you pay the LLM inference cost (quite literally, if using an external API) every time you run a line that involves natural-language computation? E.g., what happens if you call a "symbolic" function in a loop?
replies(2): >>44403284, >>44404403
2. futurisold No.44403284
Yes, that's correct. If you're using, say, OpenAI, then every semantic op is an API call to OpenAI. If you're hosting a local LLM via llama.cpp, then obviously there's no inference cost beyond that of hosting the model.
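
To make the cost model concrete, here's a minimal sketch, assuming the openai Python package's v1 client and a hypothetical semantic_op wrapper (illustrative, not the project's actual API): every invocation is one billed request, so a loop multiplies the cost by its iteration count.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def semantic_op(text: str) -> str:
        # Hypothetical wrapper: each call issues one paid API request.
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model name
            messages=[{"role": "user", "content": f"Summarize: {text}"}],
        )
        return resp.choices[0].message.content

    items = ["doc one", "doc two", "doc three"]
    # Three iterations -> three API calls, each billed by token usage.
    results = [semantic_op(item) for item in items]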
3. demarq No.44404403
This will need a cache of some sort
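
One straightforward approach, sketched below: memoize on the prompt string, so identical semantic calls hit the network only once. This again assumes the openai v1 client; cached_semantic_op is a hypothetical name, not part of the project. The trade-off is that a cache collapses the model's sampling nondeterminism, returning the first answer for every repeat of the same prompt.

    from functools import lru_cache

    from openai import OpenAI

    client = OpenAI()

    @lru_cache(maxsize=1024)
    def cached_semantic_op(prompt: str) -> str:
        # Repeated prompts are served from the in-process cache;
        # only the first occurrence pays for inference.
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model name
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content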