
222 points | futurisold | 1 comment
krackers No.44401923
One question, OP: how does cost work for this? Do you pay the LLM inference cost (quite literally, if using an external API) every time you run a line that involves natural-language computation? E.g., what happens if you call a "symbolic" function in a loop?
replies(2): >>44403284 >>44404403
futurisold No.44403284
Yes, that's correct. If you're using, say, OpenAI, then every semantic op is an API call to OpenAI. If you're hosting a local LLM via llama.cpp, then there's obviously no inference cost beyond that of hosting the model.
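
To make the cost model concrete, here's a minimal sketch of the loop scenario, assuming a symai-style `Symbol` wrapper with a `query` method (the names are illustrative assumptions, not the library's confirmed API):

    # Minimal sketch of per-call inference cost (assumption: a symai-style
    # `Symbol` wrapper with a `query` method; illustrative, not the
    # library's confirmed interface).
    from symai import Symbol

    items = ["alpha", "beta", "42"]
    results = []
    for item in items:
        # Each semantic op below issues one LLM request (a billed OpenAI
        # API call if that's the configured backend), so N iterations cost
        # roughly N requests' worth of tokens.
        results.append(Symbol(item).query("Is this a Greek letter? Answer yes or no."))

    # Batching all items into a single prompt trades N calls for 1 call
    # with a longer prompt. With a local llama.cpp backend the per-call
    # dollar cost disappears, though per-call latency remains.
    batched = Symbol(items).query("For each item, answer yes or no: is it a Greek letter?")

The practical upshot is the same as with any metered API: hoist semantic ops out of hot loops, batch where prompts allow it, or point the backend at a locally hosted model when volume makes per-token pricing prohibitive.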