
169 points by constantinum | 1 comment
StrauXX No.40714872
Did I understand the documentation for many of these libraries correctly in that they reprompt until they receive valid JSON? If so, I don't understand why one would do that when token masking is a deterministically verifiable way to get structured output of any kind (as done by Guidance and LMQL, for instance). This is not meant to be snarky, I really am curious. Is there an upside to reprompting, aside from easier implementation?
replies(4): >>40714984 #>>40714988 #>>40715185 #>>40715620 #
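For context, "token masking" here means constraining decoding itself: at each step, tokens that cannot extend a valid prefix of the target format are masked to negative infinity before sampling, so the output is guaranteed to be well-formed. A minimal sketch, using a made-up toy vocabulary and fake logits rather than a real model or tokenizer:

```python
# Toy sketch of token masking for structured decoding. The vocabulary,
# "grammar", and logits are all hypothetical; real constrained decoders
# (Guidance, LMQL) do the same thing against the model's full vocabulary.
import math

VOCAB = ["{", "}", '"k"', ":", "1", "hello"]

def allowed_next(prefix: str) -> set:
    """Toy grammar: allow only tokens that keep the output a valid
    prefix of the JSON object {"k":1}."""
    target = '{"k":1}'
    return {i for i, tok in enumerate(VOCAB)
            if target.startswith(prefix + tok)}

def decode(fake_logits_per_step):
    out = ""
    for logits in fake_logits_per_step:
        mask = allowed_next(out)
        # Disallowed tokens get -inf, so they can never be chosen.
        masked = [l if i in mask else -math.inf
                  for i, l in enumerate(logits)]
        best = max(range(len(VOCAB)), key=lambda i: masked[i])
        out += VOCAB[best]
    return out

# Even though the raw logits strongly prefer an invalid token
# ("hello"), masking forces a valid continuation at every step.
steps = [[0.1, 0.2, 0.3, 0.0, 0.0, 9.9]] * 5
print(decode(steps))  # → {"k":1}
```

The guarantee is structural: no reprompting loop is needed because an invalid token can never be emitted in the first place.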
hellovai No.40714984
The main one is that most people don't own the model. So if you use OpenAI / Anthropic / etc., then you can't use token masking. In that case, reprompting is pretty much the only option.
replies(2): >>40716262 #>>40725394 #
StrauXX No.40725394
You actually can. With OpenAI at least, you can definitely use token masking. There are some limitations, but even those are circumventable. I have used token masking with LMQL against the OpenAI API without any issues.
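The mechanism that makes this possible over the API is OpenAI's documented `logit_bias` parameter: a map from token ids to a bias in [-100, 100], where -100 effectively bans a token. A hedged sketch of building such a request payload (the token ids below are made up for illustration; in practice you would look them up with the model's tokenizer, and the parameter only accepts a limited number of entries per request, which is one of the limitations mentioned above):

```python
# Hedged sketch: approximating token masking over the OpenAI API via
# the `logit_bias` request parameter. Token ids here are hypothetical.

def build_logit_bias(banned_token_ids):
    """Map each banned token id to -100, which effectively prevents
    the model from emitting that token."""
    return {str(tid): -100 for tid in banned_token_ids}

# Hypothetical ids for tokens we never want in a JSON-only response.
payload = {
    "model": "gpt-4o",  # assumed model name for illustration
    "messages": [{"role": "user", "content": "Return JSON only."}],
    "logit_bias": build_logit_bias([1234, 5678]),
    # Constrained decoders like LMQL advance step by step, issuing
    # short completions and recomputing the bias as the prefix grows.
    "max_tokens": 1,
}
print(payload["logit_bias"])  # → {'1234': -100, '5678': -100}
```

Because the bias is static per request, enforcing a grammar requires issuing multiple short requests and recomputing the banned set each time, which is costlier than masking inside a model you host yourself, but it does work.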