
204 points warrenm | 6 comments
1. ryukoposting ◴[] No.45107700[source]
A footnote in the GPT-5 announcement was that you can now give OpenAI's API a context-free grammar that the LLM must follow. One way of thinking about this feature is that it's a user-defined world model: you could tell the model that "the sky is" must be followed by "blue", for example.

Obviously you can't actually use this feature as a true world model. There's just too much stuff you have to codify, and basing such a system on tokens is inherently limiting.

The basic principle sounds like what we're looking for, though: a strict automaton or rule set that steers the model's output reliably and provably. Perhaps a similar kind of thing that operates on neurons rather than tokens? Hmm.
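
To make the grammar idea concrete, here's a minimal sketch using the Python lark library (IIRC the API takes Lark-style grammar definitions, but nothing below is OpenAI-specific, and the toy grammar is invented purely for illustration):

    # pip install lark
    from lark import Lark
    from lark.exceptions import LarkError

    # Toy "world model" as a context-free grammar: the only thing it
    # lets the model say about the sky is that it is blue.
    GRAMMAR = r"""
    start: "the sky is " COLOR
    COLOR: "blue"
    """

    parser = Lark(GRAMMAR, start="start")

    parser.parse("the sky is blue")        # accepted
    try:
        parser.parse("the sky is green")   # rejected: not in the grammar
    except LarkError:
        print("grammar violation")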

replies(4): >>45107903 #>>45108199 #>>45111729 #>>45112038 #
2. nxobject ◴[] No.45107903[source]
> There's just too much stuff you have to codify, and basing such a system on tokens is inherently limiting.

As a complete amateur who works in embedded: I imagine the restriction to a linear, ordered input stream is fundamentally limiting as well, even with the use of attention layers.

3. gavmor ◴[] No.45108199[source]
I suspect something more akin to a LoRA and/or circuit tracing will help us keep track of the truth.
4. spindump8930 ◴[] No.45111729[source]
It's good to have this support in APIs, but grammar-constrained decoding has been around for quite a while, even before the contemporary LLM era (e.g. [1] is similar in spirit). Local vs. global planning is a huge issue here, though: if you enforce local constraints at decoding time, an LLM might be forced to make suboptimal token decisions. This can result in a "global" (i.e. over all tokens) miss, where the probability of the constrained output is far lower than the probability of the optimal response (which may also conform to the grammar). Algorithms like beam search can alleviate this, but it's still difficult. This is one of the reasons XML tags work better than JSON outputs - fewer constraints on "weird" tokens.

[1] https://aclanthology.org/P17-2012/
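
To make the local-vs-global point concrete, here's a toy two-token example (every probability below is invented for illustration):

    # Toy two-step decode. The grammar admits only the sequences "A B" and "C D".
    p_first = {"A": 0.6, "C": 0.3, "X": 0.1}       # model's preferences for token 1
    p_second = {                                    # preferences for token 2, given token 1
        "A": {"B": 0.1, "Z": 0.9},                  # after "A" the model really wants "Z" (invalid)
        "C": {"D": 0.9, "Z": 0.1},
    }
    grammar = {("A", "B"), ("C", "D")}              # the only sequences the grammar allows

    # Greedy decoding with a local grammar mask: best *valid* token at each step.
    t1 = max((t for t in p_first if any(s[0] == t for s in grammar)), key=p_first.get)
    t2 = max((t for t in p_second[t1] if (t1, t) in grammar), key=p_second[t1].get)
    greedy_p = p_first[t1] * p_second[t1][t2]

    # Globally best sequence that satisfies the grammar.
    best = max(grammar, key=lambda s: p_first[s[0]] * p_second[s[0]][s[1]])
    best_p = p_first[best[0]] * p_second[best[0]][best[1]]

    print((t1, t2), round(greedy_p, 2))   # ('A', 'B') 0.06  <- locally greedy, globally poor
    print(best, round(best_p, 2))         # ('C', 'D') 0.27  <- what beam search could recover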

replies(1): >>45140110 #
5. ijk ◴[] No.45112038[source]
Oh, OpenAI finally added it? Structured generation has been available in things like llama.cpp and Instructor for a while, so I was wondering if they were going to get around to adding it.

In the examples I've seen, it's not something you can define an entire world model in, but you can sure constrain the immediate action space so the model does something sensible.
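
For the "constrain the immediate action space" case, a llama.cpp-style GBNF grammar looks roughly like this (the action list, model path, and prompt are all made up, and the llama-cpp-python calls are from memory, so treat the exact API as a sketch rather than documentation):

    # pip install llama-cpp-python
    from llama_cpp import Llama, LlamaGrammar

    # GBNF grammar: the model may only emit one of a handful of game actions.
    ACTION_GBNF = r"""
    root   ::= verb " " target
    verb   ::= "move" | "attack" | "inspect"
    target ::= "north" | "south" | "the door" | "the chest"
    """

    llm = Llama(model_path="model.gguf")               # hypothetical local model file
    grammar = LlamaGrammar.from_string(ACTION_GBNF)

    out = llm("You are standing in a dark room. What do you do?",
              grammar=grammar, max_tokens=16)
    print(out["choices"][0]["text"])                   # e.g. "inspect the door"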

6. jsmith45 ◴[] No.45140110[source]
Why would this be? I'm probably missing something.

Don't these LLMs fundamentally work by outputting a vector of scores over all possible tokens, which a sampler (typically some softmax variant followed by a random draw from that distribution) turns into the next token, which then becomes the newest input token, repeating until some limit is hit or an end-of-output token is selected?
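
In other words, something like this toy loop (a fake forward pass and a tiny vocabulary standing in for the real model):

    import numpy as np

    rng = np.random.default_rng(0)
    VOCAB = ["the", "sky", "is", "blue", "green", "<eos>"]

    def fake_logits(context):
        # Stand-in for the model's forward pass: one score per vocab entry.
        return rng.normal(size=len(VOCAB))

    def decode(max_len=10):
        context = []
        for _ in range(max_len):
            logits = fake_logits(context)
            probs = np.exp(logits) / np.exp(logits).sum()   # softmax over the vocab
            tok = rng.choice(len(VOCAB), p=probs)           # sample from that distribution
            if VOCAB[tok] == "<eos>":
                break
            context.append(VOCAB[tok])                      # becomes the newest input token
        return " ".join(context)

    print(decode())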

I don't see why limiting that sampling to the set of tokens valid under the grammar should be harmful compared with generating repeatedly until you get something that fits the grammar (assuming identical input to both processes). This is especially true if you maintain the relative probabilities of the grammar-valid tokens in the restricted sampling. If the relative probabilities are allowed to change substantially, then I could see that giving worse results.
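
By "maintain the relative probabilities" I mean something like zeroing out the invalid tokens and renormalizing (numpy only, toy numbers):

    import numpy as np

    def constrained_sample(logits, valid_ids, rng):
        # Unconstrained distribution over the whole vocab.
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        # Zero out grammar-invalid tokens and renormalize, so the *relative*
        # probabilities of the valid tokens are unchanged.
        mask = np.zeros_like(probs)
        mask[valid_ids] = 1.0
        constrained = probs * mask
        constrained /= constrained.sum()
        return rng.choice(len(logits), p=constrained)

    # Toy use: 5-token vocab, only tokens 1 and 3 are valid under the grammar here.
    rng = np.random.default_rng(0)
    print(constrained_sample(np.array([2.0, 1.0, 0.5, 0.9, -1.0]), [1, 3], rng))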

Now, I could certainly imagine that blindsiding the LLM with output restrictions when it expects to give a freeform response might produce worse results than prompting it to use that format without restricting it, simply because forcing an output shape that is unnatural and a poor fit for its training can make the LLM struggle to produce good output. I'd imagine the best results come from both textually prompting it to give output in the desired format and constraining the output so it can't accidentally go off the rails.