
ryukoposting:
A footnote in the GPT-5 announcement was that you can now give OpenAI's API a context-free grammar that the LLM's output must follow. One way of thinking about this feature is that it's a user-defined world model. You could tell the model "the sky is" => "blue", for example.
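If I'm remembering the announcement right, these grammars are written in Lark syntax. A toy grammar pinning the model to that one "fact" might look something like this (a sketch, not the exact schema the API accepts):

    // Toy Lark grammar: the only accepted output is "the sky is blue".
    start: "the sky is " COLOR
    COLOR: "blue"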

Obviously you can't actually use this feature as a true world model. There's just too much stuff you have to codify, and basing such a system on tokens is inherently limiting.

The basic principle sounds like what we're looking for, though: a strict automaton or rule set that steers the model's output reliably and provably. Perhaps a similar kind of thing that operates on neurons, rather than tokens? Hmm.
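To make "steers reliably" concrete: constrained decoding masks out every next token the grammar forbids, renormalizes what's left, and samples from that, so the output can never leave the grammar's language no matter what the model prefers. A minimal self-contained sketch, with a toy character-level "model" and a two-sentence "grammar" (all names and probabilities here are invented for illustration; real systems intersect an LLM's token distribution with a parser state at each step):

    import random

    # The "grammar": a finite language of two sentences.
    SENTENCES = ["the sky is blue", "the sky is grey"]

    def legal_next_chars(prefix: str) -> set[str]:
        """Characters the grammar permits immediately after `prefix`."""
        return {s[len(prefix)] for s in SENTENCES
                if s.startswith(prefix) and len(s) > len(prefix)}

    def model_distribution(prefix: str) -> dict[str, float]:
        """Stand-in for an LLM: uniform over lowercase letters and space."""
        vocab = "abcdefghijklmnopqrstuvwxyz "
        return {c: 1.0 / len(vocab) for c in vocab}

    def constrained_decode() -> str:
        out = ""
        while True:
            legal = legal_next_chars(out)
            if not legal:  # grammar complete: nothing more may be emitted
                return out
            probs = model_distribution(out)
            # Mask tokens the grammar forbids, renormalize, then sample.
            masked = {c: p for c, p in probs.items() if c in legal}
            total = sum(masked.values())
            chars, weights = zip(*masked.items())
            out += random.choices(chars, [w / total for w in weights])[0]

    print(constrained_decode())  # always one of the two grammatical sentences

The mask is what gives the guarantee; the model's probabilities only ever choose among grammatical continuations.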

nxobject:
> There's just too much stuff you have to codify, and basing such a system on tokens is inherently limiting.

As a complete amateur who works in embedded: I imagine the restriction to a linear, ordered input stream is fundamentally limiting as well, even with the use of attention layers.