
204 points | warrenm | 1 comment
ryukoposting No.45107700
A footnote in the GPT-5 announcement was that you can now give OpenAI's API a context-free grammar that the LLM's output must follow. One way of thinking about this feature is that it's a user-defined world model. You could tell the model "the sky is" => "blue", for example.

Obviously you can't actually use this feature as a true world model. There's just too much stuff you have to codify, and basing such a system on tokens is inherently limiting.

The basic principle sounds like what we're looking for, though: a strict automaton or rule set that steers the model's output reliably and provably. Perhaps a similar kind of thing that operates on neurons, rather than tokens? Hmm.
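To make the "the sky is" => "blue" idea concrete, here's a toy sketch of grammar-constrained decoding (not OpenAI's actual API; `toy_grammar` and `toy_model` are invented stand-ins): at each step the grammar masks the model's candidate tokens, and the decoder picks the best token the grammar permits, so an illegal continuation can never be emitted.

```python
# Illustrative sketch of grammar-constrained decoding. A "grammar" here is
# just a function mapping the prefix so far to the set of allowed next
# tokens; real systems compile a CFG into an equivalent token-level mask.

def constrained_decode(score_tokens, grammar, prefix, max_steps=10):
    """score_tokens(prefix) -> {token: score}, a stand-in for an LLM head."""
    out = list(prefix)
    for _ in range(max_steps):
        allowed = grammar(tuple(out))
        if not allowed:  # grammar has no legal continuation: stop
            break
        scores = score_tokens(tuple(out))
        # Mask: keep only grammar-legal tokens, then take the argmax.
        legal = {t: s for t, s in scores.items() if t in allowed}
        out.append(max(legal, key=legal.get))
    return out

# Hypothetical rule set encoding: "the sky is" => "blue".
def toy_grammar(prefix):
    if prefix == ("the", "sky", "is"):
        return {"blue"}  # forced continuation
    return set()         # otherwise, terminate

# Hypothetical model that would prefer the "wrong" answer unconstrained.
def toy_model(prefix):
    return {"green": 0.9, "blue": 0.1}

print(constrained_decode(toy_model, toy_grammar, ["the", "sky", "is"]))
# -> ['the', 'sky', 'is', 'blue']
```

Note the mask overrides the model's own preference ("green" scores higher), which is exactly the "steers the output reliably" property, and also the limitation: the guarantee only covers what the grammar happens to codify.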

replies(4): >>45107903 #>>45108199 #>>45111729 #>>45112038 #
1. gavmor No.45108199
I suspect something more akin to a LoRA and/or circuit tracing will help us keep track of the truth.