
454 points | nathan-barry | 1 comment
rafaelero No.45645557
The problem with this approach to text generation is that it's still not flexible enough. If, during inference, the model changes its mind and wants to output something considerably different, it can't, because too many tokens are already locked in place.
1. oezi No.45647311
Hasn't anybody added a backspace to an LLM's output token set yet?
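
A minimal sketch of what that could look like: a greedy decoding loop that treats a hypothetical <bksp> token as "delete the previous output token" before continuing. BKSP_ID and model.next_token are placeholder assumptions, not any real model's API:

    # Backspace-aware greedy decoding, assuming a model whose vocabulary
    # was extended with a hypothetical <bksp> token. BKSP_ID and the
    # model.next_token interface are placeholders, not a real library API.

    BKSP_ID = 50257  # hypothetical id for the added <bksp> token

    def decode_with_backspace(model, prompt_ids, max_steps=256, eos_id=0):
        """Greedy decoding where emitting <bksp> erases the last output token."""
        out = list(prompt_ids)
        prompt_len = len(prompt_ids)
        for _ in range(max_steps):
            next_id = model.next_token(out)  # assumed: returns the argmax token id
            if next_id == eos_id:
                break
            if next_id == BKSP_ID:
                # Retract the most recently generated token instead of
                # appending, but never delete into the prompt itself.
                if len(out) > prompt_len:
                    out.pop()
                continue
            out.append(next_id)
        return out[prompt_len:]

Decoding support is the easy part, though: the model would also have to be trained (or fine-tuned) to emit <bksp> at the right moments, since a model pretrained on ordinary text has never seen such a token.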