flykespice:
I know very little about LLMs, but wouldn't a confined model (no internet access) eventually just loop back to repeating itself on each consecutive run, or is there enough entropy for it to produce endless creativity?
zeta0134:
The model's weights are fixed. Most clients let you specify the "temperature", which influences how the predictive output will navigate that possibility space. There's a surprising amount of accumulated entropy in the context window, but yes, I think eventually it runs out of knowledge that it hasn't yet used to form some response.
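To make the temperature point concrete, here's a minimal sketch (Python/NumPy, with made-up logit values, not any particular client's API): the fixed model always produces the same logits for the same context, and temperature only reshapes the distribution those logits are sampled from.

```python
import numpy as np

def sample_token(logits, temperature=1.0, rng=np.random.default_rng()):
    """Sample a token id from raw logits after temperature scaling.

    Higher temperature flattens the distribution (more varied output);
    temperature near 0 approaches greedy, near-deterministic decoding.
    """
    scaled = np.asarray(logits, dtype=np.float64) / max(temperature, 1e-8)
    probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Toy logits for a 4-token vocabulary (hypothetical values).
logits = [2.0, 1.0, 0.5, -1.0]
print([sample_token(logits, temperature=0.2) for _ in range(5)])  # almost always token 0
print([sample_token(logits, temperature=1.5) for _ in range(5)])  # noticeably more variety
```

So the randomness comes from the sampling step (plus whatever ends up in the context window), not from the weights themselves.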

I think the model being fixed is a fascinating limitation. What research is being done toward models that can train themselves continually? That seems like it would let a model update itself with new knowledge over time, but I'm not sure how you'd do it efficiently.