
132 points by harel
flykespice No.45397008
I know very little about LLMs, but wouldn't a confined model (one with no internet access) eventually just loop back to repeating itself on each consecutive run, or is there enough entropy for it to produce endless creativity?
replies(3): >>45397107 >>45397116 >>45397913
ethmarks No.45397913
The underlying neural net in an LLM doesn't actually output tokens. It outputs a probability distribution over how likely each token is to come next. For example, given the prompt "once upon a ", the token with the highest probability is "time", followed by something like "child", and so on.

To turn this distribution into text, the software samples a token from it. I'm simplifying here, but how strongly it favors the most probable token is governed by the model's temperature. A temperature of 0 means that (in theory) it always chooses the most probable token, making the output deterministic. A non-zero temperature means it sometimes chooses less likely tokens, so it produces different results on every run.
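To make this concrete, here's a minimal sketch of temperature sampling in Python. The token list and logit values are invented for illustration (a real model scores tens of thousands of tokens, and production samplers add tricks like top-k or top-p filtering), but the core mechanism looks like this:

    import numpy as np

    # Hypothetical logits the network might assign to candidate next
    # tokens after the prompt "once upon a " (values invented here).
    tokens = ["time", "child", "dream", "midnight"]
    logits = np.array([4.0, 2.0, 1.0, 0.5])

    def sample_next_token(logits, temperature):
        """Sample a token index from a temperature-scaled softmax."""
        if temperature == 0:
            # Greedy decoding: always take the most probable token.
            return int(np.argmax(logits))
        # Dividing by temperature sharpens (<1) or flattens (>1) the
        # distribution before converting logits to probabilities.
        probs = np.exp(logits / temperature)
        probs /= probs.sum()
        return int(np.random.choice(len(logits), p=probs))

    # temperature=0 is deterministic; temperature>0 varies run to run.
    print(tokens[sample_next_token(logits, 0)])    # always "time"
    print(tokens[sample_next_token(logits, 1.0)])  # usually "time"

Run it a few times: with temperature 0 you get "time" every time, while temperature 1.0 occasionally picks one of the others. That sampled variation is the main reason consecutive runs don't just repeat themselves.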

Hope this helps.

replies(1): >>45399251
yatopifo No.45399251
This makes me wonder: are we in a fancy simulation with an elaborate sampling mechanism? Not that the answer would matter…