
725 points simonw | 1 comments
xnx ◴[] No.44527256[source]
> It’s worth noting that LLMs are non-deterministic,

This is probably better phrased as "LLMs may not provide consistent answers due to changing data and built-in randomness."

Barring rare(?) GPU race conditions, LLMs produce the same output given the same inputs.

replies(7): >>44527264 #>>44527395 #>>44527458 #>>44528870 #>>44530104 #>>44533038 #>>44536027 #
msgodel ◴[] No.44527264[source]
I run my local LLMs with a seed of one. If I re-run my "ai" command (which starts a conversation with its parameters as a prompt) I get exactly the same output every single time.
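A toy sketch of why this works (plain Python, not any particular LLM runtime; the token IDs, logits, and temperature here are made up for illustration): sampling only looks random because the RNG state is hidden, so seeding the RNG makes the whole generation reproducible.

```python
import math
import random

def sample_next_token(logits, temperature, rng):
    """Temperature-scaled softmax sampling, driven by an explicit RNG."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # rng.choices draws from the categorical distribution over token IDs
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

def generate(seed, steps=5):
    """Generate a short token sequence; same seed -> same sequence."""
    rng = random.Random(seed)             # all randomness flows from this seed
    fake_logits = [2.0, 1.0, 0.5, 0.1]    # stand-in for model outputs
    return [sample_next_token(fake_logits, 0.8, rng) for _ in range(steps)]

# With the seed pinned, two runs agree exactly:
assert generate(1) == generate(1)
```

Real inference stacks add complications (batching, non-associative floating-point reductions on GPUs), which is where the "barring rare GPU race conditions" caveat comes in, but the sampling step itself is deterministic once the seed is fixed.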
replies(2): >>44527284 #>>44527453 #
xnx ◴[] No.44527284[source]
Yes. This is what I was trying to say. Saying "It’s worth noting that LLMs are non-deterministic" is wrong and should be changed in the blog post.
replies(3): >>44527462 #>>44528765 #>>44529031 #
DemocracyFTW2 ◴[] No.44529031[source]
"Non-deterministic" in the sense that a dice roll is non-deterministic when you don't know every parameter with ultimate precision. On one hand I find the insistence on the wrongness of the phrase a bit too OCD; on the other I must agree that a very simple re-phrasing like "appears {non-deterministic|random|unpredictable} to an outside observer" would maybe even have added value for less technically-inclined folks, so yeah.