
724 points simonw | 3 comments
xnx ◴[] No.44527256[source]
> It’s worth noting that LLMs are non-deterministic,

This is probably better phrased as "LLMs may not provide consistent answers due to changing data and built-in randomness."

Barring rare(?) GPU race conditions, LLMs produce the same output given the same inputs.
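
This is easy to check locally. A minimal sketch (assuming the Hugging Face transformers library and a small causal LM; the model and prompt are just illustrative):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    inputs = tok("The capital of France is", return_tensors="pt")
    # Greedy decoding: argmax at every step, no sampling involved.
    outs = [model.generate(**inputs, do_sample=False, max_new_tokens=10)
            for _ in range(3)]
    print(all(torch.equal(outs[0], o) for o in outs))  # True on a fixed stack

With sampling disabled, repeated runs on the same hardware/software stack should match token for token.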

replies(7): >>44527264 #>>44527395 #>>44527458 #>>44528870 #>>44530104 #>>44533038 #>>44536027 #
troupo ◴[] No.44528870[source]
> Barring rare(?) GPU race conditions, LLMs produce the same output given the same inputs.

Are these LLMs in the room with us?

Not a single LLM available as a SaaS is deterministic.

As for other models: I've only run ollama locally, and it, too, provided different answers for the same question five minutes apart

Edit/update: not a single LLM available as a SaaS produces deterministic output, especially when used from a UI. Pointing out that you could probably run a tightly controlled model in a tightly controlled environment to achieve deterministic output is entirely irrelevant when describing the output of Grok in situations where the user has no control over it.
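
For reference, here is roughly what that "tightly controlled" setup looks like against ollama's local HTTP API (a sketch assuming the default endpoint and a pulled llama3 model; none of these knobs are pinned by default, which is why repeated runs from the UI differ):

    import requests

    def ask(prompt):
        r = requests.post(
            "http://localhost:11434/api/generate",
            json={
                "model": "llama3",
                "prompt": prompt,
                "stream": False,
                # Pin the sampling parameters instead of the defaults.
                "options": {"temperature": 0, "seed": 42},
            },
        )
        return r.json()["response"]

    print(ask("Why is the sky blue? One sentence.") ==
          ask("Why is the sky blue? One sentence."))  # typically True when pinned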

replies(5): >>44528884 #>>44528892 #>>44528898 #>>44528952 #>>44528971 #
orbital-decay ◴[] No.44528971[source]
> Not a single LLM available as a SaaS is deterministic.

Gemini Flash has deterministic outputs, assuming you're referring to temperature 0 (obviously). Gemini Pro seems to be deterministic within the same kernel (?) but is likely switching between a few different kernels back and forth, depending on the batch or some other internal grouping.
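
A sketch of what requesting that looks like (assuming the google-generativeai Python SDK; the model name and prompt are illustrative):

    import google.generativeai as genai

    genai.configure(api_key="...")  # your key
    model = genai.GenerativeModel("gemini-1.5-flash")
    cfg = genai.types.GenerationConfig(temperature=0.0, top_p=1.0)

    outs = [model.generate_content("Define entropy in one sentence.",
                                   generation_config=cfg).text
            for _ in range(3)]
    print(len(set(outs)))  # 1 if the backend is deterministic at temperature 0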

replies(1): >>44529041 #
1. troupo ◴[] No.44529041[source]
And is the author of the original article running Gemini Flash/Gemini Pro through an API where he can control the temperature? Can kernels be controlled by the user? Can any of those be controlled through the UIs/APIs from which most of these LLMs are invoked?

> but is likely switching between a few different kernels back and forth, depending on the batch or some other internal grouping.

So you're literally saying it's non-deterministic
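
(That's the usual mechanism, too: different kernels reduce floating-point values in different orders, float addition is not associative, and a tiny logit difference near a tie flips the argmax token, after which the whole continuation diverges. A toy illustration:)

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(10_000).astype(np.float32)

    s1 = float(x.sum())                                # one reduction order
    s2 = float(x.reshape(100, 100).sum(axis=0).sum())  # a different order
    print(s1 == s2, abs(s1 - s2))  # frequently unequal by a tiny amount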

replies(1): >>44529068 #
2. orbital-decay ◴[] No.44529068[source]
The only thing I'm saying is that there is a SaaS model that will give you the same output for the same input, over and over. You just seem to be arguing for the sake of arguing, especially considering that non-determinism is a red herring to begin with, and not a thing to care about for practical use (that's why providers usually don't bother guaranteeing it). The only reason it was mentioned in the article is that the author is basically reverse engineering a particular model.
replies(1): >>44532061 #
3. troupo ◴[] No.44532061[source]
> especially considering that non-determinism is a red herring to begin with, and not a thing to care about for practical use

On the contrary, it really is important in practical use: it's impossible to talk about findings like those in the original article without being able to consistently reproduce results.

Also, in almost all situations you really do want deterministic output (remember how "do what I want and what is expected" was an important property of computer systems? Good times).

> The only reason it was mentioned in the article is because the author is basically reverse engineering a particular model.

The author is attempting to reverse engineer the model, the randomness, the temperature, the system prompts, the training set, and all the possible layers added by xAI in between, and is still getting non-deterministic output.
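
The simple empirical test: fire the identical request at the endpoint several times and count distinct answers. A sketch (the base URL and model id assume xAI's OpenAI-compatible API and are assumptions; any compatible endpoint works the same way):

    from openai import OpenAI

    client = OpenAI(base_url="https://api.x.ai/v1", api_key="...")

    def once():
        r = client.chat.completions.create(
            model="grok-beta",  # assumed model id
            temperature=0,
            messages=[{"role": "user",
                       "content": "Name three prime numbers."}],
        )
        return r.choices[0].message.content

    print(len({once() for _ in range(5)}))  # >1 means non-deterministic at temp 0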

HN: no-no-no, you don't understand, it's 100% deterministic and it doesn't matter