724 points | simonw | 3 comments
xnx No.44527256
> It’s worth noting that LLMs are non-deterministic,

This is probably better phrased as "LLMs may not provide consistent answers due to changing data and built-in randomness."

Barring rare(?) GPU race conditions, LLMs produce the same output given the same inputs.
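A minimal sketch of that deterministic case, assuming Hugging Face transformers and PyTorch with "gpt2" as a stand-in model: with greedy decoding (no sampling at all), identical input produces identical output across repeated runs.

    # Sketch only: "gpt2" is a stand-in model, not what any SaaS provider runs.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    inputs = tok("The capital of France is", return_tensors="pt")
    with torch.no_grad():
        # do_sample=False means greedy decoding: no randomness is involved
        a = model.generate(**inputs, do_sample=False, max_new_tokens=10,
                           pad_token_id=tok.eos_token_id)
        b = model.generate(**inputs, do_sample=False, max_new_tokens=10,
                           pad_token_id=tok.eos_token_id)

    print(torch.equal(a, b))  # True: identical token ids on both runs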

replies(7): >>44527264 #>>44527395 #>>44527458 #>>44528870 #>>44530104 #>>44533038 #>>44536027 #
troupo No.44528870
> Barring rare(?) GPU race conditions, LLMs produce the same output given the same inputs.

Are these LLMs in the room with us?

Not a single LLM available as a SaaS is deterministic.

As for other models: I've only run ollama locally, and it, too, provided different answers for the same question five minutes apart.
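For context, ollama samples by default, so repeated runs can differ. A minimal sketch, assuming a local ollama server on its default port and "llama3" as an example model, of pinning temperature and seed through the REST options so the same prompt returns the same completion:

    import requests

    payload = {
        "model": "llama3",  # example model name; use whatever model is pulled locally
        "prompt": "Why is the sky blue? Answer in one sentence.",
        "stream": False,
        "options": {"temperature": 0, "seed": 42},  # pin sampling for repeatable output
    }
    r = requests.post("http://localhost:11434/api/generate", json=payload, timeout=120)
    print(r.json()["response"])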

Edit/update: not a single LLM available as a SaaS produces deterministic output, especially when used from a UI. Pointing out that you could probably run a tightly controlled model in a tightly controlled environment to achieve deterministic output is irrelevant when describing the output of Grok in situations where the user has no control over it.

replies(5): >>44528884 #>>44528892 #>>44528898 #>>44528952 #>>44528971 #
fooker No.44528884
> Not a single LLM available as a SaaS is deterministic.

Lower the temperature parameter.
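As a hedged sketch of what that looks like against an API (not something the chat UI exposes), using the OpenAI Python SDK with temperature pinned to 0 and a fixed seed, which the provider documents only as best-effort reproducibility:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[{"role": "user", "content": "Name the capital of France."}],
        temperature=0,        # always take the most likely token
        seed=42,              # best-effort reproducibility, per the API docs
    )
    print(resp.choices[0].message.content)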

replies(2): >>44528930 #>>44529115 #
1. troupo No.44528930
So, how does one do that outside of the API in the context we're discussing? In the UI, or when invoking @grok on X?

How do we also turn off all the intermediate layers in between that we don't know about, like "always rant about white genocide in South Africa" or "crash when the user mentions David Meyer"?

replies(1): >>44530946 #
2. marcinzm No.44530946
"Grok is not deterministic" would then be the correct statement.
replies(1): >>44532080 #
3. troupo No.44532080
When used through the UI, as the author does, Grok isn't. OpenAI isn't. Gemini isn't.