
724 points by simonw | 1 comment
xnx:
> It’s worth noting that LLMs are non-deterministic,

This is probably better phrased as "LLMs may not provide consistent answers due to changing data and built-in randomness."

Barring rare(?) GPU race conditions, LLMs produce the same output given the same inputs.

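In a tightly controlled local setup, that claim is testable. A minimal sketch, assuming the Hugging Face transformers library, with gpt2 and the prompt purely as illustrations: greedy decoding picks the highest-probability token at every step, so there is no sampling randomness to vary between runs.

    # Sketch: greedy decoding with a local model (gpt2 is illustrative).
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tokenizer("The capital of France is", return_tensors="pt")
    outputs = [
        tokenizer.decode(
            # do_sample=False means greedy decoding: no randomness in
            # token selection, only in the arithmetic underneath.
            model.generate(**inputs, do_sample=False, max_new_tokens=20)[0]
        )
        for _ in range(3)
    ]
    # On the same hardware and library versions, all runs should match.
    assert len(set(outputs)) == 1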
troupo:
> Barring rare(?) GPU race conditions, LLMs produce the same output given the same inputs.

Are these LLMs in the room with us?

Not a single LLM available as a SaaS is deterministic.

As for other models: I've only run ollama locally, and it, too, provided different answers to the same question five minutes apart.

Edit/update: not a single LLM available as a SaaS produces deterministic output, especially when used from a UI. Pointing out that you could probably run a tightly controlled model in a tightly controlled environment to achieve deterministic output is irrelevant when describing the output of Grok in situations where the user has no control over it.

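This claim is easy to probe against any hosted API. A sketch assuming the openai Python client (the model name is illustrative and may need updating); note that even with temperature 0 and a fixed seed, OpenAI documents the seed parameter as best-effort only:

    # Sketch: probing a hosted API for determinism.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask(prompt: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[{"role": "user", "content": prompt}],
            temperature=0,        # minimize sampling randomness
            seed=42,              # best-effort reproducibility only
        )
        return resp.choices[0].message.content

    # Distinct answers across identical calls indicate nondeterminism.
    answers = {ask("Name three prime numbers.") for _ in range(5)}
    print(len(answers))  # often > 1 in practice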
fooker:
> Not a single LLM available as a SaaS is deterministic.

Lower the temperature parameter. At temperature 0, decoding is effectively greedy, which removes the sampling randomness.

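Concretely, that looks something like the sketch below, assuming the ollama Python client and a locally pulled model (both illustrative). Temperature 0 plus a fixed seed pins down the sampler, though not the floating-point arithmetic underneath:

    # Sketch: pinning Ollama's sampler (model name is illustrative).
    import ollama

    resp = ollama.generate(
        model="llama3",
        prompt="What is the capital of France?",
        options={
            "temperature": 0,  # disable sampling randomness
            "seed": 42,        # fix the RNG for anything that still samples
        },
    )
    print(resp["response"])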
pydry:
It's not enough. I've done this and still often gotten different results for the same question.
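One plausible mechanism behind this, beyond changing server-side state: GPU kernels are free to reduce in whatever order is fastest, and floating-point addition is not associative, so logits can wobble between runs even with sampling disabled. A tiny illustration of the underlying arithmetic:

    # Sketch: summation order changes floating-point results.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(1_000_000).astype(np.float32)

    forward = np.sum(x)         # one summation order
    backward = np.sum(x[::-1])  # a different order
    print(forward == backward)  # frequently False
    # The difference is tiny, but enough to flip a near-tie between
    # candidate tokens, after which generations diverge completely.
    print(forward - backward)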