
1479 points sandslash | 2 comments
OJFord ◴[] No.44324130[source]
I'm not sure about the 1.0/2.0/3.0 classification, but it did lead me to think about LLMs as a programming paradigm: we've had imperative & declarative, procedural & functional languages; maybe we'll come to view deterministic vs. probabilistic (LLMs) similarly.

    def __main__:
        You are a calculator. Given an input expression, you compute the result and print it to stdout, exiting 0.
        Should you be unable to do this, you print an explanation to stderr and exit 1.
(and then, perhaps, a bunch of 'DO NOT express amusement when the result is 5318008', etc.)
replies(10): >>44324398 #>>44324762 #>>44325091 #>>44325404 #>>44325767 #>>44327171 #>>44327549 #>>44328699 #>>44328876 #>>44329436 #
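A minimal sketch of how such a "probabilistic function" could be wired up today, assuming the OpenAI Python client; the model name, the system-prompt wording, and the ERROR-prefix convention are illustrative assumptions, not anything specified in the comment above:

    # Rough illustration of the "prompt as program" idea: the English spec becomes
    # the system prompt, and the surrounding Python only handles I/O and exit codes.
    # Model name and client usage are assumptions for the sake of the sketch.
    import sys
    from openai import OpenAI

    SPEC = (
        "You are a calculator. Given an input expression, you compute the result "
        "and reply with only the result. If you cannot, reply with 'ERROR: ' "
        "followed by a short explanation. Do not express amusement at any result."
    )

    def main() -> int:
        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        expression = sys.argv[1] if len(sys.argv) > 1 else sys.stdin.read()
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system", "content": SPEC},
                {"role": "user", "content": expression},
            ],
            temperature=0,
        ).choices[0].message.content.strip()
        if reply.startswith("ERROR:"):
            print(reply, file=sys.stderr)
            return 1
        print(reply)
        return 0

    if __name__ == "__main__":
        sys.exit(main())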
semiquaver ◴[] No.44327171[source]
LLMs are not inherently nondeterministic. Batching, temperature, and other factors make them appear so when run by the big providers, but a locally run model decoded at zero temperature will always produce the same output for the same input.
replies(2): >>44327624 #>>44329408 #
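A small sketch of that claim with a locally run model, assuming the Hugging Face transformers API; "gpt2" is just a convenient stand-in, and determinism also assumes the same hardware and library versions across runs:

    # Greedy (temperature-zero) decoding with a local model: repeated runs on the
    # same machine give identical output.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tok("2 + 2 =", return_tensors="pt")
    outputs = [
        tok.decode(
            model.generate(**inputs, do_sample=False, max_new_tokens=8)[0],
            skip_special_tokens=True,
        )
        for _ in range(3)
    ]
    assert len(set(outputs)) == 1  # greedy decoding: all three runs agree
    print(outputs[0])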
1. oytis ◴[] No.44327624[source]
That's an improvement, but they are still "chaotic" in the sense that small changes to the input can change the output unpredictably and by a large amount.
replies(1): >>44328091 #
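A companion sketch of that sensitivity, under the same assumptions as above: decoding is fully deterministic, yet even a one-character change in the prompt can yield a very different continuation.

    # Model and prompts are stand-ins; the point is only that nearby inputs need
    # not map to nearby outputs, even with deterministic greedy decoding.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    for prompt in ["The answer to 17 * 3 is", "The answer to 17 * 3 is "]:
        ids = tok(prompt, return_tensors="pt")
        out = model.generate(**ids, do_sample=False, max_new_tokens=12)[0]
        print(repr(tok.decode(out, skip_special_tokens=True)))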
2. behnamoh ◴[] No.44328091[source]
Yes, this paper makes exactly the point you're describing: https://arxiv.org/abs/2404.01332