
1479 points sandslash | 1 comment
OJFord No.44324130
I'm not sure about the 1.0/2.0/3.0 classification, but it did lead me to think about LLMs as a programming paradigm: we've had imperative & declarative, procedural & functional languages, maybe we'll come to view deterministic vs. probabilistic (LLMs) similarly.

    def __main__:
        You are a calculator. Given an input expression, you compute the result and print it to stdout, exiting 0.
        Should you be unable to do this, you print an explanation to stderr and exit 1.
(and then, perhaps, a bunch of 'DO NOT express amusement when the result is 5318008', etc.)
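The joke works because the "function body" is a prompt, but the harness around it can still be ordinary deterministic code. A minimal sketch of that split, using a hypothetical `ask_llm()` stub in place of any real model API:

```python
import random

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a model call: usually right, sometimes not."""
    # Simulate nondeterminism: occasionally return a malformed answer.
    if random.random() < 0.1:
        return "I'm sorry, I can't do that."
    return str(eval(prompt))  # stub "computes" the expression like a calculator

def calculator(expression: str, retries: int = 3) -> int:
    """Deterministic wrapper around a probabilistic backend:
    validate the output, retry on garbage, fail loudly at the end."""
    for _ in range(retries):
        answer = ask_llm(expression)
        try:
            return int(answer)  # accept only output that parses as an integer
        except ValueError:
            continue  # malformed output: ask again
    raise RuntimeError(f"no valid answer for {expression!r} after {retries} attempts")
```

The prompt plays the role of the function body; the deterministic part of the program is reduced to validation and error handling, which is roughly the paradigm shift being described.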
replies(10): >>44324398 #>>44324762 #>>44325091 #>>44325404 #>>44325767 #>>44327171 #>>44327549 #>>44328699 #>>44328876 #>>44329436 #
iLoveOncall No.44325767
> maybe we'll come to view deterministic vs. probabilistic (LLMs) similarly

I can't believe someone would write this seriously without realizing how nonsensical it is.

"Indeterministic programming": you could hardly come up with a bigger oxymoron.

replies(1): >>44326467 #
diggan No.44326467
Why do people keep having this reaction to something we're already used to? When you develop against an API, you already plan for the request hanging, failing outright, or returning a different response than expected. The same goes for basically any I/O.

It's hardly even new; the model just returns text instead of JSON or whatever. We've been doing "indeterministic programming" for a long time: you can never assume a function returns exactly what it should 100% of the time.
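The defensive pattern is the same whether the flaky dependency is a REST endpoint or a model. A sketch of a generic retry-and-validate wrapper (all names here are illustrative, not from any particular library):

```python
import time

def call_with_retry(fn, validate, attempts=3, backoff=0.1):
    """Treat any IO-bound call as probabilistic:
    retry on exceptions and on responses that fail validation."""
    last_exc = None
    for i in range(attempts):
        try:
            result = fn()
            if validate(result):
                return result  # first acceptable response wins
        except Exception as exc:  # hung/failed request, bad payload, etc.
            last_exc = exc
        time.sleep(backoff * (2 ** i))  # exponential backoff between attempts
    raise RuntimeError("no acceptable response") from last_exc

# Same shape for an HTTP client or an LLM client (hypothetical names):
# call_with_retry(lambda: api.get_user(42), validate=lambda r: "id" in r)
```

Nothing in the wrapper cares whether `fn` is a network call or a model call, which is the point: the failure handling is identical.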

replies(3): >>44326778 #>>44326786 #>>44327190 #
alganet No.44326786
> request hangs, or fails completely, or gives a different response

I try to avoid those, not celebrate them.