
1480 points | sandslash | 1 comment | source
OJFord ◴[] No.44324130[source]
I'm not sure about the 1.0/2.0/3.0 classification, but it did lead me to think about LLMs as a programming paradigm: we've had imperative & declarative, procedural & functional languages, maybe we'll come to view deterministic vs. probabilistic (LLMs) similarly.

    def __main__:
        You are a calculator. Given an input expression, you compute the result and print it to stdout, exiting 0.
        Should you be unable to do this, you print an explanation to stderr and exit 1.
(and then, perhaps, a bunch of 'DO NOT express amusement when the result is 5318008', etc.)
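The "prompt as function body" idea above can be sketched in today's Python, assuming a hypothetical `call_llm` client (mocked here with a deterministic evaluator so the sketch runs; a real implementation would send the prompt to a model):

```python
import re
import sys

# The natural-language spec from the comment becomes the system prompt.
PROMPT = (
    "You are a calculator. Given an input expression, you compute the "
    "result and print it to stdout, exiting 0. Should you be unable to "
    "do this, you print an explanation to stderr and exit 1."
)

def call_llm(prompt: str, user_input: str) -> str:
    # Stand-in for a real model call; deterministic mock for illustration.
    if not re.fullmatch(r"[\d\s+\-*/().]+", user_input):
        raise ValueError("not an arithmetic expression")
    return str(eval(user_input))  # mock "reasoning" over the expression

def main(expr: str) -> int:
    try:
        print(call_llm(PROMPT, expr))
        return 0
    except Exception as exc:
        print(f"error: {exc}", file=sys.stderr)
        return 1
```

The point of the sketch is only the shape: the spec lives in prose, and the exit-code contract is enforced by ordinary code around the probabilistic call.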
replies(10): >>44324398 #>>44324762 #>>44325091 #>>44325404 #>>44325767 #>>44327171 #>>44327549 #>>44328699 #>>44328876 #>>44329436 #
iLoveOncall ◴[] No.44325767[source]
> maybe we'll come to view deterministic vs. probabilistic (LLMs) similarly

I can't believe someone would seriously write this and not realize how nonsensical it is.

"indeterministic programming", you seriously cannot come up with a bigger oxymoron.

replies(1): >>44326467 #
diggan ◴[] No.44326467[source]
Why do people keep having this reaction to something we're already used to? When you develop against an API, you're already doing the same thing: planning for the request hanging, failing outright, giving a different response than expected, and so on. Same for basically any I/O.

It's hardly even new; it just generates text instead of JSON, or whatever. We've been doing "indeterministic programming" for a long time, in the sense that you cannot assume a function returns exactly what it should 100% of the time.
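The defensive pattern is the same whether the unreliable call is a flaky HTTP API or an LLM: validate the response shape and retry on failure. A minimal sketch, where `flaky_call` is a made-up stand-in that fails some of the time:

```python
import json
import random
import time

def flaky_call() -> str:
    # Stand-in for any unreliable I/O: hangs/fails half the time.
    if random.random() < 0.5:
        raise TimeoutError("request hung")
    return '{"result": 42}'

def call_with_retries(fn, attempts: int = 5, delay: float = 0.0):
    last_error = None
    for _ in range(attempts):
        try:
            raw = fn()
            data = json.loads(raw)  # validate the shape, not just success
            if "result" not in data:
                raise ValueError("unexpected response shape")
            return data["result"]
        except (TimeoutError, ValueError, json.JSONDecodeError) as exc:
            last_error = exc
            time.sleep(delay)
    raise RuntimeError(f"gave up after {attempts} attempts") from last_error
```

Swap the JSON-shape check for a schema or output validator and the same loop wraps an LLM call unchanged.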

replies(3): >>44326778 #>>44326786 #>>44327190 #
dax_ ◴[] No.44327190[source]
Why would we embrace that even more? In software development we try to keep things as deterministic as possible; the more variables we introduce into our software, the more complicated it becomes.

The whole notion of replacing code with LLM prompts seems utterly insane to me. It would be a massive waste of resources, since we'd be re-prompting the AI far more often than necessary. It must also be fun to debug, as it may or may not work correctly depending on how the model is feeling at that moment. Compilation should always be deterministic, given the same environment.

replies(1): >>44350652 #
jason_oster ◴[] No.44350652[source]
Some algorithms are inherently probabilistic (bloom filters are a very common example, HyperLogLog is another). If we accept that probabilistic algorithms are useful, then we can extrapolate that to using LLMs (or other neural networks) for similar useful work.
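A Bloom filter is a handy concrete example of a deliberately probabilistic algorithm: membership tests can return false positives, but never false negatives. A minimal sketch (sizes and hash count are arbitrary illustration values):

```python
import hashlib

class BloomFilter:
    def __init__(self, size: int = 1024, hashes: int = 3):
        self.size = size
        self.hashes = hashes
        self.bits = [False] * size

    def _positions(self, item: str):
        # Derive k bit positions from salted SHA-256 digests.
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item: str):
        for pos in self._positions(item):
            self.bits[pos] = True

    def might_contain(self, item: str) -> bool:
        # False => definitely absent; True => only *probably* present.
        return all(self.bits[pos] for pos in self._positions(item))
```

We accept the small false-positive rate because the structure is tiny compared to storing the set exactly, which is the same trade being argued for here: probabilistic answers in exchange for capabilities exact code doesn't give you.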

You can make the LLM/NN deterministic. That was never a problem.
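One way to read that last point: with greedy decoding (always pick the most probable next token), the output is fully deterministic, and even sampling is reproducible once the seed is fixed. A toy illustration with a made-up next-token probability table:

```python
import random

# Made-up next-token probabilities, for illustration only.
probs = {"4": 0.90, "four": 0.07, "IV": 0.03}

def greedy(p):
    # Argmax decoding: deterministic by construction.
    return max(p, key=p.get)

def sample(p, seed):
    # Sampling is probabilistic, but reproducible with a fixed seed.
    rng = random.Random(seed)
    return rng.choices(list(p), weights=list(p.values()))[0]
```

Real inference stacks add complications (batching, floating-point nondeterminism on GPUs), but the decoding-strategy point stands: the randomness is a knob, not an inherent property.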