
1479 points sandslash | 1 comments
OJFord ◴[] No.44324130[source]
I'm not sure about the 1.0/2.0/3.0 classification, but it did lead me to think about LLMs as a programming paradigm: we've had imperative & declarative, procedural & functional languages, maybe we'll come to view deterministic vs. probabilistic (LLMs) similarly.

    def __main__:
        You are a calculator. Given an input expression, you compute the result and print it to stdout, exiting 0.
        Should you be unable to do this, you print an explanation to stderr and exit 1.
(and then, perhaps, a bunch of 'DO NOT express amusement when the result is 5318008', etc.)
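Read as a contract rather than a prompt, the sketch above pins down an interface that today you would write deterministically. A minimal Python sketch of that same contract (hypothetical, using `eval` for illustration, so trusted input only):

```python
import sys

def calculate(expression: str) -> int:
    """Compute an arithmetic expression: result to stdout and return 0,
    or an explanation to stderr and return 1."""
    try:
        # Restrict builtins so only plain arithmetic is available.
        result = eval(expression, {"__builtins__": {}}, {})
    except Exception as exc:
        print(f"could not evaluate {expression!r}: {exc}", file=sys.stderr)
        return 1
    print(result)
    return 0

if __name__ == "__main__":
    sys.exit(calculate(" ".join(sys.argv[1:])))
```

The proposed "probabilistic" version keeps the same interface but replaces the body with the natural-language specification.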
crsn ◴[] No.44324762[source]
This (sort of) is already a paradigm: https://en.m.wikipedia.org/wiki/Probabilistic_programming
1. stabbles ◴[] No.44325867[source]
That's entirely orthogonal.

In probabilistic programming you (deterministically) define variables and formulas. It's just that the variables aren't instances of floats, but represent stochastic variables over floats.
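A hand-rolled sketch of that distinction (not any particular PPL's API, just the idea that the program deterministically defines a model and only evaluation is stochastic):

```python
import random

class Normal:
    """A stochastic variable over floats: its definition is deterministic,
    only drawing a value from it is random."""
    def __init__(self, mu: float, sigma: float):
        self.mu, self.sigma = mu, sigma

    def sample(self) -> float:
        return random.gauss(self.mu, self.sigma)

# The program always builds the same model object...
x = Normal(0.0, 1.0)

# ...randomness enters only when the model is evaluated (sampled).
draws = [x.sample() for _ in range(10_000)]
mean = sum(draws) / len(draws)
```

Real probabilistic-programming systems add inference on top (conditioning on observed data), but the variables themselves are still declared deterministically.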

This is similar to libraries for linear algebra where writing A * B * C does not immediately evaluate, but rather builds an expression tree that represents the computation; you need to call, say, `eval(A * B * C)` to obtain the actual value, which gives the library room to compute it in the most efficient way.
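A toy version of that deferred-evaluation pattern (a generic sketch, not any specific library's interface):

```python
class Expr:
    """Operator overloading builds an expression tree;
    nothing is computed until eval() is called."""
    def __init__(self, value=None, op=None, left=None, right=None):
        self.value, self.op, self.left, self.right = value, op, left, right

    def __mul__(self, other: "Expr") -> "Expr":
        # Record the operation instead of performing it.
        return Expr(op="*", left=self, right=other)

    def eval(self):
        if self.op is None:          # leaf node: a concrete value
            return self.value
        return self.left.eval() * self.right.eval()

A, B, C = Expr(2), Expr(3), Expr(4)
tree = A * B * C     # no multiplication has happened yet
result = tree.eval() # the whole tree is evaluated here
```

Before calling `eval()`, a real library could inspect the tree and, for matrices, pick the cheapest multiplication order.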

It's more closely related to symbolic programming and lazy evaluation than to (non-)determinism.