
1480 points | sandslash | 1 comment
OJFord ◴[] No.44324130[source]
I'm not sure about the 1.0/2.0/3.0 classification, but it did lead me to think about LLMs as a programming paradigm: we've had imperative & declarative, procedural & functional languages, maybe we'll come to view deterministic vs. probabilistic (LLMs) similarly.

    def __main__:
        You are a calculator. Given an input expression, you compute the result and print it to stdout, exiting 0.
        Should you be unable to do this, you print an explanation to stderr and exit 1.
(and then, perhaps, a bunch of 'DO NOT express amusement when the result is 5318008', etc.)
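For contrast, the same spec has a perfectly deterministic reading: a conventional program that evaluates the expression, prints the result to stdout and exits 0, or prints an explanation to stderr and exits 1. A minimal Python sketch of that contract (the safe-evaluator approach and all names here are illustrative, not from the comment):

```python
import ast
import operator
import sys

# Whitelist of arithmetic operators -- the deterministic counterpart
# of the "You are a calculator" prompt above.
OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.USub: operator.neg,
}

def evaluate(node):
    """Recursively evaluate a parsed arithmetic expression."""
    if isinstance(node, ast.Expression):
        return evaluate(node.body)
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in OPS:
        return OPS[type(node.op)](evaluate(node.left), evaluate(node.right))
    if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
        return OPS[type(node.op)](evaluate(node.operand))
    raise ValueError("unsupported expression")

def main(expr: str) -> int:
    """Print the result to stdout and return 0 (the exit status),
    or print an explanation to stderr and return 1."""
    try:
        result = evaluate(ast.parse(expr, mode="eval"))
    except (ValueError, SyntaxError, ZeroDivisionError) as exc:
        print(f"error: {exc}", file=sys.stderr)
        return 1
    print(result)
    return 0
```

The probabilistic version replaces the body of `evaluate` with a model call; the contract (stdout/exit 0 on success, stderr/exit 1 on failure) stays the same, which is what makes the paradigm framing interesting.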
replies(10): >>44324398 #>>44324762 #>>44325091 #>>44325404 #>>44325767 #>>44327171 #>>44327549 #>>44328699 #>>44328876 #>>44329436 #
dheera ◴[] No.44325091[source]

    def __main__:
        You run main(). If there are issues, you edit __file__ to try to fix the errors and re-run it. You are determined, persistent, and never give up.
replies(1): >>44325115 #
beambot ◴[] No.44325115[source]
Output "1" if the program halts; "0" if it doesn't.
replies(1): >>44328013 #
fragmede ◴[] No.44328013[source]
Funnily enough, you can give the LLM the code and ask it whether the function will halt, and for some inputs it is able to say correctly that the program does or does not halt.
replies(2): >>44329344 #>>44329473 #
pxc ◴[] No.44329344[source]
The halting problem is about being able to answer this question in full generality, though. Being able to answer the question for specific cases is already feasible and always was.
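Right: a checker that answers for specific cases and refuses in general is easy to write, with no LLM involved. A toy sketch (restricted to a narrow syntactic class of Python programs; recursion and anything non-obvious are deliberately out of scope) that illustrates the three possible answers:

```python
import ast

def halts(source: str):
    """Decide halting for a *restricted* class of Python programs.

    Returns True ("halts"), False ("never halts"), or None ("can't
    tell") -- the third answer is exactly what undecidability forces
    on any checker that must handle arbitrary programs.
    """
    tree = ast.parse(source)
    loops = [n for n in ast.walk(tree) if isinstance(n, (ast.While, ast.For))]
    if not loops:
        # Straight-line code with no loops halts (ignoring recursion,
        # which this sketch does not attempt to analyze).
        return True
    for loop in loops:
        if (isinstance(loop, ast.While)
                and isinstance(loop.test, ast.Constant)
                and loop.test.value
                and not any(isinstance(n, (ast.Break, ast.Return))
                            for n in ast.walk(loop))):
            # `while True:` with no break or return never halts.
            return False
    # Everything else: refuse to answer rather than guess.
    return None
```

An LLM asked the same question is doing a fuzzier version of this: pattern-matching the easy cases, which is feasible and always was; the halting problem only rules out getting a correct True/False for every program.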