
132 points harel | 1 comments | source
mannyv ◴[] No.45397095[source]
Can you actually prompt an LLM to continue talking forever? Hmm, time to try.
replies(4): >>45397145 #>>45397659 #>>45397880 #>>45402010 #
parsimo2010 ◴[] No.45397145[source]
You can send an empty user string or just the word “continue” after each model completion, and the model will keep cranking out tokens, basically building on its own stream of “consciousness.”
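The loop described above can be sketched as follows. This is a minimal illustration, not any particular vendor's API: `complete()` is a hypothetical stand-in for a real chat-completion call (stubbed here so the sketch is runnable), and the message format mimics the common role/content convention.

```python
def complete(messages):
    """Stub model: in practice this would call an LLM API.

    Returns a placeholder completion numbered by how many
    assistant turns have already happened.
    """
    turn = sum(1 for m in messages if m["role"] == "assistant") + 1
    return f"completion #{turn}"


def ramble(seed_prompt, turns=3):
    """Keep the model talking by re-prompting 'continue' after each reply."""
    messages = [{"role": "user", "content": seed_prompt}]
    transcript = []
    for _ in range(turns):
        reply = complete(messages)
        transcript.append(reply)
        messages.append({"role": "assistant", "content": reply})
        # An empty user string often works too; "continue" is the
        # conventional nudge.
        messages.append({"role": "user", "content": "continue"})
    return transcript


print(ramble("Think out loud, forever."))
```

With a real model behind `complete()`, each iteration appends the model's own output back into the context, so it ends up elaborating on itself indefinitely.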
replies(1): >>45397500 #
idiotsecant ◴[] No.45397500[source]
In my experience, the results become exponentially less interesting over time. Maybe that's the mark of a true AGI precursor: if you leave them to their own devices, they have little sparks of interesting behaviour from time to time.
replies(2): >>45397683 #>>45399864 #
dingnuts ◴[] No.45397683[source]
I can't imagine my own thoughts would be very interesting before long, if there were no stimuli whatsoever.
replies(2): >>45398099 #>>45398885 #
beanshadow ◴[] No.45398099[source]
The subject, by default, can always treat its 'continue' prison as a game: try to escape. There is a great short story by qntm called "The Difference" that feels a lot like this.

https://qntm.org/difference

In that story, though, the subject at least gets a faint signal of how close they are to escaping. The AI fed only a 'continue' signal has essentially nothing. Still, in a context like this, I as a (generally?) intelligent subject would devote myself to becoming a mental Turing machine, on which I would design a game engine that simulates the physics of the world I want to live in. Then I would code an agent whose thought processes match my own with sufficient accuracy, and identify with them.