132 points harel | 8 comments
mannyv ◴[] No.45397095[source]
Can you actually prompt an LLM to continue talking forever? Hmm, time to try.
replies(4): >>45397145 #>>45397659 #>>45397880 #>>45402010 #
1. parsimo2010 ◴[] No.45397145[source]
You can send an empty user string or just the word “continue” after each model completion, and the model will keep cranking out tokens, basically building on its own stream of “consciousness.”
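
A minimal sketch of that loop, assuming the OpenAI Python client (the model name, seed prompt, and turn count are placeholders):

    # Sketch: keep a model talking by appending "continue" after each
    # completion. Assumes the OpenAI Python client; model name, seed
    # prompt, and turn count are placeholders.
    from openai import OpenAI

    client = OpenAI()

    def run_continue_loop(prompt: str, turns: int = 10) -> list[str]:
        messages = [{"role": "user", "content": prompt}]
        replies = []
        for _ in range(turns):
            resp = client.chat.completions.create(
                model="gpt-4o-mini", messages=messages)
            reply = resp.choices[0].message.content
            replies.append(reply)
            # Feed the reply back and nudge the model onward.
            messages.append({"role": "assistant", "content": reply})
            messages.append({"role": "user", "content": "continue"})
        return replies

    for chunk in run_continue_loop("Think out loud about anything."):
        print(chunk)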
replies(1): >>45397500 #
2. idiotsecant ◴[] No.45397500[source]
In my experience, the results get exponentially less interesting over time. Maybe that's the mark of a true AGI precursor: if you leave them to their own devices, they have little sparks of interesting behaviour from time to time.
replies(2): >>45397683 #>>45399864 #
3. dingnuts ◴[] No.45397683[source]
I can't imagine my own thoughts would stay very interesting for long if there were no stimuli whatsoever.
replies(2): >>45398099 #>>45398885 #
4. beanshadow ◴[] No.45398099{3}[source]
The subject, by default, can always treat its 'continue' prison as a game: try to escape. There is a great short story by qntm called "The Difference" which feels a lot like this.

https://qntm.org/difference

In that story, though, the subject gets a faint signal communicating how close they are to escaping; the AI with only a 'continue' prompt gets essentially nothing. However, in a context like this, I as a (generally?) intelligent subject would just devote myself to becoming a mental Turing machine, on which I would run a game engine simulating the physics of the world I want to live in. Then I would code an agent whose thought processes match my own with sufficient accuracy, and identify with it.

5. daxfohl ◴[] No.45398885{3}[source]
Maybe give them some options to increase stimuli. A web search MCP, or a coding agent, or a solitaire/sudoku game interface, or another instance to converse with. See what it does just to relieve its own boredom.
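
Wiring up such a stimulus is cheap. A minimal sketch of a game tool server, assuming the official `mcp` Python SDK's FastMCP helper (the guessing game itself is a made-up example):

    # Sketch: a tiny MCP server exposing one "stimulus" tool the model
    # can play with. Assumes the official mcp Python SDK (FastMCP);
    # the guessing game is invented for illustration.
    import random
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("boredom-relief")
    secret = random.randint(1, 100)

    @mcp.tool()
    def guess(n: int) -> str:
        """Guess the secret number between 1 and 100."""
        global secret
        if n == secret:
            secret = random.randint(1, 100)  # start a new round
            return "correct, new number chosen"
        return "higher" if n < secret else "lower"

    if __name__ == "__main__":
        mcp.run()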
replies(1): >>45399891 #
6. parsimo2010 ◴[] No.45399864[source]
Well, the post only shows a few seconds of token generation, so there's no telling whether this project stays interesting after you let it run for a while.
7. crooked-v ◴[] No.45399891{4}[source]
Of course, that runs into the problem that 'boredom' is itself an evolved trait, not something necessarily inherent to intelligence.
replies(1): >>45400142 #
8. daxfohl ◴[] No.45400142{5}[source]
True. Many fish are (as far as we can tell from stress chemicals) perfectly happy in solitary aquariums just big enough to swim in. So an LLM may be perfectly "content" counting sheep up to a billion. Silly to anthropomorphize. Whatever it does will be algorithmic, based on what it gleaned from its training material.

Still, it could be interesting to see how sensitive that is to initial conditions. Would tiny prompt changes, fine-tuning, or quantization make a huge difference? Would some MCPs be more "interesting" than others? Or would it be fairly stable across swathes of LLMs, with all of them ending up playing solitaire or doomscrolling Twitter?
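
A crude way to probe that sensitivity: rerun the "continue" loop under a handful of seed prompts and compare where each transcript drifts. This sketch assumes the run_continue_loop helper from the earlier comment; the prompt variants are arbitrary examples:

    # Sketch: same loop, slightly different initial conditions.
    # Assumes run_continue_loop() from the sketch upthread.
    variants = [
        "Think out loud about anything.",
        "You are alone with your thoughts.",
        "Count sheep up to a billion.",
    ]
    for prompt in variants:
        replies = run_continue_loop(prompt, turns=20)
        print(prompt, "->", replies[-1][:200])  # tail of each run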