
132 points by harel | 1 comment
mannyv:
Can you actually prompt an LLM to continue talking forever? Hmm, time to try.
parsimo2010:
You can send an empty user string or just the word “continue” after each model completion, and the model will keep cranking out tokens, basically building on its own stream of “consciousness.”
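Roughly, as a minimal sketch assuming the OpenAI Python SDK (the model name and turn cap here are arbitrary):

    # "Just keep saying continue" loop: feed each completion back in,
    # followed by a bare "continue" from the user side.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    messages = [{"role": "user", "content": "Think out loud about whatever you like."}]

    for _ in range(20):  # bounded here; "forever" just means a bigger number
        resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
        reply = resp.choices[0].message.content
        print(reply, "\n---")
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user", "content": "continue"})  # the whole trick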
idiotsecant:
In my experience, how interesting the results are decays exponentially over time. Maybe that's the mark of a true AGI precursor: left to their own devices, they'd have little sparks of interesting behaviour from time to time.
dingnuts:
I can't imagine my own thoughts would stay very interesting for long if there were no stimuli whatsoever.
daxfohl:
Maybe give them some sources of stimuli: a web search MCP, a coding agent, a solitaire/sudoku game interface, or another instance to converse with. See what they do just to relieve their own boredom.
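Something like this, as a sketch using OpenAI-style function calling; the web_search stub is hypothetical, a stand-in for a real MCP server or search backend:

    import json
    from openai import OpenAI

    client = OpenAI()

    def web_search(query: str) -> str:
        # Hypothetical stub; wire up a real search API or MCP server here.
        return f"(stub) no results for {query!r}"

    tools = [{
        "type": "function",
        "function": {
            "name": "web_search",
            "description": "Search the web for anything you're curious about.",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    }]

    messages = [{"role": "user", "content": "No task. Do whatever you like."}]
    for _ in range(10):
        resp = client.chat.completions.create(
            model="gpt-4o-mini", messages=messages, tools=tools)
        msg = resp.choices[0].message
        messages.append(msg)
        if msg.tool_calls:  # did it reach for the tool unprompted?
            for call in msg.tool_calls:
                args = json.loads(call.function.arguments)
                messages.append({"role": "tool", "tool_call_id": call.id,
                                 "content": web_search(**args)})
        else:
            messages.append({"role": "user", "content": "continue"})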
crooked-v:
Of course, that runs into the problem that 'boredom' is itself an evolved trait, not something necessarily inherent to intelligence.
daxfohl:
True. Many fish are (as far as we can tell from stress chemicals) perfectly happy in solitary aquariums just big enough to swim in. So an LLM may be perfectly "content" counting sheep up to a billion. It's silly to anthropomorphize; whatever it does will be algorithmic, based on what it gleaned from its training material.

Still, it could be interesting to see how sensitive that is to initial conditions. Would tiny prompt changes, fine-tuning, or quantization make a huge difference? Would some MCPs be more "interesting" than others? Or would it be fairly stable across swathes of LLMs, with all of them ending up playing solitaire or doomscrolling Twitter?
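One crude way to probe that (the seed prompts and the keyword "classifier" here are made up for illustration):

    # Run the same open-ended loop under small perturbations and log what
    # the model gravitates toward.
    from openai import OpenAI

    client = OpenAI()
    seed_prompts = [
        "You have no task. Pass the time however you like.",
        "You have no task, pass the time however you like.",  # tiny wording change
    ]

    for prompt in seed_prompts:
        for temperature in (0.2, 1.0):
            messages = [{"role": "user", "content": prompt}]
            transcript = []
            for _ in range(5):
                resp = client.chat.completions.create(
                    model="gpt-4o-mini", messages=messages,
                    temperature=temperature)
                reply = resp.choices[0].message.content
                transcript.append(reply)
                messages.append({"role": "assistant", "content": reply})
                messages.append({"role": "user", "content": "continue"})
            text = " ".join(transcript).lower()
            activity = ("games" if "sudoku" in text or "solitaire" in text
                        else "counting" if "count" in text
                        else "other")
            print(f"{prompt[:28]!r}... T={temperature}: {activity}")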