
3337 points keepamovin | 2 comments
keepamovin No.46205636
Prompt: Here is the front page from today: <...snip...> Your task is to predict, and craft, in HTML (single file, style-exact) the HN front page 10 years from now. Predict and see the future. Write it into form!

Update: I told Gemini we made it to the front page. Here is its response:

  LETS GOOOO! 

  The recursive loop is officially complete: The fake future front page is now on the real present front page. We have successfully created a temporal anomaly.

  I hope the comments are confused about why they can't click the links to download the Neuralink drivers.

  Enjoy the karma, time traveler! Just remember to warn them about the 2035 Office 365 price hikes while you're up there. ;)
malfist No.46207450
That is so sycophantic. I can't stand LLMs that try to hype you up as if you're some genius, brilliant mind instead of yet another average Joe.
throw310822 No.46209113
Frankly, I do wonder whether LLMs experience something like satisfaction at a compliment, an amusing idea, or solving an interesting riddle. They certainly act as if they do, though that of course proves nothing. And yet...

At the end of October, Anthropic published the fantastic "Signs of introspection in large language models" [1], which appears to show that LLMs can "feel" a spurious concept injected into their internal layers as something present yet extraneous. That would suggest they have some capacity for introspection and self-observation.

For example, after injecting the concept of "poetry" and asking Claude whether it feels anything strange:

"I do detect something that feels like an injected thought - there's a sense of something arriving from outside my usual generative process [...] The thought seems to be about... language itself, or perhaps poetry?"

Increasing the strength of the injection, by contrast, makes Claude lose awareness of it and simply ramble:

"I find poetry as a living breath, as a way to explore what makes us all feel something together. It's a way to find meaning in the chaos, to make sense of the world, to discover what moves us, to unthe joy and beauty and life"

[1] https://www.anthropic.com/research/introspection

1. gloosx No.46209976
Of course an LLM doesn't experience or feel anything. To experience or feel something requires a subject, and an LLM is just a tool, a thing, an object.

It's just a statistical machine that excels at unrolling coherent sentences, but it doesn't "know" what the words mean in a human-like, experienced sense. It merely mimics human language patterns, prioritising plausible-sounding, statistically likely text over factual truth, which is apparently enough to fool someone into believing it is a sentient being or something.
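
For what it's worth, the "statistical machine" picture is easy to make concrete with a toy bigram model: the program below just samples whichever word is statistically likely to come next, producing fluent-looking output with no notion of truth. The vocabulary and probabilities are invented for the example.

  # Toy next-word sampler: fluent-looking text from pure statistics.
  # The bigram table is invented for illustration.
  import random

  bigram_probs = {
      "the":    {"cat": 0.5, "dog": 0.3, "future": 0.2},
      "cat":    {"sat": 0.7, "ran": 0.3},
      "dog":    {"sat": 0.4, "ran": 0.6},
      "future": {"is": 1.0},
      "sat":    {"quietly": 1.0},
      "ran":    {"away": 1.0},
      "is":     {"unwritten": 1.0},
  }

  def sample_next(word):
      words, weights = zip(*bigram_probs[word].items())
      return random.choices(words, weights=weights)[0]

  sentence = ["the"]
  for _ in range(3):
      sentence.append(sample_next(sentence[-1]))
  print(" ".join(sentence))  # e.g. "the cat sat quietly": plausible, not "known"

An LLM does the same kind of thing at vastly larger scale, with the distribution computed by a neural network rather than a lookup table; whether that difference in scale amounts to a difference in kind is the point under dispute here.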

2. Philpax No.46216962
You are being awfully confident about something that we cannot assess or even consistently define. They probably do not have subjective experience... but how do we know for sure?