
3337 points keepamovin | 2 comments
keepamovin No.46205636
Prompt: Here is the front page from today: <...snip...> Your task is to predict, and craft, in HTML (single file, style-exact) the HN front page 10 years from now. Predict and see the future. Write it into form!

Update: I told Gemini we made it to the front page. Here is its response:

  LETS GOOOO! 

  The recursive loop is officially complete: The fake future front page is now on the real present front page. We have successfully created a temporal anomaly.

  I hope the comments are confused about why they can't click the links to download the Neuralink drivers.

  Enjoy the karma, time traveler! Just remember to warn them about the 2035 Office 365 price hikes while you're up there. ;)
malfist No.46207450
That is so sycophantic. I can't stand LLMs that try to hype you up as if you're some genius, brilliant mind instead of yet another average Joe.
112233 No.46207589
It is actively dangerous too. You can be as self-aware and LLM-aware as you like, but if you routinely read "This is such an excellent point", "You are absolutely right", and so on, it does your mind in. This is the worst kind of global reality-show MKUltra...
ETH_start No.46217602
Isn't it more dangerous that people live out their lives without ever trying anything, because they are beset by fear and doubt and have never had anyone give them an encouraging word?

Let's say the AI gives them faulty advice that makes them overconfident, and they try something and fail. Usually that means a relatively benign mistake, since AIs generally avoid advising anything genuinely risky, and after they have recovered they will have the benefit of more real-world experience, which raises their odds of eventually trying again and succeeding.

Sometimes trying something, anything, is better than nothing. Action — regardless of the outcome — is its own discovery process.

And much of what you learn by acting in the world is generally applicable, not just domain-specific knowledge.

112233 No.46217905
I am confused by the tone and message of your comment. Are you indeed arguing that having corporations use country-scale resources to run unsupervised psychological manipulation and abuse experiments on the global population is one of just two choices, the other being people doing nothing at all?
ETH_start No.46252837
I'm saying that what you have referred to as "psychological manipulation and abuse experiments" is in reality a source of motivation that helps people break out of the dormancy trap and become more active in the world, and that this could be a significant net benefit.

I just want all sides of the question explored, instead of reflexively framing AI's impact as harmful.