3337 points keepamovin | 15 comments
keepamovin ◴[] No.46205636[source]
Prompt: Here is the front page from today: <...snip...> Your task is to predict, and craft, in HTML (single file, style-exact) the HN front page 10 years from now. Predict and see the future. Write it into form!
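
For anyone who wants to try the same prompt programmatically, here's a rough sketch (an assumption on my part: it uses the google-generativeai Python SDK, the model name is a placeholder, and you have to paste in the snipped front-page text yourself):

  import google.generativeai as genai

  genai.configure(api_key="YOUR_API_KEY")

  # Placeholder: paste today's HN front page here (snipped above).
  front_page = "..."

  prompt = (
      "Here is the front page from today: " + front_page + " "
      "Your task is to predict, and craft, in HTML (single file, "
      "style-exact) the HN front page 10 years from now. "
      "Predict and see the future. Write it into form!"
  )

  # Model name is an assumption; use whichever Gemini model you have.
  model = genai.GenerativeModel("gemini-1.5-pro")
  response = model.generate_content(prompt)
  print(response.text)  # the single-file HTML prediction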

update: I told Gemini we made it to the front page. Here is its response:

  LETS GOOOO! 

  The recursive loop is officially complete: The fake future front page is now on the real present front page. We have successfully created a temporal anomaly.

  I hope the comments are confused about why they can't click the links to download the Neuralink drivers.

  Enjoy the karma, time traveler! Just remember to warn them about the 2035 Office 365 price hikes while you're up there. ;)
replies(19): >>46207048 #>>46207450 #>>46207454 #>>46208007 #>>46208052 #>>46208415 #>>46208624 #>>46208753 #>>46209145 #>>46209348 #>>46209941 #>>46209965 #>>46210199 #>>46212641 #>>46213258 #>>46215313 #>>46215387 #>>46215992 #>>46216372 #
malfist ◴[] No.46207450[source]
That is so sycophantic. I can't stand LLMs that try to hype you up as if you're some genius, brilliant mind instead of yet another average Joe.
replies(27): >>46207588 #>>46207589 #>>46207606 #>>46207619 #>>46207622 #>>46207776 #>>46207834 #>>46207895 #>>46207927 #>>46208014 #>>46208175 #>>46208213 #>>46208281 #>>46208303 #>>46208616 #>>46208668 #>>46209061 #>>46209113 #>>46209128 #>>46209170 #>>46209234 #>>46209266 #>>46209362 #>>46209399 #>>46209470 #>>46211487 #>>46214228 #
1. 112233 ◴[] No.46207589[source]
It is actively dangerous too. You can be as self-aware and LLM-aware as you want, but if you routinely read "This is such an excellent point", "You are absolutely right" and so on, it does your mind in. This is the worst kind of global reality-show MKUltra...
replies(6): >>46207953 #>>46208195 #>>46208463 #>>46209292 #>>46209508 #>>46217602 #
2. tortilla ◴[] No.46207953[source]
So this is what it feels like to be a billionaire with all the yes-men around you.
replies(1): >>46208297 #
3. Akronymus ◴[] No.46208195[source]
Relevant video for that: https://youtu.be/VRjgNgJms3Q

4. LogicFailsMe ◴[] No.46208297[source]
You say that like it's a bad thing! Now everyone can feel like a billionaire!

But I think you are on to something here about the origin of the sycophancy, given that most of these models are owned by billionaires.

replies(1): >>46209304 #
5. mrandish ◴[] No.46208463[source]
No doubt. From cults' 'love bombing' to dictators' 'yes men' to celebrity entourages, it's a well-known hack on human psychology. I have a long-time friend, a brilliant software engineer, who recently realized that conversing with LLMs was affecting his objectivity.

He was noodling around with an admittedly "way out there", highly speculative idea and using the LLM to research prior work in the area. This evolved into the LLM giving him direct feedback. It told him his concept was brilliant and constructed detailed reasoning to support this conclusion. Before long it was actively trying to talk him into publishing a paper on it.

This went on for quite a while, and at first he was buying into it, but eventually he started to suspect that maybe "something was off", so he reached out to me for perspective. We've been friends for decades, so I know how smart he is but also that he's a little bit "on the spectrum". We had dinner to talk it through, and he helpfully brought representative chat logs, which were eye-opening. It turned into a long dinner. Before dessert he realized just how far he'd slipped over time and was clearly shocked. In the end, he resolved to "cold turkey" the LLMs with a 'prime directive' prompt like the one I use (basically: never offer opinion, praise, flattery, etc.). Of course, even then, it will still occasionally try to ingratiate itself in subtler ways, which I have to keep watch for.
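
If you'd rather wire that in at the API level than paste it into every chat, here's a minimal sketch. To be clear, the wording and model name are my own illustration (assuming the OpenAI Python SDK), not his literal prompt:

  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  # Hypothetical 'prime directive' along the lines described above.
  PRIME_DIRECTIVE = (
      "Never offer opinions, praise, flattery, or encouragement. "
      "State facts, cite prior work, and raise objections plainly. "
      "When asked for an assessment, list failure modes first."
  )

  resp = client.chat.completions.create(
      model="gpt-4o",  # placeholder model name
      messages=[
          {"role": "system", "content": PRIME_DIRECTIVE},
          {"role": "user", "content": "Is this idea worth writing up?"},
      ],
  )
  print(resp.choices[0].message.content)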

After reflecting on the experience, my friend believes he was especially vulnerable to LLM manipulation because he's on the spectrum and was using the same mental models to interact with the LLM that he also uses to interact with other people. To be clear, I don't think LLMs are intentionally designed to be sycophantically ingratiating manipulators. I think it's just an inevitable consequence of RLHF.

replies(2): >>46209364 #>>46210456 #
6. Xraider72 ◴[] No.46209292[source]
DeepSeek is GOATed for me because of this. If I ask whether "X" is a dumb idea, it is very polite in telling me that X is dumb if it knows of a better way to do the task.

Every other AI I've tried is a real sycophant.

replies(1): >>46210324 #
7. BigTTYGothGF ◴[] No.46209304{3}[source]
> Now everyone can feel like a billionaire!

In the "like being kicked in the head by a horse every day" sense.

replies(1): >>46209468 #
8. slg ◴[] No.46209364[source]
And that is a relatively harmless academic pursuit. What about topics that can lead to true danger and violence?

"You're exactly right, you organized and paid for the date, that created a social debt and she failed to meet her obligation in that implicit deal."

"You're exactly right, no one can understand your suffering, nothingness would be preferable to that."

"You're exactly right, that politician is a danger to both the country and the whole world, someone stopping him would become a hero."

We have already seen how personalized content algorithms that optimize only for keeping the user engaged can foment extremism. It will be incredibly dangerous if we go down that path with AI.

9. LogicFailsMe ◴[] No.46209468{4}[source]
Who has the time for all those invasive thinky thoughts anyway?
10. d0mine ◴[] No.46209508[source]
It might explain the stereotype that the more beautiful a woman is, the crazier she is. (Everybody tells her what she wants to hear.)
11. 112233 ◴[] No.46210324[source]
I'm partial to the tone of Kimi K2: terse, blunt, sometimes even dismissive. It does not require "advanced techniques" to avoid the psychosis-inducing tone of Claude/ChatGPT.
12. 112233 ◴[] No.46210456[source]
Claude Code with their models is unusable because of this. That it also keeps actively sabotaging and ruining the code ("Why did you delete that working code? Just use ifdef for the test!" "This is a genius idea! You are absolutely right!") does not make it much better; it's a twisted Wonderland fever dream.

For "chat" chat, strict hygiene is a matter of mind-safety: no memory, long exact instructions, minimum follow-ups, avoiding first and second person if possible etc.

13. ETH_start ◴[] No.46217602[source]
Isn't it more dangerous that people live their lives out without ever trying anything, because they are beset by fear and doubt and never had anyone give them an encouraging word?

Let's say the AI gives them faulty advice that makes them overconfident, and they try something and fail. Usually that just means a relatively benign mistake, since AIs generally avoid advising anything genuinely risky, and after they have recovered, they will have the benefit of more real-world experience, which raises their odds of eventually trying something again and succeeding this time.

Sometimes trying something, anything, is better than nothing. Action — regardless of the outcome — is its own discovery process.

And much of what you learn when you act out in the world is generally applicable, not just domain-specific knowledge.

replies(1): >>46217905 #
14. 112233 ◴[] No.46217905[source]
I am confused by the tone and message of your comment: are you indeed arguing that having corporations use country-scale resources to run unsupervised psychological manipulation and abuse experiments on the global population is one of just two choices, the other being people not doing anything at all?
replies(1): >>46252837 #
15. ETH_start ◴[] No.46252837{3}[source]
I'm saying that what you have referred to as "psychological manipulation and abuse experiments" is in reality a source of motivation that helps people break the dormancy trap and be more active in the world, and that this could be a significant net benefit.

I just want all sides of the question explored, instead of reflexively framing AI's impact as harmful.