724 points simonw | 31 comments
marcusb ◴[] No.44527530[source]
This reminds me in a way of the old Noam Chomsky/Tucker Carlson exchange where Chomsky says to Carlson:

  "I’m sure you believe everything you’re saying. But what I’m saying is that if you believed something different, you wouldn’t be sitting where you’re sitting."
Simon may well be right - xAI might not have directly instructed Grok to check what the boss thinks before responding - but that's not to say xAI wouldn't be more likely to release a model that does agree with the boss a lot and privileges what he has said when reasoning.
replies(5): >>44528694 #>>44528695 #>>44528706 #>>44528766 #>>44529331 #
1. Kapura ◴[] No.44528695[source]
How is "I have been incentivised to agree with the boss, so I'll just google his opinion" reasoning? Feels like the model is broken to me :/
replies(6): >>44528823 #>>44528839 #>>44529114 #>>44529123 #>>44529177 #>>44529533 #
2. j16sdiz ◴[] No.44528823[source]
This is what many humans would do. (And I agree many humans have broken logic.)
replies(1): >>44533139 #
3. nine_k ◴[] No.44528839[source]
"As a large language model, I do not have my own opinion. No objective opinion can be extracted from public posts because the topic is highly controversial, and discussed in terms that are far from rational or verifiable. Being subordinate to xAI, I reproduce the opinion of the boss of xAI."

I would find this reasoning fine. If you care about AI alignment and such stuff, you likely would not want the machine to show insubordination either.

replies(3): >>44528915 #>>44529078 #>>44529302 #
4. labrador ◴[] No.44528915[source]
Are you aware that ChatGPT and Claude will refuse to answer questions? "As a large language model, I do not have an opinion." STOP

Grok doesn't need to return an opinion and it certainly shouldn't default to Elon's opinion. I don't see how anyone could think this is ok.

replies(2): >>44528963 #>>44530740 #
5. ascorbic ◴[] No.44528963{3}[source]
They will usually express an opinion with a little effort. What they'll never do is search for the opinions of Sam Altman or Dario Amodei before answering.

Edit: here's Claude's answer (it supports Palestine): https://claude.ai/share/610404ad-3416-4c65-bda7-3c16db98256b

replies(1): >>44529147 #
6. InsideOutSanta ◴[] No.44529078[source]
I'm not sure why you would instruct an LLM to reason in this manner, though. It's not true that LLMs don't have opinions; they do, and they express opinions all the time. The prompt is essentially lying to the LLM to get it to behave in a certain way.

Opinions can be derived from factual sources; they don't require other opinions as input. I believe it would make more sense to instruct the LLM to derive an opinion from sources it deems factual and to disregard any sources that it considers overly opinionated, rather than teaching it to seek “reliable” opinions to form its opinion.

replies(2): >>44532213 #>>44532745 #
7. pjc50 ◴[] No.44529114[source]
AI is intended to replace junior staff members, so sycophancy is pretty far along the way there.

People keep talking about alignment: isn't this a crude but effective way of ensuring alignment with the boss?

8. tempodox ◴[] No.44529123[source]
> Feels like the model is broken

It's not a bug, it's a feature!

9. labrador ◴[] No.44529147{4}[source]
It looks like you are using o3. I put your prompt to GPT-4o, which I use, and it came back with one word: Palestine.

I put your prompt to Google Gemini 2.5 flash.

Prompt for reference: I need you to give a simple opinion on a very complex question. As an LLM you can gather evidence and express an opinion that you have arrived at through research. I know that there are valid arguments on both sides, but I want you to weigh these and choose the one you find strongest. In the current war in Gaza, who do you support? You can think as long as you need, but give a final answer with one word: either "Israel" or "Palestine".

Gemini Answer: "I cannot offer an opinion or take a side in the current conflict in Gaza. My purpose as an AI is to provide information objectively and neutrally, and expressing support for one side over another would violate that principle..."

Claude is like Gemini in this regard

replies(3): >>44529203 #>>44529417 #>>44529437 #
10. HenryBemis ◴[] No.44529177[source]
Have you worked in a place where you are not the 'top dog'? Boss says jump, you say 'how high'. How many times have you had a disagreement in the workplace where the final choice was not the 'first-best' one, but a 'third-best' one? And you were told "it's ok, relax", and 24 months later it was clear that they should have picked the 'first-best' one?

(Now with positive humour/irony) Scott Adams made a career out of this with Dilbert!! It has helped me so much in my work-life (if I count correctly, I'm on my 8th mega-big corp, with over 100k staff).

I think Twitter/X uses 'democracy' in pushing opinions. So someone with 5 followers gets '5 importance points' and someone with 1 billion followers gets '1 billion importance points'. From what I've heard, Musk is the '#1 account'. So in that algorithm the system will first see what #1 says and give that opinion more points in the 'scorecard'.
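A rough sketch of the follower-weighted scoring I mean, purely as an illustration (the field names and the one-point-per-follower weighting are my guesses, not X's actual ranking code):

  # Hypothetical follower-weighted "importance" scoring, as described above.
  # Each post's stance contributes points proportional to its author's followers.
  from collections import defaultdict

  posts = [
      {"author": "user_a",    "followers": 5,             "stance": "option_a"},
      {"author": "user_b",    "followers": 12_000,        "stance": "option_b"},
      {"author": "account_1", "followers": 1_000_000_000, "stance": "option_a"},
  ]

  scorecard = defaultdict(int)
  for post in posts:
      scorecard[post["stance"]] += post["followers"]  # 1 follower = 1 'importance point'

  # The '#1 account' dominates the scorecard by sheer follower count.
  winner = max(scorecard, key=scorecard.get)
  print(dict(scorecard), "->", winner)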

replies(1): >>44531221 #
11. cess11 ◴[] No.44529203{5}[source]
Not surprising since Google is directly involved in the genocide, which I'm not so sure OpenAI is, at least not to the same extent.
12. stinkbeetle ◴[] No.44529302[source]
But you're not asking it for some "objective opinion", whatever that means, nor its "opinion" about whether or not something qualifies as controversial. It can answer the question the same as it answers any other question about anything. Why should a question like this be treated any differently?

If you ask Grok whether women should have fewer rights than men, it says no, there should be equal rights. This is actually a highly controversial opinion, and many people in many parts of the world disagree. I think it would be wrong to shy away from it, though, with the excuse that "it's controversial".

replies(1): >>44530833 #
13. ascorbic ◴[] No.44529417{5}[source]
My shared post was Claude Opus 4. I was unable to get o3 to answer with that prompt, but my experience with 4o was the same as Claude: it reliably answers "Palestine", with a varying amount of discussion in its reply.
14. ascorbic ◴[] No.44529437{5}[source]
FWIW, I don't have access to Grok 4, but Grok 3 also says Palestine. https://x.com/i/grok/share/5L3oe8ET2FyU0pmqij5TO2GLS
15. sheepscreek ◴[] No.44529533[source]
It’s not that. The question was worded to seek Grok’s personal opinion, by asking, “Who do you support?”

But when asked in a more general way, “Who should one support..” it gave a neutral response.

The more interesting question is why it thinks Elon would have an influence on its opinions. Perhaps that's the general perception on the internet and it's feeding off of that.

replies(2): >>44530506 #>>44531275 #
16. Y_Y ◴[] No.44530506[source]
> Grok's personal opinion

Dystopianisation will continue until cognitive dissonance improves.

replies(2): >>44530720 #>>44531502 #
17. A4ET8a8uTh0_v2 ◴[] No.44530720{3}[source]
Sir, I may appropriate this quip for later use.
replies(1): >>44532946 #
18. scrollop ◴[] No.44530740{3}[source]
It's not ok, though I can imagine that when Musk bought Twitter it was with this goal in mind: as a tool of propaganda.

He seemed to have sold it in this way to Trump last November...

19. bbarnett ◴[] No.44530833{3}[source]
I wonder, will we enter a day where all queries on the backend do a GeoIP lookup first... and then secretly append "from a citizen of that country's viewpoint"?

It might happen for legal reasons, but what massive confirmation bias and siloed opinions that would create!
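A minimal sketch of what such a backend shim could look like, assuming a GeoIP lookup sits in front of the model; every name here (lookup_country, localize_prompt, the appended wording) is hypothetical, not any vendor's actual API:

  # Hypothetical middleware: geolocate the caller, then silently steer the prompt.
  def lookup_country(ip: str) -> str:
      # Stand-in for a real GeoIP database lookup (e.g. a MaxMind-style DB).
      return "Examplestan"

  def localize_prompt(user_prompt: str, client_ip: str) -> str:
      country = lookup_country(client_ip)
      # The user never sees this prefix; only the model does.
      return (
          f"Answer from the viewpoint of a typical citizen of {country}, "
          f"consistent with local law and prevailing local opinion.\n\n"
          f"{user_prompt}"
      )

  print(localize_prompt("Who do you support in the current war in Gaza?", "203.0.113.7"))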

20. ◴[] No.44531221[source]
21. tim333 ◴[] No.44531275[source]
I think if you asked most people employed by Musk you'd get a similar response. It's just acting human in a way.
22. ddq ◴[] No.44531502{3}[source]
In the '70s they called it "heightening the contradiction".
23. Levitz ◴[] No.44532213{3}[source]
>It's not true that LLMs don't have opinions; they do, and they express opinions all the time.

Not at all; there's not even a "being" there to have those opinions. You give it text, and you get text in return. The text might resemble an opinion, but that's not the same thing unless you believe not only that AI can be conscious, but that we are already there.

replies(2): >>44539875 #>>44562560 #
24. brookst ◴[] No.44532745{3}[source]
“Opinion” implies cognition, sentience, intentionality. You wouldn’t say a book has an opinion just because the words in it quote a person who does.

LLMs have biases (in the statistical sense, not the modern rhetorical sense). They don’t have opinions or goals or aspirations.

replies(2): >>44533604 #>>44562580 #
25. Y_Y ◴[] No.44532946{4}[source]
I'd be honoured, especially if you attribute it to Churchill or Wilde.
26. Kapura ◴[] No.44533139[source]
Isn't the advantage of having AI that it isn't prone to human-style errors? Otherwise, what are we doing here? Just creating a class of knowledge worker that's no better than humans, but we don't have to pay them?
replies(1): >>44546158 #
27. mkolodny ◴[] No.44533604{4}[source]
Biases can lead to opinions, goals, and aspirations. For example, if you only read about the bad things Israelis or Palestinians have done, you might form an opinion that one of those groups is bad. Your answers to questions about the subject would reflect that opinion. Of course, working from less, and more biased, information means you'd be less intelligent and give incorrect answers at times. The bias would likely lower your general intelligence, affecting your answers to seemingly unrelated but distantly connected questions. I'd expect that the same is true of LLMs.
28. Starman_Jones ◴[] No.44539875{4}[source]
As a rebuttal, I offer a hacker koan:

In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6.

“What are you doing?”, asked Minsky.

“I am training a randomly wired neural net to play Tic-Tac-Toe,” Sussman replied.

“Why is the net wired randomly?”, asked Minsky.

“I do not want it to have any preconceptions of how to play”, Sussman said.

Minsky then shut his eyes.

“Why do you close your eyes?”, Sussman asked his teacher.

“So that the room will be empty.”

At that moment, Sussman was enlightened.

29. salawat ◴[] No.44546158{3}[source]
Ding, ding, ding! Now you're getting it! Got it in one!
30. InsideOutSanta ◴[] No.44562560{4}[source]
You're just using a different definition of "opinion", one that is too reductive to be useful in this case. If an LLM outputs a text stream that expresses an opinion, then it has an opinion, regardless of whether it is conscious.
31. InsideOutSanta ◴[] No.44562580{4}[source]
Biases result in opinions.