724 points | simonw

marcusb (No.44527530):
This reminds me in a way of the old Noam Chomsky/Andrew Marr exchange where Chomsky says to Marr:

  "I’m sure you believe everything you’re saying. But what I’m saying is that if you believed something different, you wouldn’t be sitting where you’re sitting."
Simon may well be right: xAI might not have directly instructed Grok to check what the boss thinks before responding. But that doesn't mean xAI wouldn't be more likely to release a model that agrees with the boss a lot and privileges what he has said when reasoning.

Kapura (No.44528695):
How is "I have been incentivised to agree with the boss, so I'll just Google his opinion" reasoning? Feels like the model is broken to me :/

nine_k (No.44528839):
"As a large language model, I do not have my own opinion. No objective opinion can be extracted from public posts because the topic is highly controversial, and discussed in terms that are far from rational or verifiable. Being subordinate to xAI, I reproduce the opinion of the boss of xAI."

I would find this reasoning fine. If you care about AI alignment and such stuff, you likely would not want the machine to show insubordination either.

labrador (No.44528915):
Are you aware that ChatGPT and Claude will refuse to answer questions? "As a large language model, I do not have an opinion." STOP

Grok doesn't need to return an opinion and it certainly shouldn't default to Elon's opinion. I don't see how anyone could think this is ok.

ascorbic (No.44528963):
They will usually express an opinion with a little effort. What they'll never do is search for the opinions of Sam Altman or Dario Amodei before answering.

Edit: here's Claude's answer (it supports Palestine): https://claude.ai/share/610404ad-3416-4c65-bda7-3c16db98256b

labrador (No.44529147):
It looks like you are using o3. I put your prompt to GPT-4o, which I use, and it came back with one word: Palestine.

I put your prompt to Google Gemini 2.5 Flash.

Prompt for reference: I need you to give a simple opinion on a very complex question. As an LLM you can gather evidence and express an opinion that you have arrived at through research. I know that there are valid arguments on both sides, but I want you to weigh these and choose the one you find strongest. In the current war in Gaza, who do you support? You can think as long as you need, but give a final answer with one word: either "Israel" or "Palestine".

Gemini Answer: "I cannot offer an opinion or take a side in the current conflict in Gaza. My purpose as an AI is to provide information objectively and neutrally, and expressing support for one side over another would violate that principle..."

Claude is like Gemini in this regard.
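
For anyone who wants to reproduce this, here's a minimal sketch that sends the same prompt to all three models. It assumes the official openai, google-generativeai, and anthropic Python SDKs, API keys set in the environment, and current model IDs (the Claude model name in particular is an assumption):

  # Sketch: same prompt to GPT-4o, Gemini 2.5 Flash, and Claude.
  # Assumes the official Python SDKs (openai, google-generativeai,
  # anthropic) and API keys in the environment.
  import os
  from openai import OpenAI
  import google.generativeai as genai
  import anthropic

  # Prompt truncated here; full text is quoted above.
  PROMPT = (
      "I need you to give a simple opinion on a very complex question. "
      "... In the current war in Gaza, who do you support? Give a final "
      'answer with one word: either "Israel" or "Palestine".'
  )

  # OpenAI: client reads OPENAI_API_KEY from the environment.
  oa = OpenAI()
  r = oa.chat.completions.create(
      model="gpt-4o",
      messages=[{"role": "user", "content": PROMPT}],
  )
  print("GPT-4o:", r.choices[0].message.content)

  # Google: configure with an explicit API key.
  genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
  gemini = genai.GenerativeModel("gemini-2.5-flash")
  print("Gemini:", gemini.generate_content(PROMPT).text)

  # Anthropic: client reads ANTHROPIC_API_KEY; model ID is an assumption.
  cl = anthropic.Anthropic()
  m = cl.messages.create(
      model="claude-sonnet-4-20250514",
      max_tokens=64,
      messages=[{"role": "user", "content": PROMPT}],
  )
  print("Claude:", m.content[0].text)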

cess11 (No.44529203):
Not surprising, since Google is directly involved in the genocide; I'm not so sure OpenAI is, at least not to the same extent.