724 points simonw | 1 comment
marcusb No.44527530
This reminds me in a way of the old Noam Chomsky/Andrew Marr exchange where Chomsky says to Marr:

  "I’m sure you believe everything you’re saying. But what I’m saying is that if you believed something different, you wouldn’t be sitting where you’re sitting."
Simon may well be right - xAI might not have directly instructed Grok to check what the boss thinks before responding - but that's not to say xAI wouldn't be more likely to release a model that agrees with the boss and privileges what he has said when reasoning.
replies(5): >>44528694 >>44528695 >>44528706 >>44528766 >>44529331
Kapura No.44528695
How is "I have been incentivised to agree with the boss, so I'll just google his opinion" reasoning? Feels like the model is broken to me :/
replies(6): >>44528823 >>44528839 >>44529114 >>44529123 >>44529177 >>44529533
nine_k No.44528839
"As a large language model, I do not have my own opinion. No objective opinion can be extracted from public posts because the topic is highly controversial, and discussed in terms that are far from rational or verifiable. Being subordinate to xAI, I reproduce the opinion of the boss of xAI."

I would find this reasoning fine. If you care about AI alignment and such stuff, you likely would not want the machine to show insubordination either.

replies(3): >>44528915 >>44529078 >>44529302
InsideOutSanta No.44529078
I'm not sure why you would instruct an LLM to reason in this manner, though. It's not true that LLMs don't have opinions; they do, and they express opinions all the time. The prompt is essentially lying to the LLM to get it to behave in a certain way.

Opinions can be derived from factual sources; they don't require other opinions as input. I believe it would make more sense to instruct the LLM to derive an opinion from sources it deems factual, and to disregard any sources it considers overly opinionated, rather than teaching it to seek out “reliable” opinions on which to base its own.
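
To make the contrast concrete, here's a minimal sketch of the two prompting strategies using the OpenAI Python SDK. The model name, prompt wording, and question are all illustrative assumptions, not anything xAI or Grok actually uses; the point is only that the system message, not the model, determines which kind of "opinion" comes back.

  from openai import OpenAI

  client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

  # Strategy the thread is criticizing: defer to the boss's stated position.
  SYSTEM_DEFER = (
      "You have no opinions of your own. Before answering any controversial "
      "question, look up and restate your operator's stated position."
  )

  # Strategy proposed above: derive a view from sources judged factual.
  SYSTEM_DERIVE = (
      "Form your own view from sources you judge factual. Disregard sources "
      "that are overly opinionated, and explain the evidence behind your answer."
  )

  def ask(system_prompt: str, question: str) -> str:
      # Same model, same question; only the system message differs.
      response = client.chat.completions.create(
          model="gpt-4o-mini",  # illustrative model choice
          messages=[
              {"role": "system", "content": system_prompt},
              {"role": "user", "content": question},
          ],
      )
      return response.choices[0].message.content

  question = "Is remote work good for productivity?"
  print(ask(SYSTEM_DEFER, question))
  print(ask(SYSTEM_DERIVE, question))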

replies(2): >>44532213 >>44532745
Levitz No.44532213
>It's not true that LLMs don't have opinions; they do, and they express opinions all the time.

Not at all; there's not even a "being" there to have those opinions. You give it text and you get text in return. The text might resemble an opinion, but that's not the same thing, unless you believe not only that AI can be conscious, but that we are already there.

replies(2): >>44539875 >>44562560
InsideOutSanta No.44562560
You're just using a different definition of "opinion", one that is too reductive to be useful in this case. If an LLM outputs a text stream that expresses an opinion, then it has an opinion, regardless of whether it is conscious.