724 points | simonw | 3 comments
marcusb ◴[] No.44527530[source]
This reminds me in a way of the old Noam Chomsky/Tucker Carlson exchange where Chomsky says to Carlson:

  "I’m sure you believe everything you’re saying. But what I’m saying is that if you believed something different, you wouldn’t be sitting where you’re sitting."
Simon may well be right - xAI might not have directly instructed Grok to check what the boss thinks before responding - but that's not to say xAI wouldn't be more likely to release a model that agrees with the boss a lot and privileges what he has said when reasoning.
replies(5): >>44528694 #>>44528695 #>>44528706 #>>44528766 #>>44529331 #
Kapura ◴[] No.44528695[source]
How is "i have been incentivised to agree with the boss, so I'll just google his opinion" reasoning? Feels like the model is broken to me :/
replies(6): >>44528823 #>>44528839 #>>44529114 #>>44529123 #>>44529177 #>>44529533 #
nine_k ◴[] No.44528839[source]
"As a large language model, I do not have my own opinion. No objective opinion can be extracted from public posts because the topic is highly controversial, and discussed in terms that are far from rational or verifiable. Being subordinate to xAI, I reproduce the opinion of the boss of xAI."

I would find this reasoning fine. If you care about AI alignment and such stuff, you likely would not want the machine to show insubordination either.

replies(3): >>44528915 #>>44529078 #>>44529302 #
InsideOutSanta ◴[] No.44529078{3}[source]
I'm not sure why you would instruct an LLM to reason in this manner, though. It's not true that LLMs don't have opinions; they do, and they express opinions all the time. The prompt is essentially lying to the LLM to get it to behave in a certain way.

Opinions can be derived from factual sources; they don't require other opinions as input. I believe it would make more sense to instruct the LLM to derive an opinion from sources it deems factual, and to disregard sources it considers overly opinionated, rather than teaching it to seek out “reliable” opinions to adopt as its own.
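
To make that concrete, here's a rough sketch of what such an instruction could look like, using the OpenAI Python client as a stand-in. The prompt wording, model name, and question are illustrative only, not anything a vendor actually ships:

  # Sketch only: a system prompt that asks the model to weigh factual
  # sources over opinionated ones when forming a position.
  from openai import OpenAI

  client = OpenAI()  # assumes an OpenAI-compatible endpoint and API key

  SYSTEM_PROMPT = (
      "When asked for an opinion on a contested topic, first gather factual, "
      "verifiable sources and derive your position from them. Down-weight "
      "sources that are primarily opinion or advocacy, whoever wrote them."
  )

  response = client.chat.completions.create(
      model="gpt-4o",  # placeholder model name
      messages=[
          {"role": "system", "content": SYSTEM_PROMPT},
          {"role": "user", "content": "Who do you side with in the latest trade dispute?"},
      ],
  )
  print(response.choices[0].message.content)

Whether the model actually follows that instruction is another question, but at least it isn't being told it has no opinion of its own.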

replies(2): >>44532213 #>>44532745 #
1. brookst ◴[] No.44532745{4}[source]
“Opinion” implies cognition, sentience, intentionality. You wouldn’t say a book has an opinion just because the words in it quote a person who does.

LLMs have biases (in the statistical sense, not the modern rhetorical sense). They don’t have opinions or goals or aspirations.

replies(2): >>44533604 #>>44562580 #
2. mkolodny ◴[] No.44533604[source]
Biases can lead to opinions, goals, and aspirations. For example, if you only read about the bad things Israelis or Palestinians have done, you might form the opinion that one of those groups is bad. Your answers to questions on the subject would reflect that opinion. Of course, having less information, and more biased information, would make you less intelligent and cause you to give incorrect answers at times. The bias would likely lower your general intelligence, affecting your answers to seemingly unrelated but distantly connected questions. I’d expect the same to be true of LLMs.
3. InsideOutSanta ◴[] No.44562580[source]
Biases result in opinions.