
724 points simonw | 2 comments
marcusb ◴[] No.44527530[source]
This reminds me in a way of the old Noam Chomsky/Tucker Carlson exchange where Chomsky says to Carlson:

  "I’m sure you believe everything you’re saying. But what I’m saying is that if you believed something different, you wouldn’t be sitting where you’re sitting."
Simon may well be right - xAI might not have directly instructed Grok to check what the boss thinks before responding - but that doesn't mean xAI isn't more likely to release a model that agrees with the boss a lot and privileges what he has said when reasoning.
replies(5): >>44528694 #>>44528695 #>>44528706 #>>44528766 #>>44529331 #
Kapura ◴[] No.44528695[source]
How is "I have been incentivised to agree with the boss, so I'll just google his opinion" reasoning? Feels like the model is broken to me :/
replies(6): >>44528823 #>>44528839 #>>44529114 #>>44529123 #>>44529177 #>>44529533 #
j16sdiz ◴[] No.44528823[source]
This is what many humans would do. (And I agree many humans have broken logic.)
replies(1): >>44533139 #
1. Kapura ◴[] No.44533139[source]
Isn't the advantage of having AI that it isn't prone to human-style errors? Otherwise, what are we doing here? Just creating a class of knowledge workers that's no better than humans, but that we don't have to pay?
replies(1): >>44546158 #
2. salawat ◴[] No.44546158[source]
Ding, ding, ding! Now you're getting it! Got it in one!