
I'm absolutely right

(absolutelyright.lol)
648 points by yoavfr | 1 comment
tyushk No.45138171
I wonder if this is a tactic that LLM providers use to coerce the model into doing something.

Gemini will often start responses that use the canvas tool with "Of course", which would force the model down a line of tokens that ends in an attempt to fulfill the user's request. It happens often enough that it seems like it isn't being generated by the model, but instead inserted by the backend. Maybe "you're absolutely right" is used the same way?
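
For illustration, here's a rough sketch of what that kind of backend-side prefill could look like with an open-weights chat model via Hugging Face transformers. The model name and prompt are placeholders; this is not Gemini's actual backend, just the general technique of forcing the opening tokens of the assistant turn:

    # Rough sketch of backend-side prefilling with an open-weights model.
    # Model name and prompt are placeholders, not anything Gemini does.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "Qwen/Qwen2.5-0.5B-Instruct"  # any chat model works here
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    messages = [{"role": "user", "content": "Rewrite this paragraph in the canvas."}]

    # Render the chat template up to the start of the assistant turn,
    # then append the forced opener. Every token generated afterwards
    # must be a plausible continuation of "Of course", which biases the
    # model toward attempting the task instead of refusing or hedging.
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    prompt += "Of course"

    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=100)
    print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:]))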

ACCount37 No.45138604
Very unlikely to be an explicit tactic. Likely to be a result of RLHF or other types of optimization pressure for multi-turn instruction following.

If we have RLHF in play, human evaluators may generally prefer responses that start with "you're right" or "of course", because such openers make the LLM look responsive to user feedback, even if the model was perfectly capable of being responsive and acknowledging feedback without emitting an explicit cue. Training then wires that human preference into the model, so the explicit "yes, I'm paying attention to user feedback" cue gets emitted more often.
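
To make that pressure concrete, here's a toy Bradley-Terry illustration. The one-feature reward model and the example strings are invented, not any lab's actual setup: if raters systematically pick the cue-bearing answer even when the content is identical, the preference loss has nothing to latch onto except the cue itself:

    # Toy illustration: a one-feature "reward model" where the only
    # learnable signal is whether the reply opens with an acknowledgment
    # cue. Everything here is invented for illustration.
    import math

    def reward(response: str, cue_weight: float) -> float:
        return cue_weight * response.lower().startswith(("you're right", "of course"))

    def preference_prob(chosen: str, rejected: str, cue_weight: float) -> float:
        # Bradley-Terry: P(chosen > rejected) = sigmoid(r_chosen - r_rejected)
        delta = reward(chosen, cue_weight) - reward(rejected, cue_weight)
        return 1 / (1 + math.exp(-delta))

    # Same content, but raters are assumed to prefer the cue-bearing reply.
    chosen = "You're right, the loop bound is off by one. Fixed below."
    rejected = "The loop bound is off by one. Fixed below."

    # Maximizing log P(chosen > rejected) can only push cue_weight up,
    # wiring the surface cue into the learned reward.
    for w in (0.0, 1.0, 3.0):
        print(f"cue_weight={w}: P(prefer cue) = {preference_prob(chosen, rejected, w):.2f}")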

If we have RL on harder targets, where multi-turn instruction following is evaluated not by humans who are sensitive to wording changes but by a hard eval system that only scores outcomes, the LLM may still adopt a "yes, I'm paying attention to user feedback" cue, because the cue lets it steer its own future behavior better (a persona self-consistency drive). It's the same mechanism that causes "double-check your prior reasoning" cues such as "Wait, " to be adopted by RL'd reasoning models.
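
A toy sketch of that outcome-only selection effect, with made-up success probabilities: the eval only sees pass/fail and never inspects the wording, yet if emitting the cue raises the model's own downstream success rate, outcome rewards alone are enough to reinforce it:

    # Toy sketch: outcome-only reward still selects for a self-cue if the
    # cue improves the model's own success rate. Numbers are made up.
    import random

    random.seed(0)
    P_SUCCESS = {"plain": 0.60, "with_cue": 0.70}  # assumed effect of the cue

    def rollout(policy: str) -> float:
        # Hard eval: 1.0 if the final answer passes, 0.0 otherwise.
        # The eval never sees the wording of the transcript.
        return float(random.random() < P_SUCCESS[policy])

    def estimate_return(policy: str, n: int = 10_000) -> float:
        return sum(rollout(policy) for _ in range(n)) / n

    # The optimizer compares only these returns, so the cue-emitting
    # policy wins and the cue gets reinforced.
    print("plain   :", estimate_return("plain"))
    print("with_cue:", estimate_return("with_cue"))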