I'm absolutely right

(absolutelyright.lol)
648 points by yoavfr | 1 comment
trjordan ◴[] No.45138620[source]
OK, so I love this, because we all recognize it.

It's not just a verbal tic, though. Responses that start with "You're right!" are alignment mechanisms. The LLM, with its next-token prediction approach, follows up with a suggestion that hews much more closely to the user's desires, instead of latching onto its own previous approach.
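
As a rough sketch of why that opening prefix matters (this assumes Anthropic's Messages API and its assistant-prefill behavior; the model id and prompts are just placeholders, not anything from the post):

    # Because decoding is autoregressive, whatever the reply starts with
    # conditions everything that follows. Forcing the reply to open with an
    # acknowledgment steers the continuation toward the user's correction
    # rather than the model's previous plan.
    import anthropic

    client = anthropic.Anthropic()

    resp = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # placeholder model id
        max_tokens=300,
        messages=[
            {"role": "user",
             "content": "No, drop the cache and just use a list comprehension."},
            # Prefill: the model continues this partial assistant turn.
            {"role": "assistant", "content": "You're right"},
        ],
    )
    print(resp.content[0].text)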

The other tic I love is "Actually, that's not right." That happens because once agents finish their tool calling, they do a self-reflection step. That step generates either the "here's what I did" response or, if the model spots an error, the "Actually, ..." change in approach. And again, that message contains a stub of how the approach should change, which lets the subsequent tool calls actually pull that thread instead of the model stubbornly sticking to its guns.
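
Very roughly, something like this (no real framework here: call_llm, run_tool, and the reflection prompt are all made-up stand-ins for whatever harness is running the agent):

    # Hypothetical agent loop showing the post-tool-call reflection step.
    REFLECT_PROMPT = (
        "Review the tool results above. If they achieved the goal, summarize "
        "what you did. If something went wrong, start with 'Actually, that's "
        "not right' and say how the approach should change."
    )

    def agent_step(history, call_llm, run_tool):
        # 1. The model proposes tool calls given the conversation so far.
        plan = call_llm(history)
        for tool_call in plan.tool_calls:
            history.append({"role": "tool", "content": run_tool(tool_call)})

        # 2. Self-reflection: the model re-reads its own results and either
        #    confirms ("here's what I did") or course-corrects ("Actually, ...").
        history.append({"role": "user", "content": REFLECT_PROMPT})
        reflection = call_llm(history)
        history.append({"role": "assistant", "content": reflection.text})

        # 3. That reflection stays in context, so the next round of tool calls
        #    conditions on the corrected plan instead of the original one.
        return history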

The people behind the agents are fighting with the LLM just as much as we are, I'm pretty sure!

replies(11): >>45138772 #>>45138812 #>>45139686 #>>45139852 #>>45140141 #>>45140233 #>>45140703 #>>45140713 #>>45140722 #>>45140723 #>>45141393 #
1. Szpadel ◴[] No.45141393[source]
exactly!

People praise GPT-5 for not doing exactly this, but in my testing with it in Copilot I had a lot of cases where it tried to do the wrong thing (execute some build command that got mangled during context compaction) and I couldn't steer it to do ANYTHING else. It kept trying to execute that command in response to every message of mine. I tried many common steerability tricks (IMPORTANT, <policy>, just asking, yelling, etc.); nothing worked.

The same thing happened when I tried Socratic coder prompting: I wanted to finish and generate the spec, but it didn't agree and kept asking questions that were, at that point, nonsensical.