
204 points tdchaitanya | 1 comment
andrewflnr ◴[] No.45094933[source]
Is this really the frontier of LLM research? I guess we really aren't getting AGI any time soon, then. It makes me a little less worried about the future, honestly.

Edit: I never actually expected AGI from LLMs. That was snark. I just think it's notable that the fundamental gains in LLM performance seem to have dried up.

replies(7): >>45094979 #>>45094995 #>>45095059 #>>45095198 #>>45095374 #>>45095383 #>>45095463 #
jibal ◴[] No.45095198[source]
LLMs are not on the road to AGI, but there are plenty of dangers associated with them nonetheless.
replies(2): >>45095419 #>>45095531 #
nicce ◴[] No.45095419[source]
Just 2 days ago Gemini 2.5 Pro tried to recommend tax evasion to me based on non-existent laws and court decisions. The model was so charming and convincing that even after I pointed out all the logical flaws and said this was plain wrong, I started to doubt myself, because it is so good at pleasing, arguing, and using words.

And most people would have accepted the recommendation, because the model sold it as a less common tactic while sounding very logical.

replies(2): >>45095524 #>>45095785 #
roywiggins ◴[] No.45095524[source]
> even after I pointed out all the logical flaws and said this was plain wrong

Once you've started to argue with an LLM, you're already barking up the wrong tree. Maybe you're right, maybe not, but there's no point in arguing it out with an LLM.

replies(1): >>45096510 #
nicce ◴[] No.45096510[source]
There are cases where they are actually correct and the human is wrong.
replies(1): >>45096538 #
roywiggins ◴[] No.45096538{3}[source]
Yes, and there's a substantial chance they'll apologize to you anyway even when they were right. There's no reason to expect them to be more likely to apologize when they're actually right vs. actually wrong; their agreeableness is really orthogonal to their correctness.
replies(1): >>45096612 #
nicce ◴[] No.45096612{4}[source]
Yes, they over-apologize. But my main reason for using LLMs is to find things I missed myself, or places where my own argumentation was weak. Sometimes they are really good at bringing new perspectives. Whether they are correct or incorrect is not the point; are they giving an argument or perspective that is worth inspecting further with my own brain?