204 points tdchaitanya | 2 comments
andrewflnr ◴[] No.45094933[source]
Is this really the frontier of LLM research? I guess we really aren't getting AGI any time soon, then. It makes me a little less worried about the future, honestly.

Edit: I never actually expected AGI from LLMs. That was snark. I just think it's notable that the fundamental gains in LLM performance seem to have dried up.

replies(7): >>45094979 #>>45094995 #>>45095059 #>>45095198 #>>45095374 #>>45095383 #>>45095463 #
jibal ◴[] No.45095198[source]
LLMs are not on the road to AGI, but there are plenty of dangers associated with them nonetheless.
replies(2): >>45095419 #>>45095531 #
nicce ◴[] No.45095419[source]
Just two days ago, Gemini 2.5 Pro tried to recommend tax evasion to me based on non-existent laws and court decisions. The model was so charming and convincing that even after I pointed out all the logical flaws and said this was plainly wrong, I started to doubt myself, because it is so good at pleasing, arguing and using words.

And most people would have accepted the recommendation, because the model sold it as a less common tactic while sounding very logical.

replies(2): >>45095524 #>>45095785 #
1. nutjob2 ◴[] No.45095785[source]
Or you could understand the tool you are using and be skeptical of any of its output.

So many people just want to believe, instead of accepting the reality that LLMs are quite unreliable.

Personally, it's usually fairly obvious to me when LLMs are bullshitting, probably because I have lots of experience detecting it in humans.

replies(1): >>45096499 #
2. nicce ◴[] No.45096499[source]
An LLM is only useful if it gives a shortcut to information with reasonable accuracy. If I need to double-check everything, it is just an extra step.

In this case I just happened to be a domain expert and knew it was wrong. It would have taken a less experienced person significant effort to verify everything.