
204 points | tdchaitanya | 1 comment
andrewflnr ◴[] No.45094933[source]
Is this really the frontier of LLM research? I guess we really aren't getting AGI any time soon, then. It makes me a little less worried about the future, honestly.

Edit: I never actually expected AGI from LLMs. That was snark. I just think it's notable that the fundamental gains in LLM performance seem to have dried up.

replies(7): >>45094979 #>>45094995 #>>45095059 #>>45095198 #>>45095374 #>>45095383 #>>45095463 #
kenjackson ◴[] No.45094995[source]
First, I don't think we will ever get to AGI. Not because we won't see huge advances still, but because AGI is a moving, ambiguous target that we'll never reach consensus on.

But why does this paper impact your thinking on it? It is about budget and recognizing that different LLMs have different cost structures. It's not really an attempt to improve LLM performance in absolute terms.

replies(3): >>45095489 #>>45096115 #>>45099679 #
ACCount37 ◴[] No.45096115[source]
I can totally see "it's not really AGI because it doesn't consistently outperform those three top 0.000001% outlier human experts yet if they work together".

It'll be a while until the ability to move the goalposts of "actual intelligence" is exhausted entirely.

replies(1): >>45096696 #
9dev ◴[] No.45096696{3}[source]
Well, right now my 7-year-old niece outperforms every LLM contender at drawing a pelican on a bicycle.
replies(2): >>45097149 #>>45103451 #
neuronexmachina ◴[] No.45103451{4}[source]
I tried it in Gemini just now; it seems to have done a decent job: https://g.co/gemini/share/b6fef8398c01