
265 points ctoth | 3 comments
simonw No.43745125
Coining "Jagged AGI" to work around the fact that nobody agrees on a definition for AGI is a clever piece of writing:

> In some tasks, AI is unreliable. In others, it is superhuman. You could, of course, say the same thing about calculators, but it is also clear that AI is different. It is already demonstrating general capabilities and performing a wide range of intellectual tasks, including those that it is not specifically trained on. Does that mean that o3 and Gemini 2.5 are AGI? Given the definitional problems, I really don’t know, but I do think they can be credibly seen as a form of “Jagged AGI” - superhuman in enough areas to result in real changes to how we work and live, but also unreliable enough that human expertise is often needed to figure out where AI works and where it doesn’t.

replies(4): >>43745268 #>>43745321 #>>43745426 #>>43746223 #
verdverm No.43745321
Why not call it AJI instead of AGI then?

Certainly jagged does not imply general.

It seems to me the bar for "AGI" has been lowered to measuring what tasks it can do rather than the traits we normally associate with general intelligence. People want it to be here so badly that they nerf the requirements...

replies(4): >>43745364 #>>43745367 #>>43746244 #>>43756424 #
1. bbor No.43745364
Well I think the point being made is an instrumental one: it’s general enough to matter, so we should use the word “general” to communicate that to laypeople.

Re: ”traits we associate with general intelligence”, I think the exact issue is that there is no scientific (i.e., specific and consistent) list of such traits. This is why Turing wrote his famous 1950 paper and invoked the Imitation Game; not to detail how one could test for a computer that’s really thinking (or truly general), but to show why that question isn’t necessary in the first place.
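
(For concreteness: the Imitation Game is just a protocol, and the test is purely behavioral. A rough Python sketch of one round, where ask_human, ask_machine, and judge_guess are hypothetical callables standing in for the participants:)

    import random

    def imitation_game_round(ask_human, ask_machine, questions, judge_guess):
        # Hide both players behind neutral labels; the judge sees text only.
        players = {"A": ask_human, "B": ask_machine}
        if random.random() < 0.5:  # randomize which label is the machine
            players = {"A": ask_machine, "B": ask_human}
        transcript = [(q, players["A"](q), players["B"](q)) for q in questions]
        guess = judge_guess(transcript)  # judge returns "A" or "B"
        return players[guess] is ask_machine  # True iff the machine was caught

    # The machine "passes" if, over many rounds, judges identify it no
    # better than coin-flipping. Nothing in the protocol ever asks whether
    # it "really thinks" -- that was Turing's point.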

replies(2): >>43745422 #>>43748690 #
2. verdverm No.43745422
I still disagree: being good at a number of tasks does not make it intelligent.

Certainly creativity is missing, it has no internal motivation, and it will answer the same simple question both right and wrong, depending on unknown factors. What if we reverse the framing from "it can do these tasks, therefore it must be..." to "it lacks these traits, therefore it is not yet..."?
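
(One concrete source of those "unknown factors" is ordinary sampling: with temperature above zero, the model draws its answer from a probability distribution instead of always taking the most likely token. A toy Python sketch with a made-up distribution, not real model output:)

    import random

    def sample_answer(probs, temperature=1.0):
        # Raising probabilities to 1/T sharpens (T<1) or flattens (T>1)
        # the distribution; then draw one answer proportionally.
        weights = {tok: p ** (1.0 / temperature) for tok, p in probs.items()}
        r = random.random() * sum(weights.values())
        acc = 0.0
        for tok, w in weights.items():
            acc += w
            if r <= acc:
                return tok
        return tok  # floating-point fallback

    probs = {"right": 0.7, "wrong": 0.3}  # toy distribution for one question
    print([sample_answer(probs) for _ in range(6)])
    # e.g. ['right', 'right', 'wrong', 'right', 'right', 'wrong'] --
    # the same simple question, answered both right and wrong.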

While I do not disagree that the LLMs have become advanced enough to do a bunch of automation, I do not agree they are intelligent or actually thinking.

I'm with Yann LeCun when he says that we won't reach AGI until we move beyond transformers.

3. parodysbird No.43748690
And based on the actual Imitation Game in Turing's paper, we are nowhere close, and I don't think we will be for quite some time.