265 points by ctoth | 1 comment
simonw No.43745125
Coining "Jagged AGI" to work around the fact that nobody agrees on a definition for AGI is a clever piece of writing:

> In some tasks, AI is unreliable. In others, it is superhuman. You could, of course, say the same thing about calculators, but it is also clear that AI is different. It is already demonstrating general capabilities and performing a wide range of intellectual tasks, including those that it is not specifically trained on. Does that mean that o3 and Gemini 2.5 are AGI? Given the definitional problems, I really don’t know, but I do think they can be credibly seen as a form of “Jagged AGI” - superhuman in enough areas to result in real changes to how we work and live, but also unreliable enough that human expertise is often needed to figure out where AI works and where it doesn’t.

replies(4): >>43745268 >>43745321 >>43745426 >>43746223
shrx No.43745268
>> It is already demonstrating general capabilities and performing a wide range of intellectual tasks, including those that it is not specifically trained on.

Huh? Isn't an LLM's capability fully constrained by its training data? Everything else is hallucinated.

replies(2): >>43745341 >>43745489
bbor No.43745341
The critical discovery was a way to crack the “Frame Problem”, which roughly comes down to colloquial notions of common sense or intuition. For the first time ever, we have models that know that if you jump off a stool, you will (likely!) be standing on the ground afterwards.

In that sense, they absolutely know things that aren’t in their training data. You’re correct about factual knowledge, though; that’s why they’re not trained to optimize for it! A database (or PageRank?) already solves that problem.
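
A minimal sketch of what probing that kind of common sense might look like, assuming the OpenAI Python client (openai v1) and an API key in the environment; the model name and prompt are illustrative, not from the thread:

    # Hypothetical probe: ask a composed question that is unlikely to
    # appear verbatim in any training corpus, so a sensible answer has
    # to come from generalization rather than lookup.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    prompt = (
        "I balance a teacup on a stool, then jump off the stool. "
        "Where am I, and where is the teacup, one second later?"
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name, not from the thread
        messages=[{"role": "user", "content": prompt}],
    )

    print(response.choices[0].message.content)

The scenario is deliberately composed, so a correct answer (you on the ground, the teacup still on the stool) can't be retrieved verbatim; the model has to combine everyday physics about gravity and support surfaces.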