265 points by ctoth | 6 comments
1. gilbetron No.43747151
AGI that is bad at some things is still AGI. We have AGI; it is just bad at some things and hallucinates. It is literally smarter than many people I know, but that doesn't mean it can beat a human at everything. That would be ASI, which, hopefully, will take a while to get here.

Although, I could be argued into calling what we have already ASI - take a human and Gemini 2.5, and put them through a barrage of omni-disciplinary questions and situations and problems. Gemini 2.5 will win, but not absolutely.

AGI (we have)
ASI (we might have)
AOI (Artificial Omniscient Intelligence; will hopefully take a while to get here)

replies(3): >>43747290, >>43747305, >>43750461
2. jzig No.43747290
ASI? AOI?

Might as well call it “ultrathink”!

3. ramesh31 No.43747305
> Although, I could be argued into calling what we have already ASI - take a human and Gemini 2.5, and put them through a barrage of omni-disciplinary questions and situations and problems. Gemini 2.5 will win, but not absolutely.

Except for writing a joke that will make you laugh, a poem that will make you cry, or a work of art that evokes deep introspection.

Intelligence is much deeper and more nuanced than answering questions of rote knowledge. LLMs are fantastic “reasoning engines”, but the soul is simply not there yet.

replies(1): >>43747969
4. gilbetron No.43747969
Ok, tell me a joke that I'll find funny - but you can't look it up.

I asked GPT to do so and I chuckled out loud.
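
For anyone who wants to repeat the experiment, here is a minimal sketch using the OpenAI Python SDK. The model name and prompt are illustrative assumptions, not what the commenter actually used:

    # Minimal sketch: ask a model for an original joke via the OpenAI Python SDK.
    # Assumes OPENAI_API_KEY is set in the environment; the model name and
    # prompt are illustrative, not the commenter's actual setup.
    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-4o",  # any chat-capable model works here
        messages=[{
            "role": "user",
            "content": "Tell me an original joke - don't reuse one you know.",
        }],
    )
    print(response.choices[0].message.content)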

5. prmph No.43750461
Then calculators are AGI. A program that does fizz-buzz is AGI. Way to make AGI a meaningless term. What LLMs do now is so far from AGI that I don't know how people make any connection between it and AGI.
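
For scale, the kind of trivially "general" program being contrasted with AGI here fits in a few lines. A minimal FizzBuzz sketch in Python:

    # FizzBuzz: the canonical trivial program - clearly not AGI.
    for i in range(1, 101):
        if i % 15 == 0:
            print("FizzBuzz")
        elif i % 3 == 0:
            print("Fizz")
        elif i % 5 == 0:
            print("Buzz")
        else:
            print(i)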

This is what AGI means (or should mean): Generalized understanding of the world. Basically, with AGI the context window would be something like the entire knowledge and understanding of the world that an (adult?) person has (e.g., physics intuition), coupled with the ability to actually reason and act on it, update it, reflect on it, etc.

A small slice of this (e.g., less knowledge than a typical adult) would still be AGI, but current AIs:

- Cannot continually learn and incorporate that learning into their model.

- Cannot reason on any deep level. And before anyone claims that the pattern matching they do is all we do, no, this is not the case. Even strong pattern-matching AIs, like chess engines, have weak spots that betray the fact that they do not actually reason the way humans do.

- Cannot engage in unprompted reflection in the background.

Current AIs are like a hologram; we are mistaking their one- or two-dimensional responses to queries for the deep, higher-dimensional understanding humans have. The incredible thing about human consciousness is the deep (infinite?) interiority of it. I can reason about reasoning. I can reason about reasoning about my reasoning, etc. I can reflect on my consciousness. I can reflect on reflecting on my consciousness, etc.

Machines are nowhere close to this, and likely never will be.

replies(1): >>43752115
6. gilbetron No.43752115
> Generalized understanding of the world.

LLMs definitely have this, and it really is bizarre to me that people think otherwise.

> Cannot continually learn and incorporate that learning into their model.

This is definitely a valid criticism of our current LLMs, and once we (further) develop ways to do this, I think my main criticism of LLMs as AGI will go away.

> Cannot reason on any deep level.

Few people are able to do this.

> Cannot engage in unprompted reflection in the background.

True, but I don't know whether that belongs as a requirement for AGI.