
336 points by mooreds | 3 comments
raspasov No.44485275
Anyone who claims that AGI, a poorly defined concept, is right around the corner is most likely:

- trying to sell something

- high on their own stories

- high on exogenous compounds

- all of the above

LLMs are good at language. They are OK summarizers of text by design, but not good at logic. They are very poor at spatial reasoning and, as a result, poor at connecting concepts together.

Just ask any of the crown-jewel LLMs: "What's the biggest unsolved problem in the [insert any] field?"

The usual result is a pop-science-level article, but with a ton of subtle yet critical mistakes! Even worse, the answer sounds profound on the surface. In reality, it's just crap.

replies(12): >>44485480, >>44485483, >>44485524, >>44485758, >>44485846, >>44485900, >>44485998, >>44486105, >>44486138, >>44486182, >>44486682, >>44493526
Buttons840 No.44485758
I'll offer a definition of AGI:

An AI (a computer program) that is better at [almost] any task than 5% of the human specialists in that field has achieved AGI.

Or, stated another way, if 5% of humans are incapable of performing any intellectual job better than an AI can, then that AI has achieved AGI.

Note, I am not saying that an AI that is better than humans at one particular thing has achieved AGI, because it is not "general". I'm saying that if a single AI is better at all intellectual tasks than some humans, the AI has achieved AGI.

A human at the 5th percentile deserves the label "intelligent", even if they are not the most intelligent (I'd say all humans deserve the label "intelligent"), and if an AI can perform all intellectual tasks better than such a person, the AI has achieved AGI.
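
To make the quantifier order in this definition explicit (there must exist specific people whom the AI beats at every task, not merely, for each task, somebody the AI beats), here is a minimal sketch. The score(agent, task) function, the humans list, and the tasks list are hypothetical stand-ins, not anything measurable today:

    # Minimal sketch of the proposed criterion. score(), humans, and tasks
    # are hypothetical stand-ins: assume score(agent, task) returns a
    # comparable performance number for any agent (human or AI) on any task.

    def dominates(ai, person, tasks, score):
        """True if the AI outscores this one person on every single task."""
        return all(score(ai, t) > score(person, t) for t in tasks)

    def is_agi(ai, humans, tasks, score, percentile=0.05):
        """The definition above: at least `percentile` of humans are each
        outperformed by the AI on all tasks (5% here, relaxed to 0.1%
        later in the thread)."""
        dominated = sum(dominates(ai, p, tasks, score) for p in humans)
        return dominated >= percentile * len(humans)

Swapping the loops, i.e. checking each task against possibly different people, would accept a much weaker kind of "general"; the definition requires the same person to be outperformed everywhere.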

replies(3): >>44485869, >>44485939, >>44486860
djoldman No.44485869
I like where this is going.

However, it's not sufficient. The actual tasks have to be written down, tests constructed, and the specialists tested.

A subset of this has been done with some rigor, and AI/computers have surpassed this threshold on some tests. Some have then responded by saying that it isn't AGI, that the tasks don't sufficiently measure "intelligence" or some other word, and that more tests are warranted.

replies(1): >>44485892
Buttons840 No.44485892
You're saying we need to write down all intellectual tasks? How would that help?

If an AI is better at some tasks (that happen to be written down), it doesn't mean it is better at all tasks.

Actually, I'd lower my threshold even further--I originally said 50%, then 20%, then 5%--but now I'll say if an AI is better than 0.1% of people at all intellectual tasks, then it is AGI, because it is "general" (being able to do all intellectual tasks), and it is "intelligent" (a label we ascribe to all humans).

But the AGI has to be better at all (not just some) intellectual tasks.

replies(1): >>44486156
djoldman No.44486156
> An AI (a computer program) that is better at [almost] any task than 5% of the human specialists in that field has achieved AGI.

Let's say you have a candidate AI and assert that it indeed has passed the above benchmark. How do you prove that? Don't you have to say which tasks?

replies(1): >>44486549
Buttons840 No.44486549
Well, to state it crudely, you just have to find a dumb person who is inferior to the AI at every single intellectual task. This is cruel, and I don't envy that dumb person, but who knows, I might end up being that dumb person--we all might.
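
Stated as code against the same hypothetical score() function as the sketch above, this crude test is a single existential search; finding one dominated person is enough:

    # The crude test: search for one person the AI beats at every single
    # intellectual task. Reuses the hypothetical score(agent, task) above.

    def find_dominated_person(ai, humans, tasks, score):
        for person in humans:
            if all(score(ai, t) > score(person, t) for t in tasks):
                return person  # one such person certifies AGI by this test
        return None  # nobody is dominated at *every* task

This only relocates the earlier objection, though: the verdict is only as meaningful as the tasks list, which is exactly the enumeration problem raised above.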