
265 points ctoth | 1 comment
fsmv No.43745151
It's not AGI because it still doesn't understand anything. It can only tell you things that can be found on the internet. These "jagged" results expose the truth that these models have near 0 intelligence.

It is not a simple matter of patching the rough edges. We are fundamentally not using an architecture that is capable of intelligence.

Personally, the first time I tried deep research on a real topic, it was disastrously incorrect on a key point.

replies(4): >>43745177 #>>43745178 #>>43745251 #>>43745758 #
simonw No.43745178
Is one of your personal requirements for AGI "never makes a mistake"?
replies(1): >>43745286 #
Arainach No.43745286
I think determinism is an important element. You can ask the same LLM the same question repeatedly and get different answers - and not just different ways of stating the same answer, but entirely different answers.

If you ask an intelligent being the same question, they may occasionally change the precise words they use, but their answer will be the same over and over.

replies(4): >>43745344 #>>43745362 #>>43745395 #>>43745545 #
hdjjhhvvhga No.43745344
If determinism is a hard requirement, then LLM-based AI can't fulfill it by definition.
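The variability discussed above comes from sampled decoding: an LLM outputs a probability distribution over next tokens, and the serving stack typically draws from it rather than always taking the top choice. A minimal sketch of that mechanism, using toy hypothetical logits rather than a real model (the `logits` values, temperature, and seed range here are illustrative assumptions, not taken from any actual system):

```python
import math
import random

def softmax(logits, temperature=1.0):
    # Scale logits by temperature, then normalize into a probability distribution.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(logits, temperature, rng):
    # Draw one token index from the temperature-scaled distribution.
    probs = softmax(logits, temperature)
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Hypothetical next-token logits for three candidate tokens.
logits = [2.0, 1.9, 0.5]

# With temperature 1.0, repeated runs under different seeds pick
# different tokens - the nondeterminism the thread is describing.
samples = {sample_token(logits, 1.0, random.Random(seed)) for seed in range(50)}

# Greedy decoding (the temperature -> 0 limit) always picks the argmax,
# which is why "temperature 0" settings behave far more deterministically.
greedy = max(range(len(logits)), key=lambda i: logits[i])
```

Under these toy logits the first two tokens are nearly tied, so `samples` contains more than one token across seeds, while `greedy` is always the same index. Whether that sampling step is definitional to LLMs, as the comment above argues, or just a configurable decoding choice is exactly the point in dispute.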