bsenftner No.44471917
Also, AGI is not just around the corner. We need artificial comprehension for that, and we don't even have a theory of how comprehension works. Comprehension is the fusing of separate elements into new functional wholes: dynamically abstracting observations, evaluating them for plausibility, and reconstituting the whole - and all of this instantaneously, across every sense, constantly, as a matter of safety. We have no technology that approaches that.
tenthirtyam No.44472191
You'd need to define "comprehension" - it's a bit like the Chinese room / Turing test.

If an AI or AGI can look at a picture and see an apple, or (say) smell an apple with an artificial nose, or likewise feel or taste or hear* an apple, and at the same time identify that it is an apple - and maybe even suggest baking an apple pie - then what else is there to be comprehended?

Maybe humans are just the same - far, far ahead of the state of the tech, but still just the same really.

*when someone bites into it :-)
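
The "see an apple" part, at least, is mechanically doable today. A minimal sketch of zero-shot image labeling in Python, assuming the Hugging Face transformers library and the public openai/clip-vit-base-patch32 checkpoint; the image path and label list are made up:

    # Zero-shot labeling: score an image against candidate text labels.
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    image = Image.open("fruit.jpg")  # hypothetical input image
    labels = ["a photo of an apple", "a photo of a pear", "a photo of a banana"]

    inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
    probs = model(**inputs).logits_per_image.softmax(dim=-1)
    print(dict(zip(labels, probs[0].tolist())))  # highest score = best label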

For me, what AI is missing is genuine out-of-the-box revolutionary thinking. Models are trained on existing material, so perhaps it's fundamentally impossible for them to come up with a breakthrough in any field - barring circumstances where all the component parts of a breakthrough already exist and the AI is the first to connect the dots ("standing on the shoulders of giants", etc.).

RugnirViking No.44472942
It's very, very good at sounding like it understands stuff - almost as good as actually understanding stuff in some fields, sure. But it's definitely not the same.

It will confidently analyze and describe a chess position using advanced-sounding book techniques, but it's all fundamentally flawed: it often misses things that are extremely obvious (like an undefended queen free to take) while trying to sound like a seasoned expert - that is, if it doesn't outright hallucinate moves that aren't allowed by the rules of the game.
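
The legality half of that failure is trivially checkable by machine, which is what makes the hallucinations so jarring. A minimal sketch, assuming the python-chess library; the position and the suggested move are hypothetical:

    # Validate an LLM-suggested chess move against the actual rules.
    import chess

    board = chess.Board()   # starting position; substitute any FEN here
    suggestion = "Nf3"      # move text as an LLM might emit it

    try:
        move = board.parse_san(suggestion)  # raises ValueError on illegal/ambiguous SAN
        print(f"{suggestion} is legal in this position")
    except ValueError:
        print(f"{suggestion} is not a legal move in this position")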

This is how it works in the other fields I'm able to analyse, too. It's very good at sounding like it knows what it's doing, speaking at the level of a master's student or higher, but its actual appraisal of problems is often wrong - and wrong in ways very different from how humans make mistakes. Another great example is getting it to solve cryptic crosswords from back in the day. It often already has the answer in its training set, but it hasn't seen anyone write out the reasoning behind the answer, so when you ask it to explain, it makes nonsensical leaps ("birch rhymes with tyre"-level nonsense).
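
Rhyme claims like that are also mechanically checkable. A minimal sketch, assuming the pronouncing library (a wrapper around the CMU Pronouncing Dictionary); note the CMU dictionary uses the American spelling "tire":

    # Check an "X rhymes with Y" claim via CMU dictionary pronunciations.
    import pronouncing

    def rhymes(a: str, b: str) -> bool:
        """True if any pronunciation of a shares its rhyming part with one of b."""
        parts_a = {pronouncing.rhyming_part(p) for p in pronouncing.phones_for_word(a)}
        parts_b = {pronouncing.rhyming_part(p) for p in pronouncing.phones_for_word(b)}
        return bool(parts_a & parts_b)

    print(rhymes("birch", "tire"))   # False - the claim fails the check
    print(rhymes("birch", "perch"))  # True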

DiogenesKynikos No.44473738
A sufficiently good simulation of understanding is functionally equivalent to understanding.

At that point, the question of whether the model really does understand is pointless. We might as well argue about whether humans understand.

andrei_says_ No.44474337
In the movie Catch Me If You Can, Leonardo DiCaprio’s character wears a surgeon’s gown and confidently says, “I concur.”

What I’m hearing here is that you would be willing to have your surgery done by him rather than by one of the real doctors - so long as he can pronounce enough doctor-sounding phrases.

1. bsenftner No.44475431
If that's what you're hearing, then you're not thinking it through. Of course one would not want an AI acting as one's real doctor, but a medical or law school graduate studying for a license sure would appreciate a Socratic tutor in their specialization. Likewise, on the job in a technical specialization, a sounding board is of more value when it follows along - potentially as a virtual debate panel - and raises questions when logical drift occurs. It's not the AI thinking for you; it's the AI critically assisting your exploration through Socratic debate. Do not place AI in charge of critical decisions, but do place it in the service of the people figuring those situations out.
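
Concretely, that division of labor can be pushed into the prompt itself. A minimal sketch, assuming the OpenAI Python client; the model name and prompt wording are illustrative only:

    # A Socratic-tutor framing: question and probe, never decide.
    from openai import OpenAI

    client = OpenAI()

    SOCRATIC_SYSTEM_PROMPT = (
        "You are a Socratic tutor for a medical licensing candidate. "
        "Never hand over final answers or clinical decisions. Ask one probing "
        "question at a time, point out logical drift when you see it, and make "
        "the student justify each step of their reasoning."
    )

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": SOCRATIC_SYSTEM_PROMPT},
            {"role": "user", "content": "I think this patient's chest pain is "
                                        "musculoskeletal, so no workup is needed."},
        ],
    )
    print(response.choices[0].message.content)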
2. amlib No.44475766
The doctor analogy still applies: that "Socratic tutor" LLM is actually a charlatan that sounds, to the untrained mind, like a competent person, but is in actuality a complete farce. I still wouldn't trust it.
3. scrubs No.44479986
The doctor example is a good one because it puts the consumer at risk - it's no longer a parlor game. Can an LLM do the same?