If an AI or AGI can look at a picture and see an apple, or (say) with an artificial nose smell an apple, or likewise feel or taste or hear* an apple, and at the same time identify that it is an apple and maybe even suggest baking an apple pie, then what else is there to be comprehended?
Maybe humans are just the same - far, far ahead of the current state of the tech, but still just the same really.
*when someone bites into it :-)
For me, what AI is missing is genuine out-of-the-box, revolutionary thinking. Models are trained on existing material, so perhaps it's fundamentally impossible for them to come up with a breakthrough in any field - barring circumstances where all the component parts of a breakthrough already exist and the AI is the first to connect the dots ("standing on the shoulders of giants", etc.).
It will confidently analyze and describe a chess position using advanced-sounding book techniques, but it's all fundamentally flawed, often missing things that are extremely obvious (like an undefended queen free to take) while trying to sound like a seasoned expert - that is, if it doesn't completely hallucinate moves that are not allowed by the rules of the game.
This is how it works in the other fields I am able to analyse. It's very good at sounding like it knows what it's doing, speaking at the level of a master's student or higher, but its actual appraisal of problems is often wrong in a way very different from how humans make mistakes. Another great example is getting it to solve old cryptic crosswords. It often already knows the answer from its training set, but it hasn't seen anyone write out the reasoning for that answer, so if you ask it to explain, it makes nonsensical leaps (claiming "birch" rhymes with "tyre" levels of nonsense).
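(To make the chess legality point concrete: a minimal sketch, assuming the python-chess library, of how you might check a model-suggested move against the actual rules before trusting the rest of its analysis. The "suggested_san" string is a hypothetical stand-in for whatever the model returned; there is no real model call here.)

    # Check whether a model-suggested move is legal in a given position.
    # Assumes the python-chess package ("pip install chess").
    import chess

    board = chess.Board()        # starting position; pass a FEN string for a real game
    suggested_san = "Qxf7+"      # hypothetical model output in standard algebraic notation

    try:
        move = board.parse_san(suggested_san)   # raises ValueError if the move is invalid or illegal here
        print(f"{suggested_san} is legal in this position: {move.uci()}")
    except ValueError:
        print(f"{suggested_san} is not a legal move in this position")
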
At that point, the question of whether the model really does understand is pointless. We might as well argue about whether humans understand.
This is just a thing to say; it has no substantive meaning.
- What does "sufficiently" mean?
- What is "functionally equivalent"?
- And what even is "understanding"?
All just vague hand-waving. We're not philosophizing here; we're talking about practical results, and clearly, in the current context, it does not deliver in that area.
> At that point, the question of whether the model really does understand is pointless.
You're right, it is pointless, because you are suggesting something that doesn't exist. And the current models cannot understand.