
197 points baylearn | 2 comments
bsenftner ◴[] No.44471917[source]
Also, AGI is not just around the corner. We need artificial comprehension for that, and we don't even have a theory how comprehension works. Comprehension is the fusing of separate elements into new functional wholes, dynamically abstracting observations, evaluating them for plausibility, and reconstituting the whole - and all instantaneously, for security purposes, of every sense constantly. We have no technology that approaches that.
replies(5): >>44472191 #>>44473051 #>>44473180 #>>44474879 #>>44476456 #
tenthirtyam ◴[] No.44472191[source]
You'd need to define "comprehension" - it's a bit like the Chinese room / Turing test.

If an AI or AGI can look at a picture and see an apple, or (say) with an artificial nose smell an apple, or likewise feel or taste or hear* an apple, and at the same time identify that it is an apple and maybe even suggest baking an apple pie, then what else is there to be comprehended?

Maybe humans are just the same - far, far ahead of the state of the tech, but still just the same really.

*when someone bites into it :-)

For me, what AI is missing is genuine out-of-the-box revolutionary thinking. They're trained on existing material, so perhaps it's fundamentally impossible for AIs to think up a breakthrough in any field - barring circumstances where all the component parts of a breakthrough already exist and the AI is the first to connect the dots ("standing on the shoulders of giants" etc).

replies(4): >>44472345 #>>44472378 #>>44472490 #>>44472942 #
RugnirViking ◴[] No.44472942[source]
It's very, very good at sounding like it understands stuff - almost as good as actually understanding stuff in some fields, sure. But it's definitely not the same.

It will confidently analyze and describe a chess position using advanced-sounding book techniques, but it's all fundamentally flawed, often missing things that are extremely obvious (like an undefended queen free to take) while trying to sound like a seasoned expert - that is, if it doesn't completely hallucinate moves that are not allowed by the rules of the game.
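The hallucinated-move failure is easy to check mechanically, which is what makes it so damning: the model's prose sounds expert, but a few lines of code can falsify it. A minimal sketch (this toy checker validates knight geometry only; the square names and the "hallucinated" move are illustrative, and a real harness would use a full rules engine such as the python-chess library):

```python
# Toy legality check: validate a claimed knight move against the rules
# instead of trusting confident-sounding commentary. Knight-only; no
# board state, captures, or pins - just the L-shaped jump geometry.

def is_knight_move(src: str, dst: str) -> bool:
    """True if src -> dst (e.g. 'g1' -> 'f3') is a legal knight jump."""
    file_dist = abs(ord(src[0]) - ord(dst[0]))  # distance across files a-h
    rank_dist = abs(int(src[1]) - int(dst[1]))  # distance across ranks 1-8
    return sorted((file_dist, rank_dist)) == [1, 2]

print(is_knight_move("g1", "f3"))  # True: a normal developing move
print(is_knight_move("g1", "e4"))  # False: a hallucinated jump
```

The point is not the checker itself but the asymmetry: the LLM's description of the position can be arbitrarily fluent while failing a test this trivial.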

The same pattern holds in other fields I am able to analyse. It's very good at sounding like it knows what it's doing, speaking at the level of a masters student or higher, but its actual appraisal of problems is often wrong, in a way very different from how humans make mistakes. Another great example is getting it to solve cryptic crosswords from back in the day. It often knows the answer already from its training set, but it hasn't seen anyone write out the reasoning for the answer, so if you ask it to explain, it makes nonsensical leaps (claims on the level of "birch" rhyming with "tyre").

replies(3): >>44473642 #>>44473738 #>>44477472 #
DiogenesKynikos ◴[] No.44473738[source]
A sufficiently good simulation of understanding is functionally equivalent to understanding.

At that point, the question of whether the model really does understand is pointless. We might as well argue about whether humans understand.

replies(3): >>44474337 #>>44475199 #>>44475200 #
RugnirViking ◴[] No.44475200[source]
That's the point though: it's not sufficient. Not even slightly. It constantly makes obvious mistakes and cannot keep things coherent.

I was almost going to explicitly mention your point but deleted it because I thought people would be able to understand.

This is not philosophy/theology, sitting around handwringing about whether a sufficiently powerful LLM could dance on the head of a pin. We're talking about a thing that actually exists, that you can actually test. In a whole lot of real-world scenarios you throw at it, it fails in strange and unpredictable ways - ways it will swear up and down it did not fail in. It will lie to your face. It's convincing. But then it will lose at chess, it will fuck up running a vending machine business, it will get lost coding and reinvent the same functions over and over, it will give completely nonsensical answers to crossword puzzles.

This is not an unlimited intelligence; it is a deeply flawed two-year-old that just so happens to have read the entire output of human writing. It's a fundamentally different mind from ours, and it makes different mistakes. It sounds convincing and yet fails, constantly. It will tell you a four-step explanation of how it's going to do something, then fail to execute four simple steps.

replies(1): >>44475369 #
bsenftner ◴[] No.44475369[source]
Which is exactly why it is insane that the industry is hell-bent on creating autonomous automation through LLMs. Rube Goldberg machines are what will be created, and if civilization survives that insanity, it will be looked back upon as one grand stupid era.