
197 points baylearn | 30 comments
1. bsenftner ◴[] No.44471917[source]
Also, AGI is not just around the corner. We need artificial comprehension for that, and we don't even have a theory of how comprehension works. Comprehension is the fusing of separate elements into new functional wholes: dynamically abstracting observations, evaluating them for plausibility, and reconstituting the whole - all of it instantaneously, constantly, across every sense, for security purposes. We have no technology that approaches that.
replies(5): >>44472191 #>>44473051 #>>44473180 #>>44474879 #>>44476456 #
2. tenthirtyam ◴[] No.44472191[source]
You'd need to define "comprehension" - it's a bit like the Chinese room / Turing test.

If an AI or AGI can look at a picture and see an apple, or (say) with an artificial nose smell an apple, or likewise feel or taste or hear* an apple, and at the same time identify that it is an apple and maybe even suggest baking an apple pie, then what else is there to be comprehended?

Maybe humans are just the same - far, far ahead of the state of the tech, but still just the same really.

*when someone bites into it :-)

For me, what AI is missing is genuine out-of-the-box revolutionary thinking. They're trained on existing material, so perhaps it's fundamentally impossible for AIs to think up a breakthrough in any field - barring circumstances where all the component parts of a breakthrough already exist and the AI is the first to connect the dots ("standing on the shoulders of giants" etc).

replies(4): >>44472345 #>>44472378 #>>44472490 #>>44472942 #
3. Touche ◴[] No.44472345[source]
They might not be capable of ingenuity, but they can spot patterns humans can miss. And that accelerates AI research, where it might help invent the next AI that helps invent the next AI that finally can think outside the box.
4. bsenftner ◴[] No.44472378[source]
I do define it, right up there in my OP. It's subtle, you missed it. Everybody misses it, because comprehension is like air, we swim in it constantly, to the degree the majority cannot even see it.
5. add-sub-mul-div ◴[] No.44472490[source]
Was that the intention of the Chinese room concept, to ask "what else is there to be comprehended?" after producing a translation?
6. RugnirViking ◴[] No.44472942[source]
It's very, very good at sounding like it understands stuff - almost as good as actually understanding stuff in some fields, sure. But it's definitely not the same.

It will confidently analyze and describe a chess position using advanced-sounding book techniques, but it's all fundamentally flawed, often missing things that are extremely obvious (like an undefended queen free to take) while trying to sound like it's a seasoned expert - that is, if it doesn't completely hallucinate moves that are not allowed by the rules of the game.

This is how it works in other fields I am able to analyse. It's very good at sounding like it knows what it's doing, speaking at the level of a masters student or higher, but its actual appraisal of problems is often wrong in a way very different from how humans make mistakes. Another great example is getting it to solve cryptic crosswords from back in the day. It often already has the answer in its training set, but it hasn't seen anyone write out the reasoning for the answer, so if you ask it to explain, it makes nonsensical leaps (nonsense on the level of claiming "birch" rhymes with "tyre").
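
The illegal-move part is easy to check for yourself. A minimal sketch using the python-chess library - here `llm_move_san` is just a stand-in for whatever move the model suggested, and the position is an arbitrary example:

    # pip install chess
    import chess

    board = chess.Board()   # standard starting position
    board.push_san("e4")    # play a couple of real moves first
    board.push_san("e5")

    llm_move_san = "Qxf7"   # stand-in for the model's suggested move

    try:
        board.push_san(llm_move_san)  # raises if the SAN is not legal here
        print("legal:", llm_move_san)
    except ValueError:                # covers illegal, ambiguous, unparseable
        print("not a legal move in this position:", llm_move_san)

Feed an LLM's annotated game through this move by move and the hallucinated moves tend to show up quickly.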

replies(3): >>44473642 #>>44473738 #>>44477472 #
7. andy99 ◴[] No.44473051[source]
Another way to put it is we need Artificial Intelligence. Right now the term has been co-opted to mean prediction (and more commonly transcript generation). The stuff you're describing is what's commonly thought of as intelligence; it's too bad we need a new word for it.
replies(1): >>44475500 #
8. Workaccount2 ◴[] No.44473180[source]
We only have two computational tools to work with - deterministic and random behavior. So whatever comprehension/understanding/original thought/consciousness is, it's some algorithmic combination of deterministic and random inputs/outputs.

I know that sounds broad or obvious, but people seem to easily and unknowingly wander into "Human intelligence is magically transcendent".

replies(3): >>44474303 #>>44474378 #>>44475642 #
9. filleduchaos ◴[] No.44473642{3}[source]
If anyone wants to see the chess comprehension breakdown in action, the YouTuber GothamChess occasionally puts out videos where he plays against a new or recently-updated LLM.

Hanging a queen is not evidence of a lack of intelligence - even the very best human grandmasters will occasionally do that. But in pretty much every single video, the LLM loses the plot entirely after barely a couple dozen moves and starts to resurrect already-captured pieces, move pieces to squares they can't get to, etc - all while keeping the same confident "expert" tone.

10. DiogenesKynikos ◴[] No.44473738{3}[source]
A sufficiently good simulation of understanding is functionally equivalent to understanding.

At that point, the question of whether the model really does understand is pointless. We might as well argue about whether humans understand.

replies(3): >>44474337 #>>44475199 #>>44475200 #
11. omnicognate ◴[] No.44474303[source]
What you state is called the Physical Church-Turing Thesis, and it's neither obvious nor necessarily true.

I don't know if you're making it, but the simplest mistake would be to think that you can prove that a computer can evaluate any mathematical function. If that were the case then "it's got to be doable with algorithms" would have a fairly strong basis. Anything the mind does that an algorithm can't would have to be so "magically transcendent" that it's beyond the scope of the mathematical concept of "function". However, this isn't the case. There are many mathematical functions that are proven to be impossible for any algorithm to implement. Look up uncomputable functions if you're unfamiliar with this.

The second mistake would be to think that we have some proof that all physically realisable functions are computable by an algorithm. That's the Physical Church-Turing Thesis mentioned above, and as the name indicates it's a thesis, not a theorem. It is a statement about physical reality, so it could only ever be empirically supported, not some absolute mathematical truth.

It's a fascinating rabbit hole if you're interested - what we actually do and do not know for sure about the generality of algorithms.
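
The canonical example is the halting function: "given a program and an input, does it terminate?" is a perfectly well-defined mathematical function that provably no algorithm computes. A sketch of the classic diagonalization argument in Python, with a hypothetical `halts` oracle standing in for the impossible algorithm:

    def halts(f, x) -> bool:
        # Hypothetical oracle: True iff f(x) eventually terminates.
        # No real implementation can exist; assume one does, for contradiction.
        ...

    def paradox(f):
        if halts(f, f):   # if f would halt when fed its own source...
            while True:   # ...then loop forever
                pass
        # ...otherwise halt immediately

    # paradox(paradox) halts if and only if it does not halt - a
    # contradiction, so halts() cannot exist as a program. The function
    # is still perfectly well-defined mathematically; it just isn't
    # computable by any algorithm.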

replies(1): >>44481070 #
12. andrei_says_ ◴[] No.44474337{4}[source]
In the movie Catch Me If You Can, Leo DiCaprio's character wears a surgeon's gown and confidently says "I concur".

What I’m hearing here is that you are willing to get your surgery done by him and not by one of the real doctors - if he is capable of pronouncing enough doctor-sounding phrases.

replies(2): >>44475431 #>>44477894 #
13. RaftPeople ◴[] No.44474378[source]
> but people seem to easily and unknowingly wander into "Human intelligence is magically transcendent".

But the poster you responded to didn't say it's magically transcendent, they just pointed out that there are many significantly hard problems that we don't have solutions for yet.

14. zxcb1 ◴[] No.44474879[source]
Translation Between Modalities is All You Need

~2028

15. timacles ◴[] No.44475199{4}[source]
> A sufficiently good simulation of understanding is functionally equivalent to understanding.

This is just a thing to say that has no substantial meaning.

  - What does "sufficiently" mean?
  - What is "functionally equivalent"?
  - And what even is "understanding"?
All just vague hand-waving.

We're not philosophizing here; we're talking about practical results, and clearly, in the current context, it does not deliver in that area.

> At that point, the question of whether the model really does understand is pointless.

You're right, it is pointless, because you are suggesting something that doesn't exist. And the current models cannot understand.

replies(2): >>44477902 #>>44478072 #
16. RugnirViking ◴[] No.44475200{4}[source]
That's the point though: it's not sufficient. Not even slightly. It constantly makes obvious mistakes and cannot keep things coherent.

I was almost going to explicitly mention your point but deleted it because I thought people would be able to understand.

This is not philosophy/theology, sitting around hand-wringing about "oh, but would a sufficiently powerful LLM be able to dance on the head of a pin". We're talking about a thing that actually exists, that you can actually test. In a whole lot of real-world scenarios that you throw at it, it fails in strange and unpredictable ways - ways that it will swear up and down it did not do. It will lie to your face. It's convincing. But then it will lose at chess, it will fuck up running a vending machine business, it will get lost coding and reinvent the same functions over and over, it will produce completely nonsensical answers to crossword puzzles.

This is not an intelligence that is unlimited; it is a deeply flawed two-year-old that just so happens to have read the entire output of human writing. It's a fundamentally different mind to ours, and makes different mistakes. It sounds convincing and yet fails, constantly. It will tell you a four-step explanation of how it's going to do something, then fail to execute four simple steps.

replies(1): >>44475369 #
17. bsenftner ◴[] No.44475369{5}[source]
Which is exactly why it is insane that the industry is hell-bent on creating autonomous automation through LLMs. Rube Goldberg machines are what will be created, and if civilization survives that insanity it will be looked back upon as one grand, stupid era.
18. bsenftner ◴[] No.44475431{5}[source]
If that's what you're hearing, then you're not thinking it through. Of course one would not want an AI acting as one's real doctor, but a medical or law school graduate studying for a license sure would appreciate a Socratic tutor in their specialization. Likewise, on the job in a technical specialization, a sounding board is of more value when it follows along, potentially as a virtual board of debate, and questions when logical drift occurs. It's not AI thinking for one; it is AI critically assisting one's exploration through Socratic debate. Do not place AI in charge of critical decisions, but do place it in the assistance of people figuring out such situations.
replies(2): >>44475766 #>>44479986 #
19. bsenftner ◴[] No.44475500[source]
No, we have the intelligence part; we know what to do when we have the answers. What we don't know is how to derive the answers without human intervention at all, not even our written knowledge. Artificial comprehension will not require anything beyond senses - observations through time - which build a functional world model from observation and interaction, capable of navigating the world as a communicating participant. Note I'm not talking about agency, also called "will", which is separate from both intelligence and comprehension. Where intelligence is "knowing", comprehension is the derivation of knowing from observation and interaction alone, and agency is the entirely separate ability to choose action over inaction - to employ comprehension to affect the world, and for what purpose?
20. __loam ◴[] No.44475642[source]
We don't understand human intelligence enough to make any comparisons like this.
replies(1): >>44479962 #
21. amlib ◴[] No.44475766{6}[source]
The doctor analogy still applies: that "Socratic tutor" LLM is actually a charlatan that sounds, to the untrained mind, like a competent person, but in actuality is a complete farce. I still wouldn't trust that.
22. ekianjo ◴[] No.44476456[source]
> We need artificial comprehension for that, and we don't even have a theory how comprehension works.

Not sure we need it. The counterexample is the LLM itself: we had absolutely zero idea that attention heads would bring such benefits down the road.

23. ◴[] No.44477472{3}[source]
24. DiogenesKynikos ◴[] No.44477894{5}[source]
Leo DiCaprio's character says nothing of substance in that scene. If you ask an LLM a question about most subjects, it will give you a highly intelligent, substantive answer.
replies(1): >>44478506 #
25. DiogenesKynikos ◴[] No.44477902{5}[source]
The current models obviously understand a lot. They would easily understand your comment, for example, and give an intelligent answer in response. The whole "the current models cannot understand" mantra is more religious than anything.
26. og_kalu ◴[] No.44478072{5}[source]
>We're not philosophizing here, we're talking about practical results and clearly, in the current context, it does not deliver in that area.

Except it clearly does, in a lot of areas. You can't take a 'practical results trump all' stance and come out of it saying LLMs understand nothing. They understand a lot of things just fine.

27. vrighter ◴[] No.44478506{6}[source]
It gives you an answer. Not a highly intelligent one, just an answer. And if it doesn't know what it's talking about, it'll still give an answer.
28. scrubs ◴[] No.44479962{3}[source]
Well yes, but we gotta try something. The fact that AGI or human intelligence is an unknown in any engineering sense is also why it thrives in ways so amenable to nonsense. Still, we gotta try.
29. scrubs ◴[] No.44479986{6}[source]
The doctor example is good because it puts the consumer at risk; now it's not a parlor game. Now, can an LLM do the same?
30. Workaccount2 ◴[] No.44481070{3}[source]
What I am stating is a step above Church-Turing: that the constituents of any physical process are either deterministic or random, computability aside.

From a purely practical standpoint, we don't know of any non-computable physical systems, and it's just so painfully god-of-the-gaps to say "the brain could contain new physics that transcends everything we know" - even though this has never proved true for any other complex system we've gained knowledge about. It has all proved computable.