irrational No.42139106
> The AGI bubble is bursting a little bit

I'm surprised that any of these companies consider what they are working on to be Artificial General Intelligences. I'm probably wrong, but my impression was that AGI meant the AI is self-aware like a human. An LLM hardly seems like something that will lead to self-awareness.

jedberg No.42139186
Whether self-awareness is a requirement for AGI definitely gets more into the Philosophy department than the Computer Science department. I'm not sure everyone even agrees on what AGI is, but a common test is "can it do what humans can".

For example, the article says it can't do coding exercises outside its training set. That would definitely be on the "AGI checklist": basically, doing anything outside the training set would be on that list.

sourcepluck No.42139946
Searle's Chinese Room Argument springs to mind:

  https://plato.stanford.edu/entries/chinese-room/

The idea that "human-like" behaviour will lead to self-awareness is both unproven (it can't be proven until it happens) and impossible to disprove (like Russell's teapot).

Yet a common assumption of many of the people running these companies, investing in them, or investing their development time in these technologies is precisely that some sort of explosion of superintelligence is likely, or even inevitable.

It surely is possible, but stretching that to "likely" seems a bit much when you consider how imperfectly we understand things like consciousness and the mind.

Of course there are people who have essentially religious reactions to the notion that there may be limits to certain domains of knowledge. Nonetheless, I think that's the reality we're faced with here.

abeppu No.42140395
> The idea that "human-like" behaviour will lead to self-awareness is both unproven (it can't be proven until it happens) and impossible to disprove (like Russell's teapot).

I think Searle's view was that:

- while it cannot be disproven, the Chinese Room argument was meant to provide reasons against believing it

- the "it can't be proven until it happens" part is misunderstanding: you won't know if it happens because the objective, externally available attributes don't indicate whether self-awareness (or indeed awareness at all) is present

sourcepluck No.42141503
The short version is that I don't disagree with your interpretation of Searle, and my paragraphs immediately following the link weren't meant to be a direct description of the point he makes with the Chinese Room thought experiment.

> while it cannot be disproven, the Chinese Room argument was meant to provide reasons against believing it

Yes, like Russell's teapot. I also think that's what Searle means.

> the "it can't be proven until it happens" part is misunderstanding: you won't know if it happens because the objective, externally available attributes don't indicate whether self-awareness (or indeed awareness at all) is present

Yes, agreed, I believe that's what Searle is saying too. I think I was being ambiguous here - I wanted to say that even if you forgave the AI maximalists for ignoring all the relevant philosophical work, the notion that "appearing human-like" inevitably leads to what would actually be "consciousness" or "intelligence" is more than a big claim.

Searle goes further, and I'm not sure if I follow him all the way, personally, but it's a side point.