mellosouls ◴[] No.43745240[source]
The capabilities of AI post-GPT-3 have become extraordinary and clearly, in many cases, superhuman.

However (as the article admits) there is still no general agreement on what AGI is, or how (or even whether) we can get there from here.

What there is, instead, is a growing and often naïve excitement that anticipates it as coming into view, and unfortunately that will be accompanied by the hype-merchants desperate to be first to "call it".

This article seems reasonable in some ways but unfortunately falls into the latter category with its title and sloganeering.

"AGI" in the title of any article should be seen as a cautionary flag. On HN - if anywhere - we need to be on the alert for this.

replies(13): >>43745398 #>>43745959 #>>43746159 #>>43746204 #>>43746319 #>>43746355 #>>43746427 #>>43746447 #>>43746522 #>>43746657 #>>43746801 #>>43749837 #>>43795216 #
Zambyte ◴[] No.43746204[source]
I think a reasonable definition of intelligence is the application of reason to knowledge. An example of a system that is highly knowledgeable but has little to no reason would be an encyclopedia. An example of a system that is highly capable of reason but has little knowledge would be a calculator. Intelligent systems demonstrate both.

Systems that have general intelligence are ones that are capable of applying reason to an unbounded domain of knowledge. Examples of such systems include libraries, wikis, and forums like HN. These systems are not AGI, because the reasoning agents in each of them are organic (humans); they are more like cyborg general intelligences.

Artificial general intelligences are just systems that are fully artificial (i.e. computer programs) and that can apply reason to an unbounded domain of knowledge. We're here, and we have been for years. AGI sets no minimum for how good the reasoning must be, but it's obvious to anyone who has used modern generative intelligence systems like LLMs that the technology can be used to reason about an unbounded domain of knowledge.

If you don't want to take my word for it, maybe Peter Norvig can be more convincing: https://www.noemamag.com/artificial-general-intelligence-is-...

replies(3): >>43746635 #>>43746665 #>>43749304 #
jimbokun ◴[] No.43746635[source]
Excellent article and analysis. Surprised I missed it.

It is very hard to argue with Norvig’s arguments that AGI has been around since at least 2023.

replies(1): >>43749356 #
littlestymaar ◴[] No.43749356[source]
It's not: however you define AGI, you cannot just ignore the key letter of the three-letter acronym: the G stands for “General”.

You can argue that, for the first time in history, we have an AI that deserves the name (unlike Deep Blue or AlphaGo, which aren't really about intelligence at all), but you cannot call it Artificial GENERAL Intelligence before it overcomes the “jagged intelligence” syndrome.

replies(1): >>43749719 #
Zambyte ◴[] No.43749719[source]
It sounds like you have a different definition of "general" in the context of intelligence from the one I shared. What is it?
replies(1): >>43750472 #
Jensson ◴[] No.43750472[source]
General intelligence means it can do the same intellectual tasks humans can, including learning to do different kinds of intellectual jobs. Current AI can't learn to do most jobs the way a human kid can, so it's not AGI.

This is the original definition of AGI. Some data scientists try to move the goalposts to something else and call something that can't replace humans "AGI".

This is a very simple definition, and it is easy to tell when it has been fulfilled, because at that point companies can operate without humans.

replies(1): >>43750593 #
Zambyte ◴[] No.43750593[source]
What intellectual tasks can humans do that language models can't? Particularly agentic language model frameworks.
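
To be concrete about what I mean by "agentic": roughly a loop in which the model can request tool calls and see their results until it produces an answer. A minimal sketch in Python, with the model call stubbed out (not any particular library or vendor API; the names here are illustrative):

    # Minimal sketch of an "agentic" LLM loop: the model is asked what to do
    # next and may request tool calls until it returns a final answer.
    # call_model() is a placeholder for whatever chat-completion API you use.

    def call_model(messages: list[dict]) -> dict:
        """Stub: return {"tool": name, "args": ...} or {"content": final_answer}."""
        raise NotImplementedError("wire up a real model here")

    # Toy tool registry; the restricted eval is for illustration only.
    TOOLS = {"calculator": lambda expr: str(eval(expr, {"__builtins__": {}}))}

    def run_agent(task: str, max_steps: int = 10) -> str:
        messages = [{"role": "user", "content": task}]
        for _ in range(max_steps):
            reply = call_model(messages)
            if "tool" in reply:                      # model asked to use a tool
                result = TOOLS[reply["tool"]](reply["args"])
                messages.append({"role": "tool", "content": result})
            else:                                    # model gave a final answer
                return reply["content"]
        return "stopped after max_steps without an answer"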
replies(3): >>43750658 #>>43752538 #>>43755623 #
ben_w ◴[] No.43752538[source]
Weird, spiky things that are hard to characterise even within one specific model, and where the ability to reliably identify such failures itself causes subsequent models to stop failing at them.

A few months ago, I'd have said "create an image with coherent text"*, but that's now changed. At least in English — trying to get ChatGPT's new image mode to draw the 狐 character sometimes works, and sometimes goes weird in the way Latin characters used to.

* If the ability to generate images doesn't count as "language model", then one intellectual task they can't do is "draw images"; see Simon Willison's pelican challenge: https://simonwillison.net/tags/pelican-riding-a-bicycle/