265 points ctoth | 7 comments
1. logicchains ◴[] No.43745171[source]
I'd argue that it's not productive to use any definition of AGI coined after 2020, to avoid the fallacy of shifting the goalposts.
replies(2): >>43745346 #>>43746649 #
2. Borealid ◴[] No.43745346[source]
I think there's a single definition of AGI that will stand until the singularity:

"An AGI is a human-created system that demonstrates iteratively improving its own conceptual design without further human assistance".

Note that a "conceptual design" here does not include tweaking weights within an already-externally-established formula.

My reasoning is thus:

1. A system that is only capable of acting with human assistance cannot have its own intelligence disentangled from the humans'

2. A system that is only intelligent enough to solve problems that somehow exclude problems with itself is not "generally" intelligent

3. A system that can only generate a single round of improvements to its own design has not demonstrated genuine improvement to that design, because if iteration N+1 were truly superior to iteration N, it would itself be able to produce iteration N+2

4. A system that is not capable of changing its own design is incapable of iterative improvement, as there is a maximum efficacy within any single framework

5. A system that could improve itself in theory and fails to do so in practice has not demonstrated intelligence

It's pretty clear that no current-day system has hit this milestone; if some program had, there would no longer be a need for continued investment in algorithm design (or computer science, or most of humanity...).

A program that randomly mutates its own code could self-improve in theory but fails to do so in practice.
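
To make that concrete, here's a toy sketch in Python (entirely contrived by me; the starting program, tests, and mutation scheme are made up for illustration) of a program that "improves itself" by random mutation, keeping a mutant only when it scores strictly better on a test suite:

    import random

    # Starting "program": the identity function. A genuinely better program of
    # the same length exists in the search space ("solve=sorted" padded with
    # spaces), so improvement is possible in principle.
    SOURCE = "def solve(xs): return xs"
    TESTS = [([3, 1, 2], [1, 2, 3]), ([5, 4], [4, 5])]
    CHARS = "abcdefghijklmnopqrstuvwxyz()[]:,=. "

    def fitness(src):
        """Number of test cases the candidate passes; -1 if it doesn't even run."""
        env = {}
        try:
            exec(src, env)
            return sum(env["solve"](list(xs)) == want for xs, want in TESTS)
        except Exception:
            return -1  # almost every random mutant fails to parse or crashes

    best, best_fit, accepted = SOURCE, fitness(SOURCE), 0
    for _ in range(100_000):
        i = random.randrange(len(best))
        mutant = best[:i] + random.choice(CHARS) + best[i + 1:]  # flip one character
        score = fitness(mutant)
        if score > best_fit:  # keep only strict improvements
            best, best_fit, accepted = mutant, score, accepted + 1

    print(f"improvements accepted: {accepted}, tests passed: {best_fit}/{len(TESTS)}")

Run it and the accepted-improvement counter will almost certainly stay at zero: a better program exists, but a blind single-character hill climb never finds it. That's the gap between "could self-improve in theory" and "demonstrates self-improvement in practice".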

I don't think these goalposts have moved in the past or need to move in the future. This is what it takes to cause the singularity. The movement recently has been people trying to sell something less than this as an AGI.

replies(3): >>43745953 #>>43746239 #>>43747342 #
3. logicchains ◴[] No.43745953[source]
AGI means "artificial general intelligence"; it's got nothing to do with the singularity (which requires "artificial superintelligence", ASI). Requiring AGI to have capabilities that most humans lack is moving the goalposts with respect to how it was originally defined.
replies(1): >>43746117 #
4. jpc0 ◴[] No.43746117{3}[source]
I don't think this requires capabilities humans do not have; to me this is the one capability humans distinctly have over LLMs: the ability to introspect and shape their own future.

I feel this definition doesn't require a current LLM to be able to change its own workings, only to be able to generate a guided next generation.

It's possible that LLMs can surpass human beings, purely because I believe we will inevitably be limited by short-term storage constraints that LLMs will not share. It will be a bandwidth vs. throughput question: an LLM will have a much larger, although slightly slower, store of knowledge than humans have, but it will be much quicker than a human at looking up and validating the data.

We aren't there yet.

5. gom_jabbar ◴[] No.43746239[source]
> The movement recently has been people trying to sell something less than this as an AGI.

Selling something that does not yet exist is an essential part of capitalism, which - according to the main thesis of philosophical Accelerationism - is (teleologically) identical to AI. [0] It's sometimes referred to as Hyperstition, i.e. fictions that make themselves real.

[0] https://retrochronic.com

6. TheAceOfHearts ◴[] No.43746649[source]
I really dislike this framing. Historically we've been very confused about what AGI means because we don't actually understand it. We're still confused, so most working definitions have been iterated upon as models acquire new capabilities. It's akin to searching for something in the fog of war: you set a course or destination because you think that's the approximate direction where the thing will be found, but then you get there and realize you were wrong, so you continue exploring.

Most people have a rough idea of what AGI means, but we still haven't figured out an exact definition that lines up with reality. As we continue exploring the idea space, we'll keep figuring out which parameters place boundaries and requirements on what AGI means.

There's no reason to just accept an ancient definition from someone who was confused and didn't know any better at the time they coined it. Older definitions were just shots in the dark that pointed in a general direction, but there's no guarantee that they would hit upon the exact destination.

7. esafak ◴[] No.43747342[source]
You're describing learning, not intelligence.