
265 points | ctoth | 1 comment
logicchains No.43745171
I'd argue that it's not productive to use any definition of AGI coined after 2020, to avoid the fallacy of shifting the goalposts.
replies(2): >>43745346, >>43746649
Borealid No.43745346
I think there's a single definition of AGI that will stand until the singularity:

"An AGI is a human-created system that demonstrates iteratively improving its own conceptual design without further human assistance".

Note that a "conceptual design" here does not include tweaking weights within an already-externally-established formula.

My reasoning is thus:

1. A system that is only capable of acting with human assistance cannot have its own intelligence disentangled from the humans'

2. A system that is only intelligent enough to solve problems that somehow exclude problems with itself is not "generally" intelligent

3. A system that can only generate a single round of improvements to its own designs has not demonstrated iterative improvement of those designs, because if iteration N+1 were truly superior to iteration N, it would be able to produce iteration N+2

4. A system that is not capable of changing its own design is incapable of iterative improvement, as there is a maximum efficacy within any single framework

5. A system that could improve itself in theory and fails to do so in practice has not demonstrated intelligence

It's pretty clear that no current-day system has hit this milestone; if some program had, there would no longer be a need for continued investment in algorithm design (or computer science, or most of humanity...).

A program that randomly mutates its own code could self-improve in theory but fails to do so in practice.
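
(A toy sketch of that point, with made-up details: the program below randomly mutates the source of a tiny function and keeps a change only if it still runs and scores higher. In principle it can stumble into an improvement; in practice nearly every mutation is a syntax error or a regression, which is the gap between "could in theory" and "demonstrated".)

    # Toy illustration (hypothetical): random self-mutation as "self-improvement".
    import random
    import string

    source = "def candidate():\n    return 1\n"

    def score(src):
        """Run the candidate and return its output, or None if the source is broken."""
        env = {}
        try:
            exec(src, env)
            return env["candidate"]()
        except Exception:
            return None

    best = score(source)
    for _ in range(10_000):
        i = random.randrange(len(source))
        # Replace one random character: a crude stand-in for "changing its design".
        mutated = source[:i] + random.choice(string.printable) + source[i + 1:]
        s = score(mutated)
        if isinstance(s, (int, float)) and s > best:
            source, best = mutated, s   # keep only strict improvements

    print("best score found:", best)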

I don't think these goalposts have moved in the past or need to move in the future. This is what it takes to cause the singularity. The movement recently has been people trying to sell something less than this as an AGI.

replies(3): >>43745953, >>43746239, >>43747342
gom_jabbar No.43746239
> The movement recently has been people trying to sell something less than this as an AGI.

Selling something that does not yet exist is an essential part of capitalism, which - according to the main thesis of philosophical Accelerationism - is (teleologically) identical to AI. [0] This dynamic is sometimes referred to as Hyperstition, i.e. fictions that make themselves real.

[0] https://retrochronic.com