
Getting 50% (SoTA) on ARC-AGI with GPT-4o

(redwoodresearch.substack.com)
394 points by tomduncalf | 3 comments
atleastoptimal No.40714152
I'll say what a lot of people seem to be denying. GPT-4 is an AGI, just a very bad one. Even GPT-1 was an AGI. There isn't a hard boundary between non-AGI and AGI. A lot of people wish there were, so they imagine absolutes regarding LLMs like "they cannot create anything new" or something like that. Just think: we consider humans a general intelligence, but obviously wouldn't consider an embryo or infant a general intelligence. So at what point does a human go from not generally intelligent to generally intelligent? And I don't mean an age or brain size, I mean a suite of testable abilities.

Intelligence is an ability that is naturally gradual and emerges over many domains. It is a collection of tools via which general abstract principles can be applied, not a singular universally applicable ability to think in abstractions. GPT-4, compared to a human, is a very, very small brain trained for the single purpose of textual thinking, with some image capabilities. Claiming that ARC is the absolute marker of general intelligence fails to account for the big picture of what intelligence is.

replies(7): >>40714189 #>>40714191 #>>40714565 #>>40715248 #>>40715346 #>>40715384 #>>40716518 #
blharr No.40714189
The "general" part of AGI implies it should be capable across all types of different tasks. I would definitely call it real Artificial Intelligence, but it's not general by any means.
replies(1): >>40714598 #
FeepingCreature No.40714598
It's capable of attempting all types of different tasks. That is a novel capability on its own. We're used to GPT's amusing failures at this point, so we forget that there is absolutely no input you could hand to a chess program that would get it to try to play checkers.

Not so with GPT. It will try, and fail, but that it tries at all was unimaginable five years ago.
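
To make that interface point concrete, here's a toy Python sketch (llm_respond is a hypothetical stub standing in for any chat endpoint, not a real API): a chess program's entire input surface is "a legal chess move, or an error", while an LLM's input surface is "any string at all".

    import chess  # python-chess: pip install chess

    def chess_engine_move(board: chess.Board, move_uci: str) -> chess.Board:
        # A chess program's entire input surface: a move in its domain, or nothing.
        move = chess.Move.from_uci(move_uci)  # raises on malformed input
        if move not in board.legal_moves:
            raise ValueError("illegal move")  # no concept of 'other games' exists here
        board.push(move)
        return board

    def llm_respond(prompt: str) -> str:
        # Hypothetical stub for an LLM endpoint. The point is the signature:
        # str -> str, with no domain restriction at all.
        return "...some attempt at an answer..."

    chess_engine_move(chess.Board(), "e2e4")           # fine
    # chess_engine_move(chess.Board(), "king me")      # raises: not chess
    llm_respond("Let's play checkers. You go first.")  # it will try something

Whether the attempt is any good is a separate question; the difference is that the second function has no way to refuse the genre of the request.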

replies(1): >>40715160 #
dahart No.40715160
It's amusing to me how the very language used to describe GPT anthropomorphizes it. GPT won't “attempt” or “try” anything on its own without a human telling it what to try; it has no agenda, no will, no agency, no self-reflection, no initiative, no fear, and no desire. It's all A and no I.
replies(2): >>40715775 #>>40719521 #
lupusreal No.40719521
That's not an interesting argument; all you're doing is staking out words that are reserved for beings with souls or something. It's like saying submarines can't swim. It's an argument about linguistics, not capabilities.
replies(1): >>40720610 #
dahart No.40720610
Disagree. The ability to “try” is a capability, and GPT doesn't have it. Everything in the world that we've called “intelligent” up to this point has had autonomy and self-motivation, and GPT doesn't have those things. GPT doesn't grow and doesn't learn from its mistakes. GPT won't act without a prompt; this is a basic fact of its design, and I'm not sure why people are suddenly confused about this.
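
To be concrete about “won't act without a prompt”: at serving time the model is a pure function from a token sequence to a next-token distribution, and something outside it has to drive the loop. A toy sketch (illustrative names; a uniform distribution stands in for the real forward pass):

    import random
    from typing import List

    VOCAB_SIZE = 256

    def next_token_distribution(context: List[int]) -> List[float]:
        # Stand-in for the transformer forward pass: tokens in, probabilities out.
        # (Uniform here; a real model computes this from the context.)
        return [1.0 / VOCAB_SIZE] * VOCAB_SIZE

    def generate(prompt_tokens: List[int], n_tokens: int) -> List[int]:
        # The whole "agent": a loop that only runs while a caller drives it.
        tokens = list(prompt_tokens)
        for _ in range(n_tokens):
            probs = next_token_distribution(tokens)
            tokens.append(random.choices(range(VOCAB_SIZE), weights=probs)[0])
        return tokens

    # Nothing here executes until some caller supplies a prompt and invokes
    # generate(); there is no event loop, timer, goal state, or self-trigger.
    print(generate([72, 105], 5))
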
replies(1): >>40729644 #
lupusreal No.40729644
"Try" doesn't imply autonomy or individual initiative because people can "try" to do things other people are ordering them to do.
replies(1): >>40733588 #
dahart No.40733588
Try rephrasing that without referring to people; otherwise you're supporting my original point. ;)

“Try” does imply autonomy: it implies there was a goal on the part of the individual doing the trying, a choice about whether and how to try, and the possibility of failing to achieve the goal. You can argue that the words “attempt” and “try” can technically be used of machines, but doing so still anthropomorphizes them. If you say “my poor car was trying to get up the hill”, you're being cheeky and giving your car a little personality. The car either will or will not make it up the hill, but it has no desire to “try” regardless of how you describe it, and, most importantly, it will not “try” to do anything without the human driving it.

You’re choosing to ignore my actual point and make this about semantics, which I agree is boring. Show me how GPT has any autonomy or agency of its own, and you can make this a more interesting conversation for both of us.