Getting 50% (SoTA) on ARC-AGI with GPT-4o

(redwoodresearch.substack.com)
394 points by tomduncalf | 1 comment
atleastoptimal ◴[] No.40714152[source]
I'll say what a lot of people seem to be denying: GPT-4 is an AGI, just a very bad one. Even GPT-1 was an AGI. There isn't a hard boundary between non-AGI and AGI. A lot of people wish there were, so they imagine absolutes about LLMs like "they cannot create anything new" or something like that. Just think: we consider humans a general intelligence, but we obviously wouldn't consider an embryo or infant a general intelligence. So at what point does a human go from not generally intelligent to generally intelligent? And I don't mean an age or brain size, I mean a suite of testable abilities.

Intelligence is an ability that is naturally gradual and emerges across many domains. It is a collection of tools via which general abstractive principles can be applied, not a singular, universally applicable ability to think in abstractions. GPT-4, compared to a human, is a very, very small brain trained for the single purpose of textual thinking, with some image capabilities. Claiming that ARC is the absolute marker of general intelligence fails to account for the big picture of what intelligence is.

replies(7): >>40714189 #>>40714191 #>>40714565 #>>40715248 #>>40715346 #>>40715384 #>>40716518 #
surfingdino ◴[] No.40714565[source]
> GPT-4 is an AGI, just a very bad one.

Then stop selling it as a tool to replace humans. A fast-moving car breaking through a barrier and flying off a cliff could be called "an airborne means of transportation, just a very bad one," yet nobody is suggesting it should replace school buses if only we could add longer wings to it. What the LLM community refuses to see is that there is a limit to the patience and the financing the rest of the world will grant you before you're told, "it doesn't work, mate."

> So at what point does a human go from not generally intelligent to generally intelligent?

Developmental psychology would be a good place to start looking for answers to this question. Also, setting aside the scientific approach and going with common sense: we do not allow young humans to operate complex machinery, decide who is allowed to become a doctor, or go to jail. Intelligence is not equally distributed across the human population, and some of us never have much of it, yet we function and have a role in society. Our behaviour, choices, preferences, and opinions are based not just on our intelligence but often on our past experiences and circumstances. Nor is intelligence the sole quality we use to compare ourselves against each other. A not very intelligent person is capable of making the right choice (an otherwise obedient soldier refusing to press the button and blow up a building full of children); similarly, a highly intelligent person can become a hard-to-catch serial criminal (a gynecologist impregnating his patients).

What intelligent and creative people hold against LLMs is not that they replace them, but that they replace them with a shit version of themselves, relegating thousands of years of human progress and creativity to the dustbin of models and layers of tweaks to the output that still produce unreliable crap. I think the person who wrote this sign summed it up best: https://x.com/gvanrossum/status/1802378022361911711

replies(3): >>40714760 #>>40714937 #>>40718593 #
empath75 ◴[] No.40718593[source]
> Then stop selling it as a tool to replace humans

I don't understand why people assume that the purpose of any tool is to "replace humans". Automation doesn't replace humans; it never has and never will. It simply does certain tasks that humans used to do, freeing people up to do different tasks. There is not a limited amount of work that can be done, and there isn't a limited amount of _creative_ work that can be done either. Even if AIs were good enough to do every creative task done by humans today (and they aren't, and won't be any time soon), that wouldn't mean that humans have nothing of value to produce, or that humans have been "replaced". There will always be work for humans to do, even in a universe where AIs have superhuman capabilities at every task.

In particular, human beings strongly value the opinions and creative output of _human beings_, simply because they are human and similar to them. That will never change, no matter how intelligent AIs get.

replies(2): >>40719125 #>>40720168 #