Getting 50% (SoTA) on Arc-AGI with GPT-4o

(redwoodresearch.substack.com)
394 points tomduncalf | 15 comments
asperous ◴[] No.40712326[source]
Having tons of people employ human ingenuity to manipulate existing LLMs into passing this one benchmark kind of defeats the purpose of testing for "AGI". The author points this out as it's more of a pattern matching test.

Though on the other hand, figuring out which manipulations are effective does teach us something. And since I think most problems boil down to pattern matching, creating a true, easily testable AGI test may be tough.
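To make "pattern matching" concrete: ARC tasks are small colored-grid puzzles distributed as JSON, with a few input/output training pairs plus a held-out test input. A toy sketch of the format (the grids and the rule here are made up for illustration, not a real task from the dataset):

    # Toy ARC-style task following the public dataset's JSON layout:
    # grids are lists of rows, cell values 0-9 are colors. The rule here
    # ("tile each row twice horizontally") is invented for illustration.
    example_task = {
        "train": [
            {"input": [[0, 0], [1, 0]], "output": [[0, 0, 0, 0], [1, 0, 1, 0]]},
            {"input": [[2, 3], [0, 0]], "output": [[2, 3, 2, 3], [0, 0, 0, 0]]},
        ],
        "test": [
            {"input": [[5, 0], [0, 5]]},  # the solver must infer the rule from the pairs above
        ],
    }

    def apply_rule(grid):
        """The hidden rule for this made-up task: repeat each row twice horizontally."""
        return [row + row for row in grid]

    for pair in example_task["train"]:
        assert apply_rule(pair["input"]) == pair["output"]
    print(apply_rule(example_task["test"][0]["input"]))  # [[5, 0, 5, 0], [0, 5, 0, 5]]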

replies(5): >>40712503 #>>40712555 #>>40712632 #>>40713120 #>>40713156 #
1. worstspotgain ◴[] No.40713120[source]
Let me play devil's advocate for a second. Let's suppose that with LLMs, we've actually invented an AGI machine that also happens to produce useful textual responses to a prompt.

This would sound more far-fetched if we knew exactly how they work, bit by bit. But we've been training them statistically, via the data-for-code tradeoff, and the question of what's actually going on inside is not yet satisfactorily answered.

In this hypothetical, for every accusation that an LLM passes a test because it's been coached to do so, there's a counter that the test was designed for "excessively human" AGI to begin with, maybe even with the unconscious purpose of having humans pass it preferentially. The attorney for the hypothetical AGI in the LLM would argue that there are tons of "LLM AGI" problems it can solve that a human would struggle with.

Fundamentally, the tests are only useful insofar as they let us improve AI. The evaluation of novel approaches to pass them, like this one, should err in the approaches' favor, IMO. A 'gotcha' test is the least useful kind.

replies(1): >>40713521 #
2. vlovich123 ◴[] No.40713521[source]
There’s every reason to believe that AGI is meaningfully different from LLMs, because humans do not take anywhere near this amount of training data to create inferences (that, and executive planning and creative problem solving are clear weak spots in LLMs).
replies(3): >>40713651 #>>40714011 #>>40718212 #
3. og_kalu ◴[] No.40713651[source]
>There’s every reason to believe that AGI is meaningfully different from LLMs because humans do not take anywhere near this amount of training data to create inferences

The human brain is millions of years of brute-force evolution in the making. Comparing it to a transformer, or really any other ANN, which essentially starts from scratch relatively speaking, doesn't mean much.

replies(1): >>40713702 #
4. infgeoax ◴[] No.40713702{3}[source]
Plus it's unclear if the amount of data used to "train" a human brain is really less than what GPT-4 used. Imagine all the inputs from all the senses of a human over a lifetime: the sound, light, touch, interactions with peers, etc.
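Rough, purely illustrative arithmetic (every number below is an assumption chosen for illustration, not a measurement) suggests the comparison isn't obviously lopsided:

    # Back-of-envelope comparison of lifetime sensory input vs. LLM training data.
    # All figures are illustrative assumptions only.
    SENSORY_BYTES_PER_SEC = 1e6              # assume ~1 MB/s of effective sensory input
    WAKING_SECONDS_PER_YEAR = 16 * 3600 * 365
    YEARS = 20

    human_bytes = SENSORY_BYTES_PER_SEC * WAKING_SECONDS_PER_YEAR * YEARS

    LLM_TOKENS = 1e13                        # assumed order of magnitude for a frontier model
    BYTES_PER_TOKEN = 4                      # rough average for text tokens

    llm_bytes = LLM_TOKENS * BYTES_PER_TOKEN

    print(f"human (guess): {human_bytes:.1e} bytes")  # ~4.2e14
    print(f"LLM   (guess): {llm_bytes:.1e} bytes")    # ~4.0e13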
replies(2): >>40714140 #>>40714993 #
5. visarga ◴[] No.40714011[source]
How many attempts have there been by humans to solve outstanding math or science problems? We're also kind of spamming ideas until one works out.
replies(1): >>40714044 #
6. vlovich123 ◴[] No.40714044{3}[source]
I’ll give you as much time as you want with an LLM and am 100% sure that it won’t solve a single outstanding complex math problem.
replies(2): >>40714251 #>>40718919 #
7. Jensson ◴[] No.40714140{4}[source]
But that is of little help when you want to train an LLM to do the job at your company. A human requires just a bit of tutorials and help; an LLM still requires an unknown amount of data to get up to speed, since we haven't reached that level yet.
replies(1): >>40714356 #
8. danielbln ◴[] No.40714251{4}[source]
I can say the same about myself, and I would probably consider myself generally intelligent.
replies(1): >>40714383 #
9. infgeoax ◴[] No.40714356{5}[source]
Yeah, humans can generalize much faster than an LLM with far fewer "examples", running on sandwiches and coffee.
replies(1): >>40714614 #
10. vlovich123 ◴[] No.40714383{5}[source]
There’s a meaningful difference between a silicon intelligence and an organic one. Every silicon intelligence is closer to an equally smart clone whereas organic ones have much more variance (not to mention different training).

Anyway, my point was that humans direct their energy better than randomly spamming ideas, at least since the innovation of the scientific method. But an LLM struggles deeply to perform reasoning.

11. logicchains ◴[] No.40714614{6}[source]
>Yeah, humans can generalize much faster than an LLM with far fewer "examples", running on sandwiches and coffee.

This isn't really true. If you give an LLM a large prompt detailing a new spoken language, programming language, or logical framework, with a couple of examples, and ask it to do something with it, it'll probably do a lot better than an average human who reads the same prompt and attempts the same task.
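A concrete version of that test, as a sketch: the invented mini-language, the prompt wording, and the model name are all assumptions for illustration, using the OpenAI Python client.

    # Sketch of the in-context-learning test described above: teach an invented
    # mini-language entirely in the prompt, then ask for a translation.
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    prompt = """You are learning a made-up language called Zef.
    Rules: nouns end in -ak, verbs end in -or, word order is verb-subject-object.
    Examples:
      "the dog sees the cat"  -> "seeor dogak catak"
      "the cat eats the fish" -> "eator catak fishak"
    Translate into Zef: "the fish sees the dog". Reply with only the translation."""

    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    print(resp.choices[0].message.content)  # expected along the lines of: "seeor fishak dogak"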

replies(1): >>40723361 #
12. alchemist1e9 ◴[] No.40714993{4}[source]
Don’t forget the lifetimes of all our ancestors as well. A lot of our intelligence is something we are born with, the result of many millions of years of evolution.
13. bongodongobob ◴[] No.40718212[source]
Our compute architecture has been brute-forced via an evolutionary algorithm over a billion years. An LLM approaching our capabilities in like a year is pretty fucking good.
14. coolspot ◴[] No.40718919{4}[source]
> I’ll give you as much time as you want with an LLM

With an infinite amount of time you can brute-force the whole search space with an LLM. Infinite monkeys with typewriters.
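A minimal sketch of what "brute-forcing the search space" with an LLM could look like: repeatedly sample candidate programs and keep the first one that reproduces all known examples. The ask_model() helper is hypothetical, and this is a generic loop, not the article's actual pipeline.

    # Generic "monkeys with typewriters" loop: sample candidate programs from a
    # model and keep the first one that reproduces every known input/output pair.
    # ask_model() is a hypothetical placeholder for an LLM call returning code.
    from typing import Callable, Optional

    def sample_and_verify(examples, ask_model: Callable[[int], str],
                          max_attempts: int = 1000) -> Optional[str]:
        for attempt in range(max_attempts):
            candidate_src = ask_model(attempt)
            namespace = {}
            try:
                exec(candidate_src, namespace)       # assumes sandboxed/trusted candidates
                solve = namespace["solve"]           # candidate must define solve(grid)
                if all(solve(inp) == out for inp, out in examples):
                    return candidate_src             # first program consistent with all examples
            except Exception:
                continue                             # malformed or failing candidates are skipped
        return None                                  # ran out of attempts / monkeys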

15. infgeoax ◴[] No.40723361{7}[source]
Hmm, but is it really "generalizing", or just pulling information from the training data? I think that's what this benchmark is really about: adapting quickly to something it has never seen before.