
Getting 50% (SoTA) on ARC-AGI with GPT-4o

(redwoodresearch.substack.com)
394 points | tomduncalf
mikeknoop No.40712282
(ARC Prize co-founder here).

Ryan's work is legitimately interesting and novel "LLM reasoning" research! The core idea:

> get GPT-4o to generate around 8,000 python programs which attempt to implement the transformation, select a program which is right on all the examples (usually there are 3 examples), and then submit the output this function produces when applied to the additional test input(s)

Roughly, he's implemented an outer loop and is using 4o to sample reasoning traces/programs conditioned on the training examples, then testing them. Hybrid DL + program synthesis approaches are solutions we'd love to see more of.
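For concreteness, here is a minimal sketch of that sample-and-filter loop. This is not Ryan's actual code: `sample_program_source` is a hypothetical stand-in for the GPT-4o prompting step, and the task layout just follows the public ARC JSON format ("train"/"test" lists of input/output grids).

    # Minimal sketch, not Ryan's implementation. `sample_program_source`
    # is a hypothetical stand-in for a GPT-4o call that returns the source
    # of a candidate python program defining `transform(grid)`.

    def run_candidate(src, grid):
        """Exec a candidate program and apply its transform to one grid."""
        namespace = {}
        try:
            exec(src, namespace)
            return namespace["transform"](grid)
        except Exception:
            return None  # crashing or malformed candidates are simply rejected

    def solve_task(task, sample_program_source, n_samples=8000):
        train = task["train"]  # usually ~3 {"input": ..., "output": ...} pairs
        for _ in range(n_samples):
            src = sample_program_source(train)
            # keep a program only if it reproduces every training example
            if all(run_candidate(src, ex["input"]) == ex["output"] for ex in train):
                return [run_candidate(src, t["input"]) for t in task["test"]]
        return None  # no sampled program fit all the examples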

A couple important notes:

1. this result is on the public eval set, not the private set (which the ARC Prize money is based on).

2. the current private-set SOTA (~35%) solution also scored ~50% on the public set, so this new result might be SOTA but hasn't been validated or scrutinized yet.

All said, I do expect verified public set results to flow down to the private set over time. We'll be publishing all the SOTA scores and open source reproductions here once available: https://arcprize.org/leaderboard

EDIT: Also, congrats and kudos to Ryan for achieving this and putting in the effort to document and share his approach. We hope to inspire more frontier AI research sharing like this.

replies(11): >>40712673 #>>40712907 #>>40713440 #>>40714116 #>>40714245 #>>40714428 #>>40715353 #>>40715468 #>>40715482 #>>40716604 #>>40718028 #
lelanthran No.40715468
Maybe I am missing something, but to me this looks like "Let's brute-force on the training data".

I mean, generating tens of thousands of possible solutions to find one that works does not, to me, signify AGI.

After all, a human solving these problems doesn't make 10k attempts before getting a solution, do they?

The approach here, because it is brute force, can't really scale: if a random candidate for a very simple problem has a 1/10k chance of being right, you can't scale this up to non-trivial problems without exponentially increasing the compute used. Hence, I feel this is brute force.
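To make that scaling concern concrete (my numbers, not anything measured): if each sampled program is an independent draw that succeeds with probability p, the number of samples you need is geometric with mean 1/p, so the sample budget grows exactly as fast as p shrinks.

    # Illustrative only: expected sample counts if each candidate is an
    # independent draw with success probability p (geometric distribution).
    for p in (1e-4, 1e-6, 1e-8):
        print(f"p = {p:.0e} -> expected samples ~ {1 / p:,.0f}")
    # p = 1e-04 -> expected samples ~ 10,000
    # p = 1e-06 -> expected samples ~ 1,000,000
    # p = 1e-08 -> expected samples ~ 100,000,000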

replies(1): >>40716577 #
killerstorm No.40716577
10,000 samples are nothing compared to 2^100 possible outputs. It is absolutely, definitely not a "brute-force search". Testing a tiny fraction of the possibilities (e.g. 0.000001%) is called heuristic search, and that's what people do too.
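Taking those figures at face value, the covered fraction is even smaller than that 0.000001% example:

    # Back-of-the-envelope check using the figures above.
    samples = 10_000
    outputs = 2 ** 100
    print(f"fraction searched: {samples / outputs:.1e}")          # ~7.9e-27
    print(f"as a percentage:   {100 * samples / outputs:.1e}%")   # ~7.9e-25%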

Please learn a bit of combinatorics.

> After all, a human solving these problems doesn't make 10k attempts before getting a solution, do they?

No. People have much better "early rejection", and the human brain has massive parallel compute capacity.
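For illustration (my sketch, not anything from the post): "early rejection" in a sampling loop like this just means abandoning a candidate at its first failed example instead of scoring it against everything.

    # Illustrative sketch of early rejection: bail out of a candidate on
    # its first failed training example rather than evaluating all of them.
    def passes_all(candidate, examples):
        for ex in examples:
            if candidate(ex["input"]) != ex["output"]:
                return False  # reject early; remaining examples never run
        return True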

It's ridiculous to demand that GPT-4 perform as well as a human. Obviously its vision is much worse, and it doesn't have the 'video' and physics priors people have, so it has to guess more times.

replies(1): >>40716765 #
lelanthran No.40716765
> 10,000 samples are nothing compared to 2^100 possible outputs. It is absolutely, definitely not a "brute-force search". Testing a tiny fraction of the possibilities (e.g. 0.000001%) is called heuristic search, and that's what people do too.

Brute-force search literally means generating candidate solutions until one works, which is exactly what is being done here.

> Please learn a bit of combinatorics.

Don't be condescending; I understand the problem space just fine. Fine enough to realise that the problem was constructed specifically to ensure that "solutions" such as this just won't work.

Which is why this "solution" is straight-up broken (it doesn't meet the target, exceeds the computational bounds, etc.).

> It's ridiculous to demand that GPT-4 perform as well as a human.

Wasn't the whole point of this prize to spur interest in new approaches to learning? What does GPT-[1234] have to do with the contest rules, especially since this solution broke those rules anyway?

> Obviously its vision is much worse, and it doesn't have the 'video' and physics priors people have, so it has to guess more times.

That's precisely my point: it has to guess. Humans aren't guessing on these types of problems (not on the few that I saw, anyway).

replies(2): >>40717295 #>>40718880 #
ealexhudson No.40717295
To be clear, I think brute force generally means an iterative search of a solution space. I don't think that's what this system is doing; it's not following some search path and returning as early as possible.

It's similar in that a lot of wrong answers are being thrown up, but I think this is more like a probabilistic system being pruned than a walk of the solution space. It's much smarter, but not as smart as we would like.

replies(1): >>40717802 #
lelanthran No.40717802
> To be clear, I think brute force generally means an iterative search of a solution space.

Sure, but not an exhaustive one: you stop when you get a solution[1]. Brute force does not require an exhaustive search in order to be called brute force.

GP was arguing that because the search is not exhaustive, it cannot be brute force. That's the wrong argument: brute force doesn't have to be exhaustive to be brute force.

[1] Or a good enough solution.

replies(1): >>40717996 #
naasking No.40717996
A brute-force search can be expected to find a solution only after a fairly thorough sweep of the space of possibilities. If it really is finding solutions after searching only 0.000001% of that space, then some structure of the problem is guiding the search and it's no longer brute force.