
Getting 50% (SoTA) on ARC-AGI with GPT-4o

(redwoodresearch.substack.com)
394 points by tomduncalf | 1 comment
trott No.40712635
François Chollet says LLMs do not learn in-context. But Geoff Hinton says LLMs' few-shot learning compares quite favorably with people!

https://www.youtube.com/watch?v=QWWgr2rN45o&t=46m20s

The truth is in the middle, I think. They learn in-context, but not as well as humans.
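
"Learning in-context" here means the model infers a rule from input→output examples placed in the prompt, with no weight updates. A minimal sketch of how such a few-shot prompt is assembled (the word pairs are invented for the demo):

```python
# Few-shot in-context learning, sketched: the prompt itself carries the
# training examples, and the model must continue the pattern for a new query.
examples = [("sock", "socks"), ("box", "boxes"), ("city", "cities")]
query = "berry"

prompt = "\n".join(f"{x} -> {y}" for x, y in examples) + f"\n{query} -> "
print(prompt)
```

The debate is over how much genuine rule induction happens when a model completes a prompt like this, versus shallow pattern matching.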

The approach in the article masks the unreliability of current LLMs by generating thousands of candidate programs per task, and even so the results aren't human-level. (This is impressive work, though -- I'm not criticizing it.)
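
The core of that strategy, sketched under assumptions: sample many candidate programs, execute each against the task's train pairs, and keep only those that reproduce every train output. The function `propose_programs` below is a hypothetical stand-in for GPT-4o sampling; it returns hand-written candidates so the example is runnable.

```python
# Sketch of "sample many programs, filter by execution on train pairs".
# propose_programs() stands in for thousands of LLM-sampled transforms.

def propose_programs():
    # Hypothetical candidates; in the article these come from GPT-4o.
    return [
        lambda grid: grid,                                   # identity (wrong)
        lambda grid: [row[::-1] for row in grid],            # mirror (wrong)
        lambda grid: [[v * 2 for v in row] for row in grid], # doubling (right)
    ]

def solve(train_pairs, test_input):
    """Keep candidates that reproduce every train output; apply the
    first survivor to the test input."""
    survivors = []
    for prog in propose_programs():
        try:
            if all(prog(x) == y for x, y in train_pairs):
                survivors.append(prog)
        except Exception:
            pass  # crashing or malformed candidates are discarded
    return survivors[0](test_input) if survivors else None

train = [([[1, 2]], [[2, 4]]), ([[3]], [[6]])]
print(solve(train, [[5, 7]]))  # only the doubling candidate survives
```

Execution-based filtering is what absorbs the unreliability: any single sample is often wrong, but wrong samples are cheap to reject because the train pairs act as a verifier.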

replies(1): >>40714144 #
1. hackpert No.40714144
I'm not sure how to quantify how quickly or well humans learn in-context (if you know of any work on this I'd love to read it!)

In general, there is too much fluff and confusion floating around about what these models are and are not capable of (regardless of the training mechanism). I think more people need to read Song Mei's lovely slides[1] and related work by others. These slides are the best exposition I've found of the neat ideas around ICL that researchers have been aware of for a while.

[1] https://www.stat.berkeley.edu/~songmei/Presentation/Algorith...