
248 points rishicomplex | 3 comments
chompychop ◴[] No.42170989[source]
Is it currently possible to reliably limit an LLM's knowledge cutoff (either during training or inference)? An interesting experiment would be to feed an LLM mathematical knowledge only up to the year a theorem was proved, and then see whether it can actually come up with the novel techniques used in the proof. For example, with access only to papers published before 1993, could an LLM come up with Wiles' proof of FLT?
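(A minimal sketch of the corpus-filtering step such an experiment would need; the record schema and dates below are hypothetical, made up for illustration. Note that filtering only helps if the model is then pretrained from scratch on the filtered slice, since a date filter applied at fine-tuning time can't remove what a base model has already seen.)

    from datetime import date

    # Toy stand-ins for a real corpus of dated papers (schema is hypothetical).
    corpus = [
        {"published": date(1986, 5, 1), "text": "Ribet: level lowering ..."},
        {"published": date(1995, 5, 1), "text": "Wiles: modular elliptic curves ..."},
    ]

    CUTOFF = date(1993, 6, 23)  # the date of Wiles' Cambridge announcement

    # Keep only records that predate the cutoff before any training run.
    training_corpus = [r for r in corpus if r["published"] < CUTOFF]
    # Only the 1986 record survives, so a model trained on this slice
    # never sees the later proof it is asked to rediscover.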
replies(2): >>42171207 #>>42171222 #
1. n4r9 ◴[] No.42171222[source]
There's the FrontierMath benchmark [0] demonstrating that AI is currently quite far from human performance at research-level mathematics.

[0] https://arxiv.org/abs/2411.04872

replies(1): >>42177617 #
2. data_maan ◴[] No.42177617[source]
They didn't demonstrate anything. They haven't even released their dataset, nor mentioned how big it is.

It's just hot air, like the AlphaProof announcement, where very little is known about the system.

replies(1): >>42181992 #
3. n4r9 ◴[] No.42181992[source]
They won't publish the problem set for obvious reasons: once it's public, it leaks into training data and the benchmark is contaminated. And I doubt it's hot air, given the mathematicians involved in creating it.