Ability to win a gold medal if they were scored the same way humans are?
or
Ability to win a gold medal as determined by getting the "correct answer" to all the questions?
These are two subtly different questions. In these kinds of math exams, how you get to the answer matters more than the answer itself, i.e. you could not get high marks through divination. To add some clarity: the latter would be like testing someone's ability to code by only looking at their results on some test functions (oh wait... that's how we evaluate LLMs...). It's a good signal, but it is far from a complete answer. It very much matters how the code arrives at the answer. Certainly you wouldn't accept code that does a bunch of random computations before divining a result.
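To make that analogy concrete, here's a minimal sketch (my illustration, with a made-up test suite, not anything from the thread): two functions that pass the same output-only checks, only one of which actually computes anything.

    def is_prime(n: int) -> bool:
        # Real logic: trial division up to sqrt(n).
        if n < 2:
            return False
        return all(n % d for d in range(2, int(n ** 0.5) + 1))

    def is_prime_divination(n: int) -> bool:
        # "Divination": a lookup table keyed to the known test inputs.
        return {2: True, 3: True, 4: False, 17: True, 18: False}.get(n, False)

    # Both pass an output-only check over the test inputs...
    for n in (2, 3, 4, 17, 18):
        assert is_prime(n) == is_prime_divination(n)

    # ...but only one generalizes beyond them.
    print(is_prime(19), is_prime_divination(19))  # True False

An answer-only grader cannot tell these apart; a grader that reads how the result is produced can.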
The paper's answer to your question (assuming scoring similar to how humans are scored) is "Don't count on it". Not a definitive "no", but they strongly suspect not.
> Any discrete-time computation (including backtracking search) becomes Markov if you define the state as the full machine configuration. Thus “Markov ⇒ no reasoning/backtracking” is a non sequitur. Moreover, LLMs can simulate backtracking in their reasoning chains. -- GPT-5
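To make the quote's construction concrete, here is a sketch (mine, not GPT-5's): 4-queens solved by backtracking, where each step is a pure function of the full machine configuration (the whole DFS stack), so the process is Markov by construction even though it backtracks.

    N = 4

    def ok(placed, col):
        # Is column `col` safe for the next row, given queens in `placed`?
        r = len(placed)
        return all(c != col and abs(c - col) != r - pr
                   for pr, c in enumerate(placed))

    def step(stack):
        # One transition: the next state depends only on the current state,
        # the stack of (partial placement, next column to try) frames.
        placed, col = stack[-1]
        if col >= N:                               # level exhausted: backtrack
            p, c = stack[-2]
            return stack[:-2] + [(p, c + 1)]
        if ok(placed, col):
            return stack + [(placed + (col,), 0)]  # descend one level
        return stack[:-1] + [(placed, col + 1)]    # try the next column

    state = [((), 0)]                              # full machine configuration
    while len(state[-1][0]) < N:                   # (no-solution case omitted)
        state = step(state)
    print(state[-1][0])                            # (1, 3, 0, 2)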
> The observable reality is that LLMs can do mathematical reasoning
I still can't get these machines to reliably perform basic subtraction[0]. The result is stochastic, so I can sometimes get the right answer, but I have yet to reproduce a run where the actual logic is correct[1,2]. Both [1] and [2] make the same mistake, and in [2] you can see it just say "fuck it, skip to the answer". (For what I mean by "correct logic", see the borrowing sketch after the links below.)

> You cannot counter observable reality
I'd call [0,1,2] "observable". These types of errors are quite common, so maybe I'm not the one with lying eyes.

[0] https://chatgpt.com/share/68b95bf5-562c-8013-8535-b61a80bada...
[1] https://chatgpt.com/share/68b95c95-808c-8013-b4ae-87a3a5a42b...
[2] https://chatgpt.com/share/68b95cae-0414-8013-aaf0-11acd0edeb...
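As promised above, here is a sketch (mine, not from the linked chats) of the borrowing procedure that grading "correct logic" for subtraction would check, assuming non-negative decimal inputs with a >= b:

    def subtract(a: str, b: str) -> str:
        # Digit-by-digit subtraction with explicit borrows (assumes a >= b >= 0).
        xs = [int(d) for d in reversed(a)]
        ys = [int(d) for d in reversed(b)] + [0] * (len(a) - len(b))
        out, borrow = [], 0
        for x, y in zip(xs, ys):
            d = x - y - borrow
            borrow = 1 if d < 0 else 0             # borrow from the next column
            out.append(d + 10 * borrow)
        return ''.join(map(str, reversed(out))).lstrip('0') or '0'

    assert subtract('1000', '1') == '999'          # three cascading borrows
    assert subtract('5302', '4577') == '725'

Each column either borrows or it doesn't, deterministically; a chain of reasoning that botches the borrows but lands on the right digit string would still fail "show your work" grading.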