
AI agent benchmarks are broken

(ddkang.substack.com)
181 points by neehao | 1 comment
camdenreslink No.44532235
The current benchmarks are good for comparing models against each other, but not for measuring absolute ability.
replies(3): >>44532298 >>44532615 >>44533085
1. rsynnott No.44533085
I don't really buy that they're even necessarily useful for comparing models. In the example from the article, if model A says "48 + 6 minutes" and gets marked correct, and model B says "63 minutes" (the correct answer) and gets marked correct, the test will say they're equivalent on that axis, when in fact one gave a completely nonsensical answer.
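
To make the failure concrete, here is a minimal sketch (in Python, with hypothetical lenient_grade and strict_grade functions, not the benchmark's actual judge) of how an overly permissive grader can mark both answers correct, while a grader that checks the numeric value only accepts the real one:

    import re

    def lenient_grade(answer: str) -> bool:
        # Hypothetical lenient grader: accepts any answer that mentions
        # minutes and at least one number -- roughly how an overly
        # permissive fuzzy or LLM-judge check can behave.
        return bool(re.search(r"\d+.*minutes?", answer))

    def strict_grade(answer: str, expected_minutes: int = 63) -> bool:
        # Strict grader: sum the numbers in the answer and require the
        # total to equal the expected duration, so "48 + 6 minutes"
        # (= 54) fails while "63 minutes" passes.
        numbers = [int(n) for n in re.findall(r"\d+", answer)]
        return sum(numbers) == expected_minutes

    for ans in ["48 + 6 minutes", "63 minutes"]:
        print(ans, lenient_grade(ans), strict_grade(ans))
    # lenient_grade marks both correct; strict_grade only the second.

Under the lenient grader both models score identically on this item, which is exactly the comparison problem described above.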