
AI agent benchmarks are broken

(ddkang.substack.com)
181 points by neehao | 47 comments
1. jerf ◴[] No.44532037[source]
When I was being a bad HN reader and just reacting to the title, my initial impulse was to be placating and observe that they are probably just immature. After all, for all that has happened, this is still only a couple of years' worth of development, and it does tend to take a long time to develop good benchmarks.

However, the article does seem to be pointing out some fundamental issues. I'm particularly annoyed by using LLMs to evaluate the output of LLMs. Anyone with enough experience to be writing benchmarks of this sort in the first place ought to know that's a no-go. It isn't even just "AI evaluating AI" per se: using a judge of the same architecture as the thing being judged maximizes the probability of fundamental failure of the benchmark to be valid due to the judge having the exact same blind spots as the thing under test. Since we currently lack a diversity of AI architectures that can play on the same level as LLMs, it is simply necessary for the only other known intelligence architecture, human brains, to be in the loop for now, however many other difficulties that may introduce into the testing procedures.

Tests that a "do nothing" AI can pass aren't intrinsically invalid but they should certainly be only a very small number of the tests. I'd go with low-single-digit percentage, not 38%. But I would say it should be above zero; we do want to test for the AI being excessively biased in the direction of "doing something", which is a valid failure state.
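
A rough sketch of the kind of audit I have in mind (Python; the task and grader shapes here are made up, not any particular harness):

    # Run a do-nothing agent over every task and see how much credit it gets.
    def null_agent_pass_rate(tasks, evaluate):
        # evaluate(task, actions) -> True if the grader marks the run as a pass
        passed = sum(1 for t in tasks if evaluate(t, actions=[]))
        return passed / len(tasks)

    # Toy example: a grader that (wrongly) passes any run that raised no errors.
    tasks = [{"id": i} for i in range(100)]
    lenient = lambda task, actions: True          # pathological grader
    print(null_agent_pass_rate(tasks, lenient))   # 1.0 -> the benchmark rewards doing nothing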

replies(9): >>44532155 #>>44532406 #>>44532411 #>>44532530 #>>44532765 #>>44532967 #>>44533182 #>>44533517 #>>44535537 #
2. potatolicious ◴[] No.44532155[source]
> "I'm particularly annoyed by using LLMs to evaluate the output of LLMs."

+1, and IMO part of a general trend where we're just not serious about making sure this shit works. Higher scores make stonks go up, who cares if it actually leads to reliably working products.

But also, more importantly, it's starting to expose the fact that we haven't solved one of ML's core challenges: data collection and curation. On the training side we have obviated this somewhat (by ingesting the whole internet, for example), but on the eval side it feels like we're increasingly just going "actually constructing rigorous evaluation data, especially at this scale, would be very expensive... so let's not".

I was at a local tech meetup recently where a recruiting firm was proudly showing off the LLM-based system they're using to screen candidates. They... did not evaluate the end-to-end efficacy of their system. At all. This seems like a theme within our industry - we're deploying these systems based purely on vibes without any real quantification of efficacy.

Or in this case, we're quantifying efficacy... poorly.

replies(1): >>44533045 #
3. alextheparrot ◴[] No.44532406[source]
LLMs evaluating LLM outputs really isn’t that dire…

Discriminating good answers is easier than generating them. Good evaluations write test sets for the discriminators to show when this is or isn't true. Evaluating the outputs as the user might see them is more representative than having your generator do multiple tasks (e.g. solve a math query and format the output as a multiple-choice answer).

Also, human labels are good but have problems of their own; it isn't like by using a "different intelligence architecture" we elide all the possible errors. Good instructions to the evaluation model often translate directly to better human results, showing a correlation between these two sources of sampling intelligence.
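
Concretely, a discriminator test set can be as small as a handful of labeled (question, answer, verdict) triples that you score any candidate judge against. A minimal sketch (the judge callable is a placeholder, not any specific API):

    # Score a judge against hand-labeled cases before trusting its verdicts.
    def judge_agreement(judge, labeled_cases):
        # judge(question, answer) -> "pass" | "fail"; plug a real model call in here
        hits = sum(1 for q, a, want in labeled_cases if judge(q, a) == want)
        return hits / len(labeled_cases)

    labeled_cases = [
        ("How long is 45 min + 8 min?", "63 minutes", "fail"),  # the article's example
        ("How long is 45 min + 8 min?", "53 minutes", "pass"),
    ]
    lenient_judge = lambda q, a: "pass"                   # stand-in for an over-generous judge
    print(judge_agreement(lenient_judge, labeled_cases))  # 0.5 -> don't trust this judge yet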

replies(5): >>44532598 #>>44533069 #>>44533673 #>>44533848 #>>44534579 #
4. sdenton4 ◴[] No.44532411[source]
When I was working in audio compression, evaluation was very painful because we had no programmatic way to measure how good some reconstructed audio sounds to a human. Any metric you could come up with was gameable, and direct optimization would lead to artifacts.

As a result, we always had a two-step evaluation process. We would use a suite of metrics to guide development progress (validation), but the final evaluation reported in a paper always involved subjective human listening experiments. This was expensive, but the only way to show that the codecs were actually improving.

Similarly, here it seems fine to use LLMs to judge your work in progress, but we should be requiring human evaluation for 'final' results.
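
As a toy sketch of that two-step split (the thresholds and scales below are invented for illustration, not what we actually used):

    # Stage 1: cheap programmatic metrics steer day-to-day development.
    def dev_gate(metric_scores, threshold=0.9):
        return all(s >= threshold for s in metric_scores)

    # Stage 2: "final" claims require subjective human scores (e.g. MOS on a 1-5 scale).
    def release_gate(human_scores, baseline_mean=3.8):
        return sum(human_scores) / len(human_scores) > baseline_mean

    print(dev_gate([0.93, 0.95, 0.91]))        # True: keep iterating
    print(release_gate([3.9, 4.1, 3.7, 4.0]))  # True only if listeners actually prefer it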

replies(2): >>44532690 #>>44533710 #
5. BoiledCabbage ◴[] No.44532530[source]
> Tests that a "do nothing" AI can pass aren't intrinsically invalid but they should certainly be only a very small number of the tests. I'd go with low-single-digit percentage, not 38%. But I would say it should be above zero; we do want to test for the AI being excessively biased in the direction of "doing something", which is a valid failure state.

There is a simple improvement here: give the agent a "do nothing" button. That way it at least needs to understand the task well enough to know it should press the do nothing button.

Now a default agent that always presses it still shouldn't score 38%, but that's better than a NOP agent scoring 38%.
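
A minimal sketch of what that could look like, with made-up action names:

    # Make "do nothing" an explicit action the agent has to choose.
    from enum import Enum, auto

    class Action(Enum):
        TRANSFER_FUNDS = auto()   # hypothetical task actions
        CANCEL_ORDER = auto()
        DO_NOTHING = auto()       # the "do nothing" button

    def grade(expected, chosen):
        # Credit only an explicit choice; an agent that emits nothing gets no points.
        return chosen == expected

    print(grade(Action.DO_NOTHING, Action.DO_NOTHING))  # True: it understood the task
    print(grade(Action.DO_NOTHING, None))               # False: a NOP agent scores zero here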

6. suddenlybananas ◴[] No.44532598[source]
What's 45+8? Is it 63?
replies(2): >>44533329 #>>44533390 #
7. ttoinou ◴[] No.44532690[source]
Wouldn't that process prevent you from finding a subjectively better audio codec that doesn't improve on the typical metrics (PSNR etc.)? An alternative process would be to first build a metric that tries to approximate the subjective experience of humans, and then use it to create audio codecs that optimize this metric.
replies(2): >>44533413 #>>44534636 #
8. jstummbillig ◴[] No.44532765[source]
> using a judge of the same architecture as the thing being judged maximizes the probability of fundamental failure of the benchmark to be valid due to the judge having the exact same blind spots as the thing under test.

That's what humans do all the time. What's the fundamental difference? Or are you saying that's also broken?

replies(4): >>44532931 #>>44533017 #>>44533421 #>>44533789 #
9. qsort ◴[] No.44532931[source]
We want machines that are better than humans, otherwise what purpose do they serve?
replies(1): >>44533165 #
10. datpuz ◴[] No.44532967[source]
Benchmarks in software have always been bullshit. AI benchmarks are just even more bullshit since they're trying to measure something significantly more subjective and nuanced than most.
11. rsynnott ◴[] No.44533017[source]
... I mean, when evaluating "45 + 8 minutes" where the expected answer was "63 minutes", as in the article, a competent human reviewer does not go "hmm, yes, that seems plausible, it probably succeeded, give it the points".

I know LLM evangelists love this "humans make mistakes too" line, but, really, only an _exceptionally_ incompetent human evaluator would fall for that one.

replies(1): >>44536189 #
12. rsynnott ◴[] No.44533045[source]
> +1, and IMO part of a general trend where we're just not serious about making sure this shit works.

I suspect quite a lot of the industry is actively _opposed_ to that, because it could be damaging for the "this changes everything" narrative.

13. e1g ◴[] No.44533069[source]
Agreed, current "thinking" models are effectively "re-run this question N times and determine the best answer", and this LLM-evaluating-LLM loop demonstrably leads to higher-quality answers against objective metrics (in math, etc.).
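
A toy sketch of the parallel flavor of this, majority vote over N samples (the stand-in sampler below would be a real model call in practice):

    import random
    from collections import Counter

    def best_of_n(sample_answer, n=8):
        # sample_answer() -> str; in practice this is a stochastic model call
        answers = [sample_answer() for _ in range(n)]
        # Majority vote ("self-consistency"); a learned verifier could rank instead.
        return Counter(answers).most_common(1)[0][0]

    noisy_adder = lambda: str(45 + 8 + random.choice([0, 0, 0, 10]))  # wrong ~25% of the time
    print(best_of_n(noisy_adder))  # usually "53": the consensus beats a single sample
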
replies(1): >>44536085 #
14. xnx ◴[] No.44533165{3}[source]
A machine with human-level "AI" is still useful if it can run 24/7 and you can spin up 1M instances.
replies(2): >>44535311 #>>44536137 #
15. xnx ◴[] No.44533182[source]
> I'm particularly annoyed by using LLMs to evaluate the output of LLMs

This does seem a little crazy on its face, but it is yielding useful and improving tools.

replies(1): >>44533529 #
16. ◴[] No.44533329{3}[source]
17. alextheparrot ◴[] No.44533390{3}[source]
If this sort of error isn't acceptable, it should be part of an evaluation set for your discriminator.

Fundamentally I'm not disagreeing with the article, but I also think most people who care take the above approach, because if you do care you read samples, find the issues, and patch them so you can hill-climb toward something better.

18. layer8 ◴[] No.44533413{3}[source]
You are describing psychoacoustic models, which work to a reasonable extent for lossy compression of audio (MP3 and successors are based on them), but I can see how it would be much more difficult/less helpful for reconstructing audio.
19. jerf ◴[] No.44533421[source]
Yes, humans evaluating humans also causes human foibles to be magnified.

I cite the entire current education system. Substantiating that claim would take more than an HN comment allows, though I think most people can probably get the drift of what I'm talking about, even if we'd disagree about the details. Absolutely humans are not immune to this.

I also cite the entire concept of "fallacies", many of which are things that both human brains tend to produce and then tend to evaluate poorly. An alien species might find some of our fallacies absolutely transparent, and have entirely different fallacies of their own that none of us would find convincing in the slightest, because of fundamentally different brain architectures.

I don't think AIs are ready for this yet and I don't expect LLMs ever will be, but in the future getting an outsider perspective from them in a sort of Mixture of Experts architecture could be valuable for life decisions. (I look to the future AI architectures in which LLMs are just a component but not the whole.)

20. DonHopkins ◴[] No.44533517[source]
It's like using steel to produce steel. What else are you going to use? Bamboo?
replies(2): >>44533642 #>>44534526 #
21. jerf ◴[] No.44533529[source]
It's not about it being crazy, and it's not about personal opinions about AI. It's about chaos mathematics. Iterating with the same system like that has certain easy-to-understand failure states. It's why I phrased it specifically in terms of using the same architecture to validate itself. If we had two radically different AI architectures that were capable of evaluating each other, firing them at each other for evaluation purposes would be much, much less susceptible to this sort of problem than firing either of them at themselves, which will never be a good idea.

See also a cousin comment of mine observing that human brains are absolutely susceptible to the same effect. We're just so used to it that it is the water we swim through. (And arguably human brains are more diverse than current AI systems functioning at this level. No bet on how long that will be true for, though.)

Such composite systems would still have their own characteristics and certainly wouldn't be guaranteed to be perfect or anything, but at least they would not tend to iteratively magnify their own individual flaws.

Perhaps someday we will have such diverse architectures. We don't today have anything that can evaluate LLMs other than human brains, though.

22. dmbche ◴[] No.44533642[source]
I'm not sure if I'm dense, but we don't use steel to make steel (whether crucibles or "feed material").

The first person to make steel made it without steel, didn't they?

Did I miss something?

Edit0: fun tidbit - Wootz steel was made with crucibles of clay with rice husks mixed in (husks would carbonize quickly and introduce air layers to better isolate) and many seemingly random objects (fruits, vegetation) were added to the crucible to control carbon content.

I highly recommend A Collection of Unmitigated Pedantry's series on steel (it's a blog, just search "ACOUP steel").

replies(1): >>44536194 #
23. majormajor ◴[] No.44533673[source]
> Discriminating good answers is easier than generating them.

I don't think this is true for many fields - especially outside of math/programming. Let's say the task is "find the ten most promising energy startups in Europe." (This is essentially the sort of work I see people frequently talk about using research modes of models for here or on LinkedIn.)

In ye olden days pre-LLM you'd be able to easily filter out a bunch of bad answers from lazy humans since they'd be short, contain no detail, have a bunch of typos, formatting inconsistencies from copy-paste, etc. You can't do that for LLM output.

So unless you're a domain expert on European energy startups you can't check for a good answer without doing a LOT of homework. And if you're using a model that usually only looks at, say, the top two pages of Google results to try to figure this out, how is the validator going to do better than the original generator?

And what about when the top two pages of Google results start turning into model-generated blogspam?

If your benchmark can't evaluate prospective real-world tasks like this, it's of limited use.

A larger issue is that once your benchmark, which used this task as a criterion based on an expert's knowledge, is published, anyone making an AI agent is incredibly incentivized (intentionally or not!) to train specifically on this answer without necessarily getting better at the fundamental steps of the task.

IMO you can never use an AI agent benchmark that is published on the internet more than once.

replies(3): >>44534637 #>>44536077 #>>44536438 #
24. DonHopkins ◴[] No.44533710[source]
You gotta snag yourself one of those awesome KEMAR dummy head and torso simulators, preferably the fully accessorized luxury edition that comes with the heavy duty portable travel case with lots of room for extra ears and microphones and wigs, which is so much fun to take through airport security.

They were great for taking to Grateful Dead concerts to record the music directly in front of the Wall of Sound, and to measure the response so you can play back all your Dead tapes with that same front row psychoacoustic perspective. ;)

https://www.grasacoustics.com/industries/kemar/applications-...

https://www.grasacoustics.com/products/accessories/product/4...

25. jacobr1 ◴[] No.44533789[source]
The equivalent would be having the _same human_ review their own work. We require others with different experience and fresh eyes for secondary review, and for the most important tasks, multiple people.

To some extent the same LLM with a new context history and a different prompt is sorta like that... but it's still much weaker than using a different system entirely.

replies(1): >>44536172 #
26. tempfile ◴[] No.44533848[source]
> Discriminating good answers is easier than generating them.

This is actually very wrong. Consider, for instance, the fact that the people who grade your tests in school are typically more talented, capable, and trained than the people taking the test. This is true even when an answer key exists.

> Also, human labels are good but have problems of their own,

Granted, but...

> it isn’t like by using a “different intelligence architecture” we elide all the possible errors

nobody is claiming this. We elide the specific, obvious problem that using a system to test itself gives you no reliable information. You need a control.

replies(2): >>44536083 #>>44536481 #
27. AIPedant ◴[] No.44534526[source]
It's more like using a faulty and dangerous automated foundry to make steel when you could just hire steelworkers.

That's the real problem here - these companies are swimming in money and have armies of humans working around the clock training LLMs, there is no honest reason to nickel-and-dime the actual evaluation of benchmarks. It's like OpenAI using exact text search to identify benchmark contamination for the GPT-4 technical report. I am quite certain they had more sophisticated tools available.

28. diggan ◴[] No.44534579[source]
> Discriminating good answers is easier than generating them.

Lots of other good replies to this specific part, but also: lots of developers struggle with the feeling that reviewing code is harder than writing code (something I'm personally not sure I agree with). I've seen that sentiment shared here on HN a lot, and it would directly go against that particular idea.

replies(1): >>44536401 #
29. sdenton4 ◴[] No.44534636{3}[source]
There are two answers to that...

The first is, how do you know the subjective optimization you're making is actually any good? You're just moving the problem back one layer of abstraction.

The second is, we did that, eventually, by training models to predict subjective listening scores from the giant pile of subjective test data we had collected over the years (ViSQoL). It's great, but we still don't trust it for end-of-the-day, cross-codec comparison, because we don't want to reward overfitting on the trained model.

https://arxiv.org/abs/2004.09584

replies(1): >>44535052 #
30. jgraettinger1 ◴[] No.44534637{3}[source]
> You can't do that for LLM output.

That's true if you're just evaluating the final answer. However, wouldn't you evaluate the context -- including internal tokens -- built by the LLM under test?

In essence, the evaluator's job isn't to do separate fact-finding, but to evaluate whether the under-test LLM made good decisions given the facts at hand.

replies(1): >>44535398 #
31. ttoinou ◴[] No.44535052{4}[source]
Nice

Well yeah you would still need human testing

32. einrealist ◴[] No.44535311{4}[source]
And boil the planet.
33. majormajor ◴[] No.44535398{4}[source]
I would if I was the developer, but if I'm the user being sold the product, or a third-party benchmarker, I don't think I'd have full access to that if most of that is happening in the vendor's internal services.
34. szvsw ◴[] No.44535537[source]
> I'm particularly annoyed by using LLMs to evaluate the output of LLMs.

Even though I largely agree with parts of what you wrote, if you squint your eyes enough you can kind of see an argument along the lines of “difficult to solve but easy to verify.”

35. brookst ◴[] No.44536077{3}[source]
> IMO you can never use an AI agent benchmark that is published on the internet more than once.

This is a long-solved problem far predating AI.

You do it by releasing 90% of the benchmark publicly and holding back 10% for yourself or closely trusted partners.

Then benchmark performance can be independently evaluated to determine if performance on the 10% holdback matches the 90% public.
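
A sketch of the check that enables (toy numbers, not from any real benchmark):

    # Flag likely benchmark overfitting by comparing public vs. held-out splits.
    def overfit_gap(score, public_results, holdout_results):
        # A large positive gap suggests the agent was tuned on the public set.
        return score(public_results) - score(holdout_results)

    pass_rate = lambda results: sum(results) / len(results)
    print(overfit_gap(pass_rate, public_results=[1, 1, 1, 1], holdout_results=[1, 0, 0, 1]))  # 0.5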

36. rf15 ◴[] No.44536083{3}[source]
Trading control for convenience has always been the tradeoff in the recent AI hype cycle and the reason why so many people like to use ChatGPT.
37. brookst ◴[] No.44536085{3}[source]
That’s… not how thinking models work. They tend to be iterative and serial, not parallel and then pick-one.
replies(1): >>44537876 #
38. fragmede ◴[] No.44536137{4}[source]
and they don't have family that gets sick or dies or come into work hungover or go off on political tangents and cause HR issues or want to take vacations or complain about bad working conditions.
39. brookst ◴[] No.44536172{3}[source]
How do you feel about o3 reviewing 4o-mini?
40. brookst ◴[] No.44536189{3}[source]
Have you ever hired human evaluators at scale? They make all sorts of mistakes. It's relatively low probability, so it factors in as noise, but I have yet to meet the human who is 100% accurate at simple tasks done thousands of times.
replies(1): >>44537202 #
41. dmbche ◴[] No.44536194{3}[source]
Second fun tidbit: bamboo was used as the fuel source in some furnaces - they did indeed use bamboo, like the parent comment mentioned.
42. alextheparrot ◴[] No.44536401{3}[source]
I wish the other replies, and this one, would engage with the sentence right after it, which says that you should test this premise empirically.
43. alextheparrot ◴[] No.44536438{3}[source]
> Good evaluations write test sets for the discriminators to show when this is or isn’t true.

If they can’t write an evaluation for the discriminator I agree. All the input data issues you highlight also apply to generators.

44. alextheparrot ◴[] No.44536481{3}[source]
It isn’t actually very wrong. Your example is tangential as graders in school have multiple roles — teaching the content and grading. That’s an implementation detail, not a counter to the premise.

I don’t think we should assume answering a test would be easy for a Scantron machine just because it is very good at grading them, either.

45. Jensson ◴[] No.44537202{4}[source]
Which is why you hire them at scale, as you say; then they are very reliable. LLMs at scale are not.

The problem with these AI models is there is no such point where you can just scale them up and they can solve problems as accurately as a group of humans. They add too much noise and eventually go haywire when left to their own devices.

replies(1): >>44542808 #
46. e1g ◴[] No.44537876{4}[source]
Parallel test time compute is exactly what SOTA models do, including Claude 4 Opus extended, o3 Pro, Grok 4 Heavy, and Gemini 2.5 Pro.
47. brookst ◴[] No.44542808{5}[source]
I haven’t found that to be the case. Both LLMs and humans produce outputs that cannot be blindly trusted to be accurate.