
340 points | agomez314 | 1 comment
thwayunion ◴[] No.35245821[source]
Absolutely correct.

We already saw this play out with self-driving cars. Passing a driver's test was already possible around 2015, but SDCs clearly aren't ready for L5 deployment even today.

There are also a lot of excellent examples of failure modes in object detection benchmarks.

Tests, such as driver's tests or standardized exams, are designed for humans. They make a lot of entirely implicit assumptions about failure modes and gaps in knowledge that are uniquely human. Automated systems work differently. They don't fail in the same way that humans fail, and therefore need different benchmarks.

Designing good benchmarks that probe GPT systems for common failure modes and weaknesses is actually quite difficult. Much more difficult than designing or training these systems, IME.

replies(12): >>35245981 #>>35246141 #>>35246208 #>>35246246 #>>35246355 #>>35246446 #>>35247376 #>>35249238 #>>35249439 #>>35250684 #>>35251205 #>>35252879 #
jstummbillig ◴[] No.35246246[source]
> Designing good benchmarks that probe GPT systems for common failure modes and weaknesses is actually quite difficult. Much more difficult than designing or training these systems, IME.

What do you think is the difficulty?

replies(1): >>35246300 #
thwayunion ◴[] No.35246300[source]
A good benchmark provides a strong quantitative or qualitative signal that a model has a specific capability, or does not have a specific flaw, within a given operating domain.

Each part of this is difficult -- identifying and characterizing the operating domain, figuring out how to empirically characterize a general abstract capability, figuring out how to empirically characterize a specific type of flaw, and characterizing the degree of confidence that a benchmark result gives within the domain. To say nothing of the actual work of building the benchmark.
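To make that concrete, here's a rough Python sketch (every name, number, and task here is hypothetical, and the interval is just a normal approximation to a binomial proportion) of the minimum a benchmark result needs to carry: a stated capability, an explicit operating domain, and an uncertainty estimate instead of a bare score.

    # Sketch: a benchmark result is only meaningful relative to a
    # declared operating domain, and it carries statistical uncertainty.
    # All task names and outcome data below are made up for illustration.
    import math

    def pass_rate_with_ci(outcomes, z=1.96):
        """Return (pass rate, 95% CI half-width) for a list of booleans."""
        n = len(outcomes)
        p = sum(outcomes) / n
        half_width = z * math.sqrt(p * (1 - p) / n)
        return p, half_width

    benchmark = {
        "capability": "two-digit addition",                    # what we claim to measure
        "operating_domain": "integers 10-99, English prompts, no tools",
        "outcomes": [True] * 87 + [False] * 13,                # 100 hypothetical trials
    }

    p, hw = pass_rate_with_ci(benchmark["outcomes"])
    print(f"{benchmark['capability']} within [{benchmark['operating_domain']}]: "
          f"{p:.0%} +/- {hw:.0%} (95% CI)")

Even this toy version forces you to answer the hard questions up front: what counts as in-domain, what a "pass" means, and how much the score could move on a rerun.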

replies(1): >>35246375 #
jstummbillig ◴[] No.35246375[source]
Sure – but how does this specifically concern GPT-like systems? Why not test them for concrete qualifications the way we test humans, using the tests we already designed for that purpose?
replies(3): >>35246479 #>>35246588 #>>35248793 #
sebzim4500 ◴[] No.35246479{4}[source]
The difference is the impact of contaminated datasets. Exam boards tend to reuse questions, either verbatim or slightly modified. This is not such a problem for assessing humans, because it is easier for a human to learn the material than to memorize 25 years of prior exams. Clearly that is not the case for current LLMs.
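A minimal sketch of what checking for that kind of contamination might look like, assuming you have plain-text access to both the exam questions and a sample of the training corpus (a big assumption in practice; the data here is invented and the threshold is arbitrary):

    # Rough contamination check: flag exam questions whose normalized
    # word n-grams overlap heavily with a reference corpus (e.g. past
    # papers that ended up in training data). Illustration only.
    import re

    def ngrams(text, n=8):
        words = re.findall(r"[a-z0-9]+", text.lower())
        return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

    def contamination_score(question, corpus_ngrams, n=8):
        q = ngrams(question, n)
        if not q:
            return 0.0
        return len(q & corpus_ngrams) / len(q)

    # Hypothetical training-corpus sample and exam questions.
    training_sample = "State and prove Pythagoras' theorem for a right-angled triangle ..."
    exam_questions = [
        "State and prove Pythagoras' theorem for a right-angled triangle.",
        "Explain why reuse of past exam questions in training data inflates measured capability on the same exam.",
    ]

    corpus = ngrams(training_sample)
    for q in exam_questions:
        score = contamination_score(q, corpus)
        flag = "LIKELY SEEN" if score > 0.5 else "ok"
        print(f"{flag:11s} overlap={score:.2f}  {q[:60]}")

Verbatim or lightly edited reuse shows up immediately; paraphrased reuse doesn't, which is part of why contamination is hard to rule out even when you try.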