
340 points agomez314 | 1 comment
thwayunion ◴[] No.35245821[source]
Absolutely correct.

We already know this from self-driving cars: passing a driver's test was already possible around 2015, but SDCs clearly aren't ready for L5 deployment even today.

There are also a lot of excellent examples of failure modes in object detection benchmarks.

Tests, such as driver's tests or standardized exams, are designed for humans. They make a lot of entirely implicit assumptions about failure modes and gaps in knowledge that are uniquely human. Automated systems work differently. They don't fail in the same way that humans fail, and therefore need different benchmarks.

Designing good benchmarks that probe GPT systems for common failure modes and weaknesses is actually quite difficult. Much more difficult than designing or training these systems, IME.
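To make "don't fail in the same way" concrete, here's the kind of probe I mean -- a rough sketch in Python, where ask_model is just a stand-in for whatever system is under test and the paraphrases are invented for illustration:

    # Humans who know an answer give it under any phrasing; LLMs can flip
    # answers under trivial paraphrase. A driver's test never checks for
    # this, because it never needs to.
    def ask_model(prompt: str) -> str:
        return "about 75 feet"  # stub: wire this to the system under test

    paraphrases = [
        "What is the typical stopping distance from 30 mph on dry asphalt?",
        "On dry asphalt, how far does a car travel before stopping from 30 mph?",
        "Stopping distance, 30 mph, dry road -- roughly how many feet?",
    ]

    answers = {ask_model(p).strip().lower() for p in paraphrases}
    if len(answers) > 1:
        print("inconsistent under paraphrase:", answers)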

replies(12): >>35245981 #>>35246141 #>>35246208 #>>35246246 #>>35246355 #>>35246446 #>>35247376 #>>35249238 #>>35249439 #>>35250684 #>>35251205 #>>35252879 #
jstummbillig ◴[] No.35246246[source]
> Designing good benchmarks that probe GPT systems for common failure modes and weaknesses is actually quite difficult. Much more difficult than designing or training these systems, IME.

What do you think is the difficulty?

replies(1): >>35246300 #
thwayunion ◴[] No.35246300[source]
A good benchmark provides a strong quantitative or qualitative signal that a model has a specific capability, or does not have a specific flaw, within a given operating domain.

Each part of this is difficult -- identifying/characterizing the operating domain, figuring out how to empirically characterize a general abstract capability, figuring out how to empirically characterize a specific type of flaw, and characterizing the degree of confidence that a benchmark result gives within that domain. To say nothing of the actual work of building the benchmark.
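Put in code rather than prose, this is roughly what a benchmark spec has to pin down before any test items get written (a sketch; the field names are mine, not any standard):

    from dataclasses import dataclass, field

    @dataclass
    class BenchmarkSpec:
        # where the results are claimed to hold at all
        operating_domain: str
        # observations that would count as evidence of the capability
        capability_probes: list = field(default_factory=list)
        # observations that would count as evidence the flaw is absent
        flaw_probes: list = field(default_factory=list)
        # how much a passing score is actually supposed to tell you
        confidence_note: str = ""

    spec = BenchmarkSpec(
        operating_domain="grade-school word problems, English only",
        capability_probes=["multi-step arithmetic with unseen numbers"],
        flaw_probes=["answers flip under paraphrase", "solutions copied verbatim"],
        confidence_note="a pass bounds the error rate on held-out items, nothing more",
    )
    print(spec)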

replies(1): >>35246375 #
jstummbillig ◴[] No.35246375[source]
Sure – but how does this specifically concern GPT-like systems? Why not test them for concrete qualifications the way we test humans, using the tests we already designed to measure those qualifications in humans?
replies(3): >>35246479 #>>35246588 #>>35248793 #
simiones ◴[] No.35248793[source]
To take a simplistic example: a human who can provide a long, motivated solution to a math problem that you re-use every three years likely understands the math behind it, while an LLM providing the same solution is likely just copying it from the training set and would be fully unable to solve a similar problem that did not appear in its training data.

Lots of exams are designed to prove certain knowledge given safe assumptions about the known limitations of humans, assumptions which are completely wrong for machines. The relative difficulty of rote memorization versus having an accurate domain model is perhaps the most obvious one, but there are others.
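One way to probe exactly that gap is to perturb the problem and see whether the performance survives -- a rough sketch, with the problem template, the stub, and the numbers all invented for illustration:

    import random

    def ask_model(prompt: str) -> str:
        return "150"  # stub: wire this to the system under test

    # The canonical exam problem is probably in the training data; variants
    # with fresh numbers test the domain model rather than recall.
    def make_variant():
        speed, hours = random.randint(3, 60), random.randint(2, 12)
        q = f"A train travels at {speed} km/h for {hours} hours. How many km does it cover?"
        return q, speed * hours

    trials, hits = 50, 0
    for _ in range(trials):
        question, expected = make_variant()
        try:
            hits += int(ask_model(question)) == expected
        except ValueError:
            pass  # non-numeric answer counts as a miss
    # a high score on the verbatim exam plus a low score here points at memorization
    print(f"perturbed accuracy: {hits}/{trials}")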

Also, the opposite problem will often exist: if the exam is presented to the AI in the wrong format, we may underestimate its abilities (e.g. a very similar prompt may elicit a significantly better response).
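That side is easy to check too: run the same item through a few surface formats and look at the spread (the formats here are made up; the point is only whether the spread exists):

    def ask_model(prompt: str) -> str:
        return "B"  # stub: wire this to the system under test

    question = "Which gas makes up most of Earth's atmosphere?"
    options = "A) Oxygen  B) Nitrogen  C) Argon  D) Carbon dioxide"
    formats = [
        f"{question}\n{options}\nAnswer:",
        f"Question: {question}\nOptions: {options}\nReply with a single letter.",
        f"You are sitting a science exam.\n{question}\n{options}\nThink it through, then give the letter.",
    ]

    answers = [ask_model(f).strip()[:1].upper() for f in formats]
    # if these disagree, the exam score is measuring the prompt format, not the model
    print("answer per format:", answers)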

replies(2): >>35249704 #>>35251232 #
thwayunion ◴[] No.35249704[source]
> Lots of exams are designed to prove certain knowledge given safe assumptions of the known limitations of humans, which are completely wrong for machines. The relative difficulty of rote memorization versus having an accurate domain model is perhaps the most obvious one, but there are others.

This paragraph is a gem. Well said.