
340 points by agomez314 | 6 comments
thwayunion ◴[] No.35245821[source]
Absolutely correct.

We already learned this lesson with self-driving cars. Passing a driver's test was already possible around 2015, but SDCs clearly aren't ready for L5 deployment even today.

There are also a lot of excellent examples of failure modes in object detection benchmarks.

Tests, such as driver's tests or standardized exams, are designed for humans. They make a lot of entirely implicit assumptions about failure modes and gaps in knowledge that are uniquely human. Automated systems work differently. They don't fail in the same way that humans fail, and therefore need different benchmarks.

Designing good benchmarks that probe GPT systems for common failure modes and weaknesses is actually quite difficult. Much more difficult than designing or training these systems, IME.

replies(12): >>35245981 #>>35246141 #>>35246208 #>>35246246 #>>35246355 #>>35246446 #>>35247376 #>>35249238 #>>35249439 #>>35250684 #>>35251205 #>>35252879 #
Waterluvian ◴[] No.35246446[source]
On the topic of the driver's test analogy: I've known people who have passed the test and still said, "I don't yet feel ready to drive during rush hour or in downtown Toronto." Then at some point in the future they recognize that they are ready and wade into trickier situations.

I wonder how self-aware these systems can be? Could ChatGPT be expected to say things like, "I can pass a state bar exam but I'm not ready to be a lawyer because..."

replies(3): >>35246728 #>>35246735 #>>35246955 #
1. tsukikage ◴[] No.35246735[source]
The problem ChatGPT and the other language models currently in the zeitgeist are trying to solve is, "given this sequence of symbols, what is a symbol that is likely to come next, as rated by some random on fiverr.com?"

Turns out that this is sufficient to autocomplete things like written tests.

Such a system is also absolutely capable of coming up with sentences like "I can pass a state bar exam but I'm not ready to be a lawyer because..." - or, indeed, sentences with the opposite meaning.

It would, however, be a mistake to draw any conclusions about the system's actual capabilities and/or modes of failure from the things its outputs mean to the human reader; in much the same way, if you have dice with a bunch of words on them and you roll "I", "am", "sentient" in that order, that event is not yet evidence for the dice's sentience.
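To make that framing concrete, here is a minimal, purely illustrative sketch of the "predict the next symbol" loop, using a toy bigram counter in place of a real model (the training snippet and every name in it are made up for illustration):

    from collections import Counter, defaultdict
    import random

    # Toy "language model": count which word follows which in a tiny corpus,
    # then sample the next word in proportion to those counts.
    corpus = "i am ready i am not ready to be a lawyer i am sentient".split()

    follow = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follow[prev][nxt] += 1

    def next_symbol(prev_word):
        counts = follow[prev_word]
        if not counts:                        # dead end: nothing ever followed this word
            return corpus[0]
        words = list(counts)
        weights = [counts[w] for w in words]
        return random.choices(words, weights=weights)[0]

    # "Autocomplete" from a prompt: the sampler neither knows nor cares what
    # the words mean; it only reproduces statistical structure it has seen.
    word = "i"
    for _ in range(6):
        print(word, end=" ")
        word = next_symbol(word)
    print(word)

A production model replaces the bigram table with a transformer and shapes the sampling with human preference ratings (the fiverr.com step above), but the interface is the same: symbols in, a plausible next symbol out, with no requirement that the output track anything about the system's own state.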

replies(2): >>35246804 #>>35259936 #
2. Waterluvian ◴[] No.35246804[source]
I generally agree. But I remain cautiously open to the possibility that our brains are also little more than that. Maybe we have no capacity for that kind of introspection but demonstrate something that looks like it, just because of how sections of our brains light up in relation to other sections.
replies(2): >>35247203 #>>35247257 #
3. tsukikage ◴[] No.35247203[source]
I don't believe that AI models can become introspective without such a capability either being explicitly designed in or being implicitly trained in. Explicit design is difficult, since we don't really know how our own brains accomplish this feat and we don't have any other examples to crib from. Implicit training is difficult, because the random person on fiverr.com rating a given output during training doesn't really know anything about the model's internal state, and therefore cannot rate the output based on how introspective it actually is. Moreover, extracting information about a model's actual internal state in a form humans can understand is an active area of research, which is to say we don't really know how to do it, so we couldn't provide enough feedback to train the ability to introspect even if we were trying to.

I have no doubt that both these research areas can be improved on and that eventually either or both problems will be solved. However, the current generation of chatbots is not even trying for this.

4. marcosdumay ◴[] No.35247257[source]
> But I remain cautiously open to the possibility that our brains are also little more than that.

It's well known that our brains are nothing like the neural networks people run on computers today.

replies(1): >>35254113 #
5. TexanFeller ◴[] No.35254113{3}[source]
Just because neural nets aren't structured the same way as the brain at a low level doesn't mean they can't end up implementing some of the same strategies.
6. IIAOPSW ◴[] No.35259936[source]
It is evidence, just not great evidence on its own. Now if you rolled the dice a few dozen times and they came out outrageously skewed towards "I" "am" "sentient", maybe it's time to consider the possibility that the dice are sentient.
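A back-of-the-envelope version of that intuition, with made-up numbers (say each die carries 100 words): one fair roll produces "I am sentient" with probability 1/100^3, so seeing it in, say, 20 of 30 rolls is astronomically unlikely if the dice are fair.

    from math import comb

    words_per_die = 100                 # assumed number of words on each die
    p = (1 / words_per_die) ** 3        # chance of "I am sentient" on one fair roll

    rolls, hits = 30, 20
    # Probability of at least `hits` such outcomes in `rolls` fair rolls
    p_at_least = sum(comb(rolls, k) * p**k * (1 - p)**(rolls - k)
                     for k in range(hits, rolls + 1))
    print(p_at_least)                   # effectively zero: "fair dice" is a poor explanation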