
336 points mooreds | 3 comments
Animats No.44486868
A really good point in that note:

"But the fundamental problem is that LLMs don’t get better over time the way a human would. The lack of continual learning is a huge huge problem. The LLM baseline at many tasks might be higher than an average human's. But there’s no way to give a model high level feedback. You’re stuck with the abilities you get out of the box."

That does seem to be a problem with neural nets.

There are AIish systems that don't have this problem. Waymo's Driver, for example. Waymo has a procedure where, every time their system has a disconnect or near-miss, they run simulations with lots of variants on the troublesome situation. Those are fed back into the Driver.

Somehow. They don't say how. But it's not an end-to-end neural net. Waymo tried that as a side project, and it was worse than the existing system. Waymo has something else, but few know what it is.
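The general shape of such a loop is easy to sketch even without knowing Waymo's internals: take one logged trouble case, generate many perturbed variants in simulation, and keep the failures as regression tests and retraining targets. Here's a minimal Python illustration of that pattern. Everything in it (Scenario, its fields, simulate()) is hypothetical, not Waymo's actual pipeline:

    import random
    from dataclasses import dataclass, replace

    @dataclass
    class Scenario:
        # Hypothetical parameters describing one logged driving situation.
        ego_speed_mps: float
        lead_vehicle_gap_m: float
        pedestrian_offset_m: float
        road_friction: float

    def perturb(base: Scenario, rng: random.Random) -> Scenario:
        # Jitter each parameter to produce a variant of the troublesome case.
        return replace(
            base,
            ego_speed_mps=base.ego_speed_mps * rng.uniform(0.8, 1.2),
            lead_vehicle_gap_m=base.lead_vehicle_gap_m * rng.uniform(0.7, 1.3),
            pedestrian_offset_m=base.pedestrian_offset_m + rng.uniform(-1.0, 1.0),
            road_friction=min(1.0, max(0.1, base.road_friction * rng.uniform(0.9, 1.1))),
        )

    def mine_hard_cases(base: Scenario, simulate, n_variants: int = 1000):
        # Run many perturbed variants through the simulator and collect the
        # failures; those become regression tests and training targets for
        # the next release of the driving policy.
        rng = random.Random(0)
        failures = []
        for _ in range(n_variants):
            variant = perturb(base, rng)
            if not simulate(variant):  # simulate() returns True on a safe outcome
                failures.append(variant)
        return failures

The point of the loop is amplification: one real-world incident becomes many labeled hard cases, which is exactly the kind of targeted, high-level feedback that an off-the-shelf LLM can't absorb.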

replies(5): >>44487055 #>>44487352 #>>44487582 #>>44487686 #>>44487725 #
1. hnanon12341 No.44487055
Yeah, since most AI is trained on massive data sets, it also means it will take a while before the next massive data set comes along.
replies(2): >>44487148 #>>44487153 #
2. No.44487148
3. Animats No.44487153
Worse, the massive data set may not help much with mistakes. Larger LLMs do not seem to hallucinate less.