334 points mooreds | 7 comments

Animats
A really good point in that note:

"But the fundamental problem is that LLMs don’t get better over time the way a human would. The lack of continual learning is a huge huge problem. The LLM baseline at many tasks might be higher than an average human's. But there’s no way to give a model high level feedback. You’re stuck with the abilities you get out of the box."

That does seem to be a problem with neural nets.

There are AI-ish systems that don't have this problem. Waymo's Driver, for example. Waymo has a procedure where, every time their system has a disconnect or near-miss, they run simulations with lots of variants of the troublesome situation. Those are fed back into the Driver.

Somehow. They don't say how. But it's not an end-to-end neural net. Waymo tried that, as a side project, and it was worse than the existing system. Waymo has something else, but few know what it is.
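
For flavor, a minimal sketch of what that kind of closed-loop pipeline could look like. Everything here is hypothetical (the scenario fields, the perturbation ranges, the driver/simulator interfaces), since Waymo hasn't published how the feedback actually works:

  import random

  def perturb(scenario, n_variants=100, seed=0):
      # Jitter the parameters of a troublesome scenario to produce
      # many variants. The field names are invented for illustration.
      rng = random.Random(seed)
      variants = []
      for _ in range(n_variants):
          v = dict(scenario)
          v["lead_vehicle_speed_mps"] *= rng.uniform(0.8, 1.2)
          v["pedestrian_offset_m"] += rng.uniform(-1.0, 1.0)
          v["reaction_delay_s"] = max(0.0, v["reaction_delay_s"] + rng.uniform(-0.2, 0.2))
          variants.append(v)
      return variants

  def feed_back_incidents(driver, simulator, incident_log):
      # For every disconnect or near-miss, simulate the variants and
      # turn any failures into new regression/training cases.
      # `driver` and `simulator` are stand-ins with made-up interfaces.
      for scenario in incident_log:
          for variant in perturb(scenario):
              outcome = simulator.run(driver, variant)
              if not outcome.safe:
                  driver.add_regression_case(variant, outcome)
      driver.update()  # retraining, rule changes, parameter tuning: unknown
      return driver

The point is just the shape of the loop: incidents generate simulated variants, variants surface failures, and the failures go back into whatever the Driver learns from.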

1. ei23
Why do so few ask: isn't it enough that current SOTA AI already makes us humans so much better every day? Exponential self-improvement seems like a very scary thing to me, and even if it all went right, humans would have to give up their pole position in the intelligence race. That will be very hard for many to swallow. If we really want self-improvement, we should get used to being useless :)
2. otabdeveloper4
> current SOTA AI already makes us humans so much better every day?

Citation needed. I've seen the opposite effect. (And yes, it is supported by research.)

3. ei23
> Citation needed. I've seen the opposite effect. (And yes, it is supported by research.)

Citation needed.

5. nkoren
https://www.media.mit.edu/publications/your-brain-on-chatgpt...
7. hagbarth
Sure, but that's what the post is about: AGI. You won't get to any reasonable definition of AGI without self-improvement.