334 points mooreds | 15 comments
1. Animats ◴[] No.44486868[source]
A really good point in that note:

"But the fundamental problem is that LLMs don’t get better over time the way a human would. The lack of continual learning is a huge huge problem. The LLM baseline at many tasks might be higher than an average human's. But there’s no way to give a model high level feedback. You’re stuck with the abilities you get out of the box."

That does seem to be a problem with neural nets.

There are AIish systems that don't have this problem. Waymo's Driver, for example. Waymo has a procedure where, every time their system has a disconnect or near-miss, they run simulations with lots of variants on the troublesome situation. Those are fed back into the Driver.

Somehow. They don't say how. But it's not an end-to-end neural net. Waymo tried that as a side project, and it was worse than the existing system. Waymo has something else, but few know what it is.
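
To make the idea concrete, here's a rough Python sketch of what a "replay the incident with variants" loop could look like. The names and the scenario representation are made up for illustration; nothing here reflects Waymo's actual pipeline.

    # Hypothetical "replay with variants" loop: take an incident, perturb it,
    # collect the variants the current policy still fails, and hand those to
    # whatever process actually updates the driving system.
    from dataclasses import dataclass, replace
    import random

    @dataclass
    class Scenario:
        ego_speed: float       # m/s
        actor_distance: float  # metres to the nearest other road user
        actor_speed: float     # m/s

    def make_variants(seed: Scenario, n: int = 100) -> list[Scenario]:
        rng = random.Random(0)
        return [replace(seed,
                        ego_speed=seed.ego_speed * rng.uniform(0.8, 1.2),
                        actor_distance=seed.actor_distance * rng.uniform(0.7, 1.3),
                        actor_speed=seed.actor_speed * rng.uniform(0.8, 1.2))
                for _ in range(n)]

    def improvement_loop(policy, incident: Scenario) -> list[Scenario]:
        # policy(scenario) -> True if handled without a near-miss
        return [s for s in make_variants(incident) if not policy(s)]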

replies(5): >>44487055 #>>44487352 #>>44487582 #>>44487686 #>>44487725 #
2. hnanon12341 ◴[] No.44487055[source]
Yeah, since most AI is trained on massive data sets, it also means it will take a while before you get your next massive data set.
replies(2): >>44487148 #>>44487153 #
3. ◴[] No.44487148[source]
4. Animats ◴[] No.44487153[source]
Worse, the massive data set may not help much with mistakes. Larger LLMs do not seem to hallucinate less.
5. ei23 ◴[] No.44487352[source]
Why do so few ask: isn't it enough that current SOTA AI already makes us humans so much better every day? Exponential self-improvement seems like a very scary thing to me, and even if it all went right, humans would have to give up their pole position in the intelligence race. That will be very hard to swallow for many. If we really want self-improvement, we should get used to being useless :)
replies(2): >>44487478 #>>44488119 #
6. otabdeveloper4 ◴[] No.44487478[source]
> current SOTA AI already makes us humans so much better every day?

Citation needed. I've seen the opposite effect. (And yes, it is supported by research.)

replies(1): >>44487523 #
7. ei23 ◴[] No.44487523{3}[source]
> Citation needed. I've seen the opposite effect. (And yes, it is supported by research.)

Citation needed.

replies(3): >>44487668 #>>44487691 #>>44487819 #
8. raspasov ◴[] No.44487582[source]
Correct me if I'm wrong, but from what I've heard, Waymo employs heuristics, rules, neural networks, and various other techniques that are combined and organized by humans into a system.

It's not an end-to-end neural network.

"AIish" is a good description. It is, by design, not AGI.

replies(1): >>44493137 #
9. ◴[] No.44487668{4}[source]
10. ozim ◴[] No.44487686[source]
If someone wants to get a feel for how this is a problem with neural nets, watch John Carmack's recent talk:

https://www.youtube.com/watch?v=4epAfU1FCuQ

The part on this exact point is around the 30-minute mark.

11. nkoren ◴[] No.44487691{4}[source]
https://www.media.mit.edu/publications/your-brain-on-chatgpt...
12. jrank ◴[] No.44487725[source]
Yes, LLMs don't improve as humans do, but they could use other tools, for example designing programs in Prolog, to expand their capabilities. I think the next step in AI will be LLMs being able to use better tools or strategies: for example, architectures in which logic rules, heuristic algorithms, and small fine-tuned LLM agents are integrated as tools for LLMs. I think new, more powerful architectures for helping LLMs are going to mature in the near future. Furthermore, there is an economic force pushing the development of AI applications for warfare.

Edited: I should add that a Prolog system could help the LLM continue learning by adding facts to its database and inferring new relations, for example using heuristics to suggest new models or avenues for exploration.
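
For what it's worth, here is a minimal Python sketch of that idea: an external fact store the LLM can assert facts into and query across sessions, with naive forward chaining standing in for the Prolog inference. Everything here is illustrative, not a real integration.

    # Tiny stand-in for a Prolog-style knowledge base used as an LLM tool:
    # the model asserts facts as it learns them, rules derive new relations,
    # and later sessions can query what was accumulated.
    class FactStore:
        def __init__(self):
            self.facts = set()   # tuples like ("parent", "ann", "bob")
            self.rules = []      # (head_predicate, body_fn) pairs

        def assert_fact(self, *fact):
            self.facts.add(tuple(fact))
            self._infer()

        def add_rule(self, head, body_fn):
            self.rules.append((head, body_fn))
            self._infer()

        def _infer(self):
            # naive forward chaining until nothing new is derived
            changed = True
            while changed:
                changed = False
                for head, body_fn in self.rules:
                    for args in body_fn(self.facts):
                        new = (head, *args)
                        if new not in self.facts:
                            self.facts.add(new)
                            changed = True

        def query(self, predicate):
            return [f for f in self.facts if f[0] == predicate]

    kb = FactStore()
    kb.add_rule("grandparent", lambda facts: [
        (a, c)
        for (_, a, b) in (f for f in facts if f[0] == "parent")
        for (_, b2, c) in (f for f in facts if f[0] == "parent")
        if b2 == b])
    kb.assert_fact("parent", "ann", "bob")
    kb.assert_fact("parent", "bob", "cid")
    print(kb.query("grandparent"))  # [('grandparent', 'ann', 'cid')]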

13. ◴[] No.44487819{4}[source]
14. hagbarth ◴[] No.44488119[source]
Sure, but that's what the post is about: AGI. You won't get to any reasonable definition of AGI without self-improvement.
15. Animats ◴[] No.44493137[source]
Waymo's system generates a map of the environment, with obstacles, other road users, and predictions of what other road users will do. That map can be evaluated, both by humans and by later data about what actually happened. Passengers get to watch a simplified version of that map. Early papers from the Google self-driving project showed more detailed maps. The driving system runs off that map.

So there's an internal model. Much criticism of AI has centered on the lack of an internal model. This system has one. It's specialized, but well matched to its task.
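
As a rough illustration of what an evaluable internal model means here (not Waymo's actual representation), imagine something like:

    # Illustrative world-model snapshot: tracked road users plus predictions
    # that can later be scored against what actually happened.
    from dataclasses import dataclass

    @dataclass
    class TrackedAgent:
        kind: str                   # "pedestrian", "vehicle", ...
        position: tuple             # (x, y) in metres, ego frame
        predicted_position: tuple   # expected (x, y) one second out

    @dataclass
    class WorldModel:
        timestamp: float
        agents: list

        def prediction_error(self, observed):
            # observed: {agent_index: (x, y)} from later data; the average
            # miss distance is something humans or a replay pipeline can
            # evaluate directly.
            errs = [((a.predicted_position[0] - observed[i][0]) ** 2 +
                     (a.predicted_position[1] - observed[i][1]) ** 2) ** 0.5
                    for i, a in enumerate(self.agents) if i in observed]
            return sum(errs) / len(errs) if errs else 0.0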

We see this in other robotics efforts, where there's a model and a plan before there's action. Other kinds of AI, especially "agentic" systems, may need that kind of explicit internal model. In a previous posting, about an AI system which was supposed to plan stocking for a vending machine, I suggested that there should be a spreadsheet maintained by the system, so it didn't make obvious business mistakes.
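
A minimal sketch of that spreadsheet idea, assuming a hypothetical vending-machine agent: every proposed action has to pass through an explicit ledger that rejects obviously bad business moves before anything is executed.

    # Hypothetical ledger acting as the agent's explicit internal model:
    # the LLM proposes restock/sell actions, and the ledger vetoes ones that
    # overdraw cash, sell below cost, or sell stock that isn't there.
    from dataclasses import dataclass, field

    @dataclass
    class Item:
        unit_cost: float
        price: float
        stock: int = 0

    @dataclass
    class Ledger:
        cash: float
        items: dict = field(default_factory=dict)

        def restock(self, name, qty, unit_cost, price):
            if qty * unit_cost > self.cash:
                raise ValueError("restock would overdraw the account")
            if price <= unit_cost:
                raise ValueError("selling below cost")
            self.cash -= qty * unit_cost
            item = self.items.setdefault(name, Item(unit_cost, price))
            item.stock += qty

        def sell(self, name):
            item = self.items[name]
            if item.stock <= 0:
                raise ValueError("nothing left to sell")
            item.stock -= 1
            self.cash += item.price

    ledger = Ledger(cash=100.0)
    ledger.restock("cola", qty=40, unit_cost=0.50, price=1.50)
    ledger.sell("cola")
    print(round(ledger.cash, 2))  # 81.5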