
334 points by mooreds | 1 comment
Animats No.44486868
A really good point in that note:

"But the fundamental problem is that LLMs don’t get better over time the way a human would. The lack of continual learning is a huge huge problem. The LLM baseline at many tasks might be higher than an average human's. But there’s no way to give a model high level feedback. You’re stuck with the abilities you get out of the box."

That does seem to be a problem with neural nets.

There are AI-ish systems that don't have this problem. Waymo's Driver, for example. Waymo has a procedure where, every time their system has a disconnect or near-miss, they run simulations with lots of variants of the troublesome situation. The results are fed back into the Driver.

Somehow. They don't say how. But it's not an end-to-end neural net; Waymo tried that as a side project, and it was worse than the existing system. Waymo has something else, but few know what it is.
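
At a guess, the closed loop looks something like the sketch below. Every name in it (Scenario, generate_variants, the scenario fields) is invented for illustration; Waymo's actual pipeline is not public.

    # Hypothetical sketch of the simulation-feedback loop described
    # above. All names are invented; this is not Waymo's system.
    import random
    from dataclasses import dataclass

    @dataclass
    class Scenario:
        speed_mps: float            # ego-vehicle speed
        pedestrian_offset_m: float  # lateral distance to a pedestrian

    def generate_variants(seed: Scenario, n: int = 100) -> list[Scenario]:
        # Perturb one troublesome scenario into many nearby variants.
        return [Scenario(speed_mps=seed.speed_mps * random.uniform(0.8, 1.2),
                         pedestrian_offset_m=seed.pedestrian_offset_m
                             + random.uniform(-1.0, 1.0))
                for _ in range(n)]

    def feedback_loop(policy, near_miss: Scenario) -> list[Scenario]:
        # Variants the current policy fails on become new training and
        # regression cases for the next release of the driver.
        return [v for v in generate_variants(near_miss) if not policy(v)]

    # Toy usage: a "policy" that only copes with low speeds.
    toy_policy = lambda s: s.speed_mps < 15.0
    failures = feedback_loop(toy_policy, Scenario(14.0, 2.0))
    print(len(failures), "failing variants to feed back in")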

jrank No.44487725
Yes, LLMs don't improve as humans do, but they could use other tools to expand their capabilities, for example by writing programs in Prolog. I think the next step in AI will be LLMs learning to use better tools and strategies: architectures in which logic rules, heuristic algorithms, and small fine-tuned LLM agents are integrated as tools for a general LLM. I think new, more powerful architectures of this kind will mature in the near future. And there is an economic forcing function too: the push to develop AI applications for warfare.
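
A minimal sketch of that tool-routing idea, with everything stubbed out: call_llm stands in for a real chat-completion API, and the tools are toy functions, not real engines.

    # Sketch of an LLM routing subtasks to specialized tools.
    def call_llm(prompt: str) -> str:
        # Stub router; a real system would ask the model to choose.
        return "logic" if "prove" in prompt else "heuristic"

    def logic_tool(query: str) -> str:
        return f"exact answer to {query!r} from a rule engine"

    def heuristic_tool(task: str) -> str:
        return f"approximate answer to {task!r} from a search heuristic"

    TOOLS = {"logic": logic_tool, "heuristic": heuristic_tool}

    def solve(task: str) -> str:
        # The general LLM picks a tool, then the tool does the work.
        choice = call_llm(f"Pick one of {sorted(TOOLS)} for: {task}")
        return TOOLS.get(choice, heuristic_tool)(task)

    print(solve("prove ancestor(alice, carol)"))

The interesting part is the dispatch layer: exact inference comes from the rule engine rather than the LLM's weights, so improving a tool improves the system without retraining the model.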

Edited: I should add that a Prolog system could help the LLM continue learning by adding facts to its database and inferring new relations, for example using heuristics to suggest new models or directions to explore.
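
To make that concrete, here is a toy stand-in for such a database in plain Python (not a real Prolog engine): facts accumulate across sessions, and a fixed rule derives new relations from the growing fact base.

    # Toy fact base with forward chaining, standing in for Prolog.
    facts = set()  # triples like ("parent", "alice", "bob")

    def assert_fact(rel, a, b):
        facts.add((rel, a, b))

    def infer_ancestors():
        # ancestor(X, Z) :- parent(X, Z).
        # ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).
        changed = True
        while changed:
            new = {("ancestor", a, b) for (r, a, b) in facts if r == "parent"}
            new |= {("ancestor", a, c)
                    for (r1, a, b) in facts if r1 == "parent"
                    for (r2, b2, c) in facts if r2 == "ancestor" and b2 == b}
            changed = not new <= facts
            facts.update(new)

    assert_fact("parent", "alice", "bob")  # fact learned in one session
    assert_fact("parent", "bob", "carol")  # fact learned later
    infer_ancestors()
    print(("ancestor", "alice", "carol") in facts)  # True: inferred relation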