
336 points | mooreds | 2 comments
Animats:
A really good point in that note:

"But the fundamental problem is that LLMs don’t get better over time the way a human would. The lack of continual learning is a huge huge problem. The LLM baseline at many tasks might be higher than an average human's. But there’s no way to give a model high level feedback. You’re stuck with the abilities you get out of the box."

That does seem to be a problem with neural nets.

There are AIish systems that don't have this problem. Waymo's Driver, for example. Waymo has a procedure where, every time their system has a disengagement or a near-miss, they run simulations with many variants of the troublesome situation. Those are fed back into the Driver.

Somehow. They don't say how. But it's not an end-to-end neural net. Waymo tried that as a side project, and it was worse than the existing system. Waymo has something else, but few outside the company know what it is.
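
A sketch of what that loop might look like, purely hypothetical on my part (the scenario fields and the driver.simulate call are made-up names; Waymo doesn't publish their pipeline):

    import random

    def make_variants(scenario, n=100, jitter=0.15):
        # Perturb a troublesome scenario: nudge the other road user's
        # speed, lateral offset, and timing to cover the neighborhood
        # of situations around the original one.
        variants = []
        for _ in range(n):
            v = dict(scenario)
            v["other_agent_speed"] *= random.uniform(1 - jitter, 1 + jitter)
            v["other_agent_offset_m"] += random.uniform(-2.0, 2.0)
            v["trigger_delay_s"] = max(0.0, v["trigger_delay_s"] + random.uniform(-0.5, 0.5))
            variants.append(v)
        return variants

    def regression_suite(driver, scenario):
        # Run the driver against every variant; any failure becomes a
        # permanent test case, so the fix can't silently regress later.
        # driver.simulate is a stand-in here, not a real API.
        return [v for v in make_variants(scenario) if not driver.simulate(v).safe]

The point is that each failure permanently grows the test set, which is exactly the kind of accumulated improvement the quoted note says LLMs lack.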

raspasov:
Correct me if I'm wrong, but from what I've heard, Waymo employs heuristics, rules, neural networks, and various other techniques that are combined and organized by humans into a system.

It's not an end-to-end neural network.

"AIish" is a good description. It is, by design, not AGI.

Animats:
Waymo's system generates a map of the environment, with obstacles, other road users, and predictions of what other road users will do. That map can be evaluated, both by humans and by later data about what actually happened. Passengers get to watch a simplified version of that map. Early papers from the Google self-driving project showed more detailed maps. The driving system runs off that map.
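
As a toy sketch of that structure (my own field names, not Waymo's): the map is a timestamped set of tracked objects with forecasts, and each forecast can later be scored against what actually happened:

    from dataclasses import dataclass, field

    @dataclass
    class TrackedObject:
        obj_id: int
        kind: str              # "vehicle", "pedestrian", "cyclist", ...
        position: tuple        # (x, y) in the map frame, meters
        predicted_path: list   # forecast as [(t, x, y), ...]

    @dataclass
    class WorldModel:
        timestamp: float
        ego_pose: tuple
        objects: list = field(default_factory=list)

    def prediction_error(model, ground_truth):
        # Score each forecast against the positions actually observed
        # later -- the "evaluated by later data about what actually
        # happened" step. ground_truth maps obj_id -> [(t, x, y), ...].
        errors = {}
        for obj in model.objects:
            actual = ground_truth.get(obj.obj_id, [])
            errors[obj.obj_id] = [
                ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
                for (t, px, py), (_, ax, ay) in zip(obj.predicted_path, actual)
            ]
        return errors

Because the model is explicit, both a human and a metric like this can inspect it, which is what makes that feedback loop possible.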

So there's an internal model. Much criticism of AI has centered on the lack of an internal model. This system has one. It's specialized, but well matched to its task.

We see this in other robotics efforts, where there's a model and a plan before there's action. Other kinds of AI, especially "agentic" systems, may need that kind of explicit internal model. In a previous posting, about an AI system that was supposed to plan stocking for a vending machine, I suggested the system maintain a spreadsheet so that it didn't make obvious business mistakes.
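
In that spirit, the "spreadsheet" can be as small as a ledger the agent is required to consult before acting. A minimal sketch, assuming a cash balance and per-item costs (my invention; the original posting didn't specify an implementation):

    class Ledger:
        # Explicit running state the agent must check before each
        # decision, so basic business errors are caught mechanically
        # rather than left to the LLM's judgment.
        def __init__(self, cash=500.0):
            self.cash = cash
            self.inventory = {}    # item -> [unit_cost, quantity]

        def record_purchase(self, item, unit_cost, qty):
            assert unit_cost * qty <= self.cash, "would overdraw cash"
            self.cash -= unit_cost * qty
            entry = self.inventory.setdefault(item, [unit_cost, 0])
            entry[0], entry[1] = unit_cost, entry[1] + qty

        def sane_price(self, item, price):
            # Refuse to sell below cost -- the kind of obvious
            # mistake the explicit model exists to block.
            unit_cost = self.inventory.get(item, [0.0, 0])[0]
            return price >= unit_cost

The agent proposes actions, but the ledger's checks are ordinary code, so the obvious business mistakes get caught deterministically.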