I wonder whether what happens when we dream is similar to what happens in AI models. We start with some model of reality, generate a scenario, and extrapolate from it. It pretty much always goes "off the rails" at some point; dreams don't stay realistic for long.
When we're awake we have continual input from the outside world, and because we're constantly observing it, that input keeps our mental model of the world accurate to reality.
Could it be that LLMs are essentially just dreaming? Could we feed them real-world inputs continually to let them "wake up"? I suspect more is needed; the separate training & inference phases of LLMs are quite unlike how humans work.