
625 points lukebennett | 3 comments
1. gchamonlive ◴[] No.42142809[source]
We should put a model in an actual body and let it loose in the world to build from experience. Inference is costly, though, so the robot would interact during one period and update its model during another, flushing the context window (short-term memory) into its training set (long-term memory).
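The alternating interact/consolidate scheme described above could be sketched roughly like this. Everything here is illustrative: the class, method names, and string-valued "actions" are hypothetical stand-ins, not a real robotics or training API.

```python
# Hypothetical sketch of the wake/consolidate loop: the agent acts for a
# while (filling a short-term buffer), then flushes that buffer into its
# long-term store and clears the context window. All names are illustrative.

class EmbodiedAgent:
    def __init__(self):
        self.short_term = []  # context window: recent experiences
        self.long_term = []   # stands in for the training set / weights

    def interact(self, observation):
        # "Wake" phase: act in the world and record the experience.
        action = f"respond_to({observation})"
        self.short_term.append((observation, action))
        return action

    def consolidate(self):
        # "Sleep" phase: flush short-term memory into the training set
        # (a real system would fine-tune here), then clear the context.
        self.long_term.extend(self.short_term)
        n = len(self.short_term)
        self.short_term.clear()
        return n  # number of experiences consolidated


agent = EmbodiedAgent()
for obs in ["door", "stairs", "cup"]:
    agent.interact(obs)
moved = agent.consolidate()  # → 3; short-term memory is now empty
```

In a real system the `consolidate` step would be a training run over the buffered experiences, which is exactly why the comment splits it off from the (cheaper) interaction phase.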
replies(2): >>42142858 #>>42142901 #
2. bbor ◴[] No.42142858[source]
There are people trying this, in both simulated and real environments; look into the “embodiment” camp if you're interested in how they're doing. Many experts think AGI is unreachable without this, and I think the unexpected intuitive capabilities of LLMs are strong support for that thesis, albeit in a non-spatial way.

Kant describes two human “senses”: the intensive sense of time and the extensive sense of space. In this paradigm, spatial experience would be inextricably tied to all forms of logic, because it trains the cognitive faculties that underlie all complex (discriminative?) thought.

3. jfoster ◴[] No.42142901[source]
That seems to be what Tesla is planning to do with Optimus.