
760 points MindBreaker2605 | 1 comments
numpy-thagoras No.45897574
Good. The world model is absolutely the right play in my opinion.

AI agents like LLMs make great use of pre-computed information. Providing a comprehensive but efficient world model (one where more detail is available wherever the agent is paying more attention, given a specific task) will definitely enable new kinds of autonomous agents.
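A rough sketch of what an agent-facing interface to such a model could look like (all names here are hypothetical, just to make the "more detail where attention goes" idea concrete):

    from dataclasses import dataclass

    @dataclass
    class Region:
        center: tuple   # (x, y, z) the agent is currently attending to
        radius: float   # spatial extent it cares about

    class WorldModel:
        def __init__(self, state):
            self.state = state  # underlying learned or simulated world state

        def query(self, region, attention):
            # resolution of the returned prediction scales with attention,
            # so detail is only spent where the current task needs it
            resolution = max(1, int(attention * 100))
            return self._predict(region, resolution)

        def _predict(self, region, resolution):
            # placeholder for the actual predictive rollout
            return {"center": region.center, "radius": region.radius,
                    "resolution": resolution}

    model = WorldModel(state=None)
    coarse = model.query(Region(center=(0, 0, 0), radius=50.0), attention=0.1)
    fine = model.query(Region(center=(0, 0, 0), radius=2.0), attention=0.9)

The point of the sketch is just the shape of the contract: the agent asks about a region with some attention budget, and the model answers at a matching level of detail.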

Swarms of these, acting in concert or with some hive mind, could be how we get to AGI.

I wish I could help, world models are something I am very passionate about.

sebmellen No.45897629
Can you explain this “world model” concept to me? How do you actually interface with a model like this?
natch No.45898143
He is one of those people who think that humans have a direct experience of reality, not mediated by, as Alan Kay put it, three pounds of oatmeal. So he thinks a language model cannot be a world model, despite our own contact with reality being mediated through a myriad of filters and funhouse-mirror distortions. Our vision transposes left and right and delivers images to our retinas upside down, for gawd's sake. He imagines none of that is the case, and that if only he can build computers more like us, they will be in direct contact with the world, and then he can (he thinks) make a model that is better at understanding the world.
dragochat No.45899674
the fact that a not-so-direct experience of reality produces "good enough" results (e.g. human intelligence) doesn't mean that a more direct experience of reality won't produce much better results, and it certainly doesn't mean it can't produce those better results in AI

your whole reasoning is neither here nor there, and it attacks a straw man: YLC for sure knows that human experience of reality is heavily modified and distorted

but he also knows, and I'd bet he's very right on this, that unlike LLMs we don't "sip reality through a narrow straw of tokens/words", and we don't learn just from our (or approved) written-down notes, and only under very specific and expensive circumstances (training runs)

anything closer to a more direct world model (LLMs are of course world models too, just at a very indirect level) has a very high likelihood of yielding lots of benefits