760 points MindBreaker2605 | 1 comment
numpy-thagoras No.45897574
Good. The world model is absolutely the right play in my opinion.

AI agents like LLMs make great use of pre-computed information. Providing a comprehensive but efficient world model (one where more detail is available wherever the agent is paying more attention, given a specific task) would go a long way toward enabling new kinds of autonomous agents.
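
Roughly what I mean, as a toy Python sketch (every name here is made up for illustration, not any real library): a world model that serves fine-grained detail only where task attention is high, and cheap summaries everywhere else.

    from dataclasses import dataclass

    @dataclass
    class Region:
        name: str
        coarse: str   # cheap, always-available summary
        fine: str     # expensive, high-detail representation

    class WorldModel:
        def __init__(self, regions):
            self.regions = {r.name: r for r in regions}

        def query(self, name, attention):
            # Serve fine detail only where the task is paying attention,
            # so the overall representation stays cheap.
            region = self.regions[name]
            return region.fine if attention > 0.5 else region.coarse

    model = WorldModel([
        Region("kitchen", coarse="room with appliances",
               fine="mug on counter, kettle at 80C, drawer ajar"),
        Region("garden", coarse="outdoor area",
               fine="rose bed dry, gate unlocked, hose coiled"),
    ])
    print(model.query("kitchen", attention=0.9))  # task focus -> fine detail
    print(model.query("garden", attention=0.1))   # periphery -> coarse summary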

Swarms of these, acting in concert or with some hive mind, could be how we get to AGI.
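
In the simplest form I'm picturing something like a blackboard architecture: agents pooling local observations into one shared model. A purely hypothetical sketch:

    # The hive's shared world model: observation -> best confidence so far.
    shared_world = {}

    def report(observation, confidence):
        # Each agent contributes local observations; the swarm keeps
        # whichever report it is most confident about.
        prior = shared_world.get(observation, 0.0)
        shared_world[observation] = max(prior, confidence)

    # Three agents observing overlapping parts of the environment.
    report("door_open", 0.9)
    report("door_open", 0.4)   # weaker duplicate is absorbed
    report("light_on", 0.7)
    print(shared_world)        # {'door_open': 0.9, 'light_on': 0.7}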

I wish I could help; world models are something I am very passionate about.

replies(2): >>45897629 >>45901238
1. kypro No.45901238
> Swarms of these, acting in concert or with some hive mind, could be how we get to AGI.

There's absolutely no reason to think this. In fact, all of the evidence we have to this point suggests that scaling intelligence horizontally doesn't increase capabilities – you have to scale vertically.
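
A toy way to see this: majority-voting n independent copies of an agent sharpens reliability on tasks each copy can already sometimes solve, but it never lifts the swarm past what a single member can do. Assuming independent agents that each solve a given task with probability p:

    from math import comb

    def majority_success(p, n):
        # P(a majority of n independent agents answer correctly), n odd.
        k = n // 2 + 1
        return sum(comb(n, i) * p**i * (1 - p)**(n - i)
                   for i in range(k, n + 1))

    for p in (0.6, 0.01):        # per-agent success rate on a task
        for n in (1, 9, 101):    # swarm size
            print(f"p={p}, n={n}: majority wins {majority_success(p, n):.3f}")
    # p=0.6 climbs toward 1.0 as n grows; p=0.01 stays ~0 no matter
    # how many agents vote. The swarm never solves what no member can.
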

Additionally, as it stands I'd argue there's foundational architectural advancements needed before artificial neutral networks can learn and reason at the same level (or better) than humans across a wide variety of tasks. I suspect when we solve this for LLMs the same techniques could be applied to world models. Fundamentally, the question to ask here is whether AGI is io dependant, and I see no reason to believe this to be the case – if someone removes your eyes and cuts off your hands they don't make you any less generally intelligent.