
226 points by treesciencebot | 4 comments
quantumHazer No.43799333
Is this a solo/personal project? If so, it is indeed very cool.

Is OP the blog's author? In the post, the author says the purpose of the project is to show why NNs are truly special, and I'd like a more articulate view of why they think that. Good work anyway!

replies(2): >>43799344, >>43799549
1. ollin No.43799549
Yes! This was a solo project done in my free time :) to learn about world models (WMs) and get more practice training GANs.

The special aspect of NNs (in the context of simulating worlds) is that they can mimic entire worlds from videos alone, without access to the source code (in the case of Pokémon), or even when no source code ever existed (as is the case for the real-world forest trail mimicked in this post). They mimic the entire interactive behavior of the world, not just the geometry (note e.g. the autoexposure that appears when you look at the sky, which was never explicitly programmed in).
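
To make that concrete, here is a minimal sketch of the action-conditioned next-frame idea (all names, shapes, and layer choices are hypothetical, for illustration; the model in the post is a GAN and differs in detail):

    import torch
    import torch.nn as nn

    class TinyWorldModel(nn.Module):
        """Toy action-conditioned next-frame predictor (hypothetical design).

        Given a stack of recent RGB frames plus the current control input,
        predict the next frame. Everything is learned from video alone --
        no access to the world's source code is needed.
        """
        def __init__(self, context_frames: int = 4, num_actions: int = 8):
            super().__init__()
            self.encoder = nn.Sequential(          # frames -> latent feature map
                nn.Conv2d(3 * context_frames, 64, 4, stride=2, padding=1),
                nn.ReLU(),
                nn.Conv2d(64, 128, 4, stride=2, padding=1),
                nn.ReLU(),
            )
            self.action_embed = nn.Embedding(num_actions, 128)
            self.decoder = nn.Sequential(          # latent -> next RGB frame
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),
                nn.ReLU(),
                nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),
                nn.Sigmoid(),
            )

        def forward(self, frames: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
            # frames: (B, 3 * context_frames, H, W); action: (B,) int64
            h = self.encoder(frames)
            a = self.action_embed(action)[:, :, None, None]  # broadcast over H x W
            return self.decoder(h + a)  # next frame, conditioned on the control input

At inference time you roll this out autoregressively: each predicted frame is appended to the context and fed back in, which is what makes the learned world interactive.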

Although the neural world in the post is a toy project, and quite far from generating photorealistic frames with "trees that bend in the wind, lilypads that bob in the rain, birds that sing to each other", I think getting better results is mostly a matter of scale. See e.g. the GAIA-2 results (https://wayve.ai/wp-content/uploads/2025/03/generalisation_0..., https://wayve.ai/wp-content/uploads/2025/03/unsafe_ego_01_le...) for an example of what WMs can do without the realtime-rendering-in-a-browser constraints :)

replies(2): >>43799900, >>43801326
2. janalsncm No.43799900
You mentioned it took 100 GPU-hours; what GPU did you train on?
replies(1): >>43800142
3. ollin No.43800142
Mostly 1x A10 (though I switched to 1x GH200 briefly at the end; Lambda has a sale going). The network used in the post is very tiny, but I had to train for a really long time with a large batch size to get somewhat-stable results.
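
For anyone wondering how a large effective batch fits on a single A10: gradient accumulation is the usual trick. A generic sketch (dummy model and random data; not my actual training code):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Stand-in model and data; the point is the accumulation pattern.
    model = nn.Conv2d(3, 3, 3, padding=1)
    optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)

    micro_batch, accum_steps = 4, 16   # effective batch = 4 * 16 = 64

    optimizer.zero_grad(set_to_none=True)
    for step in range(1000):
        frames = torch.randn(micro_batch, 3, 64, 64)  # dummy input frames
        target = torch.randn(micro_batch, 3, 64, 64)  # dummy next frames
        loss = F.mse_loss(model(frames), target)
        (loss / accum_steps).backward()   # accumulate averaged gradients
        if (step + 1) % accum_steps == 0:
            optimizer.step()              # one update per effective batch
            optimizer.zero_grad(set_to_none=True)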
4. attilakun No.43801326
Amazing project. This has the same feel as Karpathy’s classic “The Unreasonable Effectiveness of Recurrent Neural Networks” blog post. I think in 10 years’ time we will look back and say “wow, this is how it started.”