
142 points | by markisus | 1 comment

LiveSplat is a system for turning RGBD camera streams into Gaussian splat scenes in real time. The system works by passing all the RGBD frames into a feedforward neural net that outputs the current scene as Gaussian splats. These splats are then rendered in real time. I've put together a demo video at the link above.
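
For a concrete picture of the pipeline, here is a minimal PyTorch sketch of the idea. Everything in it (SplatNet, the per-pixel splat parameterization, the shapes) is an illustrative assumption, not LiveSplat's actual code or API:

    import torch
    import torch.nn as nn

    class SplatNet(nn.Module):
        """Toy stand-in for the feedforward net (hypothetical, not LiveSplat's).

        Maps a stack of RGBD frames to per-pixel Gaussian splat parameters:
        3D mean, log-scale, rotation quaternion, RGB color, opacity = 14 channels.
        """
        def __init__(self, num_cams: int = 3):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(num_cams * 4, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 14, 3, padding=1),
            )

        def forward(self, rgbd):  # rgbd: (B, num_cams * 4, H, W)
            out = self.net(rgbd)  # (B, 14, H, W): one splat per input pixel
            means, log_scales, quats, colors, opacity = torch.split(
                out, [3, 3, 4, 3, 1], dim=1)
            return means, log_scales, quats, colors, opacity.sigmoid()

    # The "live" aspect: re-run the net on every new set of RGBD frames.
    model = SplatNet().eval()
    with torch.no_grad():
        rgbd = torch.rand(1, 3 * 4, 240, 320)  # 3 cameras, RGB + depth each
        splats = model(rgbd)                   # current scene as splats
        # splats would then go to a real-time Gaussian splat rasterizer.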
metalrain:
How did you train this? I'm thinking there isn't reference output mapping live video frames to splats, so supervised learning doesn't work.

Is there some temporal accumulation?

markisus:
There is no temporal accumulation, but I think that's the next logical step.

Supervised learning actually does work. Suppose you have four cameras. You input three of them into the net and use the fourth as the ground truth. The live video aspect just emerges from re-running the neural net every frame.
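
A minimal self-contained sketch of that held-out-camera training scheme, reusing the toy 14-channel splat layout from the earlier sketch. The stub renderer, the L1 photometric loss, and all names are illustrative assumptions; a real setup would use a differentiable Gaussian splat rasterizer to render the predicted splats from the held-out camera's pose:

    import torch
    import torch.nn.functional as F

    def render_view(splat_params, target_pose, img_hw=(240, 320)):
        # Gradient-carrying stub for a differentiable Gaussian splat
        # rasterizer. A real one would project the predicted Gaussians
        # through target_pose and alpha-composite them.
        # splat_params channels: means 0:3, log_scales 3:6, quats 6:10,
        # colors 10:13, opacity 13:14.
        colors = splat_params[:, 10:13]
        opacity = splat_params[:, 13:14].sigmoid()
        # Dummy "render": broadcast the opacity-weighted mean color so
        # gradients still flow from the photometric loss into the net.
        per_image = (colors * opacity).mean(dim=(2, 3))      # (B, 3)
        return per_image[:, :, None, None].expand(-1, -1, *img_hw)

    # Toy stand-in for the feedforward net: 3 RGBD inputs -> 14 splat channels.
    model = torch.nn.Conv2d(3 * 4, 14, kernel_size=3, padding=1)
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)

    for step in range(100):
        # Four synchronized RGBD views of one scene (random stand-in data).
        rgbd = torch.rand(1, 4, 4, 240, 320)   # (B, cam, RGBD, H, W)
        held_out = step % 4                    # rotate the held-out camera
        inputs = torch.cat(
            [rgbd[:, i] for i in range(4) if i != held_out], dim=1)
        target_rgb = rgbd[:, held_out, :3]     # ground-truth image
        target_pose = torch.eye(4)             # held-out camera pose

        splats = model(inputs)                 # one forward pass
        pred = render_view(splats, target_pose)
        loss = F.l1_loss(pred, target_rgb)     # photometric loss

        opt.zero_grad()
        loss.backward()
        opt.step()

The key design point is that the loss never needs ground-truth splats: supervision comes entirely from how well the predicted splats reproduce a real image from a viewpoint the net never saw.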