P.S.: Also, if that's indeed what they mean, I wonder why having Google Street View data isn't enough for that.
This, yes, based on how the backsides of similar buildings have looked in other learned areas.
But the other missing piece here seems to be relativity and scale. I do 3D model generation at our game studio right now, and the biggest thing current models can't do is scale (specifically, relative scale): we can generate 3D models for entities in our game, but we still need a person in the loop to scale them to a correct size relative to other models. Trees are bigger than humans, and buildings are bigger still. Current generative 3D models just output a scale-less mesh. It looks like a "geospatial" model incorporates some form of relative scale, and would (could?) carry that into generated models (or, more likely, into maps of models rather than individual models themselves).
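To make the "person in the loop" step concrete, here's a minimal Python sketch of what we do manually today: take a unitless generated mesh and uniformly rescale it so its height matches a plausible real-world size for its category. The category names and reference heights are illustrative assumptions, not values from any real pipeline, and a real tool would operate on proper mesh objects rather than raw vertex lists.

    # Rough real-world heights in meters per entity category (assumed values).
    REFERENCE_HEIGHTS_M = {
        "human": 1.75,
        "tree": 12.0,
        "building": 30.0,
    }

    def rescale_vertices(vertices, category):
        """Uniformly scale a unitless mesh so its bounding-box height
        matches the reference height for its category (Y-up assumed)."""
        ys = [v[1] for v in vertices]
        current_height = max(ys) - min(ys)
        if current_height == 0:
            raise ValueError("degenerate mesh: zero height")
        factor = REFERENCE_HEIGHTS_M[category] / current_height
        return [(x * factor, y * factor, z * factor) for (x, y, z) in vertices]

    # e.g. a generated "tree" whose raw output spans 0..1 units in Y
    tree = [(0.0, 0.0, 0.0), (0.3, 1.0, 0.2), (-0.2, 0.5, 0.1)]
    print(rescale_vertices(tree, "tree"))  # now ~12 m tall

The point is that the scale factor comes from knowledge *outside* the mesh itself, which is exactly what a geospatial model could supply automatically.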
The training data is people taking dedicated video of locations, and only ARCore-supported devices can submit data. So I assume that along with the video they're also collecting a good chunk of other sensor data: depth maps, accelerometer, gyroscope, and magnetometer readings, GPS, and more.
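For a sense of what such a capture bundle might look like, here's a speculative Python sketch of a per-frame record holding the sensor streams listed above. The field names and units are my guesses, not Google's actual submission format.

    from dataclasses import dataclass
    from typing import Optional, Tuple

    @dataclass
    class CaptureFrame:
        """One hypothetical frame of a location capture (assumed layout)."""
        timestamp_ns: int                          # monotonic capture time
        image_path: str                            # video frame on disk
        depth_map_path: Optional[str]              # depth data, if the device provides it
        accel: Tuple[float, float, float]          # accelerometer, m/s^2
        gyro: Tuple[float, float, float]           # gyroscope, rad/s
        magnetometer: Tuple[float, float, float]   # magnetic field, microtesla
        gps: Tuple[float, float, float]            # latitude, longitude, altitude

Fusing the inertial and magnetometer streams with GPS is what would let them recover camera pose per frame, which is far more useful for reconstruction than the video alone.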