Can you share the technical background you used for creating the 3D reconstruction, like the software packages or algorithms involved?
Are we looking at the result of packages like OpenSfM or COLMAP here?
Unlike Egyptian pyramids, Maya temples were built outward layer by layer, so to study the earlier phases of construction, researchers tunneled into the structures. I arranged the guided versions of the virtual tours in a rough chronology, moving from the highest to the lowest and oldest areas: the hieroglyphic stairway, which comprises the largest Maya inscription anywhere; the Rosalila temple, which was buried fully intact; and finally the tomb of the founder of the city, Yax Kʼukʼ Moʼ.
I've been building on top of the Matterport SDK with Three.js, then reusing the data in Unreal for a desktop experience or rendering for film (coming soon to PBS).
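To give a concrete sense of the setup, here's a minimal sketch (illustrative, not my production code) of connecting to a Matterport model and subscribing to camera pose updates, which is the usual starting point for layering custom Three.js content on top of the viewer. It assumes the @matterport/sdk npm package; the SDK key and space ID below are placeholders.

  // Minimal sketch: connect to a Matterport space and stream the camera
  // pose, so external Three.js objects (or an export pipeline) can share
  // the viewer's coordinate frame. Key and space ID are placeholders.
  import { setupSdk } from '@matterport/sdk';

  async function main() {
    const mpSdk = await setupSdk('YOUR_SDK_KEY', {
      space: 'YOUR_MODEL_SID',                       // Matterport space to load
      container: document.getElementById('viewer')!, // DOM element for the viewer
    });

    // Fires whenever the user moves through the tour.
    mpSdk.Camera.pose.subscribe((pose) => {
      console.log('position:', pose.position, 'rotation:', pose.rotation);
    });
  }

  main();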
Blog about process: https://blog.mused.com/what-lies-beneath-digitally-recording...
Major thanks to the Matterport team for providing support with data alignment and merging tunnels while I was living in the village near the site.
So in the virtual tour, you're seeing 360 imagery from the cameras plus a lower-resolution version of the 3D capture data, optimized for web. The low-res mesh from the scanner is rendered transparent in first-person view mode, so users still get cursor effects that track the real geometry on top of the 360 image.
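If you want to reproduce that cursor trick, the idea is to keep the mesh in the scene without drawing it and raycast against it, so the reticle sticks to real surfaces. Here's a generic Three.js sketch of the idea, with stand-in geometry rather than the actual tour code:

  // Invisible-collider sketch: the scan mesh isn't visibly drawn
  // (opacity 0), but the raycaster still hits it, so a cursor reticle
  // can follow real geometry over the 360 image.
  import * as THREE from 'three';

  const scene = new THREE.Scene();
  const camera = new THREE.PerspectiveCamera(75, innerWidth / innerHeight, 0.1, 100);
  const renderer = new THREE.WebGLRenderer();
  renderer.setSize(innerWidth, innerHeight);
  document.body.appendChild(renderer.domElement);

  // Stand-in for the web-optimized scan mesh: fully transparent, and
  // double-sided so rays cast from inside the room still register hits.
  const scanMesh = new THREE.Mesh(
    new THREE.SphereGeometry(5, 32, 32),
    new THREE.MeshBasicMaterial({
      transparent: true, opacity: 0, depthWrite: false, side: THREE.DoubleSide,
    })
  );
  scene.add(scanMesh);

  // Reticle that sticks to the surface under the cursor.
  const reticle = new THREE.Mesh(
    new THREE.RingGeometry(0.05, 0.08, 32),
    new THREE.MeshBasicMaterial({ color: 0xffffff, side: THREE.DoubleSide })
  );
  scene.add(reticle);

  const raycaster = new THREE.Raycaster();
  const pointer = new THREE.Vector2();

  window.addEventListener('pointermove', (e) => {
    pointer.set((e.clientX / innerWidth) * 2 - 1, -(e.clientY / innerHeight) * 2 + 1);
    raycaster.setFromCamera(pointer, camera);
    const hit = raycaster.intersectObject(scanMesh)[0]; // invisible but hit-testable
    if (hit) {
      reticle.position.copy(hit.point);
      if (hit.face) reticle.lookAt(hit.point.clone().add(hit.face.normal));
    }
  });

  renderer.setAnimationLoop(() => renderer.render(scene, camera));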
For film, PBS sent out a documentary crew, and they wanted me to render some footage of the full tunnel system, so I exported the E57 point cloud data from Matterport and rendered the clips they needed in Unreal. It should be coming out soon with "In the Americas."