
169 points thelastgallon | 2 comments
zmmmmm ◴[] No.45674918[source]
Kind of sad to see here on "hacker news" that 80% of the comments are low-effort cheap shots.

The interesting thing here is the core of it: Android XR and its deep AI integration, especially the spatial awareness. Devices will come and go, but the OS is the core that stays, grows, and evolves over time. I am very curious how much of this is exposed as OS foundations to build on versus a monolithic app built by Google to look like an OS. This has been a large part of Meta's mistake: the OS does not provide many of these fundamentals, so any app you see doing them is mostly reinventing them itself or relying on third-party tools like Unity to do the heavy lifting.

The really impressive part of Vision Pro is actually how well thought out the OS is underneath it, exposing fundamentals of how 3D computing can work, especially the compositing of multiple spatial apps living together in the same shared space and even interacting with each other (e.g., one app can emit a lighting effect that shades another's rendering).
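To make the idea concrete, here is a minimal, purely illustrative sketch (not any real visionOS or Android XR API; all names and values are made up) of what OS-side compositing of a shared space could look like: each app submits fragments into the same scene, and the compositor resolves occlusion by depth so apps can overlap correctly without knowing about each other.

```python
# Hypothetical sketch of an OS-side shared-space compositor.
# Each app submits fragments (pixel, depth, color); the OS keeps
# the nearest fragment per pixel, so apps occlude one another
# correctly without coordinating among themselves.

def composite(fragments):
    """fragments: list of (app_id, pixel, depth, color).
    Returns pixel -> (app_id, color) for the nearest fragment."""
    nearest = {}
    for app_id, pixel, depth, color in fragments:
        if pixel not in nearest or depth < nearest[pixel][0]:
            nearest[pixel] = (depth, app_id, color)
    return {p: (a, c) for p, (d, a, c) in nearest.items()}

# Two apps draw into the same shared space; the OS resolves occlusion.
frags = [
    ("appA", (0, 0), 2.0, "red"),
    ("appB", (0, 0), 1.0, "blue"),   # closer, so it wins at (0, 0)
    ("appA", (1, 0), 0.5, "red"),
]
print(composite(frags))
# {(0, 0): ('appB', 'blue'), (1, 0): ('appA', 'red')}
```

The point of the sketch is that occlusion (and, by extension, shared lighting) is resolved once by the OS, rather than being re-implemented inside every app or a middleware engine.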

I am very curious whether Google has done this kind of foundational work, especially whether it is designed (as they claim) from the ground up to interface with AI models, e.g. a 3D vision-language model that can reason across everything in your shared space, including your pass-through reality, and respond to it. That would be truly amazing, but there is zero technical information I can see at this point to tell whether Google has really built these foundations.

replies(7): >>45675165 #>>45675237 #>>45675462 #>>45675980 #>>45676993 #>>45677007 #>>45677252 #
jayd16 ◴[] No.45675165[source]
> one app can emit a lighting effect that will shade the other's rendering

I always felt this was such an outrageous burden on developers. It's cute and all, but really, who cares? I don't need one desktop window to emit light onto another window. Is that really worth having to remake or modify every asset?

That said, all the work they did around laundering click and gaze information for privacy was nice to see.

replies(2): >>45675968 #>>45676078 #
1. zmmmmm ◴[] No.45675968[source]
> I always felt this was such an outrageous burden to developers

but the point is that it's not a burden: you get it for free. Unless you mean having to accommodate in your app the fact that someone else's might be "shading" it or similar.

I think it's amazing: you can have a real-world light source coloring a virtual object, which then becomes a reflective light source that bounces off to affect the rendering of a second app. And you don't have to do any of it; the OS renders all of this. It's fully analogous to, say, your OS supporting transparency on a 2D window frame, so that when I'm looking at one window I can see the one behind it. But in 3D, and incorporating real-world pass-through, it is so much more complex.
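The light-bounce chain described above can be sketched with nothing more than Lambertian (diffuse) shading. This is a toy illustration, not any real OS API: assume the OS estimates the real-world light, shades app A's object with it, and then treats A's reflected light as a secondary source for app B's object. All vectors, intensities, and the albedo value are made up.

```python
# Hypothetical one-bounce light chain: real-world light -> app A's
# virtual object -> app B's object, computed by the OS compositor
# rather than by either app.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = sum(x * x for x in v) ** 0.5
    return tuple(x / n for x in v)

def lambert(normal, to_light, intensity):
    """Diffuse shading: intensity scaled by the cosine of the angle
    between the surface normal and the direction to the light."""
    return intensity * max(0.0, dot(normalize(normal), normalize(to_light)))

# Real-world (pass-through) light, as estimated by the OS.
real_light_intensity = 1.0

# App A's object, lit directly by the real light (head-on here).
a_shade = lambert(normal=(0, 1, 0), to_light=(0, 1, 0),
                  intensity=real_light_intensity)

# App A's surface re-emits a fraction of what it received (its albedo)
# and acts as a secondary light source for app B's object.
albedo_a = 0.5
b_shade = lambert(normal=(0, 0, 1), to_light=(0, 0, 1),
                  intensity=a_shade * albedo_a)

print(a_shade)  # 1.0: the real light hits A head-on
print(b_shade)  # 0.5: B receives A's reflected light head-on
```

Neither "app" computes the other's shading; the chain lives entirely in the shared renderer, which is the design point being argued for.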

replies(1): >>45676515 #
2. jayd16 ◴[] No.45676515[source]
You need to use their shader and lighting model, and yeah, you don't have full control of the lighting at that point.

If you have existing assets, it's really not trivial at all to port them and get them looking right. Not impossible, but not trivial.