This thread led me to Mentra's MIT-licensed https://github.com/AugmentOS-Community/AugmentOS & https://augmentos.org/ for Android and (soon) iOS:
> Smart glasses OS, with dozens of built-in apps. Users get AI assistant, notifications, translation, screen mirror, captions, and more. Devs get to write 1 app that runs on any pair of smart glasses.
Where "any" means this hardware compatibility list (HCL):
Vuzix Z100 ($500)
Mentra.glass Mach1 ($350) or Live ($220)
Even Realities G1 ($600)
(future) Meizu StarV
Apple iPhones have state-of-the-art hardware (lidar, UWB precise positioning) that could help millions of visually impaired humans, but that hardware has been limited by amberware (software frozen with minimal updates). Apple poured billions into now-cancelled money pits like Apple Car and VisionOS, while its teams failed forward into AI, smart glasses and humanoid robots. Meanwhile, Meta smart glasses are S-curving from 2M to 10M nodes of data acquisition, https://news.ycombinator.com/item?id=43088369

On paper, Apple Magnifier with Live Descriptions audio could justify buying an iPhone Pro for dedicated single-app usage, https://support.apple.com/guide/iphone/live-descriptions-vis.... But while it works for short demos, the software is not reliable for continuous use. UWB AirTags and lidar 3D imaging could enable precise indoor navigation for vision-impaired users, but 5+ years of shipping hardware has not led to usable software workflows.
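To make the hardware/software gap concrete, here is a minimal Swift sketch of UWB ranging via Apple's public NearbyInteraction framework (the class name and structure are my own; it assumes a U1/U2-equipped device and a discovery-token exchange over some side channel, and note that third-party apps cannot range AirTags, only other Apple devices and licensed accessories):

    import Foundation
    import NearbyInteraction

    // Sketch: UWB ranging between two U1/U2-equipped Apple devices.
    // Peers must exchange NIDiscoveryTokens out of band (MultipeerConnectivity,
    // your own server, etc.); that exchange is elided here.
    final class UWBRanger: NSObject, NISessionDelegate {
        private var session: NISession?

        // Start ranging once the peer's discovery token has arrived.
        func startRanging(with peerToken: NIDiscoveryToken) {
            guard NISession.isSupported else {
                print("No UWB hardware on this device")
                return
            }
            let session = NISession()
            session.delegate = self
            session.run(NINearbyPeerConfiguration(peerToken: peerToken))
            self.session = session
        }

        // Streams updated measurements, typically several times per second.
        func session(_ session: NISession, didUpdate nearbyObjects: [NINearbyObject]) {
            for object in nearbyObjects {
                if let distance = object.distance {
                    print(String(format: "Peer at %.2f m", distance))
                }
                if let direction = object.direction {
                    // Unit vector toward the peer in the device's coordinate
                    // space; nil when the peer is outside the antenna's field of view.
                    print("Direction: \(direction)")
                }
            }
        }

        func sessionWasSuspended(_ session: NISession) {
            print("NI session suspended (e.g. app backgrounded)")
        }

        func sessionSuspensionEnded(_ session: NISession) {
            // Resume ranging with the same configuration after a suspension.
            if let config = session.configuration {
                session.run(config)
            }
        }

        func session(_ session: NISession, didInvalidateWith error: Error) {
            print("NI session invalidated: \(error)")
        }
    }

The raw distance/direction stream above is the easy part; turning it into a dependable indoor-navigation workflow for vision-impaired users is the software layer that still hasn't shipped.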
The economic tragedy is that 95% of the technology for helping vision/cognition-impaired humans could be repurposed for humanoid robots, with the R&D bonus that humans can provide more real-world feedback (RLHF!) than silent lab robots. Until Apple breaks its vision stasis with new technical leadership, or the EU regulates Apple/Meta into unlocking reluctant open-glass-platform innovation, the only hackable option is open glass hardware, pending future BigTech sherlocking or acquisition.