Show HN: Sheet Music in Smart Glasses - https://news.ycombinator.com/item?id=43906442 - May 2025 (25 comments)
The possibility of shooting ads directly into the retina is probably the main driving force behind smart glasses.
Having a camera or a mic on the glasses themselves seems like something I'd mostly want to avoid for privacy, and having a speaker just seems like gilding the lily when we already have a variety of headphones to choose from.
I have a pair and I've been experimenting a bit.
For iOS you can mirror the display or use Stage Manager. For Android, at least with Samsung, DeX is pretty decent.
For audio, they're decent too; I like the convenience and comfort. The audio has good fidelity, but depth is mediocre (better than phone speakers, though).
FWIW, I say DeX is decent while having many of the same gripes as I do with Stage Manager. Dual screens, resizing windows, and full-screen support are still a mixed bag on all mobile devices. It can be very frustrating at times. Application support on iOS and Android is about the same, which is disappointing. Supposedly iOS 26 fixes some of this, but I haven't tried the beta.
Many people walk around with a mobile device out, essentially carrying a device with (increasingly) close to a 360 camera view. Cameras are ubiquitous and targeting one niche device is a waste of time and effort.
You can send low-resolution images to them via Bluetooth. I just figured out how to read button presses. There are speakers and a mic, but I haven't figured out how to use them yet (they don't show up as regular audio devices on Linux).
You'd need to write custom stuff to generate the images, but with a little imagemagick scripting I've had some pretty usable results.
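For anyone curious, here's a minimal sketch of what that scripting can look like (Python driving ImageMagick; the 576x136 canvas size is just a guess at the display resolution, and the actual Bluetooth upload is device-specific, so it's left out):

    #!/usr/bin/env python3
    # Minimal sketch: render a line of text into a tiny 1-bit bitmap with
    # ImageMagick's `convert`. The 576x136 canvas and PNG output are assumptions,
    # not the glasses' documented specs; the Bluetooth transfer step is omitted.
    import subprocess
    from pathlib import Path

    def render_text(text: str, out_path: Path, size: str = "576x136") -> Path:
        # 'caption:' word-wraps the text to fit the canvas; '-monochrome'
        # reduces the result to 1-bit, which is what these displays want.
        subprocess.run(
            ["convert", "-size", size, "-background", "black", "-fill", "white",
             "-gravity", "center", f"caption:{text}", "-monochrome", str(out_path)],
            check=True,
        )
        return out_path

    if __name__ == "__main__":
        render_text("Next turn: left on Main St", Path("frame.png"))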
I have Ray-Ban Metas and the hardware is great... but the software borders on being unhelpful. If they merely served as a dumb camera and Bluetooth headset to my phone, they'd be an unbelievably good product.
Meta won't do this because they want to capture _everything_ going on, but I don't want to chat with Meta's AI; it is very bad. I want to chat with Gemini or ChatGPT, and I can do so with their glasses, but I must initiate that on my phone (Meta won't give you wake words for OpenAI/Google, of course).
So my suggestion here would be: don't. There is no need for an app store or anything like that, just the thinnest software layer you need to make the sunglass hardware work as a dumb Bluetooth headset and remote camera for the user's phone.
This is not the place for
* Prioritize work-life balance
edit: found it https://www.reddit.com/r/arduino/comments/1n7r3vl/a_textbot_...
https://github.com/Mentra-Community/MentraOS/blob/main/glass...
Except it seems they only run on Mentra glasses. Not Meta Ray-Bans, Echo Frames, or any of the many other existing smart glasses platforms.
Being upfront about 996 (and Jira hatred further down the page) is wild. I sort of love it.
https://github.com/Mentra-Community/MentraOS/blob/main/glass...
We're also looking to support the Brilliant Labs Halo glasses once they release later this year.
Regarding Ray-Bans: We'd love to support those, but the Ray-Bans are extremely locked down. Nobody has found a jailbreak yet. We're always open to supporting more glasses, provided they're all-day wearable and have an SDK.
https://github.com/Mentra-Community/MentraOS/tree/main/mcu_c...
https://github.com/Mentra-Community/MentraOS/tree/main/cloud
Smart glasses inevitably cost in those ranges because the exotic displays used on them are costly to make and/or operate: inkjet OLED on silicon, or reflective monochrome LCD with RGB-sequential front lighting combined with a prism system, or things of that nature.
IOW, those excessive feature sets aren't drawn from product concepts or user stories; they're drawn backwards from cumulative parts and engineering costs to justify the MSRP. Same reason almost all EVs are marketed as premium products: they can't make them cheaply, so they add extra glitter in the paint to justify the price tags.
If anyone could make displays smaller than a pinky fingernail at $5 that can be driven with an Arduino... then there would be lots of smart glasses that are just Bluetooth picture frames.
Both of these companies make exactly that. I have the Rokid Max; can't comment on the quality of the Xreal.
This is definitely not the smart glasses operating system to converge on.
If there's anything worthwhile in it, I'd advise interested people to fork it and turn it into an actually open, open-source operating system.
We're also working on a pair of HUD glasses that will release in 2026 using an nRF5340 MCU. The code for this is being developed in the `mcu_client` folder.
https://docs.mentra.glass/ubuntu-deployment
https://docs.mentra.glass/railway-deployment
In your opinion, what do you think should change to make this an "actual" open source OS?
Android XR is coming out with Moohan next month; if Visor ever comes out, it is believed it will eventually be on AXR. Apple still seems hobbled since Jobs left.
It's awkward, battery life is a pittance, the display can be useful but only in select cases. Controls are always an issue. LLMs won't actually fix that - voice control is not the answer.
It's not that the OS is not open source; it's that it seems like a privacy nightmare. The fact that the app also runs on the developers' servers just adds to the number of parties you need to trust. That you and the people around you need to trust, actually.
And the company has strong connections to China, by the way.
The system is also not very open if users are forced to use your store.
It seems unlikely that there's much to be salvaged, given that you're using AOSP as the actual operating system.
Always-on access to an LLM via voice is a useful and novel way of interacting with computers.
From trivial things like asking it about a landmark I'm seeing, or asking it while I'm driving to tell me about some historical event (almost like an on-tap podcast), to slightly more useful things like asking it to add stuff to my calendar/reminders when I'm biking home.
It certainly isn't a replacement for a more robust interface, but it is a very nice way of using a computer while I'm out and about and don't want to pull out my phone.
> Cayden 凯登 Pierce CEO/CTO/Founder
> Cayden leads Mentra, overseeing software, hardware, and operations across San Francisco and Shenzhen
> Nicolo Micheletti
> In late 2024, he dropped out of Tsinghua University in Beijing to work on MentraOS
> Thomas Tee
> Head of Hardware
> Thomas leads Mentra’s hardware team in Shenzhen
> Mentra Shenzhen
> Baoan, Shenzhen,
> Bao'an District, Shenzhen, Guangdong Province
> Chuangye 2nd Road, Xin'an Subdistrict
> Room 1905, Zhongzhou Central Apartment
Or how about dash cams in cars? CCTV cameras on ATMs as you walk down the street?
The microphone lets you pick up voice, which is critical. Captions, translation, note-taking, etc. all benefit from this.
Even Realities G1 is this. Mentra is releasing a pair like this in the first half of 2026. Display + microphone.
How do the glasses serve as a "dumb camera to your phone"? What protocol do they use to do this? It doesn't exist. It's something that must be solved at the OS layer.
What if you want to use multiple apps? Are you going to spend 2 minutes each time disconnecting Bluetooth from one phone app, connecting to another, and then using it? No, you need a runtime that lets multiple apps access the sensors as needed.
Do you want to make an app that accesses the microphone? If you want to have a translation app running at the same time that you're taking notes, then again you need some way to allow multiple apps to run at once.
MentraOS solves those problems.
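To make the fan-out idea concrete, here's an illustrative toy sketch (not actual MentraOS code): one process owns the microphone stream and copies frames to every app that subscribed, rather than each app holding an exclusive Bluetooth connection:

    # Illustration only - not MentraOS code. One reader owns the mic stream and
    # fans frames out to several "apps" (captions, notes) at once.
    from __future__ import annotations
    import asyncio
    from typing import AsyncIterator, Optional

    class MicHub:
        def __init__(self) -> None:
            self._subscribers: list[asyncio.Queue[Optional[bytes]]] = []

        def subscribe(self) -> asyncio.Queue[Optional[bytes]]:
            q: asyncio.Queue[Optional[bytes]] = asyncio.Queue(maxsize=32)
            self._subscribers.append(q)
            return q

        async def pump(self, frames: AsyncIterator[bytes]) -> None:
            async for frame in frames:
                for q in self._subscribers:
                    if not q.full():            # drop frames for slow consumers
                        q.put_nowait(frame)
            for q in self._subscribers:
                q.put_nowait(None)              # sentinel: stream ended

    async def app(name: str, q: asyncio.Queue[Optional[bytes]]) -> None:
        while (frame := await q.get()) is not None:
            print(f"[{name}] got {len(frame)} bytes of audio")

    async def fake_mic() -> AsyncIterator[bytes]:
        for _ in range(3):                      # stand-in for real PCM frames
            yield b"\x00" * 320
            await asyncio.sleep(0.01)

    async def main() -> None:
        hub = MicHub()
        await asyncio.gather(
            app("captions", hub.subscribe()),
            app("notes", hub.subscribe()),
            hub.pump(fake_mic()),
        )

    asyncio.run(main())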
But as the tech progresses, so will MentraOS to support spatial experiences.
Ad blocking - we'll need a research team and 5 years https://xkcd.com/1425/
This is enabled by relay servers. You can use Mentra's relay server, or host your own.
This is the architecture that we use and recommend so multiple apps can run at once and access powerful AI, while saving your phone's battery. If you need to run offline or on the edge, we're working on the Mentra Edge SDK so you can skip the cloud, but it has downsides - only 1 app at a time.
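Roughly, pointing an app at a self-hosted relay could look like the sketch below - the URL, message shapes, and event names here are made up for illustration; see the docs links above for the real SDK:

    # Hypothetical sketch only: the relay URL, message format, and event names
    # are invented for illustration - see docs.mentra.glass for the real SDK.
    import asyncio
    import json
    import websockets  # pip install websockets

    RELAY_URL = "wss://relay.example.com/glasses"   # your self-hosted relay

    async def run() -> None:
        async with websockets.connect(RELAY_URL) as ws:
            # Tell the relay which streams this app wants; it fans them out
            # so other apps can subscribe to the same sensors concurrently.
            await ws.send(json.dumps({"type": "subscribe", "streams": ["transcription"]}))
            async for raw in ws:
                event = json.loads(raw)
                if event.get("type") == "transcription":
                    print("heard:", event.get("text"))

    asyncio.run(run())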
Remember, every app on your phone is communicating with its own backend - which you have to trust. This isn't different.
Users aren't forced to use the store. You can self host a relay server, self host a store, etc.
I sometimes wonder why people "synchronize" anything, since everything is in my self-hosted instance.
USB webcams have been a thing for years ;)
I have a pair of Xreal glasses and, while they don’t have a camera, they do have the other components. They are entirely dumb. You plug the USB cable into your phone/laptop/portable gaming device and that’s literally it.
The cable runs discreetly from the back of the ear and has the additional benefit that you don’t need a heavy battery built into the frame of the glasses.
So you definitely can have XR glasses that are “dumb”.
In my book, AR is completely invisible, until you need it:
- A smart watch + strict notification filters (good idea either way), means you only see a notification when something worthy of your attention happens (you get a text, event reminder, etc). You can glance at the notification and decide if it requires your reaction, instead of reaching for your phone. (And likely, waste another 10min on it.)
- Wireless earbuds with turn-by-turn directions, especially for walking and public transport. Again, you don't need a screen, you can admire your surroundings or read a book.
- Pay for stuff. If you'd otherwise spend 10s pulling a card out of your wallet each time, then at roughly one transaction a day that adds up to about an hour of your life per year.
- Track your vitals. Overall not that important, until you notice and suspect something is wrong - you can compare things month over month, see trends, show it to your doctor, etc. I took a hard hit falling off a skateboard, my watch started a countdown to call for help - I stopped it, but I needed a minute to get up, so it was really reassuring to have this.
I'll cite the recent thread with Ollama's founders - they said the same thing about wanting to always be open. But if they truly cared about the FOSS community, they would not have shadily forked llama.cpp instead of just contributing to that project and stating it proudly/clearly. This came to a head recently with the launch of gpt-oss, where their priorities became clear, to the point that even Georgi Gerganov himself spoke up.
> Meta Horizon OS, previously known informally as Meta Quest Platform or Meta Quest OS, is an Android-based extended reality operating system for the Meta Quest line of devices released by Meta Platforms.
"What if you want to use multiple apps?" for a headset that's a window to a phone, you see the phone screen, the phone handles multitasking. Want to switch between apps? Then switch between apps on your phone, and you see the result.
"Do you want to make an app that accesses the microphone?" again, the phone does it. What OS do my bluetooth earphones run to be accessible from my phone?
I agree with what the person you're responding to wants: just a screen/audio interface with my phone. MentraOS is obviously not* aiming to be that, otherwise it wouldn't have any apps at all, especially not things like a "notes" app or any other app I already have on my phone.
The issue is that as soon as you start trying to build an app ecosystem, you inevitably create the sort of opportunities business loves to exploit, and then all of a sudden I've got another layer for big tech to try to extract stuff from me, when all I wanted was to be able to see my phone screen without having my phone directly in front of me. As someone who uses apps rather than develops them, I don't need another app store or more apps!
*Edit: having read some of their work culture, and the people involved, this isn't a project that's intended to be owned by humans, this is going to become the worst kind of big tech, or nothing.
> Life at Mentra
> We're a squad of hardcore builders between San Francisco and Shenzhen working 996 to build the next personal computer. We're upgrading human intelligence with high bandwidth interfaces. We're transhumanist hackers.
> And we're not just here for a job. We're here for a mission.
This is the worst of SV VC bullshit right here and is antithetical to open sustainable software.
"We're here for a mission" - I'm sure all those VC firms involved are there for the the same mission too, right?
If this goes anywhere or becomes anything, it'll be rug-pulled out of open source.
USB cameras also aren't natively supported on iOS/Android. You need apps. With apps come lock-in opportunities, which are never not tapped.
So "just use USB" doesn't make technical sense at all.
That just confirms to me that they're in the same position as any other VC-backed founder. That's why the VC firm backed them: because they saw an opportunity to profit from someone else's dream.
I would really, really love to revisit this discussion with you and the founders in 5 and 10 years time - I'd be happier if you proved me wrong:)
I get the trade-off. Glasses may simply not have enough space for the hardware. I briefly debated attaching a relay (some of the processing for what I had in mind could be done with a simple Raspberry Pi).
What we have here is a rare thing - an unvarnished account by an insider of what it took to make a deal with the VC sharks during the dawn of the Internet Age and come out of it with something you could take to the bank. You won't find many other tech business books giving such a detailed account from start to finish. Though it is lacking on the technical side, the view from the 'money guys' is pretty detailed. It is pretty amazing to realize the whole arc of the story is only about two years. To then leverage his proceeds into some constructive social documentaries like 'Inside Job' is a great second act.
I know what Xreal uses. As I said, I have a pair
> DP needs 10-40Gbps of bandwidth, doesn't work wireless.
And as I also said, having a cable is a feature, not a problem.
VR headsets are heavy and uncomfortable. USB powered XR glasses are not. And the reason for that is because you don’t need to make those XR glasses as literal portable computers with heavy batteries.
You might relish the idea of an ugly monstrosity that weighs as much as a laptop strapped to your head. Myself, I’d much rather have something that looks and feels like sunglasses. If that means I need a discreet USB cable behind my ear, then that’s a small price to pay, because they’d still look less stupid than wearing anything bulkier out in public.
> USB cameras also aren't natively supported on iOS/Android. You need apps. With apps come lock-in opportunities, which are never not tapped.
That’s not a limitation for all platforms though. And you’d have that problem on Android whatever solution you opted for. So it’s a moot point.
> So "just use USB" doesn't make technical sense at all.
It does and plenty of people, myself included, owning a pair of Xreal glasses are proof of that.
The problem here is not USB, it’s that you have very specific differing requirements and thus are dismissing the practical value myself and others have shared.
No, it's lenses and chassis. Lenses work precisely because of their density difference against air, so the better they are, the heavier they are. Chassis weigh a lot because they use impact-resistant ABS rather than forged Al-Li or Ti or molded Mg, which they should consider just for hilarity, but then the product would cost like a bad joke. The mobile-computer part weighs nothing; those boards are like somewhat soggy potato crisps. Those 0.8mm PCBs, boy, they feel like cardstock. Batteries weigh a bit, but they're also usually LiPo pouches, around 0.5 kg/L. You're not putting a dozen 18650s into a VR headset.
VR lenses especially are heavy and bulky because they need short-focal-length lenses with massive pupils for max FOV and max transmittance. The panels tend to be way bigger than those in smart glasses, thanks to Palmer Luckey, which he deserves credit for. Smart glasses tend to use way smaller panels and prisms with a fraction of the FOV of VR, like 1/6th? 1/12th? Those carry some amount of weight, but not nearly as much, especially if it's a waveguide or holographic optic or works as a pure Fresnel.
I'm not going into the second half of this response. I am sorry, but I don't think it's worth anyone's time if I explained why DP Alt doesn't count as USB and all that stuff.
Case in point: even if you took the lenses out, they’d still weigh more than a pair of sunglasses. You even admitted that yourself, but then you quickly brushed over that point.
So does it really matter that lenses are also heavy when we are talking specifically about the battery?
I also happen to know a thing or two about mobile computing hardware and there’s a bunch of stuff you’ve also neglected to mention that would add weight. But ultimately the battery alone is a compelling enough argument.
Let’s also remember that I wasn’t just talking about weight but bulk too. Even if you could get the weight down so it’s comparable to a pair of glasses (you couldn’t, but let’s assume for the moment that you do manage to break the laws of physics), it’s still going to be bulkier than a pair of glasses.
So even if you were right that the lenses were the only thing that matters about weight (which you’re not), it’s still just a moot point.
> I'm not going into the second half of this response. I am sorry, but I don't think it's worth anyone's time if I explained why DP Alt doesn't count as USB and all that stuff.
I’m not an idiot. I’m well aware that Xreal are using DP, and I’ve already pointed that out before.
My point is all I need to do as an end user is plug in a USB-C cable and everything “just works”. The underlying protocol is largely irrelevant. It’s like saying “you’re not using wireless, you’re using 802.11ac…” literally zero consumers give a shit because it’s completely irrelevant to the UX of the device.
They're so unobtrusive for chatting, that it would be amazing if we could get a capable LLM on the other side. Too bad we can't because corporations like walled gardens.
You don't. You just don't. You haven't seriously thought about making an HMD. You haven't heard of those microdisplay vendors or frantically searched for how to cheat those electrical requirements. You haven't gutted a mobile device and held the shells and non-compute parts in hand. You haven't even torn apart a phone. You don't know how injection-molded Mg chassis feel, in that way that doesn't sit right with your brain.
Your views and opinions are based on user-side stories and the dopamine-releasing qualities the elements offer, which is great for marketing existing or virtual products, but isn't well connected to the underlying hardware. It's like a human figure reconstituted from Penfield's homunculus, completely out of proportion.
I already brought up the shell. From the way you fixated on the battery after going through that part, it clearly didn't even come to your mind that the shells can be heavier than it, which I'm sure would be the case for a lot of battery powered HMDs.
You called DP Alt an underlying "protocol". It's DisplayPort. They're like one-way PCIe x1. Back-of-magazine Agilent vs Teledyne LeCroy stuff. Besides you called XREAL displays "entirely dumb" when they have more than a 2D sprite engine on board. I know they do 3DoF warping when the host is not whispering the right command. It's a stupid feature, but I'm doubtful a PS2 could handle that.
And you keep insisting on what appears to boil down to "use USB, dumbass". Nothing about that makes any sense.
Also in the US there is no legal expectation of privacy on public streets. Plenty of public facing webcams are available for viewing.
Passing a law regulating the shape of a camera body is just stupid. Outlawing camera glasses makes less sense than outlawing camera flowers.
> From the way you fixated on the battery after going through that part, it clearly didn't even come to your mind that the shells can be heavier than it, which I'm sure would be the case for a lot of battery powered HMDs.
I mentioned the battery because I was making a comparison between wired vs wireless.
It’s really that simple.
> You called DP Alt an underlying "protocol". It's DisplayPort. They're like one-way PCIe x1
I also called it a specification elsewhere. What you’re actually doing there is calling out my incompetence at the English language. And being dyslexic I’d agree with you on that. I guess you’re going to insult me over that now too?
> Besides you called XREAL displays "entirely dumb"
No I didn’t. I said ‘“dumb”’, not “entirely dumb”, and the quotes are important too because they clearly demonstrate I wasn’t using the term literally. The context was that they’re dumb compared to “smart” devices. Like how people talk about smartphones and “dumb phones”, even though “dumb phones” still have more computing power than Apollo 11.
It was a contextual statement, not a literal one. Hence the quotes.
> when they have more than a 2D sprite engine on board.
That depends on the model of the glasses. Mine don’t really do much processing compared to the higher-end ones. Xreal makes several different XR glasses, though they are all wired (from what I recall).
The higher-end ones do feel a lot more like smart devices, from what I’ve seen, whereas the lower-end ones need a companion device if you want to do anything beyond screen mirroring and audio.
Again, talking about UX here rather than components.
> And you keep insisting on what appears to boil down to "use USB, dumbass". Nothing about that makes any sense.
Once again you’re rephrasing my comments in intentionally unflattering ways to misrepresent what I was saying.
I made the point that XR glasses can work without a full-blown smart OS when driven by USB-C. And the fact that I literally own a pair of XR glasses that do exactly that proves that what I said not only makes sense but is also factually correct.
———
Now you can continue to build strawman arguments, misquote me, make unfounded assumptions about me, and generally show bad HN etiquette if you want. But I have a family where my time is better spent. So I’ll leave you to find some other chump to troll this weekend.
No, Snapchat did this just fine in the software layer with their glasses looooooong ago.
The attachable endoscope for my Ulefone Armor would disagree. It works with the stock Android camera app.
Lo and behold, you just need DRIVERS.
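Same idea on a laptop: a class-compliant UVC camera needs nothing vendor-specific. A quick sanity check (assumes Linux with opencv-python installed; the device index is a guess):

    # Assumes a UVC (USB Video Class) camera and opencv-python installed.
    import cv2

    cap = cv2.VideoCapture(0)   # index 0 is a guess; class-compliant cameras
    ok, frame = cap.read()      # just work through the generic uvcvideo driver
    if ok:
        print("got a frame:", frame.shape)   # e.g. (480, 640, 3)
    else:
        print("no frame - try another device index")
    cap.release()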
https://github.com/Mentra-Community/MentraOS/tree/main/cloud
I don't really see why you'd need a cloud for multiple apps to access "context" at the same time (or to be able to run more than one).
Being able to run apps on the cloud is nice, for the ones that are too heavy for the device, but I don't see why it would have to be a requirement.
You'll also have unnecessary lag in some situations, where a simple local app would be enough, and you'll be unable to do anything without an internet connection.
> This is enabled by relay servers. You can use Mentra's relay server, or host your own.
> This is the architecture that we use and recommend so multiple apps can run at once and access powerful AI, while saving your phone's battery. If you need to run offline or on the edge, we're working on the Mentra Edge SDK so you can skip the cloud, but it has downsides - only 1 app at a time.
You're able to run AOSP on those devices; I don't see why you couldn't run several lightweight apps at the same time. Besides, since you mention the SDK, I imagine that only apps that explicitly add support for running locally will be able to do so.
And for lightweight software, a constant data exchange will easily consume more battery than running everything locally.
> Remember, every app on your phone is communicating with its own backend - which you have to trust. This isn't different.
Every app on my phone?? You've never heard of local, offline apps?
And exchanging some data with a server is not the same as running entirely remotely, which necessarily requires transmitting all the data the app uses.
it's wild that you think that a "cloud app" has no privacy downsides compared to normal software.
> Users aren't forced to use the store. You can self host a relay server, self host a store, etc.
That's good.
But for your default store (which would likely become the dominant one if your system were successful), you decided to require an account, reducing privacy further.