    520 points OlympicMarmoto | 14 comments
    1. armchairhacker ◴[] No.45067092[source]
    What would be the real advantage of a custom OS over a Linux distribution?

    The OS does process scheduling, program management, etc. OK, you don't want a VR headset to run certain things slowly or crash. But some Linux distributions are battle-tested, stable, and fast, so can't you write ordinary programs that are fast and reliable (e.g. the camera movement and passthrough run on RTLinux with a failsafe that has been formally verified or extensively tested), and isn't that enough?

    replies(6): >>45067184 #>>45067419 #>>45067428 #>>45069530 #>>45072115 #>>45072878 #
    2. jamboca ◴[] No.45067184[source]
    Think you answered your own question. No real differences except more articles, $, and hype.
    replies(1): >>45070030 #
    3. Nuthen ◴[] No.45067419[source]
    Based on the latter tweet in the chain, I'm wondering if Carmack is hinting that foveated rendering (diverting more processing power to the specific part of the screen you're looking at) was one advantage envisioned for it. But perhaps he's saying he's not so sure the performance gains from it actually justify building a custom OS rather than just overclocking the GPU on an existing OS?
    replies(2): >>45070034 #>>45072122 #
    4. v9v ◴[] No.45067428[source]
    Maybe not applicable for the XR platform here, but you could add introspection capabilities not present in Linux, a la Genera letting the developer hotpatch driver-level code, or run all processes in a shared address space, which lets them pass pointers around instead of following the Unix model of serializing/deserializing data for communication (http://metamodular.com/Common-Lisp/lispos.html).
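
    A minimal sketch of that distinction, assuming POSIX shared memory (the struct and names are illustrative, not from the linked page): with a shared mapping both sides touch the same pages, whereas the pipe path copies bytes out and back in. A true single-address-space OS would go further and keep raw pointers valid across processes.

      /* Error handling omitted for brevity. */
      #include <stdio.h>
      #include <sys/mman.h>
      #include <sys/wait.h>
      #include <unistd.h>

      struct pose { float x, y, z; };

      int main(void) {
          int fd[2];
          pipe(fd);                                /* Unix model: a byte channel */

          struct pose *shared = mmap(NULL, sizeof *shared,
                                     PROT_READ | PROT_WRITE,
                                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);

          if (fork() == 0) {                       /* producer */
              struct pose p = { 1.0f, 2.0f, 3.0f };
              write(fd[1], &p, sizeof p);          /* copy out: "serialize" */
              *shared = p;                         /* same pages: no copy across the boundary */
              _exit(0);
          }

          struct pose received;
          read(fd[0], &received, sizeof received); /* copy in: "deserialize" */
          wait(NULL);
          printf("via pipe: %.1f  via shared mapping: %.1f\n", received.x, shared->x);
          return 0;
      }
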
    replies(1): >>45072456 #
    5. mikepurvis ◴[] No.45069530[source]
    I think the proper comparison point here is probably what game consoles have done since the Xbox 360, which is basically run a hypervisor on the metal with the app/game and management planes in separate VMs. That gives the game a bare metal-ish experience and doesn't throw away resources on true multitasking where it isn't really needed. At the same time it still lets the console run a dashboard plus background tasks like downloading and so on.
    replies(1): >>45080232 #
    6. const_cast ◴[] No.45070030[source]
    And, let's be real here: engineering prestige.

    Everyone wants to make an OS because that's super cool and technical and hard. I mean, that's just resume gold.

    Using Linux is boring and easy. Yawwwwn. But nobody makes an OS from scratch, only crazy greybeard developers do that!

    The problem is, you're not crazy greybeard developers working out of your basement for the advancement of humanity. No. You're paid employees of a mega corporation. You have no principles, no vision. You're not Linus Torvalds.

    7. mook ◴[] No.45070034[source]
    Wouldn't that be an application (or at most system library) concern though? The OS is just there to sling pixels; it wouldn't have any idea whether those pixels are blurry… well, for VR it would all be OpenGL or equivalent, so the OS would just handle hardware access permissions.
    replies(1): >>45071302 #
    8. hedgehog ◴[] No.45071302{3}[source]
    I think the context is that foveated rendering ties sensor input (measuring gaze direction) to the rendering pipeline in a way that requires very low latency. Past a certain point, reducing latency requires optimizations that break the normal abstractions userland relies on, so you end up with something more custom. I'm not sure why that would require a whole new OS; the obvious path would be to put the latency-sensitive code onto dedicated hardware and leave the rest managed by Linux. If a bunch of smart people thought XROS was a good idea there's probably something there though, even if it didn't pan out.
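
    To make the latency point concrete, here is a toy frame loop; the three functions are stand-in stubs, not any real headset API. The later the gaze sample is taken relative to presenting the frame, the less stale the foveal region is, and every layer of buffering between the eye tracker and the GPU forces that sample earlier.

      #include <stdio.h>

      struct gaze { float x, y; };               /* normalized screen coordinates */

      /* Stubs standing in for the eye tracker, renderer, and compositor. */
      static struct gaze gaze_sample(void)       { return (struct gaze){ 0.5f, 0.5f }; }
      static void render_foveated(struct gaze g) { (void)g; /* full detail only near g */ }
      static void present(void)                  { /* queue the frame for scanout */ }

      int main(void) {
          for (int frame = 0; frame < 3; frame++) {
              /* Latency-insensitive work first: simulation, culling, ... */

              struct gaze g = gaze_sample();     /* sample as close to present() as possible */
              render_foveated(g);                /* shade the region around g at full rate */
              printf("frame %d: gaze (%.2f, %.2f)\n", frame, g.x, g.y);
              present();                         /* every extra hop between here and the
                                                    sample above makes g more stale */
          }
          return 0;
      }
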
    9. raggi ◴[] No.45072115[source]
    For this use case a major one would be better models for carved-up shared memory with safe/secure mappings in and out of specialized hardware like the GPU. Android uses Binder for this, and there are a good number of practical pains with it being shoved into that shape. Some other teams at Google doing similar work at least briefly had a path with another kernel module to expose a lot more, and it apparently enabled them to fix a lot of problems with contention and so on. So it's possible to solve this kind of stuff, just painful to be missing the primitives.
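
    For contrast, a sketch of one primitive Linux does offer today, assuming a recent kernel (a CPU-side illustration only, not the kernel module described above; fd passing over a unix socket is not shown): a size-sealed memfd that a producer fills and hands to a consumer, who can map it knowing it won't be resized underneath them. Mapping buffers like this safely in and out of the GPU is exactly where it gets harder.

      /* Linux-specific; error handling omitted. */
      #define _GNU_SOURCE
      #include <fcntl.h>
      #include <stdio.h>
      #include <string.h>
      #include <sys/mman.h>
      #include <unistd.h>

      int main(void) {
          size_t size = 4096;
          int fd = memfd_create("shared-frame", MFD_ALLOW_SEALING);
          ftruncate(fd, size);

          char *buf = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
          strcpy(buf, "pixels");

          /* Once sealed, nobody can grow or shrink the buffer, so a consumer
             mapping the fd gets a stable view of its size. */
          fcntl(fd, F_ADD_SEALS, F_SEAL_SHRINK | F_SEAL_GROW);

          printf("sealed fd %d ready to hand off\n", fd);
          munmap(buf, size);
          close(fd);
          return 0;
      }
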
    10. raggi ◴[] No.45072122[source]
    Just overclock (more) the system that's already struggling hard to meet power, thermal, and fidelity budgets?
    11. nolist_policy ◴[] No.45072456[source]
    You can do that on Linux today with vfork.
    12. sulam ◴[] No.45072878[source]
    I stated this elsewhere, but at least six years ago a major justification was a better security model. At least that’s what Michael Abrash told me when I asked.
    13. ksec ◴[] No.45080232[source]
    Hold on a sec, is that the same on PS5? I'm pretty sure that wasn't the case two generations ago. Is that the norm now, running on a hypervisor?
    replies(1): >>45082986 #
    14. mikepurvis ◴[] No.45082986{3}[source]
    It's been the case since the PS3: https://www.psdevwiki.com/ps5/Hypervisor