    224 points azhenley | 13 comments
    1. nhod ◴[] No.45075076[source]
    A million years ago in AI time, AKA yesterday, there was an HN post from John Carmack talking about how Meta wasted a ton of time and money making XROS, and how nowadays it doesn’t make any sense to write a new OS [1].

    And then this post today which makes a very strong case for it. (Yes, a VM isn’t an entire OS, Yes, it would be lighter weight than a complete OS. Yes, it would be industry-wide. Yes, we’d likely use an existing OS or codebase to start. Yes, nuance.)

    [1] https://news.ycombinator.com/item?id=45066395

    replies(4): >>45075289 #>>45075402 #>>45075695 #>>45078397 #
    2. ijk ◴[] No.45075289[source]
    I think the main difference is that sandboxing and simplifying the LLM's access to tools and data tends to be core functionality, whereas for XR it is more about performance and developer experience.

    I'm going to put a lot of work in anyway to keep the LLM from accidentally overwriting the code running it, from mishandling customer data, and from being overwhelmed with implementation details; having a standard for this makes it much easier and lets me rely on other people's model training.

    If it's merely that I have to train a dev on an XR SDK, I can pay them a salary or encourage schools to teach it. AI needs a team for an R&D project and compute time, which can get a lot more expensive at the high end.
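    The standard wished for here could start as small as an allowlist choke point between the model and its tools: nothing is callable unless explicitly granted. A minimal Python sketch of the idea — all names are hypothetical, not any real SDK's API:

```python
# Sketch of gating an LLM's tool calls behind explicit grants.
# Hypothetical names; not a real SDK.

class ToolNotGranted(Exception):
    pass

class ToolGateway:
    """Dispatches only tools that were explicitly granted."""
    def __init__(self):
        self._tools = {}       # name -> callable
        self._granted = set()  # names the model may invoke

    def register(self, name, fn):
        self._tools[name] = fn

    def grant(self, name):
        if name not in self._tools:
            raise KeyError(name)
        self._granted.add(name)

    def call(self, name, *args, **kwargs):
        # The model only ever reaches tools through this choke point.
        if name not in self._granted:
            raise ToolNotGranted(f"tool {name!r} not granted")
        return self._tools[name](*args, **kwargs)

gw = ToolGateway()
gw.register("read_customer", lambda cid: {"id": cid, "name": "Alice"})
gw.register("delete_customer", lambda cid: f"deleted {cid}")
gw.grant("read_customer")  # read-only session: deletes stay unreachable

print(gw.call("read_customer", 7))
try:
    gw.call("delete_customer", 7)
except ToolNotGranted as e:
    print("blocked:", e)
```

    The point is not the wrapper itself but the standardization: if every tool call funnels through one audited interface, models can be trained against it instead of against each vendor's ad-hoc plumbing.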

    3. charcircuit ◴[] No.45075402[source]
    This article does not make a case for writing a new operating system. Building an execution environment for an AI to operate in is completely different from creating a new operating system from scratch designed to be optimized for an AI use case.
    replies(1): >>45075567 #
    4. ◴[] No.45075567[source]
    5. 7373737373 ◴[] No.45075695[source]
    WebAssembly with its sandboxing-by-default paradigm is pretty much halfway there; it just needs a well-defined interface for transferring data and access rights between instances, and for creating new instances from others.
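    As a toy model of that sandboxing-by-default idea: an instance can reach only what is explicitly wired in at instantiation time, with no ambient access to anything else. A Python sketch (hypothetical names; this is not the actual WASI component model API):

```python
# Toy model of sandboxing-by-default: a "component" receives its
# imports explicitly and can touch nothing else. Hypothetical names.

def instantiate(component, imports):
    """Run a component with exactly the imports it was given."""
    return component(imports)

def greeter(imports):
    # No ambient access to files, network, or other instances:
    # the only effect this component can have is via `write`.
    write = imports["write"]
    return lambda name: write(f"hello, {name}")

log = []
greet = instantiate(greeter, {"write": log.append})
greet("world")
print(log)  # ['hello, world']
```

    Passing a capability from one instance to another is then just including it in the next instance's import dict — which is roughly the interface-transfer piece the comment says is still missing a standard.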
    replies(3): >>45075718 #>>45075919 #>>45081985 #
    6. spankalee ◴[] No.45075718[source]
    That's what WASI components already are.
    7. cpuguy83 ◴[] No.45075919[source]
    https://microsoft.github.io/wassette/ does just this, using wasi components.
    8. raincole ◴[] No.45078397[source]
    Two completely different things. And you knew they were completely unrelated but still forced the comparison for some unknown reason...

    > And then this post today which makes a very strong case for it

    After reading this post I saw nothing making a very strong case for a VM, let alone a new OS. They just want access controls.

    replies(1): >>45078725 #
    9. conradev ◴[] No.45078725[source]
    Yes. It’s just access controls. When people advocate for greater separation I remind them that separation is just one part of the cycle:

    https://xkcd.com/2044/

    We’re in the “connect everything” phase (MCP), and are about to enter the “wow, that’s a mess” phase.

    10. saagarjha ◴[] No.45081985[source]
    This is a technical solution to a social problem
    replies(1): >>45082543 #
    11. 7373737373 ◴[] No.45082543{3}[source]
    This is NOT (just) a social problem! See: supply chain attacks and https://en.wikipedia.org/wiki/Confused_deputy_problem

    No number of signature schemes and trust networks will be able to prevent the effects of actual security breaches and of problems arising from programming errors; only a technical solution can!

    It's stupid to rely on trust when one doesn't have to, and to grant programs, imported modules, or even individual functions more permissions than they need. Technical systems should give the best guarantees they can, and not risk the security of the entire system by default just because something failed at the social layer, or because some component somewhere in the system misused (perhaps even by accident!) its https://en.wikipedia.org/wiki/Ambient_authority

    The attack surface of even individual programs can, and therefore SHOULD, be minimized. It's just that contemporary popular programming languages do not give programmers any methods (high-level or even primitive) to achieve interior compartmentalization and utilize https://en.wikipedia.org/wiki/Capability-based_security in order to implement the https://en.wikipedia.org/wiki/Principle_of_least_privilege

    A program does what it does, and it could always potentially do everything it is allowed to. Especially when you use code from thousands of developers along the depth and breadth of your tech stack, social trust doesn't scale. Reifying and making explicit the access rights that the components of a program hold does scale. Then ill effects are limited to the rights that have been explicitly granted, and to the effects of results that are further processed by other components of the program.

    Social assurances are practically worthless because they may be misinterpreted, bypassed, subverted, or coerced. Technical guarantees, by contrast, can be formalized and verified.
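    One way to make the capability idea concrete: a capability can be attenuated before it is handed to less trusted code, so even a confused deputy cannot exceed it. A minimal Python sketch (names hypothetical):

```python
# Sketch of attenuating a capability before passing it to a less
# trusted component. The callee gets only a read capability, so even
# a confused deputy cannot write. Hypothetical names.

class ReadWriteStore:
    def __init__(self):
        self._data = {}
    def read(self, key):
        return self._data.get(key)
    def write(self, key, value):
        self._data[key] = value

class ReadOnly:
    """Attenuated facade: forwards reads, exposes no write at all."""
    def __init__(self, inner):
        self._read = inner.read  # hold only the method we delegate
    def read(self, key):
        return self._read(key)

store = ReadWriteStore()
store.write("greeting", "hi")

def untrusted_component(cap):
    # Least privilege: this code can use only what `cap` exposes.
    assert not hasattr(cap, "write")
    return cap.read("greeting")

print(untrusted_component(ReadOnly(store)))  # hi
```

    The guarantee here is structural, not social: the write authority simply does not exist inside the untrusted component, no matter what it is tricked into doing.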

    replies(2): >>45090871 #>>45092119 #
    12. lucketone ◴[] No.45090871{4}[source]
    > Technical systems should give the best guarantees they can, and not risk the security of the entire system by default

    True, and at the same time this has a social aspect: somebody needs to list all the required capabilities/accesses, and a developer might opt to request too many permissions while a casual user might allow it (caused by a mix of incompetence and lack of interest)

    13. saagarjha ◴[] No.45092119{4}[source]
    I view a breach as a socially determined outcome, though. Yes, your library might be sandboxed, but for it, accessing the internet might be OK, while for you that means leaking PII. This is a difficult problem to solve.