
229 points by modinfo | 1 comment
1. saurik No.40834849
YES!!! Back when Opera was adding a local AI to their browser UI, I explained how I wanted it exposed as an API, as it seems like one of the few ACTUALLY good uses for a user agent API: letting me choose which model I am using and where my data is going, rather than leaving that choice to the website I am visiting (which inherently requires standardizing an API surface in the browser that websites can use, instead of each site competing for scant memory resources by bringing its own local model or shipping my data off to some remote API).

https://news.ycombinator.com/item?id=39920803

> So while I am usually the person who would much rather the browser do almost nothing that isn't a hardware interface, requiring all software (including rendering) to be distributed as code by the website via the end-to-end principle--making the browser easy to implement and easy to secure / sandbox, as it is simply too important an attack surface to have a billion file-format parsing algorithms embedded within it--I actually would love (and I realize this isn't what Opera is doing, at least yet) to have the browser provide a way to get access to a user-selected LLM: the API surface for them--opaque text streaming in both directions--is sufficiently universal that I don't feel bad about the semantic lock-in, and I just don't see any reasonable way to do this via the end-to-end principle that preserves user control over tradeoffs in privacy, functionality, and cost...

> If I go to a website that uses an LLM, I should be the one choosing which LLM it is using, NOT the website!!, and whether I want it to use some local model or the world's most powerful cloud model, I 1) should be in control of that selection and 2) pretty much have to be for local models to be feasible at all, as I can't sit around downloading and caching gigabytes of data, separately, from every service that might make use of an LLM.

> (edit: Ok, in thinking about it a lot more, maybe it makes more sense for this to be a separate daemon run next to the web browser--even if it comes with the web browser--which merely provides a localhost HTTP interface to the LLM, so it can also be shared by native apps... though I am then unsure how web applications would be able to access it securely, given all of the security restrictions on cross-origin insecure port access.)
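For concreteness, here is a rough sketch of what such a user-selected-model surface could look like from a page's point of view. Everything here is hypothetical: the names (navigator.languageModel, createSession, promptStreaming) are made up, loosely in the spirit of Chrome's experimental Prompt API, and the only point is the shape -- opaque text in, opaque text out, with the user agent (not the page) deciding which model actually answers:

    // Hypothetical user-agent LLM API (TypeScript); names are illustrative,
    // not a real spec. The page streams opaque text in both directions; the
    // *user* picks the model in browser settings (local or cloud), and the
    // page never learns which one it got.
    interface LanguageModelSession {
      promptStreaming(input: string): ReadableStream<string>;
      destroy(): void;
    }

    interface LanguageModelProvider {
      // Rejects if the user has not selected/allowed a model for this site.
      createSession(options?: { systemPrompt?: string }): Promise<LanguageModelSession>;
    }

    declare global {
      interface Navigator {
        languageModel?: LanguageModelProvider; // hypothetical extension point
      }
    }

    async function summarize(text: string): Promise<string> {
      const provider = navigator.languageModel;
      if (!provider) throw new Error("no user-selected model available");
      const session = await provider.createSession({
        systemPrompt: "Summarize the following text in one paragraph.",
      });
      try {
        let out = "";
        const reader = session.promptStreaming(text).getReader();
        for (;;) {
          const { done, value } = await reader.read();
          if (done) break;
          out += value;
        }
        return out;
      } finally {
        session.destroy();
      }
    }

    export {};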
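And a sketch of the daemon variant from the edit: a local service, shipped with or beside the browser, exposes the same opaque text-in/text-out contract over localhost HTTP so native apps can share it too. The port, endpoint, and payload below are assumptions, loosely Ollama-shaped, not an actual standard:

    // Assumed local daemon at http://localhost:11434 with an Ollama-style
    // /api/generate endpoint; the port, route, and payload shape are all
    // assumptions for illustration.
    async function promptLocalDaemon(prompt: string): Promise<string> {
      const res = await fetch("http://localhost:11434/api/generate", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          model: "user-default", // hypothetical alias the daemon resolves
          prompt,
          stream: false,
        }),
      });
      if (!res.ok) throw new Error(`daemon error: HTTP ${res.status}`);
      const data = await res.json();
      return data.response; // field name mirrors Ollama's API; an assumption here
    }

The cross-origin worry at the end of the edit is the hard part: for a public https:// page to call this, the daemon must send permissive CORS headers, and Chrome's Private Network Access work additionally gates public-site-to-localhost requests behind a preflight the daemon has to answer (Access-Control-Allow-Private-Network: true), so this only works if the daemon deliberately opts in.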