
168 points by selvan | 6 comments
1. TekMol ◴[] No.44461758[source]
It seems the only code that runs in the browser here is the code that talks to LLMs on servers.

Why would you need WASM for this?

replies(3): >>44461907 #>>44462433 #>>44465568 #
2. politelemon ◴[] No.44461907[source]
They're using some Python libraries like openai-agents, so presumably it's to save on the development effort of calling/prompting/managing the HTTP endpoints. But yes, this could just be done in regular JS in the browser; they'd have to write a lot of boilerplate for an ecosystem which is mainly Python.
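For comparison, here is a minimal sketch of what that plain-JS boilerplate looks like against an OpenAI-compatible chat completions endpoint. The helper names are my own, and the URL/model are illustrative assumptions, not the submission's actual code:

```javascript
// Build the request an OpenAI-compatible chat endpoint expects.
// Separated from the fetch so the payload shape is easy to inspect/test.
function buildChatRequest({ apiKey, model, messages }) {
  return {
    url: "https://api.openai.com/v1/chat/completions",
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({ model, messages }),
    },
  };
}

// One round-trip: send the conversation, return the assistant's reply text.
// A real agent loop would also handle tool calls, retries, and streaming --
// that's the boilerplate a library like openai-agents absorbs for you.
async function chat(apiKey, model, messages) {
  const { url, options } = buildChatRequest({ apiKey, model, messages });
  const res = await fetch(url, options);
  if (!res.ok) throw new Error(`LLM request failed: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}
```

The single-call case is small; the boilerplate argument is really about the agent loop around it (tool dispatch, retries, conversation state), which the Python ecosystem already packages.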
replies(1): >>44462115 #
3. yjftsjthsd-h ◴[] No.44462115[source]
> But yes this could just be done in regular JS in the browser, they'd have to write a lot of boilerplate for an ecosystem which is mainly Python.

Surely that's a prime use for AI?

4. m13rar ◴[] No.44462433[source]
From a quick gander, the WASM here isn't for talking to servers. WASM can be used to run AI agents that talk to local LLMs from a sandboxed environment in the browser.

For example, suppose that in the next few years OS vendors and PC makers ship small local models as stock standard to improve operating system functions and other services. That local LLM engine layer could then be used by browser applications too, through WASM rather than JavaScript, with the WASM sandbox safely exposing the system's LLM engine layer.

replies(1): >>44463502 #
5. TekMol ◴[] No.44463502[source]
Whether the LLM is on the same machine or elsewhere, why would you need WASM to talk to it rather than just JS?
6. lgas ◴[] No.44465568[source]
You never need WASM (or any other language, bytecode format, etc.) to talk to LLMs. But WASM provides things people might want for agents, e.g. strict sandboxing by default.
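As a concrete illustration of that sandboxing-by-default property (a minimal sketch, not tied to any particular agent framework): a WebAssembly instance has no capabilities beyond the imports the host explicitly passes in.

```javascript
// A WASM instance can only do what the host imports grant it: no network,
// no filesystem, no DOM unless the embedder hands those in explicitly.
// The bytes below are a hand-assembled module exporting add(a, b) -> a + b;
// instantiated with an empty import object, it can compute but nothing else.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,        // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f,  // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                                // function section
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00,  // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b, // body: i32.add
]);

const module = new WebAssembly.Module(bytes);
const instance = new WebAssembly.Instance(module, {}); // empty imports = no ambient authority
console.log(instance.exports.add(2, 3)); // 5
```

Compare this with ordinary JS in the page, which implicitly gets `fetch`, `document`, cookies, and so on; for agent code you may not fully trust, the capability-based model is the attraction.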