
439 points david927 | 5 comments

What are you working on? Any new ideas you're thinking about?
1. mkw5053
I’m building an open-source project (with a hosted option) that lets web and mobile devs add LLM-powered features with zero backend code. Current platforms like Vercel still require at least one serverless backend function even for basic LLM integrations. This handles key management, access control, usage tracking, rate limiting, conversation state, etc., so devs can focus on the frontend.
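
For illustration, here's roughly what that could look like from the frontend. This is a minimal sketch using a hypothetical JS client; the package name, AiClient, createConversation, and send are all illustrative assumptions, not the project's actual API:

    // Hypothetical client SDK -- every identifier here is illustrative.
    import { AiClient } from "@example/ai-client";

    // A publishable key identifies the app; the real LLM API key stays
    // on the managed backend and is never shipped to the browser.
    const client = new AiClient({ publishableKey: "pk_live_..." });

    // Conversation state, usage tracking, and rate limiting live
    // server-side, so the frontend only sends the new user message.
    const conversation = await client.createConversation();
    const reply = await conversation.send("Summarize this article for me");
    console.log(reply.text);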
2. ianbicking
Where does stuff like the prompts go? If you put them in the frontend, you have a security and monitoring concern. If you don't put them in the frontend... then you have a backend. (But maybe a simpler backend for devs to work with.)
3. mkw5053
We provide a fully managed, secure, ready-to-use backend. You don't have to develop, deploy, host, or scale anything. It's essentially a "backend-in-a-box" for AI apps.
4. ianbicking
Yeah, but you didn't actually answer the question...?
5. mkw5053
Prompts (like system instructions) are stored and secured entirely on the backend we've built (and optionally host/manage). Your frontend never holds sensitive prompts or API keys; only the dynamic user inputs are sent to our backend, which then constructs and forwards the complete request to the LLM.
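
Concretely, the server-side request path might look something like this. It's a minimal sketch assuming an Express-style endpoint and an OpenAI-compatible provider; lookupApp, loadHistory, and the route shape are illustrative assumptions, not the actual implementation:

    import express from "express";

    // Assumed helpers, stubbed for illustration: real key lookup and
    // conversation storage would hit a database.
    async function lookupApp(key: unknown) {
      return { systemPrompt: "You are a helpful assistant." };
    }
    async function loadHistory(conversationId: string) {
      return [] as { role: "user" | "assistant"; content: string }[];
    }

    const app = express();
    app.use(express.json());

    app.post("/v1/chat", async (req, res) => {
      // 1. Identify the calling app from its publishable key.
      const appConfig = await lookupApp(req.headers["x-publishable-key"]);

      // 2. The system prompt is loaded server-side and never leaves it.
      //    Combine it with stored history plus the user's new message,
      //    which is the only piece the frontend actually sent.
      const messages = [
        { role: "system", content: appConfig.systemPrompt },
        ...(await loadHistory(req.body.conversationId)),
        { role: "user", content: req.body.message },
      ];

      // 3. Forward the complete request to the LLM provider using the
      //    API key held only on the server.
      const llmRes = await fetch("https://api.openai.com/v1/chat/completions", {
        method: "POST",
        headers: {
          Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({ model: "gpt-4o-mini", messages }),
      });
      res.json(await llmRes.json());
    });

    app.listen(3000);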