1481 points sandslash | 2 comments
mkw5053 ◴[] No.44322386[source]
This DevOps friction is exactly why I'm building an open-source "Firebase for LLMs." The moment you want to add AI to an app, you're forced to build a backend just to securely proxy API calls—you can't expose LLM API keys client-side. So developers who could previously build entire apps backend-free suddenly need servers, key management, rate limiting, logging, deployment... all just to make a single OpenAI call. Anyone else hit this wall? The gap between "AI-first" and "backend-free" development feels very solvable.
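To make that friction concrete, here is a minimal sketch of the kind of proxy you end up writing, assuming Node 18+ (for built-in fetch) and Express; the endpoint path and model name are placeholders:

    // Server-side proxy: the OpenAI key stays on the server, never in the client bundle.
    import express from "express";

    const app = express();
    app.use(express.json());

    const OPENAI_API_KEY = process.env.OPENAI_API_KEY!; // set in the deployment environment

    app.post("/api/chat", async (req, res) => {
      // Real deployments would also add auth, per-user rate limiting, and logging here.
      const upstream = await fetch("https://api.openai.com/v1/chat/completions", {
        method: "POST",
        headers: {
          Authorization: `Bearer ${OPENAI_API_KEY}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({
          model: "gpt-4o-mini",         // placeholder model
          messages: req.body.messages,  // forward only the fields you trust
        }),
      });
      res.status(upstream.status).json(await upstream.json());
    });

    app.listen(3000);

Everything around that single fetch call (the deploy, the secret management, the rate limits) is the incidental backend being described.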
replies(5): >>44322896 #>>44323157 #>>44323224 #>>44323300 #>>44323451 #
1. androng ◴[] No.44323451[source]
I think the friction could be reduced to almost zero with OpenAI "custom GPTs" https://help.openai.com/en/articles/8554397-creating-a-gpt or "Alexa skills". How much easier can it get than users bringing their own OpenAI accounts? Of course I'd rather have them on my own website, but if we're talking complete ease of use, I think that's a contender.
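For contrast, a rough sketch of the bring-your-own-key approach this hints at, where the user pastes their own OpenAI key and the call goes straight from the browser. The function name and model are illustrative; the trade-off is that billing and rate limits belong to the user's account, not yours:

    // Browser-side call with a user-supplied key: zero backend, but the key,
    // the billing, and the rate limits are all the user's, not the developer's.
    async function chatWithUserKey(userKey: string, prompt: string): Promise<string> {
      const resp = await fetch("https://api.openai.com/v1/chat/completions", {
        method: "POST",
        headers: {
          Authorization: `Bearer ${userKey}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({
          model: "gpt-4o-mini", // placeholder model
          messages: [{ role: "user", content: prompt }],
        }),
      });
      const data = await resp.json();
      return data.choices[0].message.content;
    }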
replies(1): >>44323988 #
2. mkw5053 ◴[] No.44323988[source]
Fair point. I'm no expert in custom GPTs, but I wonder what limitations there would be beyond the obvious loss of branding and UI/UX control. Like, how far can someone customize a custom GPT (ha)? I imagine multi-step/agentic flows might be a challenge or impossible as the feature currently exists. It also seems like custom GPTs have been largely forgotten, though I could be wrong and OpenAI might announce a big investment in them and new features tomorrow.
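To illustrate the agentic-flow concern: a typical tool-calling loop needs code that runs between model turns, which is exactly the orchestration a hosted custom GPT doesn't let you run yourself. A rough sketch, with callModel and runTool as hypothetical stand-ins for the model API and your tool implementations:

    // Hypothetical multi-step loop: call the model, execute any tool it requests,
    // feed the result back, repeat. This orchestration has to run somewhere you control.
    type Message = { role: string; content: string };
    type ModelReply = { content: string; toolCall?: { name: string; args: string } };

    async function agentLoop(
      callModel: (msgs: Message[]) => Promise<ModelReply>,
      runTool: (name: string, args: string) => Promise<string>,
      task: string,
    ): Promise<string> {
      const messages: Message[] = [{ role: "user", content: task }];
      for (let step = 0; step < 5; step++) {               // cap the number of steps
        const reply = await callModel(messages);
        if (!reply.toolCall) return reply.content;         // model is done
        const result = await runTool(reply.toolCall.name, reply.toolCall.args);
        messages.push({ role: "assistant", content: reply.content });
        messages.push({ role: "tool", content: result });  // feed the tool output back
      }
      return "step limit reached";
    }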