
2127 points bakugo | 6 comments
bcherny ◴[] No.43163488[source]
Hi everyone! Boris from the Claude Code team here. @eschluntz, @catherinewu, @wolffiex, @bdr and I will be around for the next hour or so and we'll do our best to answer your questions about the product.
replies(82): >>43163527 #>>43163532 #>>43163549 #>>43163554 #>>43163555 #>>43163576 #>>43163585 #>>43163588 #>>43163589 #>>43163592 #>>43163593 #>>43163632 #>>43163642 #>>43163664 #>>43163677 #>>43163733 #>>43163758 #>>43163789 #>>43163803 #>>43163813 #>>43163821 #>>43163893 #>>43163909 #>>43163915 #>>43163921 #>>43163957 #>>43163958 #>>43163992 #>>43164069 #>>43164089 #>>43164102 #>>43164103 #>>43164104 #>>43164111 #>>43164127 #>>43164158 #>>43164329 #>>43164353 #>>43164424 #>>43164482 #>>43164514 #>>43164585 #>>43164616 #>>43164768 #>>43164797 #>>43164819 #>>43164899 #>>43165002 #>>43165057 #>>43165065 #>>43165088 #>>43165091 #>>43165187 #>>43165308 #>>43165355 #>>43165409 #>>43165468 #>>43165499 #>>43165516 #>>43165570 #>>43165578 #>>43165592 #>>43165836 #>>43165884 #>>43165965 #>>43165976 #>>43165995 #>>43166183 #>>43166711 #>>43166748 #>>43167130 #>>43167804 #>>43168626 #>>43168836 #>>43169047 #>>43169107 #>>43169119 #>>43169294 #>>43169310 #>>43173097 #>>43174353 #>>43192161 #
pookieinc ◴[] No.43163554[source]
The biggest complaint I (and several others) have is that we continuously hit the usage limit in the web UI after just a few intensive queries. Of course, we can use the console API instead, but then we lose the ability to use features like Projects, etc.

Do you foresee these limitations increasing anytime soon?

Quick Edit: Just wanted to also say thank you for all your hard work, Claude has been phenomenal.

replies(4): >>43163771 #>>43163889 #>>43164021 #>>43167940 #
punkpeye ◴[] No.43164021[source]
If you are open to alternatives, try https://glama.ai/gateway

We currently serve ~10bn tokens per day (across all models). OpenAI-compatible API. No rate limits. Built-in logging and tracing.

I work with LLMs every day, so I am always on top of adding new models. Claude 3.7 is also already available.

https://glama.ai/models/claude-3-7-sonnet-20250219
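For context on what "OpenAI-compatible" buys you: any client that can produce the standard chat-completions request shape can talk to such a gateway by swapping the base URL and API key. A minimal sketch of that request body (the payload shape is the standard OpenAI one; the helper function is our own illustration, and the gateway's actual endpoint URL is not shown here):

```python
import json

# Build the standard OpenAI-style chat-completions request body.
# Any OpenAI-compatible gateway accepts this same shape; only the
# base URL and API key differ per provider.
def build_chat_request(model: str, prompt: str, **params) -> dict:
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    body.update(params)  # e.g. temperature, max_tokens
    return body

payload = build_chat_request("claude-3-7-sonnet-20250219", "Hello!")
print(json.dumps(payload, indent=2))
```

Because the shape is shared, switching providers is a configuration change rather than a code change.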

The gateway is integrated directly into our chat (https://glama.ai/chat). So you can use most of the things that you are used to having with Claude. And if anything is missing, just let me know and I will prioritize it. If you check our Discord, I have a decent track record of being receptive to feedback and quickly turning around features.

Long term, Glama's focus is predominantly on MCPs, but chat, the gateway, and LLM routing are integral to the greater vision.

I would love feedback if you are going to give it a try: frank@glama.ai

replies(5): >>43164075 #>>43164764 #>>43167057 #>>43173593 #>>43174149 #
airstrike ◴[] No.43164075[source]
The issue isn't API limits, but web UI limits. We can always get around the web interface's limits by using the Claude API directly, but then you need some other interface...
replies(1): >>43164415 #
1. punkpeye ◴[] No.43164415{3}[source]
The API still has limits. Even if you are on the highest tier, you will quickly run into those limits when using coding assistants.
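As an aside on working around API rate limits in general: the usual client-side mitigation is to retry with exponential backoff plus jitter when the provider returns a rate-limit error. A generic sketch, not tied to any particular provider (the exception class is a stand-in for whatever your client library raises on a 429):

```python
import random
import time

class RateLimited(Exception):
    """Stand-in for a provider's 429 / rate-limit error."""

def with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry `call` with exponential backoff plus jitter on rate limits."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimited:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error
            # Double the wait each attempt, plus random jitter to
            # avoid synchronized retry storms across clients.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.25))
```

This smooths over transient limits but obviously cannot raise your actual throughput ceiling, which is the gap a gateway or higher tier addresses.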

The value proposition of Glama is that it combines UI and API.

While everyone focuses on either one or the other, I've been splitting my time equally working on both.

Glama's UI would not win against Anthropic's on feature count. However, the components I developed were built with craft and care.

You have access to:

* Model switching between OpenAI, Anthropic, etc.

* Side-by-side conversations

* Full-text search across all your conversations

* LaTeX, Mermaid, and rich-text editing support

* Vision (image uploads)

* Response personalization

* MCP

* A shortcut for every action via cmd+k (ctrl+k)

replies(3): >>43164969 #>>43165930 #>>43166283 #
2. airstrike ◴[] No.43164969[source]
OK, but that's not the issue the parent was describing. I've never hit API limits, but, like the original commenter, I too constantly hit the web interface limits, particularly when discussing relatively large modules.
replies(1): >>43165299 #
3. glenstein ◴[] No.43165299[source]
Right, that's how I read it too. It's not that the API has no limits, but that they're appreciably different.
4. Aeolun ◴[] No.43165930[source]
> Even if you are on the highest tier, you will quickly run into those limits when using coding assistants.

Even heavy coding sessions never run into Claude's limits for me, and I'm nowhere near the highest tier.

replies(1): >>43167484 #
5. m_kos ◴[] No.43166283[source]
Your chat product is a little similar to Abacus AI's. I wish you had a similarly affordable chat-only monthly plan, but your UI seems much better. I may give it a try!
6. smokeydoe ◴[] No.43167484[source]
I think it depends on the tools you're using. If I'm using Cline, I don't have to try very hard to hit limits. I'm on the second tier.