
2127 points | bakugo | 1 comment
bcherny | No.43163488
Hi everyone! Boris from the Claude Code team here. @eschluntz, @catherinewu, @wolffiex, @bdr and I will be around for the next hour or so and we'll do our best to answer your questions about the product.
replies(82)
pookieinc | No.43163554
The biggest complaint I (and several others) have is that we keep hitting the usage limit in the UI after just a few intensive queries. Of course, we can use the Console API instead, but then we lose access to features like Projects.

Do you foresee these limits being raised anytime soon?

Quick Edit: Just wanted to also say thank you for all your hard work, Claude has been phenomenal.

replies(4)
punkpeye | No.43164021
If you are open to alternatives, try https://glama.ai/gateway

We currently serve ~10bn tokens per day (across all models). OpenAI-compatible API. No rate limits. Built-in logging and tracing.

I work with LLMs every day, so I am always on top of adding new models. Claude 3.7 Sonnet is already available:

https://glama.ai/models/claude-3-7-sonnet-20250219
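
For a rough idea of what this looks like in practice, here is a minimal sketch using the standard OpenAI Python client (the base_url shown is a placeholder, not the documented endpoint, and the key name is hypothetical; use whatever the gateway docs specify):

    # Sketch: calling an OpenAI-compatible gateway with the official
    # openai Python client (v1+). base_url is an assumed placeholder.
    from openai import OpenAI

    client = OpenAI(
        api_key="YOUR_GLAMA_API_KEY",                        # hypothetical key
        base_url="https://glama.ai/api/gateway/openai/v1",   # assumed endpoint
    )

    response = client.chat.completions.create(
        model="claude-3-7-sonnet-20250219",  # model id from the link above
        messages=[{"role": "user", "content": "Say hello"}],
    )
    print(response.choices[0].message.content)

Because the API is OpenAI-compatible, any SDK or tool that lets you override the base URL should work the same way.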

The gateway is integrated directly into our chat (https://glama.ai/chat), so you can use most of the things you are used to having with Claude. If anything is missing, just let me know and I will prioritize it. If you check our Discord, you'll see I have a decent track record of being receptive to feedback and quickly turning around features.

Long term, Glama's focus is predominantly on MCPs, but chat, the gateway, and LLM routing are integral to the greater vision.

I would love feedback if you are going to give it a try: frank@glama.ai

replies(5)
cmdtab | No.43164764
Do you have DeepSeek R1 support? I need it for a product I'm currently working on.
replies(2)
pclmulqdq | No.43165021
They are just selling a frontend wrapper over other people's services, so if another provider offers DeepSeek, I'm sure they will integrate it.