
152 points GavinAnderegg | 8 comments
rogerkirkness ◴[] No.44456199[source]
Early stage founder here. You have no idea how worth it $200/month is compared to the compensation required to hire good engineers. Absolutely the highest-ROI thing I have done in the life of the company so far.
replies(2): >>44456327 #>>44458993 #
1. lvl155 ◴[] No.44456327[source]
At this point, the question is when Amazon tells Anthropic to stop, because it's gotta be running up a huge bill. I don't think they can continue offering the $200 plan for long, even with Amazon's deep pockets.
replies(1): >>44456501 #
2. fragmede ◴[] No.44456501[source]
Inference is cheap to run though, and how many people do you think are getting their $200 worth of it?
replies(2): >>44457216 #>>44457422 #
3. anonzzzies ◴[] No.44457216[source]
I don't know; I guess I have to figure out another way to count money, but that $200 gives me a lot of value, far more than $200. If you like sleeping and do other stuff besides driving Claude Code all the time, you might feel differently. For us it works well.
replies(1): >>44457395 #
4. fragmede ◴[] No.44457395{3}[source]
My question wasn't if the $200 was worth it to the buyer. Renting an H100 for a month is gonna cost around $1000 ($1.33+/hr). Pretend the use isn't bursty (but really it is). If you could get 6 people on one, the company is making money selling inference.
replies(1): >>44458136 #
5. lvl155 ◴[] No.44457422[source]
Based on people around me and anecdotal evidence of when Claude struggles, a lot more than you think. I've done some analysis on my personal use across Openrouter, Amp, the Claude API, and the $200 subscription; I probably save around $40-50/day. And I am a "light" user. I don't run things in parallel too much.
6. lvl155 ◴[] No.44458136{4}[source]
Let me know when you can run Opus on H100.
replies(1): >>44459249 #
7. fragmede ◴[] No.44459249{5}[source]
I don't understand. Obviously I can't run Opus on an H100; only Anthropic can, since they're the only ones with the model. I am assuming they are using H100s, and that the all-in cost for an H100 comes to less than $1000/month, and doing some back-of-the-envelope math: if they had a fleet of H100s at their disposal, it would take six people running one flat out for the $200/month plan to be profitable.
replies(1): >>44459562 #
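The break-even arithmetic in the comment above can be sketched as follows. The $1.33/hr rental rate and the all-one-GPU, non-bursty usage are the thread's own assumptions, not verified figures:

```python
# Back-of-envelope: subscribers needed per H100 for the $200/month
# plan to cover the GPU's rental cost. Rates are the thread's
# assumptions, not verified pricing.
HOURS_PER_MONTH = 730   # average hours in a month
H100_RATE = 1.33        # $/hr, assumed all-in rental rate
PLAN_PRICE = 200        # $/month subscription

gpu_cost = H100_RATE * HOURS_PER_MONTH      # monthly cost of one H100
users_to_break_even = gpu_cost / PLAN_PRICE

print(f"GPU cost: ${gpu_cost:.0f}/month")
print(f"Subscribers per H100 to break even: {users_to_break_even:.1f}")
```

At these assumed rates one H100 costs roughly $970/month, so about five fully-loaded subscribers per GPU cover it, consistent with the "six people" figure above once you add margin.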
8. WXLCKNO ◴[] No.44459562{6}[source]
Right, but doesn't it probably take something like 8-10 H100s to run Claude Opus for inference, just memory-wise? I'm far from an expert, just asking.

Does "one" Claude Opus instance count as the full model being loaded onto however many GPUs it takes ?