
507 points martinald | 1 comments
_sword ◴[] No.45055003[source]
I've done the modeling on this a few times and I always get to a place where inference can run at 50%+ gross margins, depending mostly on GPU depreciation and how good the host is at optimizing utilization. The challenge for the margins is whether or not you consider model training costs as part of the calculation. If model training isn't capitalized + amortized, margins are great. If they are amortized and need to be considered... yikes
replies(7): >>45055030 #>>45055275 #>>45055536 #>>45055820 #>>45055835 #>>45056242 #>>45056523 #
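The tradeoff described above can be sketched with a quick back-of-the-envelope calculation. All figures below are invented for illustration, not real provider numbers; the point is only how amortized training cost swings the margin:

```python
# Hypothetical illustration of the margin math described above.
# Every number here is made up, not a real provider figure.

revenue = 100.0          # inference revenue over the period, $M
serving_cost = 45.0      # GPU depreciation, power, hosting, $M

# Gross margin on serving alone (training not capitalized/amortized)
margin_serving = (revenue - serving_cost) / revenue

# Now amortize a hypothetical one-time training run straight-line over 2 years
training_cost = 200.0    # one-time training spend, $M
amortization_years = 2.0
period_years = 0.5       # length of the revenue period above, in years
amortized_training = (training_cost / amortization_years) * period_years

margin_with_training = (revenue - serving_cost - amortized_training) / revenue

print(f"margin, serving only:        {margin_serving:.0%}")    # 55%
print(f"margin, training amortized:  {margin_with_training:.0%}")  # 5%
```

With these toy numbers, serving-only margin clears 50%, but charging half a year of a two-year training amortization against the same revenue collapses it to single digits, which is the "great" vs. "yikes" distinction in the comment.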
trilogic ◴[] No.45055536[source]
I have to disagree. The biggest cost is still energy consumption, water, and maintenance. Not to mention keeping up with rivals at an incredibly high tempo (hence offers of billions, like Meta recently). Then there's the cost of hardware, reflected in Nvidia's skyrocketing shares :) No one should dare talk about profit yet. Now is the time to grab the market, invest heavily, and work hard, hoping for a future profit. The equation is still a work in progress.
replies(3): >>45055568 #>>45055976 #>>45058036 #
DoesntMatter22 ◴[] No.45055976[source]
Is that not baked into the H100 rental costs?
replies(1): >>45056069 #
tptacek ◴[] No.45056069{3}[source]
It is.