    281 points GabrielBianconi | 23 comments
    1. brilee ◴[] No.45065876[source]
    For those commenting on cost per token:

    This throughput assumes 100% utilization. A bunch of things raise the cost at scale:

    - There are no on-demand GPUs at this scale. You have to rent them on multi-year contracts, so you have to lock in some number of GPUs for your maximum throughput (or some sufficiently high percentile), not your average throughput. Your peak throughput at west coast business hours is probably 2-3x higher than the throughput at tail hours (east coast morning, west coast evenings).

    - GPUs are often regionally locked due to data processing issues + latency issues. Thus, it's difficult to utilize these GPUs overnight because Asia doesn't want their data sent to the US and the US doesn't want their data sent to Asia.

    These two factors mean that GPU utilization comes in at 10-20%. Now, if you're a massive company that spends a lot of money on training new models, you could conceivably slot in RL inference or model training to happen in these off-peak hours, maximizing utilization.

    But for those companies purely specializing in inference, I would _not_ assume that these 90% margins are real. I would guess that even when it seems "10x cheaper", you're only seeing margins of 50%.
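
    As a rough illustration of how utilization erodes the apparent margin (a minimal sketch with made-up numbers; the 90% margin and the 10-20% utilization range are the figures from the comment above, everything else is assumed):

        # Sketch: how real-world utilization erodes a margin quoted at 100% utilization.
        # All numbers are illustrative, not measured.
        price_per_m_tokens = 1.00   # what the provider charges per 1M tokens (arbitrary unit)
        cost_at_full_util  = 0.10   # serving cost per 1M tokens at 100% utilization -> "90% margin"

        for utilization in (1.0, 0.5, 0.2, 0.1):
            effective_cost = cost_at_full_util / utilization   # idle GPU-hours still have to be paid for
            margin = 1 - effective_cost / price_per_m_tokens
            print(f"utilization {utilization:4.0%} -> margin {margin:5.1%}")

        # At ~20% utilization the "90% margin" is roughly 50%; at 10% it is gone entirely.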

    replies(7): >>45067585 #>>45067903 #>>45067926 #>>45068175 #>>45068222 #>>45072198 #>>45073200 #
    2. jerrygenser ◴[] No.45067585[source]
    Re the overnight point: that's why some providers offer batch-tier jobs at 50% off, which return results within up to 12 or 24 hours, for non-interactive use cases.
    3. lbhdc ◴[] No.45067903[source]
    If you are willing to spread your workload out over a few regions, getting that many GPUs on demand can be doable. You can use something like compute classes on GCP to fall back to different machine types if you do hit stockouts. That doesn't make you impervious to stockouts, but it makes you a lot more resilient.

    You can also use duty cycle metrics to scale down your GPU workloads and get rid of some of the slack.
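
    The fallback idea, sketched in Python with a hypothetical try_provision() helper (on GCP this would really be expressed declaratively, e.g. as a compute class or instance priority list, rather than as application code; the regions and machine types below are just examples):

        # Sketch: walk a priority list of (region, machine type) pairs and take the first
        # one that isn't stocked out. try_provision() is a stand-in for whatever
        # provisioning API you actually use; assume it returns a handle on success and
        # None when capacity is unavailable.
        PRIORITIES = [
            ("us-central1", "a3-highgpu-8g"),
            ("us-east4",    "a3-highgpu-8g"),
            ("us-central1", "a2-ultragpu-8g"),   # older GPU generation as a last resort
        ]

        def provision_with_fallback(try_provision):
            for region, machine_type in PRIORITIES:
                handle = try_provision(region, machine_type)
                if handle is not None:
                    return handle
            raise RuntimeError("stocked out everywhere in the priority list")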

    4. empiko ◴[] No.45067926[source]
    You also need to consider that the field is moving really fast and you cannot really rely on being able to have the same margins in a year or two.
    5. derefr ◴[] No.45068175[source]
    > There are no on-demand GPUs at this scale.

    > These two factors mean that GPU utilization comes in at 10-20%.

    Why don't these two factors cancel out? Why wouldn't a company building a private GPU cluster for their own use, also sit a workload scheduler (e.g. Slurm) in front of it, enable credit accounting + usage-based-billing on it, and then let validated customer partners of theirs push batch jobs to their cluster — where each such job will receive huge spot resource allocations in what would otherwise be the cluster's low-duty point, to run to completion as quickly as possible?

    Just a few such companies (and universities) deciding to rent their excess inference capacity out to local SMEs, would mean that there would then be "on-demand GPUs at this scale." (You'd have to go through a few meetings to get access to it, but no more than is required to e.g. get a mortgage on a house. Certainly nothing as bad as getting VC investment.)

    This has always been precisely how the commercial market for HPC compute works: the validated customers of an HPC cluster sending off their flights of independent "wide but short" jobs, that get resource-packed + fair-scheduled between other clients' jobs into a 2D (nodes, time) matrix, with everything getting executed overnight, just a few wide jobs at a time.
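
    A minimal sketch of what that "scheduler + credit accounting" front door could look like, assuming Slurm underneath (the scavenger QoS name, the credit ledger, and the per-GPU-hour rate are invented for illustration; sbatch and its --account/--qos/--gres/--time flags are standard Slurm):

        # Sketch: accept a batch job from a validated external customer, debit their
        # credit balance, and submit it at a preemptible QoS so it only runs when the
        # owner's own workloads leave GPUs idle. Numbers and names are illustrative.
        import subprocess

        PRICE_PER_GPU_HOUR = 2.0                 # illustrative billing rate
        credit_ledger = {"acme-smb": 500.0}      # hypothetical per-customer balances

        def submit_external_job(customer, script_path, gpus, max_hours):
            cost_ceiling = gpus * max_hours * PRICE_PER_GPU_HOUR
            if credit_ledger.get(customer, 0.0) < cost_ceiling:
                raise RuntimeError(f"{customer} lacks credit for up to ${cost_ceiling:.2f}")
            credit_ledger[customer] -= cost_ceiling   # refund unused hours once accounting runs
            subprocess.run([
                "sbatch",
                f"--account={customer}",
                "--qos=scavenger",                # site-specific preemptible QoS (assumed)
                f"--gres=gpu:{gpus}",
                f"--time={max_hours}:00:00",
                script_path,
            ], check=True)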

    So why don't we see a similar commercial "GPU HPC" market?

    I can only assume that the companies building such clusters are either:

    - investor-funded, and therefore not concerned with dedicating effort to invent ways to minimize the TCO of their GPUs, when they could instead put all their engineering+operational labor into grabbing market share

    - bigcorps so big that they have contracts with one big overriding "customer" that can suck up 100% of their spare GPU-hours: their state's military / intelligence apparatus

    ...or, if not, then it must turn out that these clusters are being 100% utilized by their owners themselves — however unlikely that may seem.

    Because if none of these statements are true, then there's just a proverbial $20 bill sitting on the ground here. (And the best kind of $20 bill, too, from a company's perspective: rent extraction.)

    replies(3): >>45068564 #>>45071087 #>>45074128 #
    6. parhamn ◴[] No.45068222[source]
    Do we know how big the "batch processing" market is? I know the major providers offer 50%+ off for off-peak processing.

    I assumed it was partly to correct this problem, and on the surface it seems like it'd be useful for big-data shops where process-eventually is good enough, i.e. it could be a relatively big market. Is it?

    replies(1): >>45069433 #
    7. thenewwazoo ◴[] No.45068564[source]
    > Why wouldn't a company ... let validated customer partners of theirs push batch jobs

    A company standing up this infrastructure is presumably not in the business of selling time-shares of infrastructure, they're busy doing AI B2B pet food marketing or whatever. In order to make that sale, someone has to connect their underutilized assets with interested customers, which is outside of their core competency. Who's going to do that?

    There's obviously an opportunity here for another company to be a market maker, but that's hard, and is its own speciality.

    replies(3): >>45069211 #>>45069323 #>>45070325 #
    8. loocorez ◴[] No.45069211{3}[source]
    Sounds like prime intellect
    9. mistrial9 ◴[] No.45069323{3}[source]
    Snowflake ?
    10. sdesol ◴[] No.45069433[source]
    I don't think you need to be big data to benefit.

    A major issue we have right now is that we want the coding process to be more "Agentic", but we don't have an easy way for LLMs to determine what to pull into context to solve a problem. This is a problem I am working on with my personal AI search assistant, which I talk about below:

    https://github.com/gitsense/chat/blob/main/packages/chat/wid...

    Analyzers are the "Brains" for my search, but generating the analysis is both tedious and can be costly. I'm working on the tedious part, and with batch processing you can probably process thousands of files for under 5 dollars with Gemini 2.5 Flash.

    With batch processing and the ability to continuously analyze 10s of thousands of files, I can see companies wanting to make "Agentic" coding smarter, which should help with GPU utilization and drive down the cost of software development.
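
    A rough back-of-the-envelope for that kind of batch analysis (the token counts and per-1M-token prices below are placeholder assumptions, not Gemini's published pricing; the 50% discount is the batch-tier figure mentioned upthread):

        # Sketch: estimate the cost of analyzing a repo's files via a half-price batch tier.
        # Every input here is an illustrative assumption.
        files           = 5_000
        avg_tokens_in   = 2_000      # prompt + file contents per request
        avg_tokens_out  = 500        # analysis produced per file
        price_in_per_m  = 0.30       # $/1M input tokens  (placeholder)
        price_out_per_m = 2.50       # $/1M output tokens (placeholder)
        batch_discount  = 0.5        # the ~50% off batch tier discussed above

        cost = files * (avg_tokens_in / 1e6 * price_in_per_m +
                        avg_tokens_out / 1e6 * price_out_per_m) * batch_discount
        print(f"~${cost:.2f} to analyze {files:,} files")   # roughly $4.6 at these assumptions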

    replies(1): >>45073205 #
    11. quacksilver ◴[] No.45070325{3}[source]
    There are services like vast.ai that act as marketplaces.

    You don't know who owns the GPUs, whether or when your job will complete, or whether the owner is sniffing what you are processing, though.

    12. fooker ◴[] No.45071087[source]
    The software stack for doing what you suggest would cost about a hundred million dollars to develop over five to ten years.
    replies(1): >>45072220 #
    13. koliber ◴[] No.45072198[source]
    These are great points.

    However, I don’t think these companies provision capacity for peak usage and let it idle during off peak. I think they provision it at something a bit above average, and aim at 100% utilization for the max number of hours in the day. When there is not enough capacity to meet demand they utilize various service degradation methods and/or load shedding.
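
    A toy illustration of that trade-off (the hourly demand curve and provisioning levels are invented; the point is only how utilization and load shedding move against each other):

        # Sketch: compare provisioning for peak demand vs. slightly above average demand,
        # using a made-up 24-hour demand curve (arbitrary units).
        demand = [30, 25, 20, 18, 18, 22, 35, 55, 75, 90, 100, 100,
                  95, 95, 90, 85, 80, 70, 60, 55, 50, 45, 40, 35]

        def summarize(capacity):
            served = sum(min(d, capacity) for d in demand)
            shed = sum(max(d - capacity, 0) for d in demand)
            return served / (capacity * len(demand)), shed / sum(demand)

        for label, cap in [("peak", max(demand)), ("avg + 20%", 1.2 * sum(demand) / len(demand))]:
            util, shed = summarize(cap)
            print(f"{label:>9}: utilization {util:5.1%}, demand shed {shed:5.1%}")

        # Provisioning at peak leaves a lot of idle capacity; provisioning just above the
        # average keeps the GPUs busier but forces load shedding / degradation at peak hours.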

    replies(1): >>45072278 #
    14. appreciatorBus ◴[] No.45072220{3}[source]
    But I was assured that this sort of stack could simply be vibed into existence?
    15. mcny ◴[] No.45072278[source]
    Is this why I get anthropic/Claude emails every single day since I signed up for their status updates? I just assumed they were working hard on production bugs, but in light of this comment: if you don't hit capacity constraints every day, you are wasting money?
    replies(2): >>45072607 #>>45073590 #
    16. chii ◴[] No.45072607{3}[source]
    This is true for all capital equipment - whether it's a GPU, a bore drill, or an earth mover.

    You want to keep it as close to 100% utilized as possible.

    replies(1): >>45072945 #
    17. hvb2 ◴[] No.45072945{4}[source]
    With the caveat that GPUs depreciate a bit faster obviously. A drill is still a drill next year or a decade from now.
    replies(1): >>45073934 #
    18. senko ◴[] No.45073200[source]
    You're not wrong.

    However, this all assumes realtime requirements. For batching, you can smooth over the demand curve, and you don't care about latency.

    19. saagarjha ◴[] No.45073205{3}[source]
    You sound like you are talking about something completely different.
    replies(2): >>45073477 #>>45075958 #
    20. koliber ◴[] No.45073590{3}[source]
    Just like at an all-you-can-eat buffet.
    21. apetrov ◴[] No.45073934{5}[source]
    Yes, but the capital is still tied up in it. You want it to have a meaningful ROI, not sit in a warehouse.
    22. reachableceo ◴[] No.45074128[source]
    That is what I’m doing with my excess compute, fabrication, CNC, laser, 3D printing, reflow oven, etc. capacity in between hardware revs for my main product. I also bill out my trusted subcontractors.

    I validate the compute renters because of ITAR. Lots of hostile foreign powers are trying to access compute.

    My main business is ITAR related, so I have incredibly high security in place already.

    We are multi-tenant from day zero and have Slurm etc. in place for accounting reasons for federal contracts. We are actually spinning up federal contracting as a service and will do a Show HN when that launches.

    Riches in the niches and the business of business :)

    23. sdesol ◴[] No.45075958{4}[source]
    No, what I am saying is that there are more applications for batch processing that will help with utilization. I can see developers and companies using off-hours processing to prep their data for agentic coding.