
167 points xnx | 3 comments
dsjoerg No.44527801
We used the previous version of this batch mode, which went through BigQuery. It didn't work well for us at the time because we were in development mode and we needed faster cycle time to iterate and learn. Sometimes the response would come back much faster than 24 hours, but sometimes not. There was no visibility offered into what response time you would get; just submit and wait.

You have to be pretty darn sure that your job is going to do exactly what you want to be able to wait 24 hours for a response. It's like going back to the punched-card era. If I could get even 1% of the batch in a quicker response and then the rest more slowly, that would have made a big difference.
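
A minimal sketch of the canary-then-batch split being asked for here: send a small sample through the fast interactive endpoint first, validate it, and only then commit the rest to the 24-hour batch queue. `submit_interactive`, `submit_batch`, and `validate` are hypothetical caller-supplied hooks, not any vendor's actual API:

```python
import random

def split_submit(prompts, submit_interactive, submit_batch, validate,
                 canary_fraction=0.01):
    """Send a small canary slice through the fast interactive endpoint,
    validate it, then submit the remaining ~99% as a cheap batch job.

    submit_interactive, submit_batch, and validate are caller-supplied
    hypothetical hooks -- nothing here is vendor-specific.
    """
    k = max(1, int(len(prompts) * canary_fraction))
    canary_idx = set(random.sample(range(len(prompts)), k))

    # Fast path: a ~1% sample comes back in seconds, catching bad prompts
    # or wrong settings before committing to a 24-hour wait.
    for i in canary_idx:
        if not validate(submit_interactive(prompts[i])):
            raise RuntimeError("canary failed validation; batch not submitted")

    # Slow path: the rest goes through the batch endpoint.
    rest = [p for i, p in enumerate(prompts) if i not in canary_idx]
    return submit_batch(rest)
```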

replies(4): >>44527819, >>44528277, >>44528385, >>44530651
cpard No.44527819
It seems that a 24h SLA is standard for batch inference among vendors, and I wonder how useful it can be when you have no visibility into when the job will be delivered.

I wonder why they do that and who is actually getting value out of these batch APIs.

Thanks for sharing your experience!

replies(5): >>44527850, >>44527911, >>44528102, >>44528329, >>44530652
1. YetAnotherNick No.44528102
Contrary to other comments, it's likely not because of queueing or general batch-processing reasons. I think it is because LLMs are unique in that they require a lot of fixed nodes due to VRAM requirements, which makes them hard to autoscale. So the batch jobs are likely executed whenever there are free resources left over from the interactive servers.
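
A rough back-of-envelope illustrating the fixed-capacity point; the model size and GPU numbers below are illustrative assumptions, not anything the vendors have published:

```python
# Illustrative numbers only: a hypothetical 70B-parameter model served in
# 16-bit precision on GPUs with 80 GB of memory each.
params = 70e9
bytes_per_param = 2                       # fp16/bf16 weights
weight_bytes = params * bytes_per_param   # 140 GB of weights alone
gpu_mem_bytes = 80e9

replica_gpus = -(-weight_bytes // gpu_mem_bytes)  # ceil -> 2 GPUs minimum,
                                                  # before KV cache/activations
print(f"{weight_bytes / 1e9:.0f} GB of weights -> "
      f">= {int(replica_gpus)} GPUs pinned per replica")

# Capacity therefore comes in large fixed chunks that take minutes to spin
# up (weights must load into VRAM first), so the fleet can't autoscale
# smoothly with demand -- leftover headroom gets soaked up by batch jobs.
```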
replies(2): >>44528956, >>44536031
2. cpard No.44528956
That makes total sense, and what it entails is that interactive inference >>> batch inference in today's market in terms of demand.
3. dekhn No.44536031
Yes, almost certainly in this case Google sees traffic die off when a data center is in the dark. Specifically, there is a diurnal cycle of traffic, and Google usually routes users to nearby resources. So, late at night, all those backends which were running hot doing low-latency replies to users in near-real-time can instead switch over to processing batches. When I built an idle cycle harvester at Google, I thought most of the free cycles would come from low-usage periods, but it turned out that some clusters were just massively underutilized and had free resources all 24 hours.
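
A toy sketch of the scheduling pattern described above: batch work drains only while the interactive fleet has headroom, whether that comes from the nightly traffic trough or from a chronically underutilized cluster. `fleet_utilization` and `run_job` are hypothetical hooks; the point is the priority scheme, not the plumbing:

```python
import queue
import time

def harvest_idle_cycles(batch_queue, fleet_utilization, run_job,
                        headroom_threshold=0.6, poll_seconds=30.0):
    """Drain queued batch jobs only while interactive serving has headroom.

    fleet_utilization() returns current load as 0.0-1.0; run_job executes
    one batch job. Interactive traffic always keeps priority.
    """
    while True:
        if fleet_utilization() >= headroom_threshold:
            time.sleep(poll_seconds)      # fleet busy with interactive traffic
            continue
        try:
            job = batch_queue.get_nowait()
        except queue.Empty:
            time.sleep(poll_seconds)      # nothing queued right now
            continue
        # A real harvester would checkpoint so jobs survive preemption
        # when interactive load ramps back up.
        run_job(job)
```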