
167 points by xnx | 1 comment
anupj No.44530084
Batch Mode for the Gemini API feels like Google’s way of asking, “What if we made AI more affordable and slower, but at massive scale?” Now you can process 10,000 prompts like “Summarize each customer review in one line” for half the cost, provided you’re willing to wait until tomorrow for the results.
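The workflow being described is the usual batch-API pattern: collect many independent prompts into one job file up front, submit it, and collect results later. As a minimal sketch, here is what assembling such a job file might look like in Python; the JSONL layout and field names (`key`, `request`, `contents`, `parts`) are illustrative assumptions, not the actual Gemini batch schema.

```python
import json

def build_batch_file(reviews, path="batch_requests.jsonl"):
    """Write one summarization request per line (JSONL), the common
    shape for batch job files. Field names here are illustrative,
    not the real Gemini Batch Mode schema."""
    with open(path, "w") as f:
        for i, review in enumerate(reviews):
            request = {
                # A stable key lets you match async results back to inputs.
                "key": f"review-{i}",
                "request": {
                    "contents": [
                        {"parts": [{"text": f"Summarize this customer review in one line: {review}"}]}
                    ]
                },
            }
            f.write(json.dumps(request) + "\n")
    return path
```

After uploading a file like this, you would poll the job until it completes, which is where the "wait until tomorrow" tradeoff comes in.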
replies(4): >>44530624 #>>44531272 #>>44533342 #>>44534982 #
1. diggan No.44531272
> Now you can process 10,000 prompts like “Summarize each customer review in one line” for half the cost, provided you’re willing to wait until tomorrow for the results.

Sounds like a great option to have available? Not every task I use LLMs for needs an immediate response, and if I weren't using local models for those things, getting a 50% discount in exchange for waiting a day sounds like a fine tradeoff.